--- abstract: 'The most popular face recognition benchmarks assume a distribution of subjects without much attention to their demographic attributes. In this work, we perform a comprehensive discrimination-aware experimentation of deep learning-based face recognition. The main aim of this study is to better understand the feature space generated by deep models and the performance achieved over different demographic groups. We also propose a general formulation of algorithmic discrimination with application to face biometrics. The experiments are conducted over the new DiveFace database composed of 24K identities from six different demographic groups[^1]. Two popular face recognition models are considered in the experimental framework: ResNet-50 and VGG-Face. We experimentally show that demographic groups highly represented in popular face databases have led popular pre-trained deep face models to exhibit strong algorithmic discrimination. That discrimination can be observed both qualitatively in the feature space of the deep models and quantitatively in large performance differences when applying those models to different demographic groups, e.g. for face biometrics.' author: - | Ignacio Serna,^1^ Aythami Morales,^1^ Julian Fierrez,^1^ Manuel Cebrian,^2^ Nick Obradovich,^2^ Iyad Rahwan^2^\ ^1^Universidad Autonoma de Madrid, Madrid, Spain\ ^2^Max Planck Institute for Human Development, Berlin, Germany\ ^1^{ignacio.serna, aythami.morales, julian.fierrez}@uam.es\ ^2^{cebrian, obradovich, sekrahwan}@mpib-berlin.mpg.de\ bibliography: - 'AAAI.bib' title: | Algorithmic Discrimination: Formulation and Exploration\ in Deep Learning-based Face Biometrics --- Introduction ============ Face recognition algorithms are good examples of recent advances in Artificial Intelligence (AI). The performance of automatic face recognition has been boosted during the last decade, achieving very competitive accuracies in the most challenging scenarios [@1]. These improvements have been possible due to improved machine learning approaches (e.g., deep learning), powerful computation (e.g., GPUs), and larger databases (e.g., at the scale of millions of images). However, recognition accuracy is not the only aspect to consider when designing biometric systems. Algorithms play an increasingly important role in the decision-making of several processes involving humans. These decisions therefore have an increasing effect on our lives. Thus, there is currently a growing need for studying AI behavior to better understand its impact on our society [@31]. Face recognition systems are especially sensitive due to the personal information present in face images (e.g., identity, gender, ethnicity, and age). Previous works suggested that face recognition accuracy is affected by demographic covariates. In [@6; @5], the authors demonstrated that the performance of commercial face recognition systems varies according to demographic attributes. In [@7; @8], the authors evaluated how covariates affect the performance of face recognition systems based on deep neural network models. Among the different covariates, skin color is repeatedly identified as a factor with a high impact on performance [@6; @7]. However, ethnic face attributes go beyond skin color. The shape and size of facial features are partially defined by ancestral origin. These differences can be used to accurately classify subjects according to their ancestral origin [@8]. 
![image](AAAI/figura1.pdf) The number of published works pointing out the biases in the results of face detection [@3] and recognition algorithms is large [@5; @8; @4; @6; @7; @32]. Yet, only a limited number of works analyze how biases affect the learning process of these algorithms. The aim of this work is to analyze face recognition models using a discrimination-aware perspective. Previous studies have demonstrated that ethnicity and gender affect the performance of face recognition models [@gong2019debface]. However, there is a lack of understanding regarding how this demographic information affects the model beyond the performance. The main contributions of this work are: > - A general formulation of algorithmic discrimination for machine learning tasks. In this work, we apply this formulation in the context of face recognition. > > - Discrimination-aware performance analysis based on a new dataset [@9], with 24K identities equally distributed between six demographic groups. > > - Study of the effects of gender and ethnicity in the feature representation of deep models. > > - Analysis of the demographic diversity present in some of the most popular face databases. > The rest of the paper is structured as follows: Section 2 presents our general formulation of algorithmic discrimination. Section 3 analyzes some of the most popular face recognition architectures and the experimental protocol followed in this work. Section 4 evaluates the causes and effects of biased learning in face recognition algorithms. Finally, Section 5 summarizes the main conclusions. Formulation of Algorithmic Discrimination ========================================= Discrimination is defined by the Cambridge Dictionary as treating a person or particular group of people differently, especially in a worse way than the way in which you treat other people, because of their skin color, sex, sexuality, etc. For the purpose of studying discrimination in artificial intelligence at large, we now formulate mathematically algorithmic discrimination based on the previous dictionary definition. Even though similar ideas as the ones embedded in our formulation can be found elsewhere [@23; @22], we didn’t find this kind of formulation in related works. We hope that formalizing these concepts can be beneficial to foster further research and discussion in this hot topic. Let’s begin with notation and preliminary definitions. Assume $\textbf{x}_s^i$ is a learned representation of individual $i$ (out of $I$ different individuals) corresponding to an input sample $s$ (out of $S$ samples) of that particular subject. That representation $\textbf{x}$ is assumed to be useful for task $T$, e.g., face authentication or emotion recognition. That representation $\textbf{x}$ is learned using an artificial intelligence approach with parameters $\theta$. 
We also assume that there is a goodness criterion $G$ on that task maximizing some performance real-valued function $f$ in a given dataset $\mathcal{D}$ (collection of multiple samples) in the form: $$\label{eqn:goodnes_criterion} \textit{G}(\mathcal{D}) = \max_{\theta}\textit{f}(\mathcal{D},\theta)$$ The most popular form of the previous expression minimizes a loss function $\mathcal{L}$ over a set of training samples $\mathcal{D}$ in the form: $$\label{eqn:learning_strategy} \theta^*=\arg\min_{\theta}{\sum_{\textbf{x}_s^i\in \mathcal{D}}\mathcal{L}(\textit{O}(\textbf{x}_s^i|\theta),T^i_s)}$$ where *O* is the output of the learning algorithm that we seek to bring closer to the target function (or groundtruth) *T* defined by the task at hand. On the other hand, the *I* individuals can be classified according to *D* demographic criteria $\textit{C}_d$, with $d = 1,..., D$, which can be the source for discrimination, e.g., $\textit{C}_1 = \textit{Gender} = \{\textit{Male, Female}\}$ (demographic criterion $1 = \textit{Gender}$ has two classes in this example). The particular class $k=1,...,K$ for a given demographic criterion $d$ and a given sample is noted as $\textit{C}_d (\textbf{x}_s^i)$, e.g., $\textit{C}_1 (\textbf{x}_s^i)=\textit{Male}$. We assume that all classes are well represented in dataset $\mathcal{D}$, i.e., the number of samples for each class in all criteria in $\mathcal{D}$ is significant. $\mathcal{D}_d^k \in \mathcal{D}$ represents all the samples corresponding to class $k$ of demographic criterion $d$. Finally, **our definition of algorithmic discrimination**: an algorithm discriminates the group of people represented with class $k$ (e.g., *Female*) when performing the task *T* (e.g., face verification, or emotion recognition), if the goodness *G* in that task when considering the full set of data $\mathcal{D}$ (including multiple samples from multiple individuals), is significantly larger than the goodness $\textit{G}(\mathcal{D}_d^k)$ in the subset of data corresponding to class $k$ of the demographic criterion $d$. The representation $\textbf{x}$ and the model parameters $\theta$ will typically be real-valued vectors, but they can be any set of features combining real and discrete values. Note that the previous formulation can be easily extended to the case of varying number of samples $S_i$ for different subjects, which is a usual case; or to classes *K* that are not disjoint. Note also that the previous formulation is based on average performances over groups of individuals. Different performance across specific individuals is usual in many artificial intelligence tasks due to diverse reasons, e.g., specific users who were not sensed properly [@24], even for algorithms that on average may perform similarly for the different classes that can be the source of discrimination. Face Recognition Algorithms =========================== A face recognition algorithm, as other machine learning systems, can be divided into two different algorithms: screener and trainer. Both algorithms are used for a different aim and therefore should be studied with a different perspective [@33]. The screener (see Fig. \[Figure1\]) is an algorithm that given two face images generates an output associated to the probability that they belong to the same person. This probability is obtained comparing the two learned representations obtained from a face model defined by the parameters $\theta$. 
These parameters are trained previously based on a training dataset $\mathcal{D}$ and the goodness criterion *G* (see Fig. \[Figure1\]). If trained properly, the output of the trainer would be a model with parameters $\theta^*$ capable of representing the input data (e.g., face images) in a highly discriminant feature space $\textbf{x}$. The most popular architecture used to model face attributes is the Convolutional Neural Network (CNN). This type of network has drastically reduced the error rates of face recognition algorithms in the last decade [@28] by learning highly discriminative features from large-scale databases. In our experiments we consider two popular face recognition pre-trained models: VGG-Face and ResNet-50. These models have been tested on competitive evaluations and public benchmarks [@13; @12]. VGG-Face is a model based on the VGG-Very-Deep-16 CNN architecture trained on the VGGFace dataset [@13]. ResNet-50 is a CNN model with 50 layers and 41M parameters initially proposed for general purpose image recognition tasks [@29]. The main difference between ResNet architecture and traditional convolutional neural networks is the inclusion of residual connections to allow information to skip layers and improve gradient flow. Before applying the face models, we cropped the face images using the algorithm proposed in [@26]. The pre-trained models are used as embedding extractor where $\textbf{x}$ is a $l_2$-normalised learned representation of a face image. The similarity between two face descriptors $\textbf{x}_r$ and $\textbf{x}_s$ is calculated as the Euclidean distance $||\textbf{x}_r-\textbf{x}_s||$. Two faces are assigned to the same identity if their distance is smaller than a threshold $\tau$. The recognition accuracy is obtained by comparing distances between positive matches (i.e., $\textbf{x}_r$ and $\textbf{x}_s$ belong to the same person) and negative matches (i.e., $\textbf{x}_r$ and $\textbf{x}_s$ belong to different persons). The two face models considered in our experiments were trained with the VGGFace2 dataset according to the details provided in [@12]. As we will show in Section \[Bias in face databases\], databases used to train these two models are highly biased. Therefore, it is expected that the recognition models trained with this dataset present algorithmic discrimination. Experimental protocol --------------------- Labeled Faces in the Wild (LFW) is a database for research on unconstrained face recognition [@20]. The database contains more than 13K images of faces collected from the web. In this study we consider the aligned images from the test set provided with view 1 and its associated evaluation protocol. This database is composed by images acquired in the wild, with large pose variations, and varying face expressions, image quality, illuminations, and background clutter among other variations. The performance achieved by the VGG-Face and ResNet-50 models for the LFW database is $4.1\%$ and $1.7\%$ Equal Error Rate respectively. These performances serve as a baseline for the models and the rest of experiments. We can observe the superior performance of the ResNet-50 model, with a performance ca. 3 times better than the VGG-Face model. The experiments with DiveFace will be carried out following a cross-validation methodology using three images for each of the 4K identities from each of the six classes available in DiveFace (72K face images in total). This results in 72K genuine comparisons and near 3M impostor comparisons. 
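As an illustration of the verification protocol just described, the following minimal sketch (a Python illustration of our own, not the authors' released code) estimates the Equal Error Rate from $l_2$-normalised embeddings compared with the Euclidean distance; the arrays `emb_a`, `emb_b`, and `same_identity` are hypothetical placeholders for pre-computed face descriptors and ground-truth pair labels.

```python
import numpy as np

def verification_eer(emb_a, emb_b, same_identity):
    """Estimate the Equal Error Rate (EER) of a verifier that compares
    l2-normalised embeddings with the Euclidean distance.

    emb_a, emb_b  : (N, d) arrays with the two embeddings of each pair
    same_identity : (N,) boolean array, True for genuine pairs
    """
    # Euclidean distance between the two descriptors of each comparison
    dist = np.linalg.norm(emb_a - emb_b, axis=1)
    genuine = dist[same_identity]      # positive matches
    impostor = dist[~same_identity]    # negative matches

    # Sweep the decision threshold tau and locate the point where the
    # false rejection and false acceptance rates cross.
    thresholds = np.sort(dist)
    frr = np.array([(genuine > t).mean() for t in thresholds])
    far = np.array([(impostor <= t).mean() for t in thresholds])
    idx = np.argmin(np.abs(frr - far))
    return 0.5 * (frr[idx] + far[idx])
```

Applying such a routine separately to the comparisons of each demographic group gives the per-group error rates analyzed in the following sections.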
DiveFace database: an annotation dataset for face recognition trained on diversity ---------------------------------------------------------------------------------- DiveFace was generated using the Megaface MF2 training dataset [@11]. MF2 is part of the publicly available Megaface dataset with 4.7 million faces from 672K identities and it includes their respective bounding boxes. All images in the Megaface dataset were obtained from Flickr Yahoo’s dataset [@27]. DiveFace contains annotations equally distributed among six classes related to gender and ethnicity (see Fig. \[Figure4\] for example images). Gender and ethnicity have been annotated following a semi-automatic process. There are 24K identities (4K per class). The average number of images per identity is 5.5 with a minimum number of 3 for a total number of images greater than 120K. Users are grouped according to their gender (male or female) and three categories related to ethnic physical characteristics: > - **Group 1**: people with ancestral origins in Europe, North-America, and Latin-America (with European origin). > > - **Group 2**: people with ancestral origins in Sub-Saharan Africa, India, Bangladesh, Bhutan, among others. > > - **Group 3**: people with ancestral origin in Japan, China, Korea, and other countries in that region. > We are aware of the limitations of grouping all human ethnic origins into only three categories. According to studies, there are more than 5K ethnic groups in the world. We categorized according to only three groups in order to maximize differences among classes. Automatic classification algorithms based on these three categories show performances of up to 98% accuracy [@9]. Causes and Effects of Biased Learning in Face Recognition Algorithms ==================================================================== Performance of face recognition: role of demographic information ----------------------------------------------------------------

|           | Group 1 Male | Group 1 Female | Group 2 Male | Group 2 Female | Group 3 Male | Group 3 Female |
|-----------|--------------|----------------------|----------------------|----------------------|-----------------------|-----------------------|
| VGG-Face  | 7.99         | 9.38 ($\uparrow$17%) | 12.03 ($\uparrow$50%) | 13.95 ($\uparrow$76%) | 18.43 ($\uparrow$131%) | 23.66 ($\uparrow$196%) |
| ResNet-50 | 1.60         | 1.96 ($\uparrow$22%) | 2.15 ($\uparrow$34%)  | 3.61 ($\uparrow$126%) | 3.25 ($\uparrow$103%)  | 5.07 ($\uparrow$217%)  |

: Face verification performance (Equal Error Rate, in %) of VGG-Face and ResNet-50 for each demographic group in DiveFace. Relative increments (in parentheses) are with respect to the best class, Group 1 Male. \[table1\]

This section explores the effects of biased models on the performance of face recognition algorithms. Table \[table1\] shows the performances obtained for each demographic group present in DiveFace. Traditional face recognition benchmarks usually do not explore such demographic covariates. Results reported in Table \[table1\] exhibit large gaps between the performances obtained by different demographic groups, suggesting that both gender and ethnicity significantly affect the performance of biased models. These effects are particularly high for ethnicity, with a very large degradation of the results for the class least represented in the training data (*Group* 3 *Female*). This degradation produces a relative increment of the Equal Error Rate (EER) of 196% and 217% for VGG-Face and ResNet-50, respectively, with regard to the best class (*Group* 1 *Male*). These differences are important as they mark the percentage of faces successfully matched and faces incorrectly matched. 
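For reference, the relative increments reported in Table \[table1\] can be checked directly from the EER values themselves; a minimal sketch using the numbers quoted in the table:

```python
# Relative EER increase with respect to the best class (Group 1 Male),
# using the EER values (in %) reported in Table 1.
best = {"VGG-Face": 7.99, "ResNet-50": 1.60}     # Group 1 Male
worst = {"VGG-Face": 23.66, "ResNet-50": 5.07}   # Group 3 Female

for model in best:
    rel = 100.0 * (worst[model] - best[model]) / best[model]
    print(f"{model}: {rel:.0f}% relative EER increase")
# VGG-Face: 196% relative EER increase
# ResNet-50: 217% relative EER increase
```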
These results suggest that ethnic origin can strongly affect the likelihood of being incorrectly matched (false positives). Understanding biased performances --------------------------------- The relatively low performance in *Group* 3 seems to originate from a limited ability to capture the best discriminant features for the groups underrepresented in the training databases. The results suggest that features capable of reaching high accuracy for a specific demographic group may be less competitive in others. Let’s analyze the causes behind these degradations. Fig. \[Figure2\] represents the probability distributions of genuine and impostor scores for *Group* 1 *Male* (the best group) and *Group 3 Female* (the worst group). The comparison between genuine and impostor distributions reveals large differences in the impostor ones. The genuine distribution (intra-class variability) between *Group* 3 and *Group* 1 is similar, but the impostor distribution (inter-class variability) is significantly different. The model has difficulty differentiating face attributes from different subjects. **Algorithmic discrimination implications:** define the performance function $f$ as the accuracy of the face recognition model, and $\textit{G}(\mathcal{D}_d^k)=f(\mathcal{D}_d^k,\theta^* )$ the goodness considering all the samples corresponding to class $k$ of the demographic criterion $d$, for an algorithm $\theta^*$ trained on the full set of data $\mathcal{D}$ (as described in Eq. \[eqn:goodnes\_criterion\]). Results suggest large differences between the goodness $\textit{G}(\mathcal{D}_d^k)$ for different classes, especially for classes $k=\textit{Group} \, 1,\textit{Group} \, 2,\textit{Group} \, 3$. Bias in face databases {#Bias in face databases} ---------------------- ![ResNet-50 face recognition score distributions for Group 3 females and Group 1 males.[]{data-label="Figure2"}](AAAI/figura2.png){width="85mm"} Bias and discrimination concepts are related to each other, but they are not necessarily the same thing. Bias is traditionally associated with unequal representation of classes in a dataset. The history of automatic face recognition has been linked to the history of the databases used for algorithm training during the last two decades. The number of publicly available databases is high, and they allow training models using millions of face images. Fig. \[Figure3\] summarizes the demographic statistics of some of the most cited face databases. Each of these databases is characterized by its own biases (e.g. image quality, pose, backgrounds, and aging). In this work, we highlight the unequal representation of demographic information in very popular face recognition databases. As can be seen, the differences between ethnic groups are severe. Even though people in *Group* 3 make up more than 35% of the world’s population, they represent only 9% of the users in those popular face recognition databases. Biased databases imply a double penalty for underrepresented classes. On the one hand, models are trained according to non-representative diversity. On the other hand, benchmark accuracies are reported over privileged classes and overestimate the real performance over a diverse society. Recently, diverse and discrimination-aware databases have been proposed in [@3; @25; @wang2019mitigate]. These databases are valuable resources to explore how diversity can be used to improve face biometrics. 
However, some of these databases do not include identities [@3; @25], and face images cannot be matched to other images. Therefore, these databases do not allow to properly train or test face recognition algorithms. **Algorithmic discrimination implications**: classes $k$ are unequally represented in the most popular face databases $\mathcal{D}$. ![Demographic statistics of the 12 most cited face databases available in the literature. BioSecure [@21], YouTubeFaces [@14], PubFig [@17], CasiaFace [@15], VGGFace [@13], CelebA [@16], MS-Celeb-1M [@10], Megaface [@11], LFW [@20], UTKface [@19], VGGFace2 [@12], IJB-C [@18], DiveFace [@9].[]{data-label="Figure3"}](AAAI/figura3.pdf) Biased embedding space of deep models ------------------------------------- We now analyze the effects of ethnicity and gender attributes in the embedding space generated by VGG-Face and ResNet-50 models. CNNs are composed of a large number of stacked filters. These filters are trained to extract the richest information for a pre-defined task (e.g. face recognition). As face recognition models are trained to identify individuals, it is reasonable to think that the response of the models can slightly vary from one person to another. In order to visualize the response of the model to different faces, we consider the specific Class Activation MAP (CAM) proposed in [@30], named Grad-CAM. This visualization technique uses the gradients of any target concept, flowing into the selected convolutional layer to produce a coarse localization map. The resulting heat map highlights the activated regions in the image for the mentioned target (e.g. an individual identity in our case). Fig. \[Figure4\] represents the heat maps obtained by the ResNet-50 model for faces from different demographic groups. Additionally, we include the heat map obtained after averaging results from 120 different individuals from the six demographic groups included in DiveFace. The activation maps show clear differences between ethnic groups with the highest activation for *Group* 1 and the lowest for *Group* 3. These differences suggest that features extracted by the model are, at least, partially affected by the ethnic attributes. ![Examples of the six classes available in the DiveFace database (columns 1 to 4). Column 5 shows the averaged Class Activation MAP (first filter of the third convolutional block of ResNet-50) obtained from 20 random face images from each of the classes. Columns 1-4 show Class Activation MAPs for each of the face images. Maximum and minimum activations are represented by red and blue colors respectively. Average pixel value of the activation maps generated for the six classes (*Groups* 1 to 3, and *Male*/*Female*): G1M=0.23, G1F=0.19, G2M=0.21, G2F=0.18, G3M=0.12, G3F=0.13. (This is a colored image, see the digital version for a better quality.)[]{data-label="Figure4"}](AAAI/figura4.pdf) On a different front, we applied a popular data visualization algorithm to better understand the importance of ethnic features in the embedding space generated by deep models. t-SNE is an algorithm to visualize high-dimensional data. This algorithm minimizes the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. Fig. \[Figure5\] shows the projection of each face into a 2D space generated from ResNet-50 embeddings and the t-SNE algorithm. Additionally, we have colored each point according to its ethnic attribute. 
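A minimal sketch of this projection (our own illustration; the embedding matrix, the label array, and the file names are hypothetical placeholders rather than part of the released material):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# X: (N, d) array of ResNet-50 face embeddings; groups: (N,) ethnicity labels
X = np.load("resnet50_embeddings.npy")    # hypothetical file name
groups = np.load("ethnicity_labels.npy")  # hypothetical file name

# Project the high-dimensional embeddings onto a 2D space with t-SNE.
proj = TSNE(n_components=2, perplexity=30, init="pca",
            random_state=0).fit_transform(X)

# Color each projected point by its ethnic group.
for g, color in zip([1, 2, 3], ["tab:blue", "tab:orange", "tab:green"]):
    mask = groups == g
    plt.scatter(proj[mask, 0], proj[mask, 1], s=2, color=color,
                label=f"Group {g}")
plt.legend()
plt.show()
```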
As we can see, the resulting face representation forms three clusters highly correlated with the ethnicity attributes. Note that ResNet-50 has been trained for face recognition, not ethnicity detection. However, the ethnicity information is highly embedded in the feature space and a simple t-SNE algorithm reveals the presence of this information. These two simple experiments illustrate the presence and importance of ethnic attributes in the feature space generated by face deep models. **Algorithmic discrimination implications**: popular deep models trained for task *T* on biased databases (i.e., unequally represented classes $k$ for a given demographic criterion $d$ such as gender) result in feature spaces (corresponding to the solution $\theta^*$ of the Eq. \[eqn:goodnes\_criterion\]) that introduce strong differentiation between classes $k$. This differentiation affects the representation $\textbf{x}$ and enables classifying between classes $k$ using $\textbf{x}$, even though $\textbf{x}$ was learned for solving a different task ${T}$. Conclusions =========== This work has presented a comprehensive analysis of face recognition models according to a new discrimination-aware perspective. It also presents a new general formulation of algorithmic discrimination with application to face recognition. We have shown the high bias introduced when training the deep models with the most popular databases employed in the literature, and testing with the DiveFace dataset with well balanced data across demographic groups[^2]. We have evaluated two popular models according to the proposed formulation. Biased models based on competitive deep learning algorithms have been shown to be very sensitive to gender and ethnicity attributes. This sensitivity results in different feature representations and a large gap between performances depending on the ethnic origin. This gap between performances reached up to 200% relative error degradation between the best class (*Group* 1 *Male*) and the worst (*Group* 3 *Female*). These results suggest that false positives are 200% more likely in *Group* 3 *Female* than in *Group* 1 *Male* for the models evaluated in this work. These results encourage training more diverse models and developing methods capable of dealing with the differences inherent to demographic groups. Future work will follow this line of research, as in [@wang2019mitigate]. ![Projections of the ResNet-50 embeddings into the 2D space generated with t-SNE.[]{data-label="Figure5"}](AAAI/figura5.png){width="80mm"} Acknowledgments {#acknowledgments .unnumbered} =============== This work has been supported by projects: BIBECA (RTI2018-101248-B-I00 MINECO/FEDER), Bio-Guard (Ayudas Fundacion BBVA a Equipos de Investigacion Cientifica 2017). [^1]: Available at GitHub: https://github.com/BiDAlab/DiveFace [^2]: Available at GitHub: https://github.com/BiDAlab/DiveFace
{ "pile_set_name": "ArXiv" }
--- abstract: 'Pairs of pseudoscalar neutral mesons from decays of vector resonances are studied as bipartite systems in the framework of the density operator. Time-dependent quantum entanglement is quantified in terms of the entanglement entropy, and these dependences are demonstrated using data on correlated pairs of $\PK$ and $\PB$ mesons, as measured by the KLOE and Belle experiments. Other interesting characteristics of such bipartite systems are the moments of the CP distributions. These moments are directly measurable and they appear to be very sensitive to the initial degree of entanglement of a pair.' address: 'National Centre for Nuclear Research, Warsaw, Poland' author: - Wojciech Wiślicki title: 'Entanglement, fluctuations and discrete symmetries in particle decays [^1]' --- Introduction ============ Interferometry of neutral mesons is recognized as a powerful and sensitive tool for testing the fundamentals of quantum mechanics. In particular, pairs of pseudoscalar mesons, originating from strong decays of vector resonances, have proved to be particularly useful in many precision measurements due to their well-defined initial-state quantum numbers and relatively high production rates. The decays $\Pphi(1020)\rightarrow \PKl\PKs$, $\Ppsi(3770)\rightarrow \PDzero\APDzero$, $\PUpsilon(10580)\rightarrow \PBd\APBd$ and $\PUpsilon(10860)\rightarrow \PBs\APBs$ were recognized a long time ago as very useful for the study of CP violation [@branco], the validity of CPT and Lorentz invariance [@cpt] or the search for quantum decoherence [@frascati]. On the other hand, in the case of complex initial states, where the meson source has neither well-defined quantum numbers nor symmetry properties and can be a spatially extended object, as e.g. in nuclear collisions, meson interferometry is often used to study both the spatio-temporal characteristics and the degree of coherence of the source. In either case, a resonance with well-defined symmetry or a nuclear fireball, the meson pair can be considered as a quantum bipartite system with an arbitrary degree of entanglement in the initial state. In order to describe the entanglement and its dynamics, a formalism using the reduced density matrix and the entanglement entropy can be easily incorporated and applied to experimental data. This approach provides a measure of the entanglement in the course of the time evolution and an interesting insight into its dependence on the CP, the oscillation frequency or the degree of the initial-state entanglement. Reduced density matrices and the entropic approach to bipartite systems have a history of being applied to quantify quantum correlations in condensed matter physics [@song]. In the case of pairs of pseudoscalar mesons, many aspects of quantum entanglement, although not the entropic measures, are a subject of interest, e.g. in the context of decoherence [@frascati; @decoh] or testing the validity or violation of the CP, T or CPT symmetries [@othercpt]. Another aspect of the interrelations between the CP, entanglement and decay dynamics can be studied by treating the CP of one part of the bipartite system as a random variable and investigating its properties by measuring its moments: the mean value, variance, etc. It turns out that the time-dependent moments of CP, both their absolute values and shapes, are very sensitive to the initial-state entanglement and the decay dynamics of mesons. Such an approach is as yet unknown in the literature. 
Density matrix and entanglement entropy for bipartite systems ============================================================= Consider two entangled subsystems A and B in states $|\psi_{A,B}\rangle$ such that the whole bipartite system is in the state $|\psi\rangle=|\psi_A\rangle\otimes |\psi_B\rangle$. Each subsystem evolves according to its own Hamiltonian acting only in its subspace, $|\psi_{A,B}(t_{A,B})\rangle=\exp(-iH_{A,B}t_{A,B})|\psi_{A,B}\rangle$, where $t_A$ and $t_B$ are the proper times of evolution of the subsystems A and B. Initially $$\begin{aligned} |\psi(t_A=t_B=0)\rangle=\sqrt{\alpha}|\psi_A\rangle|\psi_B\rangle+\sqrt{1-\alpha}|\psi_B\rangle|\psi_A\rangle,\end{aligned}$$ where $0\le\alpha\le 1$ parametrizes an initial degree of entanglement between subsystems A and B. Using the state $|\psi(t_A,t_B)\rangle$, depending on two proper times and represented in any orthonormal basis, one defines the density operator $\rho(t_A,t_B)=|\psi(t_A,t_B)\rangle\langle\psi(t_A,t_B)|$ and the von Neumann entropy $$\begin{aligned} S(t_A,t_B)=-\mbox{Tr}\,\rho(t_A,t_B)\ln \rho(t_A,t_B). \label{eq2.9}\end{aligned}$$ Evolution of the density operator in time of any of the subsystems, $t_A$ or $t_B$, is represented by a unitary transformation defined by the subsystem’s Hamiltonian and acting only in its appropriate subspace: $$\begin{aligned} \rho(t_A,\,.\,) & = & e^{-iH_At_A}\,\rho(0,\,.\,)\,e^{iH_At_A} \nonumber \\ \rho(\,.\,,t_B) & = & e^{-iH_Bt_B}\,\rho(\,.\,,0)\,e^{iH_Bt_B}. \label{eq2.8}\end{aligned}$$ Since (\[eq2.8\]) represents unitary transformations, $\rho(t_A,t_B)$ has all the required properties of a density operator: Hermiticity, positivity and unit trace. Moreover, it has the same von Neumann entropy (\[eq2.9\]) as $\rho(0,0)$. Here the time evolution is naturally determined by the subsystems’ dynamics. This opens a way to test some possible extensions of the physical picture, e.g. by adding non-Standard-Model or non-Hermitean terms to $H_{A,B}$ and looking for their effects on the time evolution, modifications to the mesons’ lifetimes or masses, etc. By using the reduced density operator $\rho_A=\mbox{Tr}_B\rho$, obtained by tracing over the degrees of freedom of the subsystem B and integration over $t_B$, one defines the entanglement entropy $$\begin{aligned} S_A(t_A)=-\mbox{Tr}\,\rho_A(t_A)\ln \rho_A(t_A). \label{eq2.3}\end{aligned}$$ Since $S\le S_A+S_B$ (equality for unentangled subsystems) and $S_A=S_B$, the mutual information $I$ $$\begin{aligned} I(\rho) & = & S(\rho_A)+S(\rho_B)-S(\rho) \nonumber \\ & = & 2S_A-S\end{aligned}$$ quantifies the entanglement between subsystems A and B. Degree of entanglement from fits to time-dependent decay spectra of pairs of neutral mesons =========================================================================================== Availability of experimental data motivates us to perform calculations for two decays: $\Pphi(1020)\rightarrow \PKl\PKs$ [@kloe] and $\PUpsilon(4S)=\PUpsilon(10580)\rightarrow \PBd\APBd$ [@belle]. Flavour eigenstates of the final-state mesons exhibit particle-antiparticle mixing due to weak box processes [@branco]. We consider these decays in the rest frame of the initial resonance but decay times are measured each in the rest frame of the decaying neutral meson. Final-state mesons fly back-to-back with identical momenta and are detected using their decays in two detectors. For simplicity, we assume that both mesons decay to the same final state. 
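Before specializing to these decays, the entanglement entropy (\[eq2.3\]) can be illustrated with a short numerical sketch (our own, not part of the original analysis): it builds the initial state at $t_A=t_B=0$, performs the partial trace explicitly, and returns $\ln 2$ for $\alpha=1/2$ and zero for $\alpha\in\{0,1\}$.

```python
import numpy as np

def entanglement_entropy(alpha):
    """Entanglement entropy S_A of the initial state
    sqrt(alpha)|psi_A>|psi_B> + sqrt(1-alpha)|psi_B>|psi_A>,
    treating |psi_A>, |psi_B> as two orthonormal levels of each subsystem."""
    # Coefficient matrix psi[a, b] of the bipartite pure state
    psi = np.zeros((2, 2))
    psi[0, 1] = np.sqrt(alpha)         # subsystem A in |psi_A>, B in |psi_B>
    psi[1, 0] = np.sqrt(1.0 - alpha)   # subsystem A in |psi_B>, B in |psi_A>

    rho_A = psi @ psi.conj().T         # reduced density matrix Tr_B |psi><psi|
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]       # drop zero eigenvalues before the log
    return float(-np.sum(evals * np.log(evals)))

print(entanglement_entropy(0.5))   # ln 2 ~ 0.693, maximal entanglement
print(entanglement_entropy(1.0))   # 0.0, product (unentangled) state
```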
For $\Pphi(1020)\rightarrow \PKl\PKs$, where $J^{PC}(\Pphi)=1^{--}$, the final state has to be antisymmetric and at the moment of decay it reads $$\begin{aligned} |\psi(t_A=0,t_B=0)\rangle = 2^{-1/2}(\sqrt{\alpha}|\PKl\rangle_{A}|\PKs\rangle_{B}-\sqrt{1-\alpha}|\PKs\rangle_{A}|\PKl\rangle_{B}),\end{aligned}$$ where the subscripts $_{A,B}$ refer to the detectors. We consider only decays to the same final state $\Ppiplus\Ppiminus$. The decay intensity dependence on $\Delta t=|t_B-t_A|$, after integrating over $t_A+t_B$, reads $$\begin{aligned} I(\Delta t)=\alpha e^{-\Gamma_L\Delta t}+(1-\alpha)e^{-\Gamma_S\Delta t}-2\sqrt{\alpha(1-\alpha)}e^{-\bar\Gamma\Delta t}\cos(\Delta m\Delta t) \label{eq3.2}\end{aligned}$$ where $\Gamma_{L,S}$ stand for the decay rates of $\PKl,\PKs$, $\bar\Gamma=(\Gamma_L+\Gamma_S)/2$ and $\Delta m =m_L-m_S\sim 1\,\mbox{ns}^{-1}$. The long- and short-living kaons $\PKl$ and $\PKs$ were experimentally identified by their decay times. Fitting eq. (\[eq3.2\]) to the decay spectrum measured by KLOE [@kloefp] one gets $\alpha=0.71\pm 0.31$ (cf. fig. \[fig1\]). ![Left: Intensity spectrum of the decay time difference of pairs $\PKl,\PKs\rightarrow\Ppiplus\Ppiminus$ measured by KLOE [@kloe] with a fit of eq. (\[eq3.2\]); Right: Asymmetry between unmixed and mixed final states in semileptonic decays of $\PBz, \PaBz$, as measured by Belle [@belle] with a fit of eq. (\[eq3.3\]). In both panels, the statistical and systematic errors were combined.[]{data-label="fig1"}](./fit_kloe_ent.jpg "fig:") ![](./belle_asym.jpg "fig:") In the analogous case of $\PUpsilonFourS\rightarrow \PBd\APBd$, the antisymmetric final state parametrized by the degree of entanglement $\alpha$ is initially (neglecting the small CP-violation effect) equal to $$\begin{aligned} |\psi(t_A=t_B=0)\rangle & = & 2^{-1/2}(\sqrt{\alpha}|\PB_H\rangle_{A}|\PB_L\rangle_{B}-\sqrt{1-\alpha}|\PB_{L}\rangle_{A}|\PB_H\rangle_{B}) \nonumber \\ & = & 2^{-1/2}(\sqrt{\alpha}|\PBz\rangle_{A}|\PaBz\rangle_{B}-\sqrt{1-\alpha}|\PaBz\rangle_{A}|\PBz\rangle_{B})\end{aligned}$$ and from its time-dependent states we build up the time-dependent asymmetry between the flavour-unmixed and mixed states [@belle] $$\begin{aligned} A(\Delta t) & = & \frac{N_u(\Delta t)-N_m(\Delta t)}{N_u(\Delta t)+N_m(\Delta t)} \nonumber \\ & = & \frac{2\sqrt{\alpha(1-\alpha)}\cos(\Delta m\Delta t)}{\alpha e^{-\Delta\Gamma\Delta t/2}+(1-\alpha)e^{\Delta\Gamma\Delta t/2}}, \label{eq3.3}\end{aligned}$$ where $\Delta\Gamma=\Gamma_H-\Gamma_L$ is a small number consistent with zero within experimental errors and $\Delta m=m_H-m_L=0.506\,\mbox{ps}^{-1}$. The mixed and unmixed states are identified using the lepton sign of the decay $\PBz(\PaBz)\rightarrow \PB^{(\ast)-(+)}\Pmu^{+(-)}\Pnum(\APnum)$. By fitting eq. (\[eq3.3\]) to the Belle data [@belle] (cf. Fig. \[fig1\]) one obtains $\alpha=0.55\pm 0.07$. In both cases, the values of $\alpha$ from fits to data do not indicate any significant deviation from the maximal entanglement $\alpha=1/2$. It is important to note that non-maximal entanglement would violate the antisymmetry of the final state. 
That could indicate an ill-defined CPT operator and a particle-antiparticle identity, and lead to exotic and intriguing consequences [@bernabeu]. Entanglement entropies for pairs of neutral mesons ================================================== The density operator $\rho(t_A,t_B)$ has to be expressed in an orthonormal basis. In the case of neutral mesons, the orthonormal basis $(|\PK_1\rangle,|\PK_2\rangle )$ differs from the non-orthonormal one $(|\PKl\rangle,|\PKs\rangle )$ due to the CP violation, parametrized by a complex parameter $\varepsilon$, where $|\varepsilon |=2.2\times 10^{-3}$ and $\phi_{\varepsilon}=43.5^{\circ}$ are precisely known from experiment. Direct CP violation effects are much smaller and are neglected. For neutral $\PB$ mesons, the CP violation effect is smaller than $10^{-3}$ and can be neglected so that the basis $(|\PB_H\rangle, |\PB_L\rangle )$ is considered to be orthonormal. For kaons, the matrix elements of the reduced density matrix $\rho_{A_{i,j}}={}_B\langle \PK_{1,2}|\rho |\PK_{1,2}\rangle_B$, i.e., after tracing over the degrees of freedom of the meson in detector B, and after integrating over $t_B$ and to the order ${\mathcal O}(\varepsilon)$, read $$\begin{aligned} \rho_{A_{11}}(t) & = & \frac{\alpha}{\Gamma_L}e^{-\Gamma_St} \nonumber \\ \rho_{A_{22}}(t) & = & \frac{1-\alpha}{\Gamma_S}e^{-\Gamma_Lt} \nonumber \\ \rho_{A_{12}}(t) & = & \frac{\varepsilon^{\ast}\alpha}{\Gamma_L}e^{-\Gamma_St} + \frac{\varepsilon(1-\alpha)}{\Gamma_S}e^{-\Gamma_Lt} \nonumber \\ & + & \frac{2(\Re\varepsilon)\sqrt{\alpha(1-\alpha)}}{\bar\Gamma^2+(\Delta m)^2}e^{-\bar\Gamma t}(\bar\Gamma e^{i(\Delta m)t}+(\Delta m)e^{i(\Delta m-\pi/2)t}) \nonumber \\ \rho_{A_{21}}(t) & = & \rho_{A_{12}}^\ast(t). \label{eq4.1}\end{aligned}$$ Since the mesons decay, the density operator has to be renormalized, $\rho_A(t)\rightarrow \rho_A(t)/\mbox{Tr}\rho_A(t)$, in order to meet the normalization requirement $\mbox{Tr}\rho_A(t)=1$. Using eq. (\[eq4.1\]), the entanglement entropy (\[eq2.3\]) is found to be ($t$-dependence omitted for simplicity) $$\begin{aligned} S_A & = & -\rho_{A_{11}}\ln\rho_{A_{11}}-\rho_{A_{22}}\ln\rho_{A_{22}} \nonumber \\ & - & 2[(\Re\,\rho_{A_{12}})\ln |\rho_{A_{12}}|+(\Im\,\rho_{A_{12}})\arg \rho_{A_{12}}]. \label{eq4.2}\end{aligned}$$ Similar formulae to eqns (\[eq4.1\]) and (\[eq4.2\]) can be found for $\PB$ mesons. 
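The time dependence implied by eq. (\[eq4.1\]) can be sketched numerically as follows (our own illustration); the kaon decay rates are approximate experimental values inserted purely for illustration, not quantities quoted above, and the ${\mathcal O}(\varepsilon)$ off-diagonal terms are dropped since $|\varepsilon|\approx 2\times 10^{-3}$ affects $S_A$ only marginally.

```python
import numpy as np

# Approximate K_S and K_L decay rates in 1/ns, obtained from the well-known
# lifetimes (~0.09 ns and ~51 ns); illustrative inputs only.
GAMMA_S = 1.0 / 0.0896
GAMMA_L = 1.0 / 51.2

def entropy_SA_kaons(t, alpha):
    """S_A(t) for the kaon pair, keeping only the diagonal elements
    rho_A11 and rho_A22 of the reduced density matrix."""
    p1 = (alpha / GAMMA_L) * np.exp(-GAMMA_S * t)          # rho_A11
    p2 = ((1.0 - alpha) / GAMMA_S) * np.exp(-GAMMA_L * t)  # rho_A22
    p1, p2 = p1 / (p1 + p2), p2 / (p1 + p2)                # renormalize: Tr rho_A = 1
    probs = np.array([p1, p2])
    probs = probs[probs > 1e-15]
    return float(-np.sum(probs * np.log(probs)))

for t in (0.0, 0.25, 0.5, 1.0, 2.0):   # proper time in ns
    print(f"t = {t:4.2f} ns,  S_A = {entropy_SA_kaons(t, alpha=0.5):.3f}")
```

The resulting curves can be compared with the behaviour shown in Fig. \[fig2\].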
![Upper left: Time evolution of the entanglement entropy $S_A$ for neutral $\PK$’s, for a number of entanglement parameters $\alpha$; Upper right: $\alpha$-dependence of $S_A$ for a number of times; Lower left and right: the same dependencies for neutral $\PB$’s.[]{data-label="fig2"}](./SA.jpg "fig:") ![](./SA_alpha.jpg "fig:") ![](./SA_t.jpg "fig:") ![](./SA_alpha_b.jpg "fig:") Fig. \[fig2\] presents the entanglement entropy dependence on time and the initial entanglement degree $\alpha$. For $\PK$ mesons, the entanglement entropy exhibits an interesting dependence on $\alpha$, and a time-dependence governed by the oscillation frequency $\Delta m$ and the large lifetime difference between $\PKs$ and $\PKl$. The time when the entanglement is maximal during evolution strongly depends on $\alpha$ and is related to the location of the interference maximum. Contrary to kaons, the initial value of $S_A$ for pairs of $\PB$’s is very sensitive to $\alpha$ but its later time dependence is weak, due to the much smaller difference between the lifetimes of $\PB_H$ and $\PB_L$. However, for a given $\alpha$, the average entanglement entropy at later times is larger for $\PB$’s than for $\PK$’s. The dependence of $S_A$ on $\alpha$ for $\PB$’s is almost the same for all times. Fluctuations of CP ================== Mesons in pairs originating from decays of vector resonances carry opposite CP. Since the direction of emission of a meson with given CP is purely random, the CP at a given detector could be naively expected to be a simple, binary random variable. But quantum entanglement, the dynamics of time evolution and the initial degree of entanglement make this observable less obvious and more intriguing. We show here some of its interesting dependencies and argue that the moments of CP are sensitive probes of the initial entanglement $\alpha$ of a pair. Unlike the entanglement entropy, this observable does not quantify the time-dependent entanglement itself but the CP registered by one detector which, contrary to the naive expectation, exhibits a highly non-trivial and $\alpha$-sensitive time-dependence. Moments of CP can be found using the time-dependent moment generating function $\chi(\lambda,t)$ and differentiating it to obtain the cumulants $C_n$, $n=1,2,\ldots$ $$\begin{aligned} \chi(\lambda,t) & = & \langle \exp (i\lambda\cdot \mbox{CP}_A)\rangle \nonumber \\ & = & \sum_m P(\mbox{CP}_A=m) e^{i\lambda m}, \nonumber \\ \nonumber \\ C_n & = & \big (-i\frac{\partial}{\partial\lambda}\big )^n\ln\chi(\lambda,t)|_{\lambda=0},\end{aligned}$$ where $\langle \ldots\rangle$ stands for the expected value in the state $|\psi(t)\rangle$ and $\mbox{CP}_A$ is the CP registered in detector A. 
The first two moments $C_1$ and $C_2$ correspond to the expected value and variance, respectively. Notably, these moments are directly measurable by registering the identified long- or short-living mesons in one of the detectors and correcting for CP violation. For $\Pphi(1020)\rightarrow \PKl\PKs$, keeping only terms to linear order in $\varepsilon$, the cumulant generating function is equal to $$\begin{aligned} \chi(\lambda,t) & = & P(\mbox{CP}_A=+1)e^{i\lambda}+ P(\mbox{CP}_A=-1)e^{-i\lambda} \nonumber \\ & = & \frac{\alpha e^{-\Gamma_St}e^{-i\lambda}}{\Gamma_L^2/4+m_L^2} +\frac{(1-\alpha) e^{-\Gamma_Lt}e^{i\lambda}}{\Gamma_S^2/4+m_S^2}.\end{aligned}$$ The first moment, or the mean value, of CP is equal to $$\begin{aligned} C_1(t,\alpha) & = & \langle \mbox{CP}_A(t)\rangle \nonumber \\ & = & \frac{\alpha-(1-\alpha)\frac{\Gamma_L^2+4m_L^2}{\Gamma_S^2+4m_S^2}e^{\Delta\Gamma t}}{\alpha+(1-\alpha)\frac{\Gamma_L^2+4m_L^2}{\Gamma_S^2+4m_S^2}e^{\Delta\Gamma t}}. \label{eq5.4}\end{aligned}$$ In particular $$\begin{aligned} C_1(t,\alpha) & \xrightarrow[t\rightarrow 0]{} & 2\alpha -1 \nonumber \\ C_1(t,\alpha) & \xrightarrow[t\rightarrow \infty]{} & -1.\end{aligned}$$ Similar formulae are valid for $\PB$ mesons. In the limit of long time, only the long-living components with CP=-1 survive and all short-living ones with CP=+1 die out. This effect is clearly seen for $\PK$ mesons where the difference between the $\PK_1$ and $\PK_2$ lifetimes is large. For the $\PB$ mesons, the time dependence is qualitatively the same but weaker since the $\PB_H$ and $\PB_L$ lifetimes are close to each other. Fig. \[fig3\] presents the $C_1$ time-dependence for the $\PK$ and $\PB$ pairs. ![Left: the time dependence of the mean CP in detector A for neutral $\PK$ meson pairs, parametrized by the degree of initial coherence $\alpha$; Right: the same for the $\PB$ meson pairs.[]{data-label="fig3"}](./MeanCP.jpg "fig:") ![](./MeanCP_t_Belle.jpg "fig:") The second moment, or the variance of CP, is equal to $$\begin{aligned} C_2(t,\alpha) & = & \langle \mbox{CP}_A^2\rangle - \langle \mbox{CP}_A\rangle ^2 \nonumber \\ & = & 1 - \bigg[\frac{\alpha-(1-\alpha)\frac{\Gamma_L^2+4m_L^2}{\Gamma_S^2+4m_S^2}e^{\Delta\Gamma t}}{\alpha+(1-\alpha)\frac{\Gamma_L^2+4m_L^2}{\Gamma_S^2+4m_S^2}e^{\Delta\Gamma t}}\bigg]^2\end{aligned}$$ and similar formulae hold for $\PB$ mesons. Particular values are $$\begin{aligned} C_2(t,\alpha) & \xrightarrow[t\rightarrow 0]{} & 4\alpha (1-\alpha) \nonumber \\ C_2(t,\alpha) & \xrightarrow[t\rightarrow \infty]{} & 0.\end{aligned}$$ For large values of $t$ only the long-living components $\PKl$ and $\PB_H$ survive and the CP distribution narrows down to zero width. 
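These limits are easy to verify numerically. In the minimal sketch below (our own illustration) the kaon decay rates are approximate experimental values, the mass-dependent prefactor in eq. (\[eq5.4\]) is set to one since $m_L\simeq m_S$ vastly exceeds both widths, and $\Delta\Gamma$ is taken as $\Gamma_S-\Gamma_L$ so that the stated limits are reproduced.

```python
import numpy as np

GAMMA_S = 1.0 / 0.0896   # approximate K_S decay rate (1/ns), illustrative only
GAMMA_L = 1.0 / 51.2     # approximate K_L decay rate (1/ns), illustrative only

def cp_mean_and_variance(t, alpha):
    """Mean C1(t, alpha) and variance C2(t, alpha) of the CP registered in
    detector A, with the mass-dependent prefactor approximated by one."""
    r = (1.0 - alpha) * np.exp((GAMMA_S - GAMMA_L) * t)
    c1 = (alpha - r) / (alpha + r)
    return c1, 1.0 - c1 ** 2

print(cp_mean_and_variance(0.0, 0.7))  # (0.4, 0.84) = (2*alpha - 1, 4*alpha*(1 - alpha))
print(cp_mean_and_variance(2.0, 0.7))  # approaches (-1, 0) at late times
```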
![Left: the time dependence of the variance of CP in detector A for neutral $\PK$ meson pairs, parametrized by the degree of initial coherence $\alpha$; Right: the same for the $\PB$ meson pairs.[]{data-label="fig4"}](./VarCP.jpg "fig:") ![](./VarCP_Belle.jpg "fig:") The variance can be non-monotonic only for kaons and the maximum is located at $t_0=\frac{1}{\Delta \Gamma}\ln\big[\alpha(1-\alpha)\frac{\Gamma_S^2+4m_S^2}{\Gamma_L^2+4m_L^2}\big]$ and $t_0>0$ only for $\alpha\gtrsim 0.5$. The $t_0$ becomes infinite as $\alpha\rightarrow 1$. Conclusions =========== In this paper we propose an approach providing new insight into quantum entanglement in neutral meson pairs. Such a pair is treated as a quantum bipartite system where the degree of initial entanglement is allowed to be a free parameter, thus allowing for a possible imperfect coherence in the initial decay and quark hadronization. At the same time, it also allows testing the parity of the final state and thus examining the correctness of the fundamental assumptions of Bose-Einstein symmetry and CPT invariance. The entanglement parameter was determined from data on the decays $\Pphi(1020)\rightarrow \PKl\PKs$ and $\PUpsilon(4S)\rightarrow \PBd\APBd$ and found to be consistent with maximal entanglement, although with a rather large error. In order to quantify the degree of entanglement in the course of the time evolution of a system, the entanglement entropy is calculated and discussed. This quantity appears to be very sensitive to the initial entanglement and exhibits interesting dynamics. Other quantities, strongly dependent on both the entanglement and the dynamics of meson decays, are the moments of the CP of one subsystem. These observables exhibit a stronger and non-monotonic time dependence for pairs of $\PK$ mesons than for $\PB$ mesons due to the larger lifetime difference between the components of opposite CP. This work was supported by the NCN grant 2013/08/M/ST2/00323. G.C. Branco, L. Lavoura and J.P. Silva, [*CP Violation*]{}, Clarendon Press, Oxford, 2007 KLOE-2: D. Babusci et al., [*Phys. Lett.*]{} B730 (2014) 89, LHCb: R. Aaij et al., [*Phys. Rev. Lett.*]{} 116 (2016) 241601 , ed. A. di Domenico, Frascati Physics Series, vol. XLIII, Frascati, 2007 H. Francis Song et al., [*Phys. Rev.*]{} B85 (2012) 035409 J. Bernabeu et al., [*Phys. Rev.*]{} D74 (2006) 045014 M. Nebot, [*J. Phys. Conf. Ser.*]{} 873 (2017) 012024,\ A. di Domenico, [*Acta Phys. Polon.*]{} A127 (2015) 1563,\ J. Bernabeu and F. Martinez-Vidal, [*Rev. Mod. Phys.*]{} 87 (2015) 165,\ Zhije Huang and Yu Shi, [*Phys. Rev.*]{} D89 (2014) no.1, 016018 KLOE: F. Ambrosino et al., [*Phys. Lett.*]{} B636 (2006) 173 Belle: A. Go et al., [*Phys. Rev. Lett.*]{} 99 (2007) 131802 KLOE: A. di Domenico et al., [*Found. Phys.*]{} 40 (2010) 852 J. Bernabeu et al., [*Nucl. Phys.*]{} B744 (2006) 180 [^1]: Presented at the Workshop on [*Discrete Symmetries and Entanglement*]{}, Jagiellonian University, Cracow, Poland, 9-11 June 2017
{ "pile_set_name": "ArXiv" }
--- abstract: 'Discontinuous Galerkin (DG) methods for hyperbolic partial differential equations (PDEs) with explicit time-stepping schemes, such as strong stability-preserving Runge-Kutta (SSP-RK), suffer from time-step restrictions that are significantly worse than what a simple Courant-Friedrichs-Lewy (CFL) argument requires. In particular, the maximum stable time-step scales inversely with the highest degree in the DG polynomial approximation space and becomes progressively smaller with each added spatial dimension. In this work we introduce a novel approach that we have dubbed the regionally implicit discontinuous Galerkin (RIDG) method to overcome these small time-step restrictions. The RIDG method is based on an extension of the Lax-Wendroff DG (LxW-DG) method, which previously had been shown to be equivalent to a predictor-corrector approach, where the predictor is a locally implicit spacetime method (i.e., the predictor is something like a block-Jacobi update for a fully implicit spacetime DG method). The corrector is an explicit method that uses the spacetime reconstructed solution from the predictor step. In this work we modify the predictor to include not just local information, but also neighboring information. With this modification we show that the stability is greatly enhanced; in particular, we show that we are able to remove the polynomial degree dependence of the maximum time-step and show how this extends to multiple spatial dimensions. A semi-analytic von Neumann analysis is presented to theoretically justify the stability claims. Convergence and efficiency studies for linear and nonlinear problems in multiple dimensions are accomplished using a [matlab]{} code that can be freely downloaded.' author: - 'Pierson T. Guthrey[^1]' - 'James A. Rossmanith[^2]' title: '[[The Regionally-Implicit Discontinuous Galerkin Method: Improving the Stability of DG-FEM]{}]{}[^3]' --- discontinuous Galerkin, hyperbolic conservation laws, Courant-Friedrichs-Lewy condition, time-setpping, numerical stability 65M12, 65M60, 35L03 Introduction {#sec:intro} ============ Hyperbolic conservation laws model phenomena characterized by waves propagating at finite speeds; examples include the shallow water (gravity waves), compressible Euler (sound waves), Maxwell (light waves), magnetohydrodynamic (magneto-acoustic and Alfvén waves), and Einstein (gravitational waves) equations. In recent years, the discontinuous Galerkin (DG) finite element method (FEM) has become a standard approach for solving hyperbolic conservation laws alongside other methods such as weighted essentially non-oscillatory (WENO) schemes (e.g., see Shu [@article:ShuWENO2009]) and various finite volume methods (e.g., see LeVeque [@book:Le02]). The DG method was first introduced by Reed and Hill [@article:ReedHill73] for neutron transport, and then fully developed for time-dependent hyperbolic conservation laws in a series of papers by Cockburn, Shu, and collaborators (see [@article:CoShu98] and references therein for details). An important feature of DG methods is that they can, at least in principle, be made arbitrarily high-order in space by increasing the polynomial order in each element; and therefore, the DG method is an example of a spectral element method (e.g., see Chapter 7.5 of Karniadakis and Sherwin [@book:KaSh2005]). If DG is only used to discretize the spatial part of the underlying PDE, it remains to also introduce a temporal discretization. 
Many time-stepping methods are possible, including various explicit and implicit schemes. In general, one time-step of an explicit scheme is significantly cheaper than an implicit one; the trade-off is that implicit schemes usually allow for larger time-steps. In many applications involving hyperbolic conservation laws, however, it is necessary to resolve the fastest time scales, in which case explicit methods are more efficient and easier to implement than implicit ones. An upper bound on the largest allowable time-step for explicit schemes is provided by the Courant-Friedrichs-Lewy (CFL) condition, which requires that the domain of dependence of the numerical discretization subsumes the domain of dependence of the continuous PDE [@article:CFL1928]. For example, a 1D hyperbolic PDE for which information propagates at a maximum wave speed of $\lambda_{\text{max}}$, on a uniform mesh of elements of size $h=\Delta x$, and with a time-stepping method that updates the solution on the element $\Tm_i^h$ only using existing solution values from $\Tm_{i-1}^h$, $\Tm_{i}^h$, and $\Tm_{i+1}^h$, has the following constraint on $\Delta t$: $$\nu := \frac{\lambda_{\text{max}} \Delta t}{\Delta x} \le 1.$$ This has a clear physical interpretation: a wave that emanates from the boundaries of element $i$ that is traveling at the maximum speed, $\lambda_{\text{max}}$, is not allowed to propagate further than one element width. If we wanted to allow the wave to travel more than one element width, we would need to widen the numerical stencil. The CFL condition as described above is a [*necessary*]{} condition for stability (and therefore convergence), but it is not [*sufficient*]{}. For high-order DG methods with explicit time-stepping, a fact that is well-known in the literature is that the actual maximum linearly stable value of the CFL number, $\nu = {\lambda_{\text{max}} \Delta t}/{\Delta x}$, is significantly smaller than what the CFL condition predicts (see for example Liu et al. [@article:LiuShuTadmorZhang08] and Sections 4.7 and 4.8 of Hesthaven and Warburton [@book:HesWar2007]). Two popular explicit time-stepping schemes for DG are strong-stability-preserving Runge-Kutta DG (SSP-RK) [@article:GoShu98; @gottliebShuTadmor01] and Lax-Wendroff [@article:GasDumHinMun2011; @article:QiuDumShu2005]. SSP-RK time-steps are one-step multistage Runge-Kutta methods that can be written as convex combinations of forward Euler steps. Lax-Wendroff utilizes the Cauchy-Kovalevskaya [@article:Kovaleskaya1875] procedure to convert temporal derivatives into spatial derivatives; the name Lax-Wendroff is due to the paper of Lax and Wendroff [@article:LxW1960]. In \[table:CFL\_gap\] we illustrate for both the SSP-RK and Lax-Wendroff DG methods the gap between the CFL condition, a necessary but not sufficient condition for stability, and the semi-analytically computed maximum CFL number needed for linear stability. Shown are the methods with space and time order $k=1,2,3,4$. The SSP-RK DG numbers are reported from Liu et al. [@article:LiuShuTadmorZhang08], while the Lax-Wendroff DG numbers are from von Neumann analysis done in this paper. Note that the maximum CFL number from the CFL condition for SSP-RK DG grows with $k$ due to the fact that the number of Runge-Kutta stages grows with $k$; and therefore, the numerical domain of dependence is increased. For both sets of methods, the clear trend is that the maximum CFL numbers are much smaller than what a simple CFL domain of dependence argument would dictate. 
In particular, the relationship between the maximum CFL number and the order of the method is roughly: $\nu_{\text{max}} \propto 1/k$. The goal of this paper is to develop an alternative time discretization for DG that allows for a linearly stable time-step that is closer to what is predicted by the CFL condition. The starting point of this work is the interpretation of the Lax-Wendroff DG method developed by Gassner et al. [@article:GasDumHinMun2011], where it was shown that Lax-Wendroff DG can be formulated as a predictor-corrector method. The predictor is a local version of a spacetime DG method [@article:KlaVegVen2006; @article:Sudirham2006] (i.e., the predictor is something like a block-Jacobi update for a fully implicit spacetime DG method), and the corrector is an explicit method that uses the spacetime reconstructed solution from the predictor step. In this work we modify the predictor to include not just local information, but also neighboring information. The name that we are giving to this new approach is the [*regionally-implicit*]{} discontinuous Galerkin (RIDG) scheme, which contrasts with the [*locally-implicit*]{} (LIDG) formulation of the Lax-Wendroff DG scheme developed by Gassner et al. [@article:GasDumHinMun2011]. In this new formulation, we are able to achieve all of the following: - Develop RIDG schemes for 1D, 2D, and 3D advection; - Show that RIDG has larger maximum CFL numbers than explicit SSP-RK and Lax-Wendroff DG; - Show that the maximum linearly stable CFL number is bounded below by a constant that is independent of the polynomial order; - Demonstrate experimentally the correct convergence rates on 1D, 2D, and 3D advection examples. - Demonstrate experimentally the correct convergence rates on 1D and 2D nonlinear examples. All of the methods described in this work are written in a [matlab]{} code that can be freely downloaded [@code:ridg-code]. The organization of this paper is as follows. In \[sec:dg-fem-space\] we briefly review how space is discretized in the discontinuous Galerkin (DG) method. In \[sec:one-dimension\] we review the Lax-Wendroff DG scheme, then develop the one-dimensional version of the proposed regionally implicit DG (RIDG) scheme, and carry out von Neumann stability analysis for both methods. The generalization to multiple dimensions is done in \[sec:higher-dimensions\]. In \[sec:results\] we carry out numerical convergence tests to validate the new approach and to quantify the computational efficiency of RIDG relative to the Lax-Wendroff method. Finally, in \[sec:burgers\] we show how to extend the method to a nonlinear scalar problem: the 1D and 2D Burgers equation. In this case we compare the efficiency and accuracy of our method against the fourth-order Runge-Kutta discontinuous Galerkin (RKDG) scheme.

|               | SSP-RK DG $k=1$ | $k=2$ | $k=3$ | $k=4$ | Lax-Wendroff DG $k=1$ | $k=2$ | $k=3$ | $k=4$ |
|---------------|-----------------|-------|-------|-------|-----------------------|-------|-------|-------|
| **CFL cond.** | 1.00            | 2.00  | 3.00  | 4.00  | 1.000                 | 1.000 | 1.000 | 1.000 |
| **Neumann**   | 1.00            | 0.33  | 0.13  | 0.10  | 1.000                 | 0.333 | 0.171 | 0.104 |

: Shown here are the maximum CFL numbers for the SSP-RK DG and Lax-Wendroff DG methods with the same time and space order of accuracy. The line labeled “CFL cond.” is the upper bound of the CFL number as predicted just by looking at the domain of dependence of the numerical method. The line labeled “Neumann” is the numerically calculated maximum linearly stable CFL number. 
The SSP-RK DG numbers are reported from Liu et al. [@article:LiuShuTadmorZhang08], while the Lax-Wendroff DG numbers are from von Neumann analysis done in this paper. For both sets of methods, the clear trend is that the maximum linearly stable CFL numbers are much smaller than what a simple CFL domain of dependence argument would dictate. \[table:CFL\_gap\] DG-FEM spatial discretization {#sec:dg-fem-space} ============================= Consider hyperbolic conservation laws of the form $$\label{eqn:conslaw} {{{\underline{q}}}}_{,t} + {{{{\underline{\nabla}}}} \cdot}{{{\underline{{\underline{F}}}}}}\left( {{{\underline{q}}}} \right) = {{{\underline{0}}}},$$ where ${{{\underline{q}}}}\left(t,{{{\underline{x}}}} \right): {\mathbb R}^+ \times {\mathbb R}^\mdim \mapsto {\mathbb R}^\meq$ is the vector of conserved variables, ${{{\underline{{\underline{F}}}}}}\left({{{\underline{q}}}}\right): {\mathbb R}^\meq \mapsto {\mathbb R}^{\meq \times \mdim}$ is the flux function, $\mdim$ is the number of spatial dimensions, and $\meq$ is the number of conserved variables. We assume that the system is hyperbolic, which means that the flux Jacobian, $${{{\underline{{\underline{A}}}}}}\left( {{{\underline{q}}}}; {{{\underline{n}}}} \right) = \frac{\partial \left( {{{\underline{n}}}} \cdot {{{\underline{{\underline{F}}}}}} \right)}{\partial {{{\underline{q}}}}},$$ for all ${{{\underline{q}}}} \in {\mathcal S} \subset {\mathbb R}^\meq$, where ${\mathcal S}$ is some physically meaningful convex subset of ${\mathbb R}^\meq$, and for all directions, ${{{\underline{n}}}} \in {\mathbb R}^\mdim$ such that $\| {{{\underline{n}}}} \| = 1$, must be diagonalizable with only real eigenvalues (e.g., see Chapter 18 of LeVeque [@book:Le02]). Next consider discretizing system \[eqn:conslaw\] in space via the discontinuous Galerkin (DG) method, which was first introduced by Reed and Hill [@article:ReedHill73] for neutron transport, and then fully developed for time-dependent hyperbolic conservation laws in a series of papers by Bernardo Cockburn, Chi-Wang Shu, and collaborators (see [@article:CoShu98] and references therein for details). We define $\Omega \subset {\mathbb R}^\mdim$ to be a polygonal domain with boundary $\partial \Omega$, and discretize $\Omega$ using a finite set of non-overlapping elements, $\Tm_i$, such that $\cup_{i=1}^\melems \Tm_i = \Omega$, where $\melems$ is the total number of elements. Let ${\mathbb P}\left(\mdeg, \mdim \right)$ denote the set of polynomials from ${\mathbb R}^\mdim$ to ${\mathbb R}$ with maximal polynomial degree $\mdeg$[^4]. On the mesh of $\melems$ elements we define the [*broken*]{} finite element space: $$\label{eqn:broken_space} \WS^h := \left\{ {{{\underline{w}}}}^h \in \left[ L^{\infty}(\Omega) \right]^{\meq}: \, {{{\underline{w}}}}^h \bigl|_{\Tm_i} \in \left[ {\mathbb P} \left(\mdeg, \mdim \right) \right]^{\meq} \, \, \forall \Tm_i \right\},$$ where $h$ is the grid spacing, $M_{\text{dim}}$ is the number of spatial dimensions, $\meq$ is the number of conserved variables, and $\mdeg$ is the maximal polynomial degree in the finite element representation. The above expression means that ${{{\underline{w}}}} \in \WS^h$ has $\meq$ components, each of which when restricted to some element $\Tm_i$ is a polynomial in ${\mathbb P}\left(\mdeg, \mdim \right)$, and no continuity is assumed across element faces. 
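As a quick sanity check on the size of the spaces just introduced, the per-element dimension of ${\mathbb P}\left(\mdeg, \mdim \right)$ and the total number of unknowns in the broken space can be counted in a few lines of [matlab]{}; all values below are purely illustrative:

```matlab
% Degrees of freedom in the broken finite element space:
% dim P(Mdeg, Mdim) = nchoosek(Mdeg + Mdim, Mdim) per element and per
% conserved variable (illustrative numbers only).
Mdeg = 3;  Mdim = 2;  Meqn = 1;  Nelems = 320^2;
Mbasis     = nchoosek(Mdeg + Mdim, Mdim);   % = 10 basis functions per element
total_dofs = Nelems * Meqn * Mbasis         % unknowns in the semi-discrete system
```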
Let $\varphi_{k}\left({{{\underline{x}}}}\right)$ for $k=1,\ldots,\mbasis$ be an appropriate basis that spans ${\mathbb P} \left(\mdeg, \mdim \right)$ over $\Tm_i$ (e.g., Legendre or Lagrange polynomials). In order to get the DG semi-discretization, we multiply \[eqn:conslaw\] by $\varphi_{k} \in {\mathbb P} \left(\mdeg, \mdim \right)$, integrate over the element $\Tm_i$, use integration-by-parts in space, and replace the true solution, ${{{\underline{q}}}}$, by the following ansatz: $${{{\underline{q}}}}^h\left(t, {{{\underline{x}}}} \right) \Bigl|_{\Tm_i} = \sum_{\ell=1}^{\mbasis} {{{\underline{Q}}}}_{i}^{\ell}(t) \, \varphi_{\ell}\left({{{\underline{x}}}}\right).$$ All of these steps result in the following semi-discrete system: $$\label{eqn:semi_discrete_dg} \sum_{\ell=1}^{\mbasis} \left[ \int_{\Tm_i} \varphi_{k} \varphi_{\ell} \, d{{{\underline{x}}}} \right] \frac{d{{{\underline{Q}}}}^{\ell}}{dt} = \int_{\Tm_i} {{{\underline{{\underline{F}}}}}}\left( \, {{{\underline{q}}}}^h \right) \cdot {{{{\underline{\nabla}}}}}\varphi_{k} \, d{{{\underline{x}}}} - \oint_{\partial \Tm_i} \varphi_k \, {{{\underline{{\mathcal F}}}}}\left({{{\underline{q}}}}^h_{+}, {{{\underline{q}}}}^h_{-}; {{{\underline{n}}}} \right) \, d{{{\underline{s}}}},$$ where ${{{\underline{n}}}}$ is an outward-pointing normal vector to $\partial \Tm_i$, ${{{\underline{q}}}}^h_{+}$ and ${{{\underline{q}}}}^h_{-}$ are the states on either side of the boundary $\partial \Tm_i$, and ${{{\underline{{\mathcal F}}}}}$ is the numerical flux, which must satisfy the following two conditions:

- Consistency: ${{{\underline{\NF}}}}\left( \, {{{\underline{q}}}}, \, {{{\underline{q}}}}; \, {{{\underline{n}}}} \right) = {{{\underline{{\underline{F}}}}}}\left( \, {{{\underline{q}}}} \, \right) \cdot {{{\underline{n}}}}; $

- Conservation: ${{{\underline{\NF}}}}\left({{{\underline{q}}}}^h_-, \, {{{\underline{q}}}}^h_+; \, {{{\underline{n}}}} \right) = -{{{\underline{\NF}}}}\left({{{\underline{q}}}}^h_+, \, {{{\underline{q}}}}^h_-; \, -{{{\underline{n}}}} \right)$.

Equation \[eqn:semi\_discrete\_dg\] represents a large system of coupled ordinary differential equations in time.

RIDG in one space dimension {#sec:one-dimension}
===========================

We present in this section the proposed regionally-implicit discontinuous Galerkin (RIDG) method as applied to a one-dimensional advection equation. Each RIDG time-step is composed of two key steps: a predictor and a corrector. The predictor is a truncated version of an implicit spacetime DG approximation, which is not consistent, at least by itself, with the PDE that it endeavors to approximate. The corrector is a modified forward Euler step that makes use of the predicted solution; this step restores consistency, and indeed, high-order accuracy, with the underlying PDE. In the subsections below we begin with a brief description of the advection equation in \[sec:RIDG1D\_advection\]. We then review the Lax-Wendroff (aka locally-implicit) prediction step in \[sec:LIDG1D\_predict\], which provides the motivation for RIDG. The RIDG prediction step is developed in \[sec:RIDG1D\_predict\]. The correction step for both predictors is detailed in \[sec:RIDG1D\_correct\]. Finally, we carry out semi-analytic von Neumann analysis for both schemes in \[sec:RIDG1D\_stability\] and demonstrate the improved stability of RIDG over Lax-Wendroff DG.
1D advection equation {#sec:RIDG1D_advection}
---------------------

We consider here the 1D advection equation for $(t,x) \in [0,T] \times \Omega$, along with some appropriate set of boundary conditions: $$\label{eqn:adv1d} q_{,t} + u q_{,x} = 0.$$ Next, we introduce a uniform Cartesian spacetime mesh with spacetime elements: $$\label{eqn:spacetime_elem_1d} {\mathcal S}^{n+1/2}_i = \left[t^{n}, t^n + \Delta t \right] \times \left[ x_i - {\Delta x}/{2}, x_i + {\Delta x}/{2} \right],$$ which can be written in local coordinates, $[\tau, \xi] \in [-1,1]^2$, where $$t = t^{n+1/2} + \tau \left( {\Delta t}/{2} \right) \quad \text{and} \quad x = x_i + \xi \left( {\Delta x}/{2} \right).$$ In these local coordinates the advection equation \[eqn:adv1d\] becomes $$\label{eqn:adv1d_nondim} q_{,\tau} + \nu q_{,\xi} = 0, \quad \text{where} \quad \nu = \frac{u \Delta t}{\Delta x},$$ and $|\nu|$ is the CFL number.

Lax-Wendroff DG (aka LIDG) prediction step {#sec:LIDG1D_predict}
------------------------------------------

We review here the prediction step for the Lax-Wendroff DG scheme as formulated by Gassner et al. [@article:GasDumHinMun2011]. In order to contrast with the proposed RIDG method, we will refer to this method as the locally-implicit DG (LIDG) method. We fix the largest polynomial degree to $\mdeg$ in order to eventually achieve an approximation that has an order of accuracy ${\mathcal O}\left(\Delta x^{\mdeg+1} + \Delta t^{\mdeg+1}\right)$. At the old time, $t=t^n$, we are given the following approximate solution on each space element, $\Tm_i = \left[x_i - \Delta x/2, \, x_i + \Delta x/2\right]$: $$\label{eqn:old_ansatz} q(t^{n},x) \Bigl|_{{\mathcal T}_i} \approx q^{n}_{i} := {{{\underline{\Phi}}}}^T {{{\underline{Q}}}}^{n}_i,$$ where ${{{\underline{Q}}}}^{n}_i \in {\mathbb R}^{\mcorr}$, ${{{\underline{\Phi}}}} \in {\mathbb R}^{\mcorr}$, $\mcorr:=\mdeg+1$, and $$\label{eqn:phi_basis_1d} {{{\underline{\Phi}}}} = \left( 1, \, \sqrt{3} \xi, \, \frac{\sqrt{5}}{2} \left( 3 \xi^2 - 1 \right), \, \cdots \right), \quad \text{s.t.} \quad {\frac{1}{2}}\int_{-1}^{1} {{{\underline{\Phi}}}} \, {{{\underline{\Phi}}}}^T \, d\xi = {{{\underline{{\underline{\mathbb I}}}}}} \in {\mathbb R}^{\mcorr\times\mcorr},$$ are the orthonormal space Legendre polynomials. In order to compute a predicted solution on each spacetime element \[eqn:spacetime\_elem\_1d\] we make the following ansatz: $$\label{eqn:pred_ansatz} q(t,x) \Bigl|_{{\mathcal S}^{n+1/2}_i} \approx w^{n+1/2}_{i} := {{{\underline{\Psi}}}}^T {{{\underline{W}}}}^{n+1/2}_i,$$ where ${{{\underline{W}}}}^{n+1/2}_i \in {\mathbb R}^{\mpred}$, ${{{\underline{\Psi}}}} \in {\mathbb R}^{\mpred}$, $\mpred := (\mdeg+1)(\mdeg+2)/2$, and $$\label{eqn:psi_basis_1d} {{{\underline{\Psi}}}} = \left( 1, \, \sqrt{3} \tau, \, \sqrt{3} \xi, \, \cdots \right), \quad \text{s.t.} \quad \frac{1}{4} \int_{-1}^{1} \int_{-1}^{1} {{{\underline{\Psi}}}} \, {{{\underline{\Psi}}}}^T \, d\tau \, d\xi = {{{\underline{{\underline{\mathbb I}}}}}} \in {\mathbb R}^{\mpred\times\mpred},$$ are the spacetime Legendre basis functions. Next, we pre-multiply \[eqn:adv1d\_nondim\] by ${{{\underline{\Psi}}}}$ and integrate over ${\mathcal S}^{n+1/2}_i$ to obtain: $$\label{eqn:integrate_1d_adv} \frac{1}{4} \int_{-1}^{1} \int_{-1}^{1} {{{\underline{\Psi}}}} \, \left[ q_{,\tau} +\nu q_{,\xi} \right] \, d\tau \, d\xi = {{{\underline{0}}}}.$$ We then replace the exact solution, $q$, by \[eqn:pred\_ansatz\].
We integrate-by-parts in time, first forwards, then backwards, which introduces a jump term at the old time $t=t^n$. No integration-by-parts is done in space – this is what gives the local nature of the predictor step. All of this results in the following equation: $$\begin{split} \iint & {{{\underline{\Psi}}}} \, \left[ {{{\underline{\Psi}}}}_{,\tau} + \nu {{{\underline{\Psi}}}}_{,\xi} \right]^T \, {{{\underline{W}}}}^{n+1/2}_i \, d\tau \, d\xi + \int {{{\underline{\Psi}}}}_{|_{\tau=-1}} \left[ {{{\underline{\Psi}}}}_{|_{\tau=-1}}^T {{{\underline{W}}}}^{n+1/2}_i - {{{\underline{\Phi}}}}^T {{{\underline{Q}}}}^{n}_i \right] \, d\xi = {{{\underline{0}}}}, \end{split}$$ where all 1D integrals are over $[-1,1]$, which can be written as $$\begin{gathered} \label{eqn:prediction_soln} {{{\underline{{\underline{L^0}}}}}} \, {{{\underline{W}}}}^{n+1/2}_i = {{{\underline{{\underline{T}}}}}} \, {{{\underline{Q}}}}^{n}_i, \\ \label{eqn:predictedA} {{{\underline{{\underline{L^0}}}}}} = \frac{1}{4} \int_{-1}^{1} \int_{-1}^{1} {{{\underline{\Psi}}}} \, \left[ {{{\underline{\Psi}}}}_{,\tau} + \nu {{{\underline{\Psi}}}}_{,\xi} \right]^T \, d\tau \, d\xi + \frac{1}{4} \int_{-1}^{1} {{{\underline{\Psi}}}}_{|_{\tau=-1}} \, {{{\underline{\Psi}}}}_{|_{\tau=-1}}^T \, d\xi \in {\mathbb R}^{\mpred \times \mpred}, \\ \label{eqn:predictedB} {{{\underline{{\underline{T}}}}}} = \frac{1}{4} \int_{-1}^{1} {{{\underline{\Psi}}}}_{|_{\tau=-1}} \, {{{\underline{\Phi}}}}^T \, d\xi \in {\mathbb R}^{\mpred \times \mcorr}.\end{gathered}$$ As is evident from the formulas above, the predicted spacetime solution as encoded in the coefficients ${{{\underline{W}}}}^{n+1/2}_i$ is entirely local – the values only depend on the old values from the same element: ${{{\underline{Q}}}}^{n}_i$. Therefore, we refer to this prediction step as [*locally-implicit*]{}. Gassner et al. [@article:GasDumHinMun2011] argued that the locally-implicit prediction step as presented above produces the key step in the Lax-Wendroff DG scheme [@article:QiuDumShu2005]. We briefly illustrate this point here. Lax-Wendroff [@article:LxW1960] (aka the Cauchy-Kovalevskaya [@article:Kovaleskaya1875] procedure) begins with a Taylor series in time. All time derivatives are then replaced by spatial derivatives using the underlying PDE – in this case \[eqn:adv1d\]. For example, if we kept all the time derivatives up to the third derivative we would get: $$\begin{split} q^{n+1} &\approx q^n + 2 q^n_{,\tau} + 2 q^n_{,\tau,\tau} + \frac{4}{3} q^n_{,\tau,\tau,\tau} = q^n - 2 \nu q^n_{,\xi} + 2 \nu^2 q^n_{,\xi,\xi} - \frac{4}{3} \nu^3 q^n_{,\xi,\xi,\xi} \\ &= q^n - 2 \nu \left\{ q^n - \nu q^n_{,\xi} + \frac{2}{3} \nu^2 q^n_{,\xi,\xi} \right\}_{,\xi} = q^n - 2 \nu {\mathcal G}^{\text{LxW}}_{,\xi}. \end{split}$$ Using the third-order DG approximation, $$\label{eqn:third_order_dg} q^n := \varphi_1 Q_1^n + \varphi_2 Q_2^n + \varphi_3 Q_3^n = Q_1^n + \sqrt{3} \xi Q_2^n + \frac{\sqrt{5}}{2} \left( 3\xi^2 -1 \right) Q_3^n,$$ we obtain $$\label{eqn:lxw_flux_lxw} {\mathcal G}^{\text{LxW}} = \varphi_1 \left( Q_1^n - \sqrt{3} \nu \, Q_2^n + 2 \sqrt{5} \nu^2 Q_3^n \right) + \varphi_2 \left( Q_2^n - \sqrt{15} \nu Q_3^n \right) + \varphi_3 Q_3^n.$$ Alternatively, the time-averaged flux, ${\mathcal G}^{\text{LxW}}$, can be directly obtained from the locally-implicit predictor described above. We first calculate the predicted spacetime solution, $w^{n+1/2}$, via \[eqn:prediction\_soln\], \[eqn:predictedA\], and \[eqn:predictedB\]. 
From this we compute the time averaged flux: $$\label{eqn:predict_flux_lxw} {\mathcal G}^{\text{LxW}} = {\frac{1}{2}}\int_{-1}^{1} w^{n+1/2}\left(\tau,\xi\right) d\tau ={\frac{1}{2}}\left\{ \int_{-1}^{1} {{{\underline{\Psi}}}}\left(\tau,\xi\right) \, d\tau \right\}^T \left( {{{\underline{{\underline{L^0}}}}}} \right)^{-1} {{{\underline{{\underline{T}}}}}} \, {{{\underline{Q}}}}^n.$$ A straightforward calculation shows that \[eqn:predict\_flux\_lxw\] with \[eqn:third\_order\_dg\] and the corresponding ${\mathcal P}(2,2)$ spacetime basis[^5] is exactly the same as \[eqn:lxw\_flux\_lxw\].

Schematic comparison of the LIDG and RIDG prediction stencils in 1D: the LIDG predictor uses only the single spacetime element ${\mathcal S}^{n+1/2}_i$ (solution ${{{\underline{\Psi}}}}^T {{{\underline{W}}}}^{n+1/2}_i$), with upwind-in-time coupling to the data at $t=t^n$ and one-sided interior fluxes on its space faces; the RIDG predictor couples the three spacetime elements ${\mathcal S}^{n+1/2}_{i-1}$, ${\mathcal S}^{n+1/2}_{i}$, and ${\mathcal S}^{n+1/2}_{i+1}$ (solutions ${{{\underline{\Psi}}}}^T \widehat{{{{\underline{W}}}}}^{n+1/2}_{i-1}$, ${{{\underline{\Psi}}}}^T {{{\underline{W}}}}^{n+1/2}_{i}$, ${{{\underline{\Psi}}}}^T \widehat{{{{\underline{W}}}}}^{n+1/2}_{i+1}$), with proper upwind fluxes on the two faces interior to the region and one-sided interior fluxes on the outer faces. \[fig:RIDG\_1D\]

RIDG prediction step {#sec:RIDG1D_predict}
--------------------

As we have argued in \[sec:intro\] and with \[table:CFL\_gap\], the locally-implicit predictor described above in \[sec:LIDG1D\_predict\] will result in a scheme that has a maximum linearly stable CFL number that is both small and becomes progressively smaller with increasing order of accuracy. We introduce a modified prediction step here to remedy these shortcomings.
The starting point is the same as in \[sec:LIDG1D\_predict\]: ansatz \[eqn:old\_ansatz\] and \[eqn:pred\_ansatz\], but now with the full spacetime ${\mathcal Q}({\mdeg},\mdim+1)$ basis in the prediction step: $$\mcorr=\mdeg+1 \qquad \text{and} \qquad \mpred=(\mdeg+1)^2.$$ We integrate the advection equation in spacetime to get \[eqn:integrate\_1d\_adv\], but this time we integrate-by-parts in both space and time, which yields: $$\label{eqn:ridg1d_pred} \begin{split} \iint & {{{\underline{\Psi}}}} \, \left( {{{\underline{\Psi}}}}_{,\tau} + \nu {{{\underline{\Psi}}}}_{,\xi} \right)^T \, {{{\underline{W}}}}^{n+1/2}_i \, d\tau d\xi + \int {{{\underline{\Psi}}}}_{|_{\tau=-1}} \left[ {{{\underline{\Psi}}}}_{|_{\tau=-1}}^T {{{\underline{W}}}}^{n+1/2}_i - {{{\underline{\Phi}}}}^T {{{\underline{Q}}}}^{n}_i \right] \, d\xi \, \, - \\ \int & \left\{ {{{\underline{\Psi}}}}_{|_{\xi=1}} \left[ \nu {{{\underline{\Psi}}}}_{|_{\xi=1}}^T {{{\underline{W}}}}^{n+1/2}_i - {\mathcal F}^{\star}_{i+{\frac{1}{2}}} \right] - {{{\underline{\Psi}}}}_{|_{\xi=-1}} \left[ \nu {{{\underline{\Psi}}}}_{|_{\xi=-1}}^T {{{\underline{W}}}}^{n+1/2}_i - {\mathcal F}^{\star}_{i-{\frac{1}{2}}} \right] \right\} d\tau = {{{\underline{0}}}}, \end{split}$$ where all 1D integrals are over $[-1,1]$, ${{{\underline{\Phi}}}}$ is the Legendre basis \[eqn:phi\_basis\_1d\], ${{{\underline{\Psi}}}}$ is the spacetime Legendre basis \[eqn:psi\_basis\_1d\], and ${\mathcal F}^{\star}$ are some appropriately defined numerical fluxes. The crux of the idea of the regionally-implicit DG scheme in one spatial dimension can be summarized as follows: - We define a [*region*]{} to be the current spacetime element, ${\mathcal S}^{n+1/2}_i$, and its immediate neighbors: ${\mathcal S}^{n+1/2}_{i-1}$ and ${\mathcal S}^{n+1/2}_{i+1}$. This is illustrated in \[fig:RIDG\_1D\]. - For ${\mathcal S}^{n+1/2}_i$, we use the correct upwind fluxes to define the numerical fluxes, ${\mathcal F}^{\star}$, on its faces. - For the immediate neighbors, ${\mathcal S}^{n+1/2}_{i-1}$ and ${\mathcal S}^{n+1/2}_{i+1}$, we again use the correct upwind fluxes on the faces that are shared with ${\mathcal S}^{n+1/2}_i$, but on the outer faces we use one-sided interior fluxes. See \[fig:RIDG\_1D\]. - We use the ${\mathcal Q}({\mdeg},\mdim+1)$ spacetime basis in the prediction step (i.e., the full tensor product spacetime basis). Numerical experimentation showed us that using the ${\mathcal Q}({\mdeg},\mdim+1)$ basis for the prediction step, rather than the ${\mathcal P}({\mdeg},\mdim+1)$ basis, produces significantly more accurate results; in the case of linear equations, this creates little additional computational expense since all the relevant matrices can be precomputed. The result of this is a collection of three elements with solutions that are coupled to each other, but that are completely decoupled from all remaining elements. This RIDG setup is depicted in \[fig:RIDG\_1D\], where we also show the LIDG setup as a point of comparison. 
The precise form of the fluxes for the RIDG prediction step on spacetime element ${\mathcal S}^{n+1/2}_i$ can be written as follows: $$\begin{gathered} \label{eqn:ridg1d_flux1} {\mathcal F}^{\star}_{i-{3}/{2}} = \nu {{{\underline{\Psi}}}}_{|_{\xi=-1}}^T {{{\underline{W}}}}^{n+1/2}_{i-1}, \quad {\mathcal F}^{\star}_{i-1/2} = \nu^+ {{{\underline{\Psi}}}}_{|_{\xi=1}}^T {{{\underline{W}}}}^{n+1/2}_{i-1} + \nu^- {{{\underline{\Psi}}}}_{|_{\xi=-1}}^T {{{\underline{W}}}}^{n+1/2}_{i}, \\ \label{eqn:ridg1d_flux2} {\mathcal F}^{\star}_{i+1/2} = \nu^+ {{{\underline{\Psi}}}}_{|_{\xi=1}}^T {{{\underline{W}}}}^{n+1/2}_{i} + \nu^- {{{\underline{\Psi}}}}_{|_{\xi=-1}}^T {{{\underline{W}}}}^{n+1/2}_{i+1}, \quad {\mathcal F}^{\star}_{i+{3}/{2}} = \nu {{{\underline{\Psi}}}}_{|_{\xi=1}}^T {{{\underline{W}}}}^{n+1/2}_{i+1}.\end{gathered}$$ Combining \[eqn:ridg1d\_pred\] with numerical fluxes \[eqn:ridg1d\_flux1\] and \[eqn:ridg1d\_flux2\], yields the following block $3 \times 3$ system: $$\label{eqn:ridg1d_system} \left[ \begin{array}{c;{2pt/2pt}c;{2pt/2pt}c} {{{\underline{{\underline{L^0}}}}}} + {{{\underline{{\underline{L^-}}}}}} & {{{\underline{{\underline{X^-}}}}}} & \\ \hdashline[2pt/2pt] {{{\underline{{\underline{X^+}}}}}} & {{{\underline{{\underline{L^0}}}}}} + {{{\underline{{\underline{L^-}}}}}} + {{{\underline{{\underline{L^+}}}}}} & {{{\underline{{\underline{X^-}}}}}} \\ \hdashline[2pt/2pt] & {{{\underline{{\underline{X^+}}}}}} & {{{\underline{{\underline{L^0}}}}}} + {{{\underline{{\underline{L^+}}}}}} \end{array} \right] \left[ \begin{array}{c} \widehat{{{{\underline{W}}}}}^{n+1/2}_{i-1} \\ \hdashline[2pt/2pt] {{{{\underline{W}}}}}^{n+1/2}_{i} \\ \hdashline[2pt/2pt] \widehat{{{{\underline{W}}}}}^{n+1/2}_{i+1} \end{array} \right] = \left[ \begin{array}{c} {{{\underline{{\underline{T}}}}}} \, {{{\underline{Q}}}}^n_{i-1} \\ \hdashline[2pt/2pt] {{{\underline{{\underline{T}}}}}} \, {{{\underline{Q}}}}^n_{i} \\ \hdashline[2pt/2pt] {{{\underline{{\underline{T}}}}}} \, {{{\underline{Q}}}}^n_{i+1} \end{array} \right],$$ where ${{{\underline{{\underline{L^0}}}}}}$ is given by \[eqn:predictedA\], ${{{\underline{{\underline{T}}}}}}$ is given by \[eqn:predictedB\], and $$\begin{gathered} {{{\underline{{\underline{L^+}}}}}} = \frac{\nu^+}{4} \int_{-1}^{1} {{{\underline{\Psi}}}}_{|_{\xi=-1}} \, {{{\underline{\Psi}}}}_{|_{\xi=-1}}^T \, d\tau, \quad {{{\underline{{\underline{L^-}}}}}} = -\frac{\nu^-}{4} \int_{-1}^{1} {{{\underline{\Psi}}}}_{|_{\xi=1}} \, {{{\underline{\Psi}}}}_{|_{\xi=1}}^T \, d\tau, \\ {{{\underline{{\underline{X^+}}}}}} = -\frac{\nu^+}{4} \int_{-1}^{1} {{{\underline{\Psi}}}}_{|_{\xi=-1}} \, {{{\underline{\Psi}}}}_{|_{\xi=1}}^T \, d\tau, \quad {{{\underline{{\underline{X^-}}}}}} = \frac{\nu^-}{4} \int_{-1}^{1} {{{\underline{\Psi}}}}_{|_{\xi=1}} \, {{{\underline{\Psi}}}}_{|_{\xi=-1}}^T \, d\tau,\end{gathered}$$ where ${{{\underline{{\underline{L^-}}}}}}, {{{\underline{{\underline{L^+}}}}}}, {{{\underline{{\underline{X^-}}}}}}, {{{\underline{{\underline{X^+}}}}}} \in {\mathbb R}^{\mpred \times \mpred}$, $\nu^+ = \max\left(\nu, 0 \right)$, and $\nu^- = \min\left(\nu, 0 \right)$. Note that the states to the immediate left and right of the current spacetime element ${\mathcal S}^{n+1/2}_{i}$ are only temporary variables and will be discarded once the predicted solution in element $i$ has been computed – to make note of this we place hats over the temporary variables. 
This also means that we have to solve a block $3 \times 3$ system of the form \[eqn:ridg1d\_system\] on every single element ${\mathcal S}^{n+1/2}_{i}$.

Correction step for both LIDG and RIDG {#sec:RIDG1D_correct}
--------------------------------------

In order to go from the predictor to the corrector step, we multiply \[eqn:adv1d\_nondim\] by ${{{\underline{\Phi}}}} \in {\mathbb R}^{\mcorr}$, integrate over the spacetime element, and integrate-by-parts in space: $$\begin{gathered} {{{\underline{Q}}}}^{n+1}_i = {{{\underline{Q}}}}^{n}_i + \frac{\nu}{2} \int_{-1}^{1} \int_{-1}^{1} {{{\underline{\Phi}}}}_{,\xi} \, q \, d\tau \, d\xi - {\frac{1}{2}}\int_{-1}^{1} \left[ {{{\underline{\Phi}}}}_{|_{\xi=1}} {\mathcal F}_{i+1/2} - {{{\underline{\Phi}}}}_{|_{\xi=-1}} {\mathcal F}_{i-1/2} \right] \, d\tau,\end{gathered}$$ where ${\mathcal F}$ is the numerical flux. Next we replace $q$ by the predicted solution from either \[sec:LIDG1D\_predict\] or \[sec:RIDG1D\_predict\], and use the upwind flux: $${\mathcal F}_{i-1/2} = \nu^+ \, {{{\underline{\Psi}}}}_{|_{\xi=1}}^T \, {{{\underline{W}}}}^{n+1/2}_{i-1} + \nu^- \, {{{\underline{\Psi}}}}_{|_{\xi=-1}}^T \, {{{\underline{W}}}}^{n+1/2}_{i},$$ which results in $$\begin{gathered} \label{eqn:correct_1d} {{{\underline{Q}}}}^{n+1}_i = {{{\underline{Q}}}}^{n}_i + {{{\underline{{\underline{C^-}}}}}} \, {{{\underline{W}}}}^{n+1/2}_{i-1} + {{{\underline{{\underline{C^0}}}}}} \, {{{\underline{W}}}}^{n+1/2}_i + {{{\underline{{\underline{C^+}}}}}} \, {{{\underline{W}}}}^{n+1/2}_{i+1}, \\ \label{eqn:correct_1d_1} {{{\underline{{\underline{C^0}}}}}} = \frac{\nu}{2} \int_{-1}^{1} \int_{-1}^{1} {{{\underline{\Phi}}}}_{,\xi} \, {{{\underline{\Psi}}}}^T \, d\tau \, d\xi - \frac{1}{2} \int_{-1}^{1} \left[ \nu^+ {{{\underline{\Phi}}}}_{|_{\xi=1}} {{{\underline{\Psi}}}}_{|_{\xi=1}}^T - \nu^- {{{\underline{\Phi}}}}_{|_{\xi=-1}} {{{\underline{\Psi}}}}_{|_{\xi=-1}}^T \right] \, d\tau, \\ \label{eqn:correct_1d_2} {{{\underline{{\underline{C^-}}}}}} = \frac{\nu^+}{2} \int_{-1}^{1} {{{\underline{\Phi}}}}_{|_{\xi=-1}} \, {{{\underline{\Psi}}}}_{|_{\xi=1}}^T \, d\tau, \quad {{{\underline{{\underline{C^+}}}}}} = -\frac{\nu^-}{2} \int_{-1}^{1} {{{\underline{\Phi}}}}_{|_{\xi=1}} \, {{{\underline{\Psi}}}}_{|_{\xi=-1}}^T \, d\tau,\end{gathered}$$ where ${{{\underline{{\underline{C^0}}}}}}, {{{\underline{{\underline{C^-}}}}}}, {{{\underline{{\underline{C^+}}}}}} \in {\mathbb R}^{\mcorr \times \mpred}$.
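For concreteness, the predictor-corrector structure described above can be summarized in a short [matlab]{} sketch. This is only an illustration, not the released code [@code:ridg-code]: it assumes that the matrices ${{{\underline{{\underline{L^0}}}}}}$, ${{{\underline{{\underline{L^\pm}}}}}}$, ${{{\underline{{\underline{X^\pm}}}}}}$, ${{{\underline{{\underline{T}}}}}}$, ${{{\underline{{\underline{C^0}}}}}}$, and ${{{\underline{{\underline{C^\pm}}}}}}$ have already been precomputed from the quadrature formulas above, that each column of the input array holds the Legendre coefficients ${{{\underline{Q}}}}^{n}_i$ of one element, and that the boundary conditions are periodic; all function and variable names are illustrative.

```matlab
function Q = ridg_step_1d(Q, L0, Lm, Lp, Xm, Xp, T, C0, Cm, Cp)
  % One RIDG time-step for 1D constant-coefficient advection (sketch only).
  Mpred  = size(T, 1);
  Nelems = size(Q, 2);
  W = zeros(Mpred, Nelems);            % predicted spacetime coefficients
  Z = zeros(Mpred);
  A = [L0+Lm, Xm,       Z;             % regional block 3x3 matrix; identical
       Xp,    L0+Lm+Lp, Xm;            % for every element in the linear,
       Z,     Xp,       L0+Lp];        % constant-coefficient case
  for i = 1:Nelems                     % prediction: one regional solve per element
    il = mod(i-2, Nelems) + 1;         % periodic left neighbor
    ir = mod(i,   Nelems) + 1;         % periodic right neighbor
    rhs  = [T*Q(:,il); T*Q(:,i); T*Q(:,ir)];
    Wreg = A \ rhs;
    W(:,i) = Wreg(Mpred+1:2*Mpred);    % keep only the center-element block
  end
  for i = 1:Nelems                     % correction: explicit update
    il = mod(i-2, Nelems) + 1;
    ir = mod(i,   Nelems) + 1;
    Q(:,i) = Q(:,i) + Cm*W(:,il) + C0*W(:,i) + Cp*W(:,ir);
  end
end
```

Since the regional matrix depends only on $\nu$ in this linear setting, it can be factored once per time-step and reused for every element.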
(a) ![Plot of the stability function $f(\nu)$ defined by \[eqn:fstab\_func\] for $M_{\text{deg}}=5$ for the (a) locally-implicit DG (LIDG) and (b) regionally-implicit DG (RIDG) schemes. There is a clear dichotomy between the linearly stable region and the unstable region.\[fig:fnu\]](Images/fstab_lidg-crop.pdf "fig:"){height="45mm"} (b) ![Plot of the stability function $f(\nu)$ defined by \[eqn:fstab\_func\] for $M_{\text{deg}}=5$ for the (a) locally-implicit DG (LIDG) and (b) regionally-implicit DG (RIDG) schemes. There is a clear dichotomy between the linearly stable region and the unstable region.\[fig:fnu\]](Images/fstab_ridg-crop.pdf "fig:"){height="45mm"}

Von Neumann stability analysis for both LIDG and RIDG {#sec:RIDG1D_stability}
-----------------------------------------------------

The Lax-Wendroff DG scheme (aka LIDG) with prediction step detailed in \[sec:LIDG1D\_predict\] and correction step given by \[eqn:correct\_1d\], \[eqn:correct\_1d\_1\], and \[eqn:correct\_1d\_2\], uses a stencil involving three elements: $\Tm_{i-1}$, $\Tm_{i}$, and $\Tm_{i+1}$; the RIDG scheme detailed in \[sec:RIDG1D\_predict\] with the same correction step as LIDG uses a stencil involving five elements: $\Tm_{i-2}$, $\Tm_{i-1}$, $\Tm_{i}$, $\Tm_{i+1}$, and $\Tm_{i+2}$. The resulting CFL conditions for the LIDG and RIDG schemes are $$\label{eqn:cfl_condition_1d} | \nu | = \frac{|u| \Delta t}{\Delta x} \le 1 \qquad \text{and} \qquad | \nu | = \frac{|u| \Delta t}{\Delta x} \le 2,$$ respectively. In reality, the largest CFL number, $|\nu|$, for which linear stability is achieved is smaller than what this CFL argument provides; we investigate this in more detail here. In order to study linear stability, we employ the technique of von Neumann stability analysis (e.g., see Chapter 10.5 of LeVeque [@book:Leveque2007]). In particular, we assume the following Fourier ansatz: $$\label{eqn:neumann_ansatz_1d} {{{\underline{Q}}}}_i^{n+1} = {{{\underline{\widetilde{Q}}}}}^{\, n+1} \, e^{I \omega i} \qquad \text{and} \qquad {{{\underline{Q}}}}_i^{n} = {{{\underline{\widetilde{Q}}}}}^{\, n} \, e^{I \omega i},$$ where $I = \sqrt{-1}$ and $0 \le \omega \le 2\pi$ is the wave number. After using this ansatz, the next step is to write the resulting update in the form: $${{{\underline{\widetilde{Q}}}}}^{\, n+1} = {{{\underline{{\underline{\mathcal M}}}}}}(\nu, \omega) \, {{{\underline{\widetilde{Q}}}}}^{\, n},$$ for some matrix ${{{\underline{{\underline{\mathcal M}}}}}} \in {\mathbb R}^{\mcorr \times \mcorr}$. If we apply ansatz \[eqn:neumann\_ansatz\_1d\] to LIDG and RIDG, assuming w.l.o.g.
that $\nu \ge 0$, we obtain the following: $$\begin{aligned} {{{\underline{{\underline{{\mathcal M}_{\text{LIDG}}}}}}}}(\nu,\omega) &= \left( {{{\underline{{\underline{\mathbb I}}}}}} + {{{\underline{{\underline{C^0}}}}}} \left({{{\underline{{\underline{L^0}}}}}}\right)^{-1} {{{\underline{{\underline{T}}}}}} \right) + e^{-I \omega} \, {{{\underline{{\underline{C^-}}}}}} \left( {{{\underline{{\underline{L^0}}}}}} \right)^{-1} {{{\underline{{\underline{T}}}}}}, \\ \begin{split} {{{\underline{{\underline{{\mathcal M}_{\text{RIDG}}}}}}}}(\nu,\omega) &= \left( {{{\underline{{\underline{\mathbb I}}}}}} + {{{\underline{{\underline{C^0}}}}}} \left( {{{\underline{{\underline{L^0}}}}}} + {{{\underline{{\underline{L^+}}}}}} \right)^{-1} {{{\underline{{\underline{T}}}}}} \right) \\ &+ e^{-I \omega} \left( {{{\underline{{\underline{C^-}}}}}} \left( {{{\underline{{\underline{L^0}}}}}} + {{{\underline{{\underline{L^+}}}}}} \right)^{-1} {{{\underline{{\underline{T}}}}}} - {{{\underline{{\underline{C^0}}}}}} \left( {{{\underline{{\underline{L^0}}}}}} + {{{\underline{{\underline{L^+}}}}}} \right)^{-1}{{{\underline{{\underline{X^+}}}}}} \left( {{{\underline{{\underline{L^0}}}}}} \right)^{-1} {{{\underline{{\underline{T}}}}}} \right) \\ &- e^{-2I\omega} \, {{{\underline{{\underline{C^{-}}}}}}} \left( {{{\underline{{\underline{L^0}}}}}} + {{{\underline{{\underline{L^+}}}}}} \right)^{-1} {{{\underline{{\underline{X^+}}}}}} \left( {{{\underline{{\underline{L^0}}}}}} \right)^{-1} {{{\underline{{\underline{T}}}}}}, \end{split}\end{aligned}$$ where ${{{\underline{{\underline{\mathbb I}}}}}} \in {\mathbb R}^{\mcorr \times \mcorr}$ is the identity matrix. The final step in the stability analysis is to study the spectral properties of ${{{\underline{{\underline{\mathcal M}}}}}}$ as a function of the CFL number $\nu$. In particular, to find the largest $\nu$ for which LIDG or RIDG are linearly stable, we define the following function: $$\label{eqn:fstab_func} f(\nu) := \max_{0 \le \omega \le 2\pi} \rho \left( {{{\underline{{\underline{\mathcal M}}}}}}(\nu, \omega) \right) - 1,$$ where $\rho\left( {{{\underline{{\underline{\mathcal M}}}}}} \right)$ is the spectral radius of ${{{\underline{{\underline{\mathcal M}}}}}}$. For both LIDG and RIDG, the function $f(\nu)$ satisfies $f(0) = 0$, remains approximately zero over a finite range of $\nu$, and then, beyond some value of $\nu$, transitions from being approximately zero to rapidly increasing with increasing $\nu$. We illustrate this point in \[fig:fnu\] for both LIDG and RIDG for the case $\mdeg = 5$ ($\mcorr=6$, LIDG: $\mpred=21$, RIDG: $\mpred=36$); for each scheme we note approximately where the linear stability transition occurs. In order to numerically estimate the location of the linear stability transition we look for the value of $\nu$ that satisfies $f(\nu) = \varepsilon$. We do this via a simple bisection method in which we set $\varepsilon = 0.0005$ and replace the true maximization in \[eqn:fstab\_func\] by a maximization over 2001 uniformly spaced wave numbers in $0 \le \omega \le 2\pi$. The result of this bisection procedure for both LIDG and RIDG for $\mdeg = 0,1,2,3,4,5$, is summarized in \[table:stab1d\]. In all cases we have also run the full numerical method at various grid resolutions to verify that the simulations are indeed stable at the various CFL numbers shown in \[table:stab1d\].
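The bisection just described is straightforward to implement. The following [matlab]{} sketch is only an illustration (not the released code [@code:ridg-code]); it assumes a user-supplied function that returns the amplification matrix ${{{\underline{{\underline{\mathcal M}}}}}}(\nu,\omega)$ of the scheme under study, and an initial upper bound that is known to be linearly unstable:

```matlab
function nu_max = estimate_max_cfl(amplification_matrix, nu_hi)
  % Bisection estimate of the largest linearly stable CFL number (sketch).
  % amplification_matrix(nu, omega) is assumed to return M(nu, omega);
  % nu_hi must be chosen large enough to be unstable.
  eps_tol = 0.0005;                              % tolerance on f(nu)
  omegas  = linspace(0, 2*pi, 2001);             % sampled wave numbers
  f = @(nu) max(arrayfun(@(w) max(abs(eig(amplification_matrix(nu, w)))), omegas)) - 1;
  nu_lo = 0.0;                                   % f(0) = 0, so nu = 0 is stable
  for iter = 1:60                                % bisect on f(nu) = eps_tol
    nu_mid = 0.5*(nu_lo + nu_hi);
    if f(nu_mid) < eps_tol
      nu_lo = nu_mid;                            % still (approximately) stable
    else
      nu_hi = nu_mid;                            % beyond the stability transition
    end
  end
  nu_max = nu_lo;
end
```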
There are three key take-aways from \[table:stab1d\]:

- Both methods give stability regions smaller than their CFL conditions \[eqn:cfl\_condition\_1d\];

- The LIDG CFL number degrades roughly as the inverse of the method order;

- The RIDG CFL number is roughly one, independent of the method order.

  [**1D**]{]     $\mdeg=0$   $\mdeg=1$   $\mdeg=2$   $\mdeg=3$   $\mdeg=4$   $\mdeg=5$
  -------------- ----------- ----------- ----------- ----------- ----------- -----------
  [**LIDG**]{}   1.000       0.333       0.171       0.104       0.070       0.050
  [**RIDG**]{}   1.000       1.168       1.135       1.097       1.066       1.047
  -------------- ----------- ----------- ----------- ----------- ----------- -----------

  : Numerically estimated maximum CFL numbers for the LIDG (aka Lax-Wendroff DG) and RIDG schemes in 1D. \[table:stab1d\]

Generalization to higher dimensions {#sec:higher-dimensions}
===================================

We present in this section the generalization of the proposed regionally-implicit discontinuous Galerkin (RIDG) method to the case of the two and three-dimensional versions of the advection equation. The key innovation beyond what was developed in \[sec:one-dimension\] for the one-dimensional case is the inclusion of [*transverse*]{} cells in the prediction step. With these inclusions, the prediction gives enhanced stability for waves propagating at all angles to the element faces.

RIDG method in 2D {#sec:ridg_in_2d}
-----------------

We consider here the two-dimensional advection equation for $(t,x,y) \in [0,T] \times \Omega$ with appropriate boundary conditions: $$\label{eqn:adv2d} q_{,t} + u_x q_{,x} + u_y q_{,y} = 0.$$ We define a uniform Cartesian mesh with grid spacings $\Delta x$ and $\Delta y$ in each coordinate direction. On each spacetime element: $$\label{eqn:spacetime_elem_2d} {\mathcal S}^{n+1/2}_{ij} = \left[t^{n}, t^n + \Delta t \right] \times \left[ x_i - {\Delta x}/{2}, x_i + {\Delta x}/{2} \right] \times \left[ y_j - {\Delta y}/{2}, y_j + {\Delta y}/{2} \right],$$ we define the local coordinates, $[\tau, \xi, \eta] \in [-1,1]^3$, such that $$t = t^{n+1/2} + \tau \left( {\Delta t}/{2} \right), \quad x = x_i + \xi \left( {\Delta x}/{2} \right), \quad \text{and} \quad y = y_j + \eta \left( {\Delta y}/{2} \right).$$ In these local coordinates, the advection equation is given by $$q_{,\tau} + \nu_x q_{,\xi} + \nu_y q_{,\eta} = 0, \quad \nu_x = \frac{u_x \Delta t}{\Delta x}, \quad \nu_y = \frac{u_y \Delta t}{\Delta y},$$ where $|\nu_x|$ and $|\nu_y|$ are the CFL numbers in each coordinate direction and the multidimensional CFL number is $$\label{eqn:twod_cfl} |\nu| := \max\left\{ |\nu_x|, |\nu_y| \right\}.$$ At the old time, $t=t^n$, we are given the following approximate solution on each space element, $\Tm_{ij} = \left[x_i - \Delta x/2, \, x_i + \Delta x/2\right] \times \left[y_j - \Delta y/2, \, y_j + \Delta y/2\right]$: $$\label{eqn:old_ansatz_2d} q(t^{n},x,y) \Bigl|_{{\mathcal T}_{ij}} \approx q^{n}_{ij} := {{{\underline{\Phi}}}}^T {{{\underline{Q}}}}^{n}_{ij},$$ where ${{{\underline{Q}}}}^{n}_{ij} \in {\mathbb R}^{\mcorr}$, ${{{\underline{\Phi}}}} \in {\mathbb R}^{\mcorr}$, $\mcorr:=(\mdeg+1)(\mdeg+2)/2$, and $$\label{eqn:phi_basis_2d} {{{\underline{\Phi}}}} = \left( 1, \, \sqrt{3} \xi, \, \sqrt{3} \eta, \, \cdots \right), \quad \text{s.t.} \quad \frac{1}{4} \int_{-1}^{1}\int_{-1}^{1} {{{\underline{\Phi}}}} \, {{{\underline{\Phi}}}}^T \, d\xi \, d\eta = {{{\underline{{\underline{\mathbb I}}}}}} \in {\mathbb R}^{\mcorr\times\mcorr},$$ are the orthonormal space Legendre polynomials.
In order to compute a predicted solution on each spacetime element we make the following ansatz: $$\label{eqn:pred_ansatz_2d} q(t,x,y) \Bigl|_{{\mathcal S}^{n+1/2}_{ij}} \approx w^{n+1/2}_{ij} := {{{\underline{\Psi}}}}^T {{{\underline{W}}}}^{n+1/2}_{ij},$$ where ${{{\underline{W}}}}^{n+1/2}_{ij} \in {\mathbb R}^{\mpred}$, ${{{\underline{\Psi}}}} \in {\mathbb R}^{\mpred}$, $\mpred := (\mdeg+1)^3$, and $$\label{eqn:psi_basis_2d} {{{\underline{\Psi}}}} = \left( 1, \, \sqrt{3} \tau, \, \sqrt{3} \xi, \, \sqrt{3} \eta, \, \cdots \right), \quad \text{s.t.} \quad \frac{1}{8} \iiint{{{\underline{\Psi}}}} \, {{{\underline{\Psi}}}}^T \, d\tau \, d\xi \, d\eta = {{{\underline{{\underline{\mathbb I}}}}}} \in {\mathbb R}^{\mpred\times\mpred},$$ are the spacetime Legendre basis functions, where all 1D integrals are over $[-1,1]$. Just as in the one-dimensional case outlined in \[sec:one-dimension\], for the RIDG scheme we make use of the ${\mathcal P}({\mdeg},\mdim)$ spatial basis for the correction step (i.e., $\mcorr = (\mdeg+1)(\mdeg+2)/2$), and the ${\mathcal Q}({\mdeg},\mdim+1)$ spacetime basis for the prediction step (i.e., $\mpred = (\mdeg+1)^3$).

Schematic comparison of the LIDG and RIDG prediction stencils in 2D: the LIDG predictor uses only the single spacetime element ${\mathcal S}^{n+1/2}_{ij}$ (solution ${{{\underline{\Psi}}}}^T {{{\underline{W}}}}^{n+1/2}_{ij}$), with one-sided interior fluxes on all of its space faces; the RIDG predictor couples the $3 \times 3$ block of spacetime elements centered on ${\mathcal S}^{n+1/2}_{ij}$, with proper upwind fluxes on the faces interior to the region and one-sided interior fluxes on the faces that lie on the region boundary. \[fig:RIDG\_2D\]
We integrate the advection equation over a spacetime element and apply integration-by-parts in all three independent variables, $\tau$, $\xi$, and $\eta$, which yields: $$\label{eqn:ridg2d_pred} \begin{split} & \iiint {{{\underline{\Psi}}}} \, {{{\underline{\mathcal R}}}}\left({{{\underline{\Psi}}}}\right)^T {{{\underline{W}}}}^{n+1/2}_{ij} d{{{\underline{\mathcal S}}}} + \hspace{-1mm} \iint \hspace{-1mm} {{{\underline{\Psi}}}}_{|_{\tau=-1}} \hspace{-1mm} \left[ {{{\underline{\Psi}}}}_{|_{\tau=-1}}^T {{{\underline{W}}}}^{n+1/2}_{ij} \hspace{-1mm} - {{{\underline{\Phi}}}}^T {{{\underline{Q}}}}^{n}_{ij} \right] d{{{\underline{\mathcal S}}}}_{\tau} - \\ \iint \hspace{-1mm} & \left\{ {{{\underline{\Psi}}}}_{|_{\xi=1}} \hspace{-1mm} \left[ \nu_x {{{\underline{\Psi}}}}_{|_{\xi=1}}^T {{{\underline{W}}}}^{n+1/2}_{ij} \hspace{-1mm} - {\mathcal F}^{\star}_{i+{\frac{1}{2}}\, j} \right] \hspace{-1mm} - {{{\underline{\Psi}}}}_{|_{\xi=-1}} \hspace{-1mm} \left[ \nu_x {{{\underline{\Psi}}}}_{|_{\xi=-1}}^T {{{\underline{W}}}}^{n+1/2}_{ij} \hspace{-1mm} - {\mathcal F}^{\star}_{i-{\frac{1}{2}}\, j} \right] \right\} d{{{{\underline{\mathcal S}}}}}_{\xi} - \\ \iint \hspace{-1mm} & \left\{ {{{\underline{\Psi}}}}_{|_{\eta=1}} \hspace{-1mm} \left[ \nu_y {{{\underline{\Psi}}}}_{|_{\eta=1}}^T {{{\underline{W}}}}^{n+1/2}_{ij} \hspace{-1mm} - {\mathcal G}^{\star}_{i \, j+{\frac{1}{2}}} \right] \hspace{-1mm} - {{{\underline{\Psi}}}}_{|_{\eta=-1}} \hspace{-1mm} \left[ \nu_y {{{\underline{\Psi}}}}_{|_{\eta=-1}}^T {{{\underline{W}}}}^{n+1/2}_{ij} \hspace{-1mm} - {\mathcal G}^{\star}_{i \, j-{\frac{1}{2}}} \right] \right\} d{{{{\underline{\mathcal S}}}}}_{\eta} \hspace{-1mm} = {{{\underline{0}}}}, \end{split}$$ where ${{{\underline{\mathcal R}}}}\left({{{\underline{\Psi}}}}\right) = {{{\underline{\Psi}}}}_{,\tau} + \nu_x {{{\underline{\Psi}}}}_{,\xi} + \nu_y {{{\underline{\Psi}}}}_{,\eta}$, $d{{{\underline{\mathcal S}}}} = d\tau \, d\xi \, d\eta$, $d{{{\underline{\mathcal S}}}}_{\tau} = d\xi \, d\eta$, $d{{{\underline{\mathcal S}}}}_{\xi} = d\tau \, d\eta$, $d{{{\underline{\mathcal S}}}}_{\eta} = d\tau \, d\xi$, and all 1D integrals are over $[-1,1]$. The crux of the idea of the regionally-implicit DG scheme in two spatial dimensions can be summarized as follows:

- We define a [*region*]{} to be the current spacetime element, ${\mathcal S}^{n+1/2}_{ij}$, and the eight neighbors that have a face that shares at least one point in common with one of the faces of ${\mathcal S}^{n+1/2}_{ij}$. This is illustrated in \[fig:RIDG\_2D\].

- For the current spacetime element, ${\mathcal S}^{n+1/2}_{ij}$, we use the correct upwind fluxes to define ${\mathcal F}^{\star}$ and ${\mathcal G}^{\star}$.

- For the remaining eight elements we use the correct upwind fluxes on all faces that are interior to the region and one-sided fluxes on all faces that are on the boundary of the region. See \[fig:RIDG\_2D\].

- We use the ${\mathcal Q}({\mdeg},\mdim+1)$ spacetime basis in the prediction step (i.e., the full tensor product spacetime basis).
Numerical experimentation showed us that using the ${\mathcal Q}({\mdeg},\mdim+1)$ basis for the prediction step, rather than the ${\mathcal P}({\mdeg},\mdim+1)$ basis, produces significantly more accurate results; in the case of linear equations, this creates little additional computational expense since all the relevant matrices can be precomputed. The result of this is a collection of nine elements with solutions that are coupled to each other, but that are completely decoupled from all remaining elements. This RIDG setup is depicted in \[fig:RIDG\_2D\], where we also show the LIDG setup as a point of comparison. One of the key innovations in going from the 1D RIDG scheme to its 2D counterpart is the inclusion of the [*transverse*]{} elements in the prediction step: ${\mathcal S}^{n+1/2}_{i-1 j-1}$, ${\mathcal S}^{n+1/2}_{i+1 j-1}$, ${\mathcal S}^{n+1/2}_{i-1 j+1}$, and ${\mathcal S}^{n+1/2}_{i+1 j+1}$. Without these transverse cells, the maximum allowable two-dimensional CFL number \[eqn:twod\_cfl\] remains small for any waves traveling transverse to the mesh. In the recent literature, there exist several variants of genuinely multidimensional Riemann solvers (e.g., Balsara [@article:Balsara2014]); by including the transverse elements, the current work can be viewed as an example of a novel type of multidimensional Riemann solver. Applying all of the above principles to \[eqn:ridg2d\_pred\] for all of the nine elements that are in the current region yields a block $9 \times 9$ linear system. The left-hand side of this system can be written as $$\label{eqn:ridg2d_system} \left[ \hspace{-1mm} \begin{array}{c;{2pt/2pt}c;{2pt/2pt}c;{2pt/2pt}c;{2pt/2pt}c;{2pt/2pt}c;{2pt/2pt}c;{2pt/2pt}c;{2pt/2pt}c} \Lone & {{{\underline{{\underline{X^-}}}}}} & & {{{\underline{{\underline{Y^-}}}}}} & & & & &\\ \hdashline[2pt/2pt] {{{\underline{{\underline{X^+}}}}}} & \Ltwo & {{{\underline{{\underline{X^-}}}}}} & & {{{\underline{{\underline{Y^-}}}}}} & & & &\\ \hdashline[2pt/2pt] & {{{\underline{{\underline{X^+}}}}}} & \Lthree & & & {{{\underline{{\underline{Y^-}}}}}} & & &\\ \hdashline[2pt/2pt] {{{\underline{{\underline{Y^+}}}}}} & & & \Lfour & {{{\underline{{\underline{X^-}}}}}} & & {{{\underline{{\underline{Y^-}}}}}} & &\\ \hdashline[2pt/2pt] & {{{\underline{{\underline{Y^+}}}}}} & & {{{\underline{{\underline{X^+}}}}}} & \Lfive & {{{\underline{{\underline{X^-}}}}}} & & {{{\underline{{\underline{Y^-}}}}}} &\\ \hdashline[2pt/2pt] & & {{{\underline{{\underline{Y^+}}}}}} & & {{{\underline{{\underline{X^+}}}}}} & \Lsix & & & {{{\underline{{\underline{Y^-}}}}}} \\ \hdashline[2pt/2pt] & & & {{{\underline{{\underline{Y^+}}}}}} & & & \Lseven & {{{\underline{{\underline{X^-}}}}}} &\\ \hdashline[2pt/2pt] & & & & {{{\underline{{\underline{Y^+}}}}}} & & {{{\underline{{\underline{X^+}}}}}} & \Leight & {{{\underline{{\underline{X^-}}}}}}\\ \hdashline[2pt/2pt] & & & & & {{{\underline{{\underline{Y^+}}}}}} & & {{{\underline{{\underline{X^+}}}}}} & \Lnine \end{array} \hspace{-1mm} \right] \hspace{-2mm} \left[ \hspace{-1mm} \begin{array}{c} \widehat{{{{\underline{W}}}}}^{n+1/2}_{i-1 j-1} \\ \hdashline[2pt/2pt] \widehat{{{{\underline{W}}}}}^{n+1/2}_{i j-1} \\ \hdashline[2pt/2pt] \widehat{{{{\underline{W}}}}}^{n+1/2}_{i+1 j-1} \\ \hdashline[2pt/2pt] \widehat{{{{\underline{W}}}}}^{n+1/2}_{i-1 j} \\ \hdashline[2pt/2pt] {{{{\underline{W}}}}}^{n+1/2}_{i j} \\ \hdashline[2pt/2pt] \widehat{{{{\underline{W}}}}}^{n+1/2}_{i+1 j} \\ \hdashline[2pt/2pt] \widehat{{{{\underline{W}}}}}^{n+1/2}_{i-1 j+1} \\ \hdashline[2pt/2pt] 
\widehat{{{{\underline{W}}}}}^{n+1/2}_{i j+1} \\ \hdashline[2pt/2pt] \widehat{{{{\underline{W}}}}}^{n+1/2}_{i+1 j+1} \end{array} \hspace{-1mm} \right],$$ and the right-hand side can be written as $$\begin{split} \Bigl[ \, \, &{{{\underline{{\underline{T}}}}}} \, {{{\underline{Q}}}}^{n}_{i-1 j-1}, \quad {{{\underline{{\underline{T}}}}}} \, {{{\underline{Q}}}}^{n}_{i j-1}, \quad \cdots, \quad {{{\underline{{\underline{T}}}}}} \, {{{\underline{Q}}}}^{n}_{i j}, \quad \cdots, \quad {{{\underline{{\underline{T}}}}}} \, {{{\underline{Q}}}}^{n}_{i j+1}, \quad {{{\underline{{\underline{T}}}}}} \, {{{\underline{Q}}}}^{n}_{i+1 j+1} \, \Bigr]^T, \end{split}$$ where $$\begin{gathered} {{{\underline{{\underline{X}}}}}}^{\pm} = \mp \frac{\nu_x^\pm}{8} \iint {{{\underline{\Psi}}}}_{|_{\xi=\mp1}} \, {{{\underline{\Psi}}}}_{|_{\xi=\pm1}}^T \, d{{{\underline{\mathcal S}}}}_{\xi}, \quad {{{\underline{{\underline{Y}}}}}}^{\pm} = \mp \frac{\nu_y^\pm}{8} \iint {{{\underline{\Psi}}}}_{|_{\eta=\mp1}} \, {{{\underline{\Psi}}}}_{|_{\eta=\pm1}}^T \, d{{{\underline{\mathcal S}}}}_{\eta}, \\ {{{\underline{{\underline{L^{\alpha \beta \gamma \delta}}}}}}} = {{{\underline{{\underline{L^{0}}}}}}} + \alpha \, {{{\underline{{\underline{L^{-}_x}}}}}} + \beta \, {{{\underline{{\underline{L^{+}_x}}}}}} + \gamma \, {{{\underline{{\underline{L^{-}_y}}}}}} + \delta \, {{{\underline{{\underline{L^{+}_y}}}}}}, \quad \alpha, \beta, \gamma, \delta \in \left\{ 0 , 1 \right\}, \\ {{{\underline{{\underline{L^0}}}}}} = \frac{1}{8} \iiint {{{\underline{\Psi}}}} \, {{{\underline{\mathcal R}}}}\left({{{\underline{\Psi}}}}\right)^T \hspace{-1mm} d{{{\underline{\mathcal S}}}} + \frac{1}{8} \iint {{{\underline{\Psi}}}}_{|_{\tau=-1}} \, {{{\underline{\Psi}}}}_{|_{\tau=-1}}^T d{{{\underline{\mathcal S}}}}_{\tau}, \quad {{{\underline{{\underline{T}}}}}} = \frac{1}{8} \iint {{{\underline{\Psi}}}}_{|_{\tau=-1}} \, {{{\underline{\Phi}}}}^T d{{{\underline{\mathcal S}}}}_{\tau}, \\ {{{\underline{{\underline{L_x^\pm}}}}}} = \pm \frac{\nu_x^{\pm}}{8} \iint {{{\underline{\Psi}}}}_{|_{\xi=\mp1}} \, {{{\underline{\Psi}}}}_{|_{\xi=\mp1}}^T \, d{{{\underline{\mathcal S}}}}_{\xi}, \quad {{{\underline{{\underline{L_y^\pm}}}}}} = \pm \frac{\nu_y^{\pm}}{8} \iint {{{\underline{\Psi}}}}_{|_{\eta=\mp1}} \, {{{\underline{\Psi}}}}_{|_{\eta=\mp1}}^T \, d{{{\underline{\mathcal S}}}}_{\eta},\end{gathered}$$ where ${{{\underline{{\underline{T}}}}}} \in {\mathbb R}^{\mpred \times \mcorr}$ and ${{{\underline{{\underline{X}}}}}}^{\pm}, {{{\underline{{\underline{Y}}}}}}^{\pm}, {{{\underline{{\underline{L^0}}}}}}, {{{\underline{{\underline{L_x^\pm}}}}}}, {{{\underline{{\underline{L_y^\pm}}}}}} \in {\mathbb R}^{\mpred \times \mpred}$. 
The correction step can be written as $$\begin{gathered} \label{eqn:correct_2d} {{{\underline{Q}}}}^{n+1}_{ij} = {{{\underline{Q}}}}^{n}_{ij} + {{{\underline{{\underline{C_x^-}}}}}} \, {{{\underline{W}}}}^{n+{\frac{1}{2}}}_{i-1 j} + {{{\underline{{\underline{C_y^-}}}}}} \, {{{\underline{W}}}}^{n+{\frac{1}{2}}}_{i j-1} + {{{\underline{{\underline{C^0}}}}}} \, {{{\underline{W}}}}^{n+{\frac{1}{2}}}_{ij} + {{{\underline{{\underline{C_x^+}}}}}} \, {{{\underline{W}}}}^{n+{\frac{1}{2}}}_{i+1 j} + {{{\underline{{\underline{C_y^+}}}}}} \, {{{\underline{W}}}}^{n+{\frac{1}{2}}}_{ij+1}, \\ \label{eqn:correct_2d_1} \begin{split} {{{\underline{{\underline{C^0}}}}}} = \frac{1}{4} \iiint {{{\underline{\mathcal U}}}}\left( {{{\underline{\Phi}}}} \right) \, {{{\underline{\Psi}}}}^T d{{{\underline{\mathcal S}}}} - \frac{1}{4} & \iint \left[ \nu^+_x {{{\underline{\Phi}}}}_{|_{\xi=1}} {{{\underline{\Psi}}}}_{|_{\xi=1}}^T - \nu^-_x {{{\underline{\Phi}}}}_{|_{\xi=-1}} {{{\underline{\Psi}}}}_{|_{\xi=-1}}^T \right] d{{{\underline{\mathcal S}}}}_{\xi} \\ - \frac{1}{4} & \iint \left[ \nu^+_y {{{\underline{\Phi}}}}_{|_{\eta=1}} {{{\underline{\Psi}}}}_{|_{\eta=1}}^T - \nu^-_y {{{\underline{\Phi}}}}_{|_{\eta=-1}} {{{\underline{\Psi}}}}_{|_{\eta=-1}}^T \right] d{{{\underline{\mathcal S}}}}_{\eta}, \end{split} \\ \label{eqn:correct_2d_2} {{{\underline{{\underline{C^{\mp}_x}}}}}} = \pm \frac{\nu^{\pm}_x}{4} \iint {{{\underline{\Phi}}}}_{|_{\xi=\mp1}} \, {{{\underline{\Psi}}}}_{|_{\xi=\pm1}}^T d{{{\underline{\mathcal S}}}}_{\xi}, \quad {{{\underline{{\underline{C^{\mp}_y}}}}}} = \pm \frac{\nu^{\pm}_y}{4} \iint {{{\underline{\Phi}}}}_{|_{\eta=\mp1}} \, {{{\underline{\Psi}}}}_{|_{\eta=\pm1}}^T d{{{\underline{\mathcal S}}}}_{\eta},\end{gathered}$$ where ${{{\underline{\mathcal U}}}}({{{\underline{\Phi}}}}) = \nu_x {{{\underline{\Phi}}}}_{,\xi} + \nu_y {{{\underline{\Phi}}}}_{,\eta}$, ${{{\underline{{\underline{C^0}}}}}}, {{{\underline{{\underline{C^\pm_x}}}}}}, {{{\underline{{\underline{C^\pm_y}}}}}} \in {\mathbb R}^{\mcorr \times \mpred}$. RIDG method in 3D ----------------- We consider here the three-dimensional advection equation for $(t,x,y,z) \in [0,T] \times \Omega$ with appropriate boundary conditions: $$\label{eqn:adv3d} q_{,t} + u_x q_{,x} + u_y q_{,y} + u_z q_{,z} = 0.$$ We define a uniform Cartesian mesh with grid spacings $\Delta x$, $\Delta y$, and $\Delta z$ in each coordinate direction. On each spacetime element: $$\label{eqn:spacetime_elem_3d} {\mathcal S}^{n+1/2}_{ijk} = {\mathcal S}^{n+1/2}_{ij} \times \left[ z_k - {\Delta z}/{2}, z_k + {\Delta z}/{2} \right],$$ where ${\mathcal S}^{n+1/2}_{ij}$ is defined by \[eqn:spacetime\_elem\_2d\], we define the local coordinates, $[\tau, \xi, \eta, \zeta] \in [-1,1]^4$, such that $$t = t^{n+1/2} + \tau \left( {\Delta t}/{2} \right), \, x = x_i + \xi \left( {\Delta x}/{2} \right), \, y = y_j + \eta \left( {\Delta y}/{2} \right), \, z = z_k + \zeta \left( {\Delta z}/{2} \right).$$ In these local coordinates, the advection equation is given by $$q_{,\tau} + \nu_x q_{,\xi} + \nu_y q_{,\eta} + \nu_z q_{,\zeta} = 0, \quad \nu_x = \frac{u_x \Delta t}{\Delta x}, \quad \nu_y = \frac{u_y \Delta t}{\Delta y}, \quad \nu_z = \frac{u_z \Delta t}{\Delta z},$$ where $|\nu_x|$, $|\nu_y|$, and $|\nu_z|$ are the CFL numbers in each coordinate direction and the multidimensional CFL number is $$\label{eqn:threed_cfl} |\nu| := \max\left\{ |\nu_x|, |\nu_y|, |\nu_z| \right\}.$$ The development of the RIDG scheme in 3D is completely analogous to the 2D RIDG scheme from \[sec:ridg\_in\_2d\]. 
In 1D the prediction step requires a stencil of 3 elements, in 2D we need $3^2=9$ elements, and in 3D we need $3^3 = 27$ elements. For the sake of brevity we omit the details.

  [**2D**]{}     $\mdeg=0$   $\mdeg=1$   $\mdeg=3$   $\mdeg=5$   $\mdeg=7$   $\mdeg=9$
  -------------- ----------- ----------- ----------- ----------- ----------- -----------
  [**LIDG**]{}   0.50        0.23        0.08        0.04        0.025       0.01
  [**RIDG**]{}   1.00        1.00        0.80        0.75        0.75        0.75
  [**3D**]{}     $\mdeg=0$   $\mdeg=1$   $\mdeg=3$   $\mdeg=5$   $\mdeg=7$   $\mdeg=9$
  [**LIDG**]{}   0.33        0.10        0.03        0.025       0.02        0.01
  [**RIDG**]{}   1.00        0.80        0.60        0.60        0.60        0.60
  -------------- ----------- ----------- ----------- ----------- ----------- -----------

  : Numerically estimated maximum CFL numbers, $|\nu|$, for the LIDG (aka Lax-Wendroff DG) and RIDG schemes in 2D and 3D. \[table:stab2d\]

Von Neumann stability analysis for both LIDG and RIDG {#von-neumann-stability-analysis-for-both-lidg-and-ridg}
-----------------------------------------------------

Linear stability analysis proceeds in 2D and 3D in the same manner as in 1D. We take the numerical update and make the Fourier ansatz: $${{{\underline{Q}}}}_{ijk}^{n+1} = {{{\underline{\widetilde{Q}}}}}^{\, n+1} \, e^{I \left( \omega_x i + \omega_y j + \omega_z k \right)} \qquad \text{and} \qquad {{{\underline{Q}}}}_{ijk}^{n} = {{{\underline{\widetilde{Q}}}}}^{\, n} \, e^{I \left( \omega_x i + \omega_y j + \omega_z k \right)},$$ where $I = \sqrt{-1}$ and $0 \le \omega_x, \, \omega_y, \, \omega_z \le 2\pi$ are the wave numbers in each coordinate direction. After using this ansatz, we again write the resulting update in the form: $${{{\underline{\widetilde{Q}}}}}^{\, n+1} = {{{\underline{{\underline{\mathcal M}}}}}}\left(\nu_x, \nu_y, \nu_z, \omega_x, \omega_y, \omega_z \right) \, {{{\underline{\widetilde{Q}}}}}^{\, n},$$ for some matrix ${{{\underline{{\underline{\mathcal M}}}}}} \in {\mathbb R}^{\mcorr \times \mcorr}$. Finally, we define the function $$\label{eqn:fstab_func_3d} f(\nu_x, \nu_y, \nu_z) := \max_{0 \le \omega_x, \, \omega_y, \, \omega_z \le 2\pi} \rho \left( {{{\underline{{\underline{\mathcal M}}}}}}(\nu_x, \, \nu_y, \, \nu_z, \, \omega_x, \, \omega_y, \, \omega_z) \right) - 1,$$ where $\rho\left( {{{\underline{{\underline{\mathcal M}}}}}} \right)$ is the spectral radius of ${{{\underline{{\underline{\mathcal M}}}}}}$. Just as in 1D, we estimate the maximum CFL numbers of LIDG and RIDG by studying the values of \[eqn:fstab\_func\_3d\]. Our numerically obtained estimates for the maximum value of $|\nu|$ as defined by \[eqn:twod\_cfl\] and \[eqn:threed\_cfl\] are summarized in \[table:stab2d\]. Again we see the following:

- LIDG: the maximum stable CFL number tends to zero as the polynomial degree is increased; and

- RIDG: the maximum stable CFL number has a finite lower bound with increasing polynomial degree (approximately $0.75$ in 2D and $0.60$ in 3D).

To get a more detailed view of the stability function \[eqn:fstab\_func\_3d\] in 2D, we show false color plots of $f(\nu_x,\nu_y)+1$ in \[fig:stabilityr0m2\] for both LIDG and RIDG for various method orders. The transverse elements that were included in the prediction step for RIDG (see \[sec:ridg\_in\_2d\]) are critically important in achieving a stability region that does not significantly degrade in going from 1D to 2D.
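The improved stability of RIDG comes at the price of a larger prediction solve on each element: a region contains $3^{\text{dim}}$ spacetime elements, each carrying $\mpred$ spacetime basis coefficients, with $\mpred = (\mdeg+1)^{\text{dim}+1}$ for the full tensor-product basis used in 1D and 2D (and assumed here for 3D as well). The following [matlab]{} sketch, with purely illustrative values, quantifies how the regional system grows with dimension:

```matlab
% Size of the regional prediction system per element (illustrative sketch,
% assuming the full tensor-product spacetime basis Q(Mdeg, dim+1)).
Mdeg = 3;
for dim = 1:3
  Mpred  = (Mdeg + 1)^(dim + 1);   % spacetime basis functions per element
  Nblock = 3^dim * Mpred;          % unknowns in one regional solve (3^dim elements)
  fprintf('dim = %d: Mpred = %4d, regional system size = %5d\n', dim, Mpred, Nblock);
end
```

The convergence studies below weigh this extra per-step cost against the much larger stable time-steps that RIDG allows.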
False color plots of $f(\nu_x,\nu_y)+1$ for the 2D LIDG and 2D RIDG schemes with $\mdeg=1,3,5,7$ (panels (a) through (h)). \[fig:stabilityr0m2\]

Numerical convergence studies {#sec:results}
=============================

In this section we present convergence studies in 1D, 2D, and 3D for both LIDG and RIDG, and compare the errors and runtimes for the two methods. In all cases, we compute an approximate order of accuracy using the following approximation: $$\label{eqn:Mratio} \text{error}(h) = c h^M + {\mathcal O}\left(h^{M+1} \right) \quad \Longrightarrow \quad M \approx \frac{\log{\left( {\text{error}(h_1)}/{\text{error}(h_2)} \right)}}{\log{\left( {h_1}/{h_2} \right)} }.$$

  [**LIDG, $\mdeg=3$**]{}   [**$T_r$ (s)**]{}   $L^1$ [**error**]{}   \[eqn:Mratio\]   $L^2$ [**error**]{}   \[eqn:Mratio\]   $L^\infty$ [**error**]{}   \[eqn:Mratio\]
  ------------------------- ------------------- --------------------- ---------------- --------------------- ---------------- -------------------------- ----------------
  $40$                      0.221               1.83e-1               –                1.83e-1               –                1.92e-1                    –
  $80$                      0.784               1.08e-2               4.08             1.07e-2               4.09             1.13e-2                    4.09
  $160$                     3.073               6.52e-4               4.05             6.46e-4               4.05             6.66e-4                    4.09
  $320$                     12.246              4.01e-5               4.02             4.00e-5               4.01             4.10e-5                    4.02
  $640$                     48.928              2.49e-6               4.01             2.50e-6               4.00             2.79e-6                    3.88
  [**RIDG, $\mdeg=3$**]{}   [**$T_r$ (s)**]{}   $L^1$ [**error**]{}   \[eqn:Mratio\]   $L^2$ [**error**]{}   \[eqn:Mratio\]   $L^\infty$ [**error**]{}   \[eqn:Mratio\]
  $40$                      0.032               8.46e-2               –                8.77e-2               –                1.02e-1                    –
  $80$                      0.121               3.67e-3               4.53             3.72e-3               4.56             4.68e-3                    4.45
  $160$                     0.473               1.51e-4               4.61             1.52e-4               4.62             1.76e-4                    4.73
  $320$                     1.885               7.96e-6               4.24             8.02e-6               4.24             8.95e-6                    4.30
  $640$                     7.618               4.75e-7               4.07             4.77e-7               4.07             5.57e-7                    4.01
  [**LIDG, $\mdeg=5$**]{}   [**$T_r$ (s)**]{}   $L^1$ [**error**]{}   \[eqn:Mratio\]   $L^2$ [**error**]{}   \[eqn:Mratio\]   $L^\infty$ [**error**]{}   \[eqn:Mratio\]
  $40$                      0.518               1.11e-3               –                1.11e-3               –                1.25e-3                    –
  $80$                      2.041               1.74e-5               6.00             1.76e-5               5.98             1.88e-5                    6.06
  $160$                     8.060               2.73e-7               5.99             2.72e-7               6.02             2.86e-7                    6.04
  $320$                     32.196              4.24e-9               6.01             4.23e-9               6.01             4.36e-9                    6.03
  $640$                     129.172             6.61e-11              6.00             6.61e-11              6.00             6.78e-11                   6.01
  [**RIDG, $\mdeg=5$**]{}   [**$T_r$ (s)**]{}   $L^1$ [**error**]{}   \[eqn:Mratio\]   $L^2$ [**error**]{}   \[eqn:Mratio\]   $L^\infty$ [**error**]{}   \[eqn:Mratio\]
  $40$                      0.033               1.50e-4               –                1.65e-4               –                4.64e-4                    –
  $80$                      0.123               2.68e-6               5.81             2.79e-6               5.89             5.19e-6                    6.48
  $160$                     0.482               3.91e-8               6.10             4.05e-8               6.11             4.89e-8                    6.73
  $320$                     1.936               5.85e-10              6.06             6.12e-10              6.05             8.37e-10                   5.87
  $640$                     7.733               8.94e-12              6.03             9.46e-12              6.02             1.36e-11                   5.94
  ------------------------- ------------------- --------------------- ---------------- --------------------- ---------------- -------------------------- ----------------

  : Errors, observed orders of accuracy \[eqn:Mratio\], and runtimes $T_r$ (in seconds) for the LIDG and RIDG schemes on the 1D advection example \[cc1D\]; the first column is the number of elements. \[table:RIvLI1D\]

1D convergence tests
--------------------

We consider the 1D advection equation \[eqn:adv1d\] with $u=1$, $\Omega=[-1,1]$, periodic BCs, and initial condition: $$\label{cc1D} q(t=0,x) = \sin(16 \pi x).$$ We run the code [@code:ridg-code] to $t=2$ with $\mdeg=3$ (LIDG: $\nu=0.104$, RIDG: $\nu=0.9$) and $\mdeg=5$ (LIDG: $\nu=0.04$, RIDG: $\nu=0.9$) and compare runtimes and errors; the results are shown in \[table:RIvLI1D\].
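The \[eqn:Mratio\] columns in these tables can be reproduced directly from neighboring rows; for example, using the first two $L^1$ entries of the LIDG, $\mdeg=3$ block of \[table:RIvLI1D\] (the domain is $[-1,1]$, so $h = 2/N$ for $N$ elements):

```matlab
% Observed order of accuracy per the formula above, from two mesh levels.
h1 = 2/40;   e1 = 1.83e-1;     % coarse mesh size and L^1 error
h2 = 2/80;   e2 = 1.08e-2;     % fine mesh size and L^1 error
M  = log(e1/e2) / log(h1/h2)   % observed order, approximately 4.08
```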
We see that both methods exhibit the expected convergence rates in $L^1$, $L^2$, and $L^\infty$. For a fixed number of elements, usage of the RIDG method leads to smaller errors. We also notice that for a fixed number of elements the experiment runtime for the RIDG method is shorter than that of the LIDG method – this is due to the increase in the maximum linearly stable CFL number from LIDG to RIDG. [|r|r|c|c|c|c|c|c|]{}\ & [**$T_r$ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ $ 40 ^2$& 22.2 &$ 8.75 e{ -1 }$& – &$ 7.87 e{ -1 }$& – &$ 7.93 e{ -1 }$& –\ $ 80 ^2$& 176.8 &$ 6.37 e{ -2 }$& 3.78 &$ 5.72 e{ -2 }$& 3.78 &$ 6.54 e{ -2 }$& 3.6\ $ 160 ^2$& 1415.7 &$ 1.98 e{ -3 }$& 5.01 &$ 1.81 e{ -3 }$& 4.98 &$ 2.94 e{ -3 }$& 4.48\ $ 320 ^2$& 11335.2 &$ 7.66 e{ -5 }$& 4.69 &$ 7.09 e{ -5 }$& 4.67 &$ 1.61 e{ -4 }$& 4.19\ \ & [**$T_r $ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ $ 40 ^2$& 3.5 &$ 6.29 e{ -1 }$& – &$ 5.58 e{ -1 }$& – &$ 5.62 e{ -1 }$& –\ $ 80 ^2$& 27.1 &$ 2.81 e{ -2 }$& 4.49 &$ 2.54 e{ -2 }$& 4.46 &$ 3.45 e{ -2 }$& 4.03\ $ 160 ^2$& 218.6 &$ 1.04 e{ -3 }$& 4.75 &$ 9.58 e{ -4 }$& 4.73 &$ 1.76 e{ -3 }$& 4.29\ $ 320 ^2$& 1740.0 &$ 5.37 e{ -5 }$& 4.28 &$ 4.95 e{ -5 }$& 4.27 &$ 1.09 e{ -4 }$& 4.02\ \ & [**$T_r $ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ $ 40 ^2$& 42.0 &$ 2.25 e{ -2 }$& – &$ 2.24 e{ -2 }$& – &$ 5.25 e{ -2 }$& –\ $ 80 ^2$& 338.1 &$ 2.94 e{ -4 }$& 6.26 &$ 2.77 e{ -4 }$& 6.34 &$ 6.80 e{ -4 }$& 6.27\ $ 160 ^2$& 2704.0 &$ 2.81 e{ -6 }$& 6.71 &$ 2.75 e{ -6 }$& 6.66 &$ 1.05 e{ -5 }$& 6.02\ $ 320 ^2$& 21730.0 &$ 3.53 e{ -8 }$& 6.31 &$ 3.50 e{ -8 }$& 6.30 &$ 1.67 e{ -7 }$& 5.98\ \ & [**$T_r $ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ $ 40 ^2$& 4.1 &$ 5.76 e{ -3 }$& – &$ 5.86 e{ -3 }$& – &$ 3.30 e{ -2 }$& –\ $ 80 ^2$& 32.9 &$ 1.62 e{ -4 }$& 5.15 &$ 1.54 e{ -4 }$& 5.25 &$ 6.30 e{ -4 }$& 5.71\ $ 160 ^2$& 254.5 &$ 2.25 e{ -6 }$& 6.18 &$ 2.16 e{ -6 }$& 6.15 &$ 9.18 e{ -6 }$& 6.10\ $ 320 ^2$& 2035.1 &$ 3.04 e{ -8 }$& 6.21 &$ 3.00 e{ -8 }$& 6.17 &$ 1.51 e{ -7 }$& 5.93\ 2D convergence tests -------------------- We consider the 2D advection equation \[eqn:adv2d\] with $u_x=u_y=1$, $\Omega=[-1,1]^2$, double periodic BCs, and initial condition: $$\label{cc2D} q(t=0,x,y) = \sin(16 \pi x)\sin(16 \pi y).$$ We run the code [@code:ridg-code] to $t=2$ with $\mdeg=3$ (LIDG: $\nu=0.05$, RIDG: $\nu=0.75$) and $\mdeg=5$ (LIDG: $\nu=0.03$, RIDG: $\nu=0.75$) and compare runtimes and errors; the results are shown in \[table:RIvLI2D\]. We again see that both methods exhibit the expected convergence rates in $L^1$, $L^2$, and $L^\infty$. For a fixed number of elements, usage of the RIDG method leads to slightly smaller errors. We also notice that for a fixed number of elements the experiment runtime for the RIDG method is shorter than that of the LIDG method – this is due to the increase in the maximum linearly stable CFL number from LIDG to RIDG. 
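For reference, the error norms reported in these tables can be approximated from cell-centre samples of the numerical solution as in the sketch below; this is only an illustration, and the exact quadrature and normalization used to produce the tables may differ.

```python
import numpy as np

def advection_errors_2d(q_num, t, x, y):
    """Cell-centre sampled L1, L2 and Linf errors against the exact solution
    of the 2D advection test (u_x = u_y = 1, doubly periodic, IC of eqn. cc2D).
    q_num, x, y are arrays of the numerical solution and cell-centre coordinates."""
    q_exact = np.sin(16.0 * np.pi * (x - t)) * np.sin(16.0 * np.pi * (y - t))
    diff = np.abs(q_num - q_exact)
    w = 1.0 / diff.size  # equal weights on a uniform mesh
    return w * diff.sum(), np.sqrt(w * (diff ** 2).sum()), diff.max()
```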
[|r|r|c|c|c|c|c|c|]{}\ & [**$T_r$ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ $ 20 ^2$& 123.9 &$ 1.21 e{ -3 }$& – &$ 1.20 e{ -3 }$& – &$ 6.16 e{ -3 }$& –\ $ 40 ^3$& 2027.4 &$ 6.82 e{ -5 }$& 4.15 &$ 6.95 e{ -5 }$& 4.11 &$ 3.92 e{ -4 }$& 3.97\ $ 80 ^3$& 32632.4 &$ 4.21 e{ -6 }$& 4.02 &$ 4.31 e{ -6 }$& 4.01 &$ 2.50 e{ -5 }$& 3.97\ \ & [**$T_r$ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ $ 20 ^3$& 28.5 &$ 9.24 e{ -4 }$& – &$ 9.86 e{ -4 }$& – &$ 5.02 e{ -3 }$& –\ $ 40 ^3$& 457.8 &$ 5.85 e{ -5 }$& 3.98 &$ 6.21 e{ -5 }$& 3.99 &$ 3.15 e{ -4 }$& 3.99\ $ 80 ^3$& 7299.2 &$ 3.68 e{ -6 }$& 3.99 &$ 3.89 e{ -6 }$& 6.20 &$ 1.96 e{ -5 }$& 4.01\ \ & [**$T_r$ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ $ 20 ^3$& 318.8 &$ 1.41 e{ -5 }$& – &$ 1.34 e{ -5 }$& – &$ 9.19 e{ -5 }$& –\ $ 40 ^3$& 4960.1 &$ 1.91 e{ -7 }$& 6.21 &$ 1.79 e{ -7 }$& 6.23 &$ 1.53 e{ -6 }$& 5.90\ $ 80 ^3$& 77582.5 &$ 2.65 e{ -9 }$& 6.17 &$ 2.50 e{ -9 }$& 6.16 &$ 2.48 e{ -8 }$& 5.95\ \ & [**$T_r$ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ $ 20 ^3$& 61.6 &$ 1.01 e{ -5 }$& – &$ 9.77 e{ -6 }$& – &$ 6.91 e{ -5 }$& –\ $ 40 ^3$& 960.2 &$ 1.37 e{ -7 }$&6.20 &$ 1.36 e{ -7 }$& 6.16 &$ 1.08 e{ -6 }$& 6.00\ $ 80 ^3$& 15510.1 &$ 1.81 e{ -9 }$& 6.24 &$ 1.88 e{ -9 }$& 6.18 &$ 1.72 e{ -8 }$& 5.98\ 3D convergence tests -------------------- We consider the 3D advection equation \[eqn:adv3d\] with $u_x=u_y=u_z=1$, $\Omega=[-1,1]^3$, triple periodic BCs, and initial condition: $$\label{cc3D} q(t=0,x,y,z) = \sin(2 \pi x)\sin(2 \pi y)\sin(2 \pi z).$$ We run the code [@code:ridg-code] to $t=2$ with $\mdeg=3$ (LIDG: $\nu=0.03$, RIDG: $\nu=0.6$) and $\mdeg=5$ (LIDG: $\nu=0.025$, RIDG: $\nu=0.6$) and compare the error properties of the solution produced by the LIDG and RIDG methods; the results are shown in \[table:RIvLI3D\]. We again see that both methods exhibit the expected convergence rates in $L^1$, $L^2$, and $L^\infty$. As in the one and two-dimensional settings, the RIDG method exhibits better error and runtime properties than the LIDG method. Nonlinear RIDG {#sec:burgers} ============== We show in this section how to extend the regionally-implicit discontinous Galerkin scheme (RIDG) to nonlinear problems. We show computational comparisons of the proposed RIDG scheme to a standard Runge-Kutta discontinuous Galerkin (RKDG) scheme on the 1D and 2D Burgers equation. Burgers equation in 1D ---------------------- We consider the nonlinear inviscid Burgers equation in 1D: $$\label{eqn:burgers1D} q_{,t} + \frac{1}{2}{\left( q^2 \right)}_{,x} = 0,$$ where $(t,x) \in \left[ 0, T \right] \times \left [0,2\pi \right ]$ and periodic boundary conditions are assumed. $T$ is chosen as some time before shock formation occurs in the exact solution. The initial conditions are taken to be $$\label{eqn:burgers1D_IC} q(t=0,x) = 1 - \cos(x).$$ Unlike in the linear advection case, nonlinear conservation laws will require us to solve nonlinear algebraic equations in each of the regions depicted in \[fig:RIDG\_1D\]. 
These nonlinear algebraic equations can be written in terms of a nonlinear residual defined on each region: $$\label{eqn:ridg1d_residual_total} {{{\underline{\mathcal R}}}} = {\left(\begin{array}{c} {{{\underline{\mathcal R_1}}}}\\ {{{\underline{\mathcal R_2}}}} \\ {{{\underline{\mathcal R_3}}}} \end{array} \right)},$$ where $$\begin{aligned} \label{eqn:ridg1d_residual_1} \begin{split} {{{\underline{\mathcal R_1}}}} = & \int { {{{\underline{\Psi}}}}_{|_{\tau=1}} }{ {{{\underline{\Psi}}}}_{|_{\tau=1}} }^T { {{{\underline{W}}}}^{n+1/2}_{i-1} }d{{{\underline{\mathcal S}}}}_{\tau} - \int { {{{\underline{\Psi}}}}_{|_{\tau=-1}} }{ {{{\underline{\Psi}}}}_{|_{\tau=1}} }^T { {{{\underline{W}}}}^{n}_{i-1} }d{{{\underline{\mathcal S}}}}_{\tau} \\ & + \nu_x\int { {{{\underline{\Psi}}}}_{|_{\xi=1}} }\tilde f{\left( { {{{\underline{\Psi}}}}_{|_{\xi=1}} }^T { {{{\underline{W}}}}^{n+1/2}_{i-1} }, { {{{\underline{\Psi}}}}_{|_{\xi=-1}} }^T { {{{\underline{W}}}}^{n+1/2}_{i} }\right)} d{{{{\underline{\mathcal S}}}}}_{\xi} \\ & - \nu_x\int { {{{\underline{\Psi}}}}_{|_{\xi=-1}} }f{\left( { {{{\underline{\Psi}}}}_{|_{\xi=-1}} }^T { {{{\underline{W}}}}^{n+1/2}_{i-1} }\right)} d{{{{\underline{\mathcal S}}}}}_{\xi} \\ & - \iint { {{{\underline{\Psi}}}}_{|_\tau } }{{{\underline{\Psi}}}}^T { {{{\underline{W}}}}^{n+1/2}_{i} }d{{{\underline{\mathcal S}}}} -\nu_x\iint { {{{\underline{\Psi}}}}_{|_\xi } }f{\left( {{{\underline{\Psi}}}}^T { {{{\underline{W}}}}^{n+1/2}_{i} }\right)} d{{{\underline{\mathcal S}}}}, \end{split} \\ \label{eqn:ridg1d_residual_2} \begin{split} {{{\underline{\mathcal R_2}}}} = &\int { {{{\underline{\Psi}}}}_{|_{\tau=1}} }{ {{{\underline{\Psi}}}}_{|_{\tau=1}} }^T { {{{\underline{W}}}}^{n+1/2}_{i} }d{{{\underline{\mathcal S}}}}_{\tau} - \int { {{{\underline{\Psi}}}}_{|_{\tau=-1}} }{ {{{\underline{\Psi}}}}_{|_{\tau=1}} }^T { {{{\underline{W}}}}^{n}_{i} }d{{{\underline{\mathcal S}}}}_{\tau} \\ & + \nu_x\int { {{{\underline{\Psi}}}}_{|_{\xi=1}} }\tilde f{\left( { {{{\underline{\Psi}}}}_{|_{\xi=1}} }^T { {{{\underline{W}}}}^{n+1/2}_{i} }, { {{{\underline{\Psi}}}}_{|_{\xi=-1}} }^T { {{{\underline{W}}}}^{n+1/2}_{i+1} }\right)} d{{{{\underline{\mathcal S}}}}}_{\xi} \\ & - \nu_x\int { {{{\underline{\Psi}}}}_{|_{\xi=-1}} }\tilde f{\left( { {{{\underline{\Psi}}}}_{|_{\xi=1}} }^T { {{{\underline{W}}}}^{n+1/2}_{i-1} }, { {{{\underline{\Psi}}}}_{|_{\xi=-1}} }^T { {{{\underline{W}}}}^{n+1/2}_{i} }\right)} d{{{{\underline{\mathcal S}}}}}_{\xi} \\ & - \iint { {{{\underline{\Psi}}}}_{|_\tau } }{{{\underline{\Psi}}}}^T { {{{\underline{W}}}}^{n+1/2}_{i} }d{{{\underline{\mathcal S}}}} -\nu_x \iint { {{{\underline{\Psi}}}}_{|_\xi } }f{\left( {{{\underline{\Psi}}}}^T { {{{\underline{W}}}}^{n+1/2}_{i} }\right)} d{{{\underline{\mathcal S}}}}, \end{split} \\ \label{eqn:ridg1d_residual_3} \begin{split} {{{\underline{\mathcal R_3}}}}= & \int { {{{\underline{\Psi}}}}_{|_{\tau=1}} }{ {{{\underline{\Psi}}}}_{|_{\tau=1}} }^T { {{{\underline{W}}}}^{n+1/2}_{i+1} }d{{{\underline{\mathcal S}}}}_{\tau} - \int { {{{\underline{\Psi}}}}_{|_{\tau=-1}} }{ {{{\underline{\Psi}}}}_{|_{\tau=1}} }^T { {{{\underline{W}}}}^{n}_{i+1} }d{{{\underline{\mathcal S}}}}_{\tau} \\ &+ \nu_x \int { {{{\underline{\Psi}}}}_{|_{\xi=1}} }f{\left( { {{{\underline{\Psi}}}}_{|_{\xi=1}} }^T { {{{\underline{W}}}}^{n+1/2}_{i+1} }\right)} d{{{{\underline{\mathcal S}}}}}_{\xi} \\ & - \nu_x\int { {{{\underline{\Psi}}}}_{|_{\xi=-1}} }\tilde f{\left( { {{{\underline{\Psi}}}}_{|_{\xi=1}} }^T { {{{\underline{W}}}}^{n+1/2}_{i} }, { {{{\underline{\Psi}}}}_{|_{\xi=-1}} }^T { 
{{{\underline{W}}}}^{n+1/2}_{i+1} }\right)} d{{{{\underline{\mathcal S}}}}}_{\xi} \\ &- \iint { {{{\underline{\Psi}}}}_{|_\tau } }{{{\underline{\Psi}}}}^T { {{{\underline{W}}}}^{n+1/2}_{i+1} }d{{{\underline{\mathcal S}}}} -\nu_x \iint { {{{\underline{\Psi}}}}_{|_\xi } }f{\left( {{{\underline{\Psi}}}}^T { {{{\underline{W}}}}^{n+1/2}_{i+1} }\right)} d{{{\underline{\mathcal S}}}}. \end{split}\end{aligned}$$ For such general nonlinear problems, we use the Rusanov [@article:Ru61] numerical flux in lieu of the upwinded fluxes in the prediction step (seen in \[fig:RIDG\_1D\]) and again for the time-averaged fluxes in the correction step. For a scalar conservation law with flux function $f(q)$ and flux Jacobian $f'(q)$, the Rusanov flux is $$\label{eqn:rusanov} \tilde f(q_{\ell}, q_r) = \frac{1}{2} \left( f(q_{\ell}) + f(q_r) \right)- \frac{\lambda \left( q_{\ell}, q_r \right)}{2} {\left( q_r - q_{\ell} \right)},$$ where for scalar conservation laws: $$\lambda\left( q_{\ell}, q_r \right) = \max \left\{{\left| f'(q_{\ell}) \right|},{\left| f'((q_{\ell}+q_r)/2) \right|},{\left| f'(q_r) \right|}\right\}.$$ For Burgers equation \[eqn:burgers1D\], the numerical flux \[eqn:rusanov\] becomes $$\tilde f(q_{\ell}, q_r) = \frac{1}{4}q_{\ell}^2 + \frac{1}{4}q_r^2 - \frac{\max \left\{ {\left| q_{\ell} \right|},{\left| q_r \right|} \right\}}{2} \left(q_r - q_{\ell}\right).$$ The goal in each region in each time-step is to minimize residual \[eqn:ridg1d\_residual\_total\] with respect to the unknown space-time Legendre coefficients of the approximate solution. We accomplish this by utilizing a Newton iteration. When forming the Newton iteration Jacobian (not to be confused with the flux Jacobian of the hyperbolic conservation law), one must compute the Jacobian of \[eqn:ridg1d\_residual\_total\] by differentiating with respect to each coefficient. That is, we must compute $$\label{eqn:ridg1d_jacobian_total} {{{\underline{{\underline{\mathcal J}}}}}} = \begin{pmatrix}[2] \frac{\partial {{{\underline{\mathcal R_1}}}}}{\partial { {{{\underline{W}}}}^{n+1/2}_{i-1} }} & \frac{\partial {{{\underline{\mathcal R_1}}}}}{\partial { {{{\underline{W}}}}^{n+1/2}_{i} }} & 0 \\ \frac{\partial {{{\underline{\mathcal R_2}}}}}{\partial { {{{\underline{W}}}}^{n+1/2}_{i-1} }} & \frac{\partial {{{\underline{\mathcal R_2}}}}}{\partial { {{{\underline{W}}}}^{n+1/2}_{i} }} & \frac{\partial {{{\underline{\mathcal R_2}}}}}{\partial { {{{\underline{W}}}}^{n+1/2}_{i+1} }} \\ 0 & \frac{\partial {{{\underline{\mathcal R_3}}}}}{\partial { {{{\underline{W}}}}^{n+1/2}_{i} }} & \frac{\partial {{{\underline{\mathcal R_3}}}}}{\partial { {{{\underline{W}}}}^{n+1/2}_{i+1} }} \end{pmatrix},$$ which is analogous to the coefficient matrix in the linear advection case (e.g., see \[eqn:ridg1d\_system\]). When computing the entries in \[eqn:ridg1d\_jacobian\_total\], one must deal with the fact that the wave speed that appears in the Rusanov flux \[eqn:rusanov\] is not a smooth function of the coefficients; in order to handle this issue we impose in the computation of Jacobian \[eqn:ridg1d\_jacobian\_total\] the following condition: $\frac{\partial}{\partial {{{\underline{W}}}}} \lambda = 0$. This assumption seems to work well in practice, as evidenced in the results below, though other assumptions may be considered in the future. 
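As a concrete illustration, a minimal sketch of the Rusanov flux for Burgers equation and of its derivatives under this frozen-wave-speed simplification could look as follows (function names are ours, not taken from the released code).

```python
import numpy as np

def flux(q):
    """Burgers flux f(q) = q^2 / 2."""
    return 0.5 * q * q

def rusanov(ql, qr):
    """Rusanov numerical flux for Burgers equation."""
    lam = max(abs(ql), abs(qr))
    return 0.5 * (flux(ql) + flux(qr)) - 0.5 * lam * (qr - ql)

def rusanov_jac(ql, qr):
    """Derivatives of the Rusanov flux w.r.t. (ql, qr) with the wave speed
    frozen, i.e. the d(lambda)/dW = 0 simplification used in the text."""
    lam = max(abs(ql), abs(qr))
    dql = 0.5 * ql + 0.5 * lam   # 0.5 * f'(ql) + lam / 2
    dqr = 0.5 * qr - 0.5 * lam   # 0.5 * f'(qr) - lam / 2
    return dql, dqr
```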
The stopping criterion for the Newton iteration that seems to be most effective at producing efficient solutions is the following - Stop if the residual of the region’s main cell (the cell for whom we are forming a prediction) is below a certain tolerance ($\text{TOL} = 10^{-4}$); - Stop if a maximum number of iterations is reached ($N_{\text{iters}} = 3$). Unlike in the linear case, for nonlinear conservation laws we cannot completely precompute the prediction update. However, we are able to leverage a so-called quadrature-free implementation [@Atkins1998] to increase the efficiency of the quadrature part of the prediction step. We review this methodology here: when computing the space-time quadrature, the most computationally expensive pieces are associated with the term in the residual and Jacobian where we integrate in space-time over the cell. For example, one such term for Burgers equation is: $$\begin{split} {{{\underline{\mathcal R_1}}}}_{|\text{volume}} &= -\nu_x\iint {{{\underline{\Psi}}}}_{,\xi} \, f{\left( {{{\underline{\Psi}}}}^T { {{{\underline{W}}}}^{n+1/2}_{i-1} }\right)} \, d{{{\underline{\mathcal S}}}} \\ & = - \nu_x\iint {{{\underline{\Psi}}}}_{,\xi} \, \frac{1}{2}{\left( {{{\underline{\Psi}}}}^T { {{{\underline{W}}}}^{n+1/2}_{i-1} }\right)}^2 \, d{{{\underline{\mathcal S}}}}. \end{split}$$ The contribution to the Jacobian matrix from this term is $$\label{expensive} \frac{ \partial {\mathcal R_1}_{|\text{volume}}}{ \partial { {{{\underline{W}}}}^{n+1/2}_{i-1} }} = -\nu_x \iint {{{\underline{\Psi}}}}_{,\xi} \, {{{\underline{\Psi}}}}^T { {{{\underline{W}}}}^{n+1/2}_{i-1} }\, {{{\underline{\Psi}}}}^T \, d{{{\underline{S}}}}.$$ Each entry of this matrix has the form: $$\label{eq:quadfree_expressions} \left [ \frac{ \partial {\mathcal R_1}_{|\text{volume}}}{ \partial { {{{\underline{W}}}}^{n+1/2}_{i-1} }} \right ]_{ab} = \iint \psi^{(a)}_{,\xi} \, \sum\limits_{\ell = 1}^{\theta_T} \psi^{(\ell)} \, Q_{\ell} \psi^{(b)} \, d{{{\underline{S}}}} = {\left( \sum\limits_{\ell = 1}^{\theta_T} \iint \psi^{(a)}_{,\xi} \psi^{(\ell)} \psi^{(b)} d{{{\underline{S}}}} \right)} Q_{\ell}.$$ Notice that the expression within the parentheses found in \[eq:quadfree\_expressions\] can be precomputed using exact expressions (e.g., with a symbolic toolbox if the exact expressions are too laborious to derive by hand). This allows us to forego an expensive quadrature routine in favor of an exact expansion of the coefficients $Q_\ell$ for forming the Newton iteration Jacobian. In [@Shu1988] this idea was effectively expanded to certain types of non-polynomial flux functions, indicating that this idea can be generalized. Thus we can avoid space-time quadrature of the volume integrals by integrating the expressions such as in \[eq:quadfree\_expressions\] analytically. 
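A sketch of how such a precomputed tensor might be contracted against the Legendre coefficients when assembling the Newton Jacobian is given below; it assumes a tensor `T[a, l, b]` of exact integrals has been tabulated beforehand (e.g. symbolically), and the names are illustrative only.

```python
import numpy as np

def volume_jacobian_block(T, Q, nu):
    """Quadrature-free evaluation of the volume contribution to the Newton
    Jacobian for Burgers equation:
        J[a, b] = -nu * sum_l T[a, l, b] * Q[l],
    where T[a, l, b] holds the exact integrals of psi^(a)_{,xi} psi^(l) psi^(b)
    over the reference space-time element (precomputed once)."""
    return -nu * np.einsum('alb,l->ab', T, Q)
```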
[|r|r|r|c|c|c|c|c|c|]{}\ & $N_T$ & [**$T_r$ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ 39 & 30 & 0.404 & 1.45E-07 & - & 2.39E-07 & - & 1.30E-06 & -\ 52 & 39 & 0.657 & 4.69E-08 & 3.91 & 7.69E-08 & 3.94 & 4.26E-07 & 3.89\ 65 & 48 & 0.879 & 1.95E-08 & 3.94 & 3.18E-08 & 3.95 & 1.77E-07 & 3.93\ 77 & 57 & 1.243 & 9.93E-09 & 3.98 & 1.63E-08 & 3.96 & 9.08E-08 & 3.95\ 91 & 66 & 1.732 & 5.11E-09 & 3.98 & 8.39E-09 & 3.96 & 4.68E-08 & 3.97\ 105 & 76 & 2.283 & 2.94E-09 & 3.86 & 4.76E-09 & 3.97 & 2.66E-08 & 3.96\ 158 & 114 & 5.138 & 5.72E-10 & 4.01 & 9.36E-10 & 3.98 & 5.23E-09 & 3.98\ \ & $N_T$ & [**$T_r$ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ 39 & 3 & 0.125 & 1.47E-07 & - & 2.35E-07 & - & 1.48E-06 & -\ 52 & 4 & 0.217 & 4.70E-08 & 3.97 & 7.55E-08 & 3.94 & 4.85E-07 & 3.88\ 65 & 5 & 0.330 & 1.93E-08 & 4.00 & 3.12E-08 & 3.96 & 2.01E-07 & 3.95\ 77 & 6 & 0.501 & 9.69E-09 & 4.05 & 1.61E-08 & 3.93 & 1.06E-07 & 3.79\ 91 & 7 & 0.578 & 4.95E-09 & 4.03 & 8.24E-09 & 4.00 & 5.65E-08 & 3.75\ 105 & 8 & 0.810 & 2.82E-09 & 3.93 & 4.69E-09 & 3.94 & 3.24E-08 & 3.89\ 158 & 12 & 2.136 & 5.54E-10 & 3.98 & 9.26E-10 & 3.97 & 6.50E-09 & 3.93\ \ & $N_T$ & [**$T_r$ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ 13 & 1 & 0.085 & 4.03E-08 & - & 6.79E-08 & - & 4.98E-07 & -\ 26 & 2 & 0.303 & 6.90E-10 & 5.87 & 1.20E-09 & 5.82 & 9.38E-09 & 5.73\ 39 & 3 & 0.653 & 6.73E-11 & 5.74 & 1.22E-10 & 5.64 & 1.35E-09 & 4.79\ 53 & 4 & 0.958 & 1.03E-11 & 6.11 & 1.75E-11 & 6.31 & 1.81E-10 & 6.54\ 66 & 5 & 1.437 & 2.68E-12 & 6.15 & 4.77E-12 & 5.93 & 5.33E-11 & 5.57\ \ & $N_T$ & [**$T_r$ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ 3 & 1 & 0.046 & 1.31E-05 & - & 2.43E-05 & - & 1.06E-04 & -\ 8 & 1 & 0.192 & 9.53E-09 & 7.37 & 1.48E-08 & 7.55 & 1.22E-07 & 6.90\ Using this approach, we compared the nonlinear RIDG methods of various orders to the $4^{\text{th}}$ order RKDG method as discussed in [@Shu1988]. Shown in \[fig:rkdg\_compare\] are the computed errors and runtimes for the RKDG method with $\mdeg = 3$ (i.e., the fourth-order RKDG method) and the RIDG method $\mdeg = 3, 5, 7$; both methods were applied to Burgers equation \[eqn:burgers1D\] with initial conditions \[eqn:burgers1D\_IC\], and run out to time $T=0.4$ (i.e, before shockwaves form). We see that all methods exhibit the expected convergence rates in the $L^1$, $L^2$, and $L^\infty$ norms. We note the following: - For any fixed error that we consider for the RKDG method, the RIDG method of $\mdeg=3$ can obtain a solution of similar accuracy about 2.5 to 3 times faster. Furthermore, the RIDG method of $\mdeg=5$ can obtain a solution of similar accuracy 15 times faster. - For any fixed error that we consider for the RKDG method, the RIDG method of $\mdeg=3$ can obtain a solution of similar accuracy while taking an order of magnitude fewer timesteps. Furthermore, the RIDG method of $\mdeg=5$ can obtain a solution of similar accuracy while taking almost two orders of magnitude fewer timesteps. 
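The speed-up factors quoted above can be estimated by interpolating runtime against error on a log-log scale, for example with a small helper of the following kind (ours, not part of the paper's code).

```python
import numpy as np

def runtime_at_error(runtimes, errors, target_error):
    """Interpolate log(runtime) against log(error) to estimate the run time a
    method needs to reach a prescribed error level."""
    lt, le = np.log(np.asarray(runtimes)), np.log(np.asarray(errors))
    order = np.argsort(le)
    return float(np.exp(np.interp(np.log(target_error), le[order], lt[order])))

# the speed-up of RIDG over RKDG at a fixed error would then be, schematically,
# runtime_at_error(T_rkdg, err_rkdg, 1e-9) / runtime_at_error(T_ridg, err_ridg, 1e-9)
```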
Burgers equation in 2D ---------------------- Now we consider the nonlinear inviscid Burgers equation in 2D: $$\label{eqn:burgers2D} q_{,t} + {\left( \frac{1}{2}q^2 \right)}_{,x} + {\left( \frac{1}{2}q^2 \right)}_{,y} =0,$$ where $(t,x,y) \in \left [ 0, T \right] \times \left [0,2\pi \right]^2$. $T$ is some time before the shock forms in the solution. We consider the initial conditions $$\label{eqn:burgers2D_IC} q(t=0,x) = \frac{1}{4}\bigl(1 - \cos(x)\bigr)\bigl(1-\cos(y)\bigr).$$ [|r|r|r|c|c|c|c|c|c|]{}\ & $N_T$ & [**$T_r$ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ $ 11 ^2$ & 18 & 2.793 & 3.03e-05 & – & 4.44e-05 & – & 3.73e-04 & –\ $ 22 ^2$ & 33 & 18.166 & 1.93e-06 & 3.97 & 2.97e-06 & 3.90 & 2.39e-05 & 3.96\ $ 33 ^2$ & 49 & 61.442 & 4.00e-07 & 3.88 & 6.11e-07 & 3.90 & 5.40e-06 & 3.67\ $ 44 ^2$ & 64 & 144.332 & 1.31e-07 & 3.88 & 1.98e-07 & 3.92 & 1.72e-06 & 3.98\ $ 55 ^2$ & 80 & 284.119 & 5.50e-08 & 3.89 & 8.21e-08 & 3.94 & 7.01e-07 & 4.01\ $ 66 ^2$ & 95 & 485.266 & 2.70e-08 & 3.91 & 3.99e-08 & 3.95 & 3.47e-07 & 3.85\ $ 122 ^2$ & 174 & 3075.613 & 2.37e-09 & 3.96 & 3.50e-09 & 3.96 & 2.99e-08 & 3.99\ \ & $N_T$ & [**$T_r$ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ $ 11 ^2$ & 1 & 6.021 & 2.89e-05 & – & 4.26e-05 & – & 2.76e-04 & –\ $ 22 ^2$ & 2 & 45.500 & 1.85e-06 & 3.97 & 2.89e-06 & 3.88 & 1.87e-05 & 3.89\ $ 33 ^2$ & 3 & 142.633 & 3.93e-07 & 3.83 & 6.03e-07 & 3.86 & 4.38e-06 & 3.58\ $ 44 ^2$ & 4 & 315.483 & 1.29e-07 & 3.86 & 1.96e-07 & 3.91 & 1.33e-06 & 4.15\ $ 55 ^2$ & 5 & 585.140 & 5.43e-08 & 3.89 & 8.17e-08 & 3.92 & 5.84e-07 & 3.68\ $ 66 ^2$ & 6 & 984.351 & 2.66e-08 & 3.91 & 3.98e-08 & 3.94 & 2.75e-07 & 4.14\ $ 122 ^2$ & 12 & 6499.030 & 2.35e-09 & 3.95 & 3.53e-09 & 3.94 & 2.68e-08 & 3.78\ \ & $N_T$ & [**$T_r$ (s)**]{} & $L^1$ [**error**]{} & \[eqn:Mratio\] & $L^2$ [**error**]{} & \[eqn:Mratio\] & $L^\infty$ [**error**]{} & \[eqn:Mratio\]\ $ 11 ^2$ & 1 & 158.842 & 1.38e-07 & – & 2.42e-07 & – & 1.82e-06 & –\ $ 22 ^2$ & 2 & 1190.925 & 2.35e-09 & 5.87 & 4.57e-09 & 5.73 & 4.53e-08 & 5.33\ $ 33 ^2$ & 3 & 3750.531 & 2.29e-10 & 5.74 & 4.41e-10 & 5.77 & 6.28e-09 & 4.87\ We again use the Rusanov numerical flux for both the space-time surface integrals in the prediction step and the time-averaged fluxes in the correction step. In \[fig:compare\_2d\] we compare the performance of the RKDG method to that of the RIDG method for $\mdeg=3, 5$ for $T = 0.4$. We observe the following - For all mesh sizes, the RKDG and RIDG methods with $\mdeg=3$ have similar error. The RIDG $\mdeg=3$ solutions take about 2 times longer to obtain, yet are obtained in 16 times fewer time steps. - For a fixed error of $\mathcal O (10^{-9})$, RKDG method takes 2.6 times longer to run than the the RIDG method $\mdeg=5$. Furthermore, the RIDG method $\mdeg=5$ obtains solutions with almost two orders of magnitude fewer time steps. We conclude that with respect to serial code execution in 2D, the RIDG method $\mdeg = 3$ is not as efficient as the RKDG method, but the RIDG method $\mdeg = 5$ is more efficient than the RKDG method. This demonstrates the fact that the RIDG methods do not experience an analogous [*Butcher barrier*]{}, which causes efficiency deterioration for Runge-Kutta methods of orders higher than $4$. 
Furthermore, with respect to minimizing the number of time steps (such as in the context of distributed memory programming), the RIDG method $\mdeg = 3$ require an order of magnitude fewer time steps for a fixed error than the RKDG method, while the RIDG method $\mdeg = 5$ requires almost two orders of magnitude fewer time steps for a fixed error. Conclusions {#sec:conclusions} =========== The purpose of this work was to develop a novel time-stepping method for high-order discontinuous Galerkin methods that has improved stability properties over traditional approaches (e.g., explicit SSP-RK and Lax-Wendroff). The name we gave to this new approach is the [*regionally-implicit*]{} discontinuous Galerkin (RIDG) scheme, due to the fact that the prediction for a given cell is formed via an implicit method using information from small [*regions*]{} of cells around a cell, juxtaposed with the [*local*]{} predictor that forms a prediction for a given cell using only past information from that cell. More exactly, the RIDG method is comprised of a semi-localized version of a spacetime DG method, and a corrector step, which is an explicit method that uses the solution from the predictor step. In this sense, the stencil of the RIDG schemes are slightly larger than similar explicit methods, and yet are able to take significantly larger time steps. With this new scheme we achieved all of the following: - Developed RIDG schemes for 1D, 2D, and 3D advection; - Demonstrated experimentally the correct convergence rates on 1D, 2D, and 3D advection examples; - Showed that the maximum linearly stable CFL number is bounded below by a constant that is independent of the polynomial order (1D: 1.00, 2D: 0.75, 3D: 0.60); - Developed RIDG schemes for 1D and 2D nonlinear scalar equations; - Demonstrated experimentally the correct convergence rates on 1D and 2D nonlinear examples. - Showed that RIDG has larger maximum CFL numbers than explicit SSP-RKDG and Lax-Wendroff DG; - Demonstrated the efficiency of the RIDG schemes for nonlinear problems in 1D and 2D. Namely, we showed that the RIDG methods become more efficient as you increase the method order, as opposed to RKDG methods, whose efficiency deteriorates as you move beyond fourth-order accuracy due to the Butcher barrier. All of the methods described in this work were written in a [matlab]{} code that can be freely downloaded [@code:ridg-code]. There are many directions for future work for this class of methods, including - Exploring different methods for finding solutions to the nonlinear rootfinding problem that forms the predictions in each time step. This includes the possibility of constrained optimization so that the predictions fit some desired criterion such as maintaining positivity. - Extending the RIDG method to systems of equations while maintaining efficiency. A crucial development will be to extend existing limiter technology, including non-oscillatory limiters and positivity-preserving limiters, to the case of the RIDG scheme. Since the RIDG method takes orders of magnitude fewer time steps when compared to SSP-RKDG and Lax-Wendroff DG, limiters that are used once or twice per time-step have a reduced effect on the overall runtime of the scheme. - Implementing domain decomposition schemes and demonstrating efficient many-core scaling for RIDG. The RIDG stencil is small (almost like nearest neighbors), and would need to communicate only twice per time-step. 
RKDG, Lax-Wendroff DG, and ADER-DG methods are known to be efficient on many-core systems [@Dumbser2018], and so the RIDG method is similar enough to these methods that we expect similar results. However, since the RIDG method takes orders of magnitude fewer time steps when compared to these other methods, communication costs are minimized and so even greater efficiency might be achieved. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank the anonymous referees for their thoughtful comments and suggestions that helped to improve this paper. This research was partially funded by NSF Grant DMS–1620128. [10]{} , [*Quadrature-free implementation of discontinuous [G]{}alerkin method for hyperbolic equations*]{}, AIAA Journal, 36 (1998), pp. 775–782. , [*Multidimensional [R]{}iemann problem with self-similar internal structure. [P]{}art [I]{} – [A]{}pplication to hyperbolic conservation laws on structured meshes*]{}, J. Comput. Physics, 277 (2014), pp. 163–200. , [*The [R]{}unge–[K]{}utta discontinuous [G]{}alerkin method for conservation laws [V]{}*]{}, J. Comput. Physics, 141 (1998), pp. 199–224. , [*[Ü]{}ber die partiellen [D]{}ifferenzengleichungen der mathematischen [P]{}hysik*]{}, Mathematische Annalen, 100 (1928), pp. 32–74. , [ *[Efficient implementation of ADER discontinuous Galerkin schemes for a scalable hyperbolic PDE engine]{}*]{}, Axioms, 7 (2018), p. 63. , [*Explicit one-step time discretizations for discontinuous [G]{}alerkin and finite volume schemes based on local predictors*]{}, J. Comput. Physics, 230 (2011), pp. 4232–4247. , [*Total variation diminshing [R]{}unge-[K]{}utta schemes*]{}, Math. of Comput., 67 (1998), pp. 73–85. , [*Strong stability-preserving high-order time discretization methods*]{}, SIAM Rev., 43 (2001), pp. 89–112. , [*[Regionally-Implicit Discontinuous Galerkin (RIDG) Code]{}*]{}. <https://github.com/pguthrey/regionally-implicit-dg>. , [*Nodal Discontinuous Galerkin Methods: Algorithms, Analysis, and Applications*]{}, Springer, 2007. , [*Spectral/hp Element Methods for Computational Fluid Dynamics*]{}, Oxford University Press, second ed., 2013. , [*Space-time discontinuous [G]{}alerkin method for the compressible [N]{}avier-[S]{}tokes equations*]{}, J. Comput. Physics, 217 (2006), pp. 589–611. , [*Systems of conservation laws*]{}, Comm. Pure Appl. Math., 13 (1960), pp. 217–237. , [*Finite Volume Methods for Hyperbolic Problems*]{}, Cambridge University Press, 2002. height 2pt depth -1.6pt width 23pt, [*Finite Difference Methods for Ordinary and Partial Differential Equations: Steady State and Time Dependent Problems*]{}, SIAM, 2007. , [*[$L^2$]{} stability analysis of central discontinuous [G]{}alerkin method and a comparison between central and regular discontinuous [G]{}alerkin methods*]{}, ESAIM: M2AN, 42 (2008), pp. 593–607. , [*[The discontinuous Galerkin method with Lax-Wendroff type time discretizations]{}*]{}, Computer Methods in Applied Mechanics and Engineering, 194 (2005), pp. 4528–4543. , [*Triangular mesh methods for the neutron transport equation*]{}, Tech. Rep. LA-UR-73-479, Los Alamos Scientific Laboratory, 1973. , [*Calculation of interaction of non-steady shock waves with obstacles*]{}, J. Comp. Math. Phys. USSR, 1 (1961), pp. 267–279. , [*High order weighted essentially nonoscillatory schemes for convection dominated problems*]{}, SIAM Review, 51 (2009), pp. 82–126. , [*Efficient implementation of essentially non-oscillatory shock-capturing schemes*]{}, Journal of Computational Physics, 77 (1988), pp. 
439–471. , [*Space-time discontinuous [G]{}alerkin method for advection-diffusion problems on time-dependent domains*]{}, Applied Numerical Mathematics, 56 (2006), pp. 1491–1518. , [*Zur [T]{}heorie der partiallen [D]{}ifferentialgleichungen*]{}, Journal für die reine und angewandte Mathematik, 80 (1875), pp. 1–32. [^1]: Michigan State University, Department of Computational Mathematics, Science and Engineering, 428 S. Shaw Lane, East Lansing, Michigan 48824, USA (). [^2]: Iowa State University, Department of Mathematics, 411 Morrill Road, Ames, Iowa 50011, USA (). [^3]: Submitted to the editors on . [^4]: In 1D (i.e., $\mdim=1$), this definition is unambiguous. In higher dimensions, ${\mathbb P}\left(\mdeg, \mdim \right)$ could refer the set of polynomials that have a total degree $\le \mdeg$ (we refer to this as the ${\mathcal P}\left(\mdeg, \mdim \right)$ basis), it could refer to the set of polynomials that have degree $\le \mdeg$ in each independent variable (we refer to this as the ${\mathcal Q}\left(\mdeg, \mdim \right)$ basis), or it could be something in between. [^5]: Actually, for LIDG the result is the same whether we use ${\mathcal P}(\mdeg,\mdim+1)$, ${\mathcal Q}(\mdeg,\mdim+1)$, or something in between.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present the Spitzer Space Telescope Infrared Array Camera (IRAC) observations for a sample of local elliptical galaxies to study later stages of AGN activities. A sample of 36 elliptical galaxies is selected from the Palomar spectroscopic survey. We detect nuclear non-stellar infrared emission in 9 of them. There is unambiguous evidence of circumnuclear dust in these 9 galaxies in their optical images. We also find a remarkable correlation between the infrared excess emission and the nuclear radio/X-ray emission, suggesting that infrared excess emission is tightly related with nuclear activity. Possible origin of infrared excess emission from hot dust heated by the central AGN is supported by spectral indices of IR excess emission.' title: 'Infrared-red Cores in Nearby Elliptical Galaxies' --- galaxies: active - galaxies: elliptical and lenticular,cD - galaxies: nuclei - infrared: galaxies Introduction ============ The tight correlation between the velocity dispersion of the bulge of the host galaxy and the black hole mass suggests that supermassive black hole (SMBH) is an ubiquitous component in elliptical galaxies (Gebhardt et al. 2000; Ferrarese & Merritt 2000; Kormendy & Richstone, 1995; Magorrian et al. 1998). Activities of SMBHs are well known for elliptical galaxies at high redshifts, these systems are host galaxies for most radio-loud and the bright quasars (Hutchings & Morris 1995; Bahcall, Kirkakos & Schneider 1996; Falomo et al. 2005). However, most SMBHs in nearby ellipticals are no longer very active. Ho et al.(1995,1997) found that, in the Palomar spectroscopic survey of nearby galaxies, only about 50% of ellipticals show detectable emission-line nuclei, most of which belong to be low-ionization nuclear emission-line regions (LINERs; Heckman 1980). Mulit-wavelength band surveys in elliptical galaxies have been carried out to search for nuclear emission from LINERs and other low-luminosity active galactic nuclei (LLAGNs) to study the true ionization mechanism. Such observations were performed in optical (Ho, et al. 1995, 1997); X-ray (Terashima, et al. 2000, 2002; Flohic et al. 2006), radio (Nagar et al. 2002, 2005; Filho et al. 2006), and ultraviolet band (Maoz, et al. 1995; 1996). None of these observation were conclusive in determining the nature of these objects. A widely accepted perspective is that LLAGNs display remarkably different properties with their high-luminosity counterparts such as quasars and Seyferts. The optical-to-ultraviolet “big blue bump”, a typical Spectral Energy Distribution (SED) of high-luminosity AGNs, is weak or absent in LLAGNs (Ho, 1999). At the same time, an anticorrelation between radio-loudness and Eddington ratio for AGNs is reported and confirmed by recent studies (Ho, 2002; Greene et al. 2006; Sikora et al. 2007). In the infrared band, the composite SEDs of LLAGNs display a mid-infrared peak (Ho, 2008). Ho (2008) concludes that LLAGNs have an radiatively inefficient accretion flow (RIAF) with a truncated thin disk. Maoz (2007), however, found that SEDs for a sample of relatively unobscured low luminosity LINERs show no difference with that for higher luminosity AGNs. A similar conclusion was obtained recently by Dudik et al. (2009) that \[NeV\]$24\mu m$ /\[OIV\]$26\mu m$ mid-infrared line flux ratio for LLANGs similar with standard AGN, arguing against a UV-to-optical deficiency due to inefficient accretion in LLANGs. 
On the other hand, recent observations show that the circumnuclear region of elliptical galaxies is also complicated. Elliptical galaxies are thought to contain only old stellar population and hot gas. This picture has been challenged by recent observations. Shields (1991) detected warm gas ($T\sim10^4K$) in many elliptical galaxies (see also Macchetto et al. 1996). Recent neutral hydrogen observations reveal substantial amount of neutral hydrogen gas in many early-type galaxies (Morganti et al. 2006; Noordermeer 2006). Moreover, even cool interstellar medium (ISM) including dust and molecular gas were detected in elliptical galaxies (Knapp et al. 1985; Knapp et al. 1989; van Dokkum et al. 1995; Wiklind et al.1995; Temi et al. 2004; Lauer et al. 2005; Sage et al. 2007; Kaneda et al. 2008). The circumnuclear cold interstellar media are mostly detected in elliptical galaxies with nuclear activities (Tran et al. 2001; Krajnovic & Jaffe, 2002; Xilouris & Papadakis, 2002; Simóes Lopes et al. 2007; Zhang et al. 2008). It is more likely that active elliptical galaxies possess circumnuclear dust feature. This fact challenges the RIAF accretion scenario, which assumes LLAGNs are not gas-starving, but have a low radiative efficiency. The Spitzer Space Telescope (Werner et al. 2004) with 2” spatial resolution (Fazio et al. 2004) in mid-infrared ($3.6-8.0\mu m$) offers us a new approach to study both LLAGNs and their environment in ellipticals. Although contributions of photospheric emission from evolved stellar population and hot dust in the circum-stellar envelopes of AGB stars form a considerable mid-infrared background in ellipticals (Athey & Bregman 2002; Xilouris et al. 2004; Temi et al. 2008), smooth spatial distribution of surface brightness of elliptical galaxies permit to separate non-stellar nuclear source from galaxy component. There were several detections of mid-infrared core feature in previous studies (Pahre et al. 2004; Gu et al. 2007). This excess emission from AGN component can not be easily extracted from host galaxy extended component due to poor spatial resolution. In this paper, we present results of MIR observation of a sample of nearby elliptical galaxies at mid-infrared wavelength with Infrared Array Camera (IRAC) and Multiband Imaging Photometer (MIPS) aboard the Spitzer Space Telescope in search for mid-infrared core, and studies of correlation between infrared core feature and other nuclear properties of elliptical galaxies. This MIR observation permits to investigate the origin of nuclear infrared emission in ellipticals. This paper is organized as follows: the sample selection and data reduction are described in Section 2; the basic results of the observation are presented in Section 3; the results are discussed in Section 4; the conclusion is given in Section 5. Sample Selection and Data Reduction =================================== --------- --------- -------- -------- -------------- ----- Galaxy D $M_B$ Nuclear Name ($Mpc$) (mag) Class Dust Ref (1) (2) (3) (4) (5) (6) NGC315 65.8 -22.22 L1.9 Disk/Ring 3 NGC410 70.6 -22.01 T ... ... NGC777 66.5 -21.94 S ... ... 
NGC821 23.2 -20.11 A No Dust 2 NGC1052 17.8 -19.90 L1.9 Lane/Chaotic 5 NGC2832 91.6 -22.24 L2: No Dust 4 NGC3226 23.4 -19.40 L1.9 Disk/Ring 3 NGC3377 8.1 -18.47 A Lane/Chaotic 1 NGC3379 8.1 -19.36 L2/T2: Disk/Ring 2 NGC3608 23.4 -20.16 L2/S2: No Dust 2 NGC3610 29.2 -20.79 A No Dust 2 NGC3640 24.2 -20.73 A No Dust 1 NGC4125 24.2 -21.25 T Disk/ring 1 NGC4168 16.8 -19.07 S Lane/Chaotic 1 NGC4261 35.1 -21.37 L2 Disk/Ring 3 NGC4291 29.4 -20.09 A No Dust 2 NGC4278 9.7 -18.96 L1.9 Lane/Chaotic 2 NGC4374 16.8 -21.12 L2 Lane/Chaotic 5 NGC4406 16.8 -21.39 A No Dust 2 NGC4473 16.8 -20.10 A No Dust 2 NGC4552 16.8 -20.56 T Lane/Chaotic 2 NGC4564 16.8 -19.17 A No Dust 2 NGC4621 16.8 -20.60 A No Dust 1 NGC4636 17.0 -20.72 L1.9 Lane/Chaotic 7 NGC4649 16.8 -21.43 A No Dust 2 NGC4660 16.8 -19.06 A No Dust 2 NGC5077 40.6 -20.83 L1.9 Lane/Chaotic 6 NGC5322 31.6 -21.46 L2: Disk/Ring 2 NGC5557 42.6 -21.17 A No Dust 1 NGC5576 26.4 -20.43 A No Dust 2 NGC5813 28.5 -20.85 L2 Disk/Ring 1 NGC5831 28.5 -19.96 A No Dust 1 NGC5846 28.5 -21.36 T Lane/Chaotic 1 NGC5982 38.7 -20.89 L2 No Dust 1 NGC6482 52.3 -21.75 T ... ... NGC7626 45.6 -21.23 L2 Dust/Ring 3 --------- --------- -------- -------- -------------- ----- : Global and Nuclear Properties. \[table1\] Notes— Col(4):Nuclear spectral type from the Palomar survey (Ho et al. 1997):L=LINER; S=Seyfert; T=Transition object; A=absorption-line nuclei (inactive). Col(5):Morphology of optical circumnuclear dust, obtained from references listed in Col(6), including three types: nuclear dust disk or dust ring; dust lane or disorganized dust patch; no dust. Col(6):References—(1) Tran et al. 2001; (2) Lauer et al. 2005; (3) González Delgado et al. 2008; (4) Lauer et al. 2007; (5) Ravindranath et al. 2001; (6) Rest et al. 2001; (7) van Dokkum & Franx, 1995. A sample of 36 elliptical galaxies were selected from the Palomar optical spectroscopic survey for this study. The Palomar optical spectroscopy survey comprises all nearby galaxies brighter than $B_T=12.5 mag$ in the northern hemisphere (Ho et al. 1995, 1997). This sample is statistical complete, and contains both galaxies with and without nuclear activity while the latter ones serve as a control sample. Multi-band observations of nuclear region has been carried out and studied for a large fraction of this sample, making it suitable for our study of the infrared properties. Our sample includes 2 Seyferts, 15 LINERs, 5 transition objects and 15 inactive galaxies. The IRAC Basic Calibrated Data (BCD) and MIPS Post Basic Calibrated Data (Post BCD) of these galaxies are downloaded from the archive of Spitzer Science Center. The IRAC BCD images were performed with basic image processing, including dark subtraction, detector linearization corrections, flat-field corrections, and flux calibration. We further use the custom IDL software (Huang et al. 2004) to make the final mosaic image for each object. The absolute flux calibration for IRAC flux densities is better than 10% (Fazio et al. 2004). We adopt the AB magnitude system for magnitudes and colors throughout this paper. To obtain the MIR color distribution of each galaxy, we first cross-convolve each image by using the corresponding PSF[^1](Gordon et al. 2008, Tom Jarrett, private communication). 
For example, the color difference between image at $3.6\mu m$ and at $8.0\mu m$ was obtained by following steps, firstly: $$\textrm{Image(3.6')} = \textrm{Image(3.6)} \otimes \textrm{PSF(8.0)}$$ $$\textrm{Image(8.0')} = \textrm{Image(8.0)} \otimes \textrm{PSF(3.6)}$$ Where “$\otimes$” means convolution, then: $$\textrm{Color(3.6-8.0)}=(\textrm{Image3.6'} \times A_{3.6}) -(\textrm{Image8.0'} \times A_{8.0})$$ Where $A_{3.6}$ and $A_{8.0}$ are PSF aperture correction factors for an infinite aperture. We used ellipse program in the ISOPHOT package of IRAF to perform the surface photometry of each galaxy. Hot pixels and foreground stars were identified by eye and masked out before isophotal fitting. The isophotal parameters, such as ellipticity and position angle, were measured at $3.6\mu m$ where the Signal-to-Noise (S/N) ratio is the highest. These parameters were then applied for the surface photometry at 4.5, 5.8 and $8.0\mu m$. Considering the scattered light, we employed extended sources aperture correction for calibration provided by Tom Jarrett[^2]. Since we particularly focus on the nuclear region, we also obtained the nuclear flux density from a circular region within an aperture of 10” (for NGC 1052 and NGC 3226, with an aperture of 15”) to extract the non-stellar excess emission. The size of such a central region is determined by the radial color distribution shown in Figure 1. --------- -------------- -------------- --------------- --------------- ---------------- -- -- Galaxy $L_{[OIII]}$ $L_{15GHz}$ $L_{2-10keV}$ $L_{60\mu m}$ $L_{100\mu m}$ Name ($erg/s$) ($erg/s/Hz$) ($erg/s$) ($erg/s/Hz$) ($erg/s/Hz$) (1) (2) (3) (4) (5) (6) NGC315 39.38 30.39 41.64 30.22 30.27 NGC410 $<39.32$ $<27.78$ ... 0 0 NGC777 $<39.11$ $<27.90$ ... 0 0 NGC821 ... ... ... 0 22.45 NGC1052 39.43 29.14 40.78 29.53 29.72 NGC2832 $<39.05$ $<28.18$ ... ... ... NGC3226 38.93 27.55 39.62 ... ... NGC3377 ... ... ... 28.04 28.39 NGC3379 37.73 $<25.89$ 37.89 0 0 NGC3608 37.80 $<26.99$ 38.64 ... ... NGC3610 ... ... ... 0 0 NGC3640 ... ... ... 0 0 NGC4125 38.85 $<26.85$ 38.93 29.70 30.02 NGC4168 37.91 27.01 ... 0 29.30 NGC4261 39.70 29.65 40.65 29.07 29.28 NGC4291 ... ... ... 0 0 NGC4278 38.88 28.10 39.96 28.83 29.27 NGC4374 39.03 28.79 39.58 29.24 29.54 NGC4406 ... ... ... 28.57 29.00 NGC4473 ... ... ... 0 0 NGC4552 38.05 28.30 39.41 28.73 29.20 NGC4564 ... ... ... 0 0 NGC4621 ... ... ... 0 0 NGC4636 ... ... ... 0 0 NGC4649 ... ... ... 29.43 29.52 NGC4660 ... ... ... 0 0 NGC5077 39.52 ... ... ... ... NGC5322 38.54 28.18 ... 29.71 30.03 NGC5557 ... ... ... 0 0 NGC5576 ... ... ... 28.88 29.20 NGC5813 38.35 27.37 ... 0 0 NGC5831 ... ... ... ... ... NGC5846 38.32 27.79 39.54 0 0 NGC5982 38.55 ... ... 0 29.77 NGC6482 ... $<27.52$ 39.40 0 0 NGC7626 38.52 29.00 ... 0 0 --------- -------------- -------------- --------------- --------------- ---------------- -- -- : Multi-wavelength Nuclear Emissions. \[table2\] Notes—Col(2): Luminosity of $[OIII]\lambda5007$ taken from Ho et al. (1997), except NGC 1052, NGC 4125 and NGC 5813, which are based on observations under non-photometric conditions. Ho et al. (2003b) give their updated $H\alpha$ luminosities and we use them to derive their $[OIII]\lambda5007$ luminosities, assuming same ratios of $[OIII]\lambda5007/H\alpha$ from Ho et al.(1997). Col(3): Nuclear Luminosity at 15GHz, taken from Nagar et al.(2005), except NGC1052, which is taken from Kellermann et al.(1998). Col(4): Nuclear Luminosity at 2-10keV, taken from Gonzalez-Martin et al. 
(2006), except NGC 4278, which is taken from Terashima & Wilson (2003). Col(5) - (6): Total luminosities at $60\mu m$ and $100\mu m$ from IRAS observation (Knapp et al. 1989) Results ======= Figure 1 shows color distributions of $[3.6]-[4.5]$, $[3.6]-[5.8]$, $[3.6]-[8.0]$ for all elliptical galaxies in our sample. In $10'' <R<40''$, the infrared colors for almost all galaxies do not change significantly, and are generally consistent with photospheric emission of late-type stars with a minor contribution of hot dust in circum-stellar envelopes of AGB stars (Pahre et al. 2004; del Burgo, Carter & Sikkema, 2008; Temi, Brenghti & Mathews, 2008). However, the colors in the central region with $R<10''$ tell a different story: 9 out of 36 galaxies exhibit much redder colors. The \[3.6\]$-$\[8.0\] shows most color excess in the central region. Thus we use \[3.6\]$-$\[8.0\] colors to distinguish galaxies with significant excess emission from normal galaxies. For galaxies with no significant color excess, the deviation of \[3.6\]$-$\[8.0\] color in central 10” region is only $-1.44 \pm 0.02$. The remaining 9 galaxies all have \[3.6\] - \[8.0\] color redder than -1.34, indicating a redder central color above $3\sigma$ level. ![Color distributions of all non-core galaxies (black) in comparison with core (red) galaxies, the x-axis refers to distances from the center along major axis, in units of arcsec. Core galaxies have obvious redder color in central several arcseconds, especially seen from $3.6 - 8.0\mu m$.[]{data-label="fig01"}](f1.ps){width="9cm"} Figure 2 is the color-color diagram for galaxies with no color excess in the center. The dispersions of their IRAC colors are very small, with $[3.6]-[4.5]=0.61\pm0.02$, $[3.6]-[5.8]=-1.11\pm0.02$, $[3.6]-[8.0]=-1.44\pm0.02$, in agreement with colors of M-type star ( Pahre et al. 2004). This result supports that normal ellipticals are dominated by old stellar population. Moreover, LLAGNs and inactive galaxies show no systematic difference in mid-infrared color, which agrees with the nuclear stellar population analysis of LLAGNs in previous studies (Boisson et al. 2000; Ho et al. 2003b). Zhang et al.(2008) studied the nuclear stellar population for a sample of early-type galaxies that is highly overlapped with our sample, and they found no difference in stellar age distribution between LLAGNs and inactive galaxies in their sample. The mid-infrared emission from elliptical galaxies consists of two components: stellar and non-stellar emission (dust, AGN, etc.). The $3.6\mu m$ emission is dominated by later-type stellar photospheric emission (Pahre et al. 2004; Temi et al. 2008), thus traces stellar mass distribution. The non-stellar component becomes significant at longer wavelength, which is shown clearly by the $[3.6]-[8.0]$ color. To derive the flux density of excess non-stellar emission in infrared core galaxies, we use the mean colors (\[3.6\]-\[5.8\], \[3.6\]-\[8.0\]) for central regions of non-core galaxies as a template of the old stellar component. 
For five infrared core galaxies with visible excess emission (NGC 315, NGC 1052, NGC 3226, NGC 4261 and NGC 5322) at both $5.8\mu m$ and $8.0\mu m$, by assuming that the SED of the central excess emission is a power-law function, we are able to disentangle the excess component from the stellar emission by solving the following equations: $$f_{exs,i}+f_{star,i}=f_{tot,i}$$ $$f_{star,i}/f_{star,j}=R_{i,j}$$ $$f_{exs,i}/f_{exs,j}=(\nu_i/\nu_j)^\alpha$$ where the subscripts $i,j=1,2,3$ correspond to the wavelengths $\lambda_{1,2,3}=3.6, 5.8, 8.0\,\mu m$, $\nu_{i/j}$ are the corresponding frequencies, and $f_{exs,i/j}$ and $f_{star,i/j}$ are the flux densities of the excess (core) emission and the stellar emission in the different bands, respectively. $f_{exs,i/j}$, $f_{star,i/j}$ and $\alpha$ are treated as unknowns, 7 in total, equal to the number of equations. $R_{i,j}$ are the flux ratios of the old stellar population, obtained by averaging the central colors of all non-core galaxies. $f_{tot,i}$ are the total flux densities in the different bands, extracted from the central region within an aperture of 15” for NGC 1052 and NGC 3226 and 10” for the other core galaxies. The size of the aperture is determined by two factors: it should be large enough to include the whole infrared core structure, and as small as possible to reduce the influence of the zero-point offset caused by the uncertainty in the true nuclear stellar color; this influence brings larger uncertainty at shorter wavelengths, where dilution by stellar emission is more serious. The spatial resolution of IRAC is 2”. In the \[3.6\]-\[8.0\] distributions shown in Figure 1, the red core structures of all galaxies except NGC 1052, NGC 3226 and NGC 4278 are smaller than or close to 10 arcsec in size. Furthermore, due to the convolution procedure mentioned in Section 2, the core structures in Figure 1 appear larger than their real sizes. Therefore, we consider 10 arcsec a proper aperture size for extracting the nuclear excess emission. For NGC 1052 and NGC 3226, a 15” aperture is adopted; NGC 4278 shows a redder color throughout the galaxy and only a vague core structure, so we simply use a 10” aperture to derive a “nuclear” flux for this object. For such apertures, the propagated relative errors of the excess emission due to uncertainties of $R_{i,j}$ are several to about ten percent.

![Central colors of non-core galaxies within 10”, stars are LLAGNs while triangles are inactive galaxies.[]{data-label="fig02"}](f2.ps){width="9cm"}

Flux densities and luminosities obtained through the above approaches are listed in Tables 3 and 4. In Table 3, the errors of the total IRAC and MIPS flux densities of the whole galaxies are simply set to $10\%$ (Fazio et al. 2004; Rieke et al. 2004), which is an estimate of the uncertainty of the absolute calibration; in contrast, the uncertainties derived from image statistics are negligible. The uncertainties of the excess emission and spectral indices are estimated from the deviation of the excess emission extracted with apertures changed by $\pm 5''$ (10”/15”/20” for NGC 1052 and NGC 3226, 5”/10”/15” for the other galaxies). The proportion of excess emission at $3.6\mu m$ is very small, contributing less than 5% even in the strong infrared core galaxy NGC 1052, and less than 0.5% in NGC 4261.
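For concreteness, the system (4)-(6) can be solved for each galaxy with a small least-squares script such as the sketch below; the function and variable names are ours. Note that with the sign convention of equation (6) a red excess corresponds to a negative index, whereas the spectral indices quoted in Table 3 are positive, so the published values presumably adopt the opposite sign convention.

```python
import numpy as np
from scipy.optimize import least_squares

lam = np.array([3.6, 5.8, 8.0])      # micron
nu = 2.998e14 / lam                  # Hz

def decompose(f_tot, R12, R13):
    """Split the total central fluxes at 3.6, 5.8 and 8.0 micron into a
    stellar component and a power-law excess by solving equations (4)-(6).
    R12 = f_star(3.6)/f_star(5.8) and R13 = f_star(3.6)/f_star(8.0) are the
    stellar flux ratios from the mean colors of the non-core galaxies."""
    f_tot = np.asarray(f_tot, dtype=float)

    def residual(p):
        s1, alpha = p                              # stellar 3.6 micron flux, index
        f_star = np.array([s1, s1 / R12, s1 / R13])
        f_exs = f_tot - f_star
        model = f_exs[0] * (nu / nu[0]) ** alpha   # eq. (6), anchored at 3.6 micron
        return (f_exs - model)[1:]                 # two conditions for two unknowns

    sol = least_squares(residual, x0=[0.95 * f_tot[0], -2.0])
    s1, alpha = sol.x
    f_star = np.array([s1, s1 / R12, s1 / R13])
    return f_tot - f_star, f_star, alpha
```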
Hence it is reasonable to assume that all of the $3.6\mu m$ flux density in the other four galaxies (NGC 4125, NGC 4278, NGC 4374 and NGC 5077) is produced by the stellar component; with this assumption we obtain the excess flux densities at $8.0\mu m$ for these four galaxies, since their excess emission at shorter wavelengths, even if it exists, would be too weak to be detected. The excess emission is generally small compared with the stellar emission, except in NGC 1052, where the $8.0\mu m$ excess contributes nearly 50 percent of the total emission.

![image](f3.ps){width="12cm"}

Figure 3 shows the $8.0\mu m$ residual images of the nine infrared core galaxies, which are obtained by using the $3.6\mu m$ image and the flux ratio $R_{3.6, 8.0}$ appearing in equation (5) to remove the contribution of the underlying stellar population. The $8\mu m$ excess emission in four galaxies (NGC 315, NGC 1052, NGC 4261 and NGC 5322) shows a point-like structure with the ring-like feature of the $8.0\mu m$ Point Spread Function (PSF), while the other five galaxies show extended excess emission, indicating off-nuclear sources of the $8\mu m$ excess. NGC 1052 shows a substructure to the right of the center. This is due to the bandwidth effect[^3] and was masked for measuring photometry. To show the contribution of extended emission, in Figure 4 we compare the nuclear $8\mu m$ excess luminosity with the compactness of the excess emission, quantified by $L(<5'')/L_{ex,total}$, the ratio of the nuclear excess luminosity within a 5” aperture to the total extended excess luminosity. For point-like sources, the surface brightness profiles within 5” are uniform and consistent with the $8\mu m$ PSF. The proportion of central emission within 5” is generally higher than $50\%$, and higher than $80\%$ for point-like sources. The compactness decreases as the nuclear luminosity decreases. Extended emission becomes considerable only for sources with nuclear luminosities lower than $3 \times 10^{27} erg/s/Hz$.

Discussion
==========

The excess non-stellar emission may originate from the central AGN or from nuclear hot dust heated by the AGN. PAH emission features could also contribute to the excess emission at $8.0 \ \mu m$; this is exactly the case for NGC 1052, where the $7.7\mu m$ PAH emission feature has been detected (Kaneda et al. 2008), though this feature by itself is not able to explain the excess emission at $4.5\mu m$ and $5.8\mu m$. AGNs and kpc-scale circumnuclear dust are commonly detected in nearby elliptical galaxies. In the Palomar spectroscopic survey of nearby galaxies, about half of the ellipticals show detectable emission-line nuclei, most of which are classified as LINERs (Ho et al. 1997). On the other hand, recent observations from Hubble Space Telescope (HST) images show that circumnuclear dust appears in about $\sim40\%$ of ellipticals in their optical images (Tran et al. 2001; Lauer et al. 2005; Simoes Lopes et al. 2007). PAHs were thought to be rare in ellipticals considering the sputtering destruction in the hot plasma environment. However, Kaneda et al. (2008) recently reported the detection of PAH emission features in 14 out of 18 dusty ellipticals, implying that PAHs are more common in these systems than previously thought. In the following discussion, we confine our interest to the roles the different components play in producing the excess infrared emission. Columns (4) and (5) of Table 1 summarize the classifications of nuclear activity and circumnuclear dust morphology in the optical images of our sample galaxies.
All infrared core galaxies, both point-like and extended sources, show AGN activities, 8 of them are LINERs, except for NGC 4125, which is classified as a Transition object. Thus infrared core galaxies account for $41\%$ of AGNs in our sample. With respect to circumnuclear dust, 17 galaxies, about half of our sample, have been detected of circumnuclear dust in optical band, all infrared core galaxies belong to this group. Therefore, the fact that both AGNs and circumnuclear dust coincide with a central infrared core makes it difficult to clarify their contributions to the infrared excess emission. --------- ---------------- ---------------- ---------------- ----------------- ---------------- -------------- -------------- -------------- ------------------ ----------------- ------------------ Galaxy $S_{T3.6}$ $S_{T4.5}$ $S_{T5.8}$ $S_{T8.0}$ $S_{T24}$ $S_{E3.6}$ $S_{E5.8} $ $S_{E8.0}$ Name (mJy) (mJy) (mJy) (mJy) (mJy) (mJy) (mJy) (mJy) $\alpha_{3.6-8}$ $\alpha_{8-24}$ $\alpha_{radio}$ (1) ( 2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) NGC 315 $207.7\pm20.7$ $122.7\pm12.2$ $74.0\pm7.4$ $51.4\pm5.1$ $100.1\pm10.0$ $1.3\pm0.3$ $4.4\pm0.5$ $10.0\pm0.6$ $2.5\pm0.1$ 2.1 0.27 NGC1052 $337.7\pm33.7$ $207.0\pm20.7$ $163.4\pm16.3$ $136.1\pm13.6$ $...$ $17.6\pm0.5$ $38.7\pm0.5$ $65.9\pm0.4$ $1.8\pm0.1$ 1.3$^a$ 0.33 NGC4261 $355.6\pm35.5$ $209.1\pm20.9$ $134.6\pm1.3$ $83.8\pm8.3$ $50.5\pm5.0$ $0.8\pm0.1$ $2.0\pm0.2$ $3.9\pm0.3$ $2.0\pm0.1$ 2.3 -0.24 NGC5322 $360.2\pm36.0$ $216.2\pm21.6$ $146.6\pm14.6$ $90.6\pm9.0$ $33.5\pm3.3$ $1.3\pm0.4$ $4.0\pm0.6$ $8.4\pm0.4$ $2.3\pm0.5$ 1.2 0.15 NGC3226 $117.1\pm11.7$ $61.7\pm6.1$ $44.1\pm4.4$ $33.5\pm3.3$ $30.1\pm3.0$ $1.4\pm0.2$ $4.6\pm0.3$ $10.2\pm0.4$ $2.3\pm0.2$ 1.0 0.11 NGC4125 $541.2\pm54.1$ $309.2\pm30.9$ $211.0\pm21.0$ $122.5\pm12.2$ $31.9\pm3.2 $ ... ... $4.5\pm0.8$ ... 1.8 ... NGC4278 $386.3\pm38.6$ $233.2\pm23.3$ $149.7\pm14.9$ $110.6\pm 11.0$ $29.1\pm2.9$ ... ... $11.0\pm2.0$ ... 0.9 0.29 NGC4374 $773.6\pm77.3$ $462.9\pm46.3$ $284.9\pm28.4$ $180.4\pm 18.0$ $28.9\pm2.8$ ... ... $6.7\pm1.3$ ... 1.3 0.13 NGC5077 $162.6\pm16.3$ $94.1\pm9.4$ $79.9\pm8.0$ $37.8\pm 3.7$ $21.5\pm2.1$ ... ... $1.9\pm0.5$ ... 2.2 ... --------- ---------------- ---------------- ---------------- ----------------- ---------------- -------------- -------------- -------------- ------------------ ----------------- ------------------ Notes—Col(2)—(6):Total infrared flux density at $3.6$,$4.5$,$5.8$,$8.0$ and $24\mu m$. Col(7)—(9):Flux density of central excess emission at $3.6$, $5.8$, $8.0\mu m$. Col(10): Power-law index of central excess emission between $3.6-8\mu m$, derived from $3.6$, $5.8$ and $8.0\mu m$ through method described in Section 3. Col(11): Power-law index of central excess emission between $8-24\mu m$, derived from Col(6) and Col(9). Col(12): Spectral index of nuclei derived from flux densities at 5GHz and 15GHz, which are taken from Nagar et al. (2005). $^a$ Since NGC1052 has no MIPS data, the power-law index is derived from the mid-infrared spectra, that is given by Kaneda et al, 2008. 
  ---------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- --------------------
  Galaxy     $L_{\nu,T3.6}$       $L_{\nu,T4.5}$       $L_{\nu,T5.8}$       $L_{\nu,T8.0}$       $L_{\nu,T24.0}$      $L_{\nu,E3.6}$       $L_{\nu,E5.8}$       $L_{\nu,E8.0}$
  Name       ($10^{28}erg/sec$)   ($10^{28}erg/sec$)   ($10^{28}erg/sec$)   ($10^{28}erg/sec$)   ($10^{28}erg/sec$)   ($10^{28}erg/sec$)   ($10^{28}erg/sec$)   ($10^{28}erg/sec$)
  (1)        (2)                  (3)                  (4)                  (5)                  (6)                  (7)                  (8)                  (9)
  NGC 315    $107.61$             $63.57$              $38.35$              $26.64$              $51.28$              $0.68$               $2.26$               $5.08$
  NGC 1052   $12.80$              $7.85$               $6.19$               $5.16$               ...                  $0.67$               $1.47$               $2.50$
  NGC 4261   $52.41$              $30.82$              $19.84$              $12.35$              $7.41$               $0.11$               $0.35$               $0.57$
  NGC 5322   $43.03$              $25.83$              $17.52$              $10.82$              $3.98$               $0.16$               $0.47$               $1.00$
  NGC 3226   $7.67$               $4.04$               $2.89$               $2.20$               $1.95$               $0.09$               $0.30$               $0.67$
  NGC 4125   $37.92$              $21.67$              $14.78$              $8.58$               $2.24$               ...                  ...                  $0.31$
  NGC 4278   $4.35$               $2.63$               $1.69$               $1.24$               $0.33$               ...                  ...                  $0.12$
  NGC 4374   $26.12$              $15.63$              $9.62$               $6.09$               $0.97$               ...                  ...                  $0.22$
  NGC 5077   $32.07$              $18.55$              $15.71$              $7.45$               $4.27$               ...                  ...                  $0.38$
  ---------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- --------------------

Notes: Col. (2)–(6): total infrared luminosities at $3.6$, $4.5$, $5.8$, $8.0$ and $24\mu m$. Col. (7)–(9): luminosities of the central excess emission at $3.6$, $5.8$ and $8.0\mu m$.

Optical observations of early-type galaxies show that active early-type galaxies tend to possess circumnuclear dust more often than inactive ones (Tran et al. 2001; Krajnovic & Jaffe, 2002; Xilouris & Papadakis, 2002; Simoes Lopes et al. 2007). Simoes Lopes et al. (2007) reported that 100% of the AGNs in their early-type sample show a circumnuclear dust feature in the optical images, whereas only 2 out of 15 inactive ellipticals are detected with dust. This correlation can be understood in two ways. First, black hole accretion requires fuel, and circumnuclear dust is a good indicator of cold gas inflow and thus evidence of a fuel supply for the central AGN. Nevertheless, the existence of a few ellipticals that host AGNs but show no visible dust (NGC 2832, NGC 3608, NGC 5982) excludes the presumption that nuclear dust is necessary for nuclear activity. On the other hand, Temi et al. (2007a, b) suggested that AGN feedback may play an important role in transporting dust from the central reservoir to the interstellar space in ellipticals; this mechanism may also help to establish such a correlation. Figure 5 shows the $60\mu m$ luminosity versus the $100\mu m$ luminosity based on IRAS data (Knapp et al. 1989); most of the far-infrared detected galaxies are infrared core galaxies hosting a luminous AGN, while 4 inactive galaxies and 1 non-core AGN are also detected, with lower luminosities on the whole. This is consistent with the fact that AGN nuclei preferentially exist in optically dusty ellipticals. Although for the most powerful AGNs (NGC 1052, NGC 315, NGC 4261) the far-infrared emission may come from the strong radio continuum powered by the AGN, it is unlikely that this is the dominant contribution for the other, fainter AGNs, since their far-infrared emission is too strong compared with the extrapolation of the radio continuum. Additional evidence comes from the resolved far-infrared emission in NGC 5077 (Temi et al. 2007b) and from the agreement between the dust mass inferred from far-infrared measurements and that from optical extinction in NGC 4125 (Bregman et al. 1998).
In dusty ellipticals the dust mass inferred from the far-infrared is generally an order of magnitude greater than that inferred from optical extinction (Goudfrooij et al. 1994), indicating that the dust distribution is extended. Therefore, the higher far-infrared luminosities of the luminous AGNs compared with the faint AGNs and inactive galaxies reveal that the correlation between AGN and dust holds not only for the dust in the circumnuclear region but also for the extended dust throughout the galaxy.

![Nuclear excess luminosity plotted against the ratio of the nuclear excess luminosity within a 5” aperture to the total excess luminosity[]{data-label="fig04"}](f4.ps){width="9cm"}

![Far-infrared luminosities at $60\mu m$ and $100\mu m$; symbols have the same meaning as in Figure 4: stars denote point-like core galaxies, triangles denote extended core galaxies and crosses denote other LLAGNs without an infrared core. []{data-label="fig06"}](f5.ps){width="9cm"}

![Nuclear $[OIII]\lambda5007$ luminosity plotted against the 15 GHz radio luminosity; symbols in red refer to infrared core galaxies: stars are point-like core galaxies, triangles are extended core galaxies, and crosses are other LLAGNs without an infrared core. []{data-label="fig05"}](f6.ps){width="9cm"}

Figure 6 shows the 15 GHz nuclear radio luminosity versus the nuclear $[OIII]\lambda5007$ luminosity for all AGNs in our sample. The nuclear 15 GHz radio luminosities are mainly taken from Nagar et al. (2005), and the nuclear $[OIII]\lambda5007$ luminosities are taken from Ho et al. (1997); the latter have been corrected for reddening using the Balmer decrement. It is obvious that the infrared core AGNs separate distinctly from the non-core AGNs, having higher luminosities at both optical and radio wavelengths. In particular, in $[OIII]\lambda5007$ emission the core AGNs and non-core AGNs are completely separated at $\log(L_{[OIII]})\sim 38.5$. With respect to the radio emission, compact radio nuclei have been detected in 7 out of the 9 core galaxies, all of which exhibit a flat spectrum (as shown in the last column of Table 3) and high brightness temperatures, $T>10^7K$, indicating a non-stellar origin. Figure 6 strongly suggests that the activity of the central black hole accounts for the infrared excess emission. If the infrared excess luminosity is correlated with the AGN luminosity, an infrared core will intrinsically exist for every AGN, but will be too weak to be detected in cases of low luminosity. The four panels of Figure 7 show the $8.0\mu m$ excess luminosity plotted against the nuclear 15 GHz radio luminosity, the MIPS $24\mu m$ luminosity, the nuclear $[OIII]\lambda5007$ luminosity and the nuclear 2-10 keV X-ray luminosity. Linear fits to the data points yield linear-correlation R values of $0.65$, $0.92$, $0.32$ and $0.78$, respectively. While the correlation between the excess $8\mu m$ emission and the $24\mu m$ emission can simply be attributed to a similar origin, the correlations between the $8\mu m$ excess and the radio and hard X-ray emission further support an AGN origin for the infrared core. In comparison, the weak correlation between the excess emission and the $[OIII]\lambda5007$ luminosity might result from both extinction and shock ionization. A possible interpretation of the considerable scatter in Figure 7 is unresolved non-AGN sources. Infrared core galaxies with weak excess emission in Figure 3 generally have extended structure, while the higher-luminosity sources mainly display point-like morphology, suggesting a non-AGN contribution in the low-luminosity sources.
In particular, the weak $8\mu m$ emission in the extended sources suggests a PAH origin, since PAHs are generally detected in dusty ellipticals (Bregman et al. 2006; Kaneda et al. 2008). Yet PAH emission, even where present (for example in NGC 1052; Kaneda et al. 2008), still plays a minor role in the strong, point-like sources, considering the remarkable emission at wavelengths shorter than $8\mu m$ in these objects.

![image](f7.ps){width="18cm"}

The contribution of PAHs can be further evaluated by comparing the spectral indices at long and short wavelengths. Since the uncertainties of the excess emission are serious at wavelengths shorter than $5.8\mu m$, here we only consider three galaxies (NGC 315, NGC 1052, NGC 3226), which have reliable measurements at short wavelengths. The approach to deriving the excess flux densities described in Section 3 is based on the assumption that the excess emission through the IRAC bands follows a single power law with a uniform $\alpha$. In order to compare spectral indices between different IRAC bands, the flux densities need to be reevaluated in another way. For NGC 315 and NGC 3226, the proportion of the non-stellar component in the $3.6\mu m$ flux density is negligible, so we assume that all of the $3.6\mu m$ emission is due to the old stellar population; under this assumption the derived spectral index $\alpha_{4.5-5.8}$ is 2.0 for NGC 315 and 3.4 for NGC 3226, while $\alpha_{5.8-8.0}=2.8$ for both galaxies. The spectral indices are similar enough to rule out the possibility of dominant PAH emission. For NGC 1052, we simply derive the spectral index $\alpha_{3.6,4.5,5.8}$ following the same approach described in Section 3; the derived value is 1.7, also consistent with $\alpha_{3.6,5.8,8.0}=1.8$. In normal elliptical galaxies, the mid-infrared emission is dominated by circumstellar hot dust around AGB stars (Knapp et al. 1992; Temi et al. 2007a). Nevertheless, the tight correlation between the MIPS $24\mu m$ emission and the $8\mu m$ emission in Figure 7 suggests that both arise from the same sources in the infrared core galaxies. To examine the origin of the MIPS $24\mu m$ emission, we compare the $24\mu m$ luminosities with the optical B-band absolute magnitude $M_B$ in Figure 8. As Temi et al. (2007a) noted in their work, for non-active ellipticals $M_B$ scales with the $24\mu m$ luminosity, supporting a stellar origin for the $24\mu m$ emission. Half of the galaxies with $8\mu m$ excess emission also show a $24\mu m$ excess with respect to normal galaxies, while the other four galaxies show no obvious sign of a $24\mu m$ excess. Thus the tightness of the correlation between the MIPS $24\mu m$ emission and the $8\mu m$ emission should be interpreted with care for faint sources, which could be diluted by circumstellar emission. Three AGN-associated mechanisms could be responsible for the observed excess infrared emission. The first is thermal emission from nuclear hot dust heated by the central AGN, as would be expected in normal Seyfert galaxies. As discussed earlier, an optically observed circumnuclear dust feature is common in active ellipticals, generally on scales of a few hundred pc. At such distances from the nucleus, the dust cannot be heated sufficiently by the AGN to generate the observed emission in the IRAC bands (van Bemmel et al. 2003 & 2004). Hot dust responsible for the observed emission, if it exists, must lie within a central region of several to about ten pc, which is traditionally identified with the torus and is difficult to resolve.
However, the view that LLAGNs possess a torus as Seyferts do is challenged by the low X-ray column densities observed in LLAGNs (Terashima et al. 2002; González-Martín et al. 2006) and by the high detection rate of optical compact cores (Chiaberge et al. 1999), which indicate mild obscuration. The disappearance of the dust emission is also predicted by the “wind-torus” model, in which the obscuring torus is essentially a clumpy dusty wind emanating from the accretion disk and cannot be maintained once the accretion rate declines below the level needed to drive the outflow (Emmering et al. 1992; Konigl & Kartje, 1994; Elitzur et al. 2006). Without hot dust, a second option for interpreting the infrared excess is synchrotron emission from thermal electrons in the accretion flow; existing RIAF models predict a submillimeter-to-infrared bump in the SED (Quataert et al. 1999; Yuan et al. 2003). Finally, it is worth noting that 8 out of the 9 core galaxies in our sample present a compact radio core, and 4 of them (NGC 315, NGC 1052, NGC 4261, NGC 4374) belong to the category of FR I radio galaxies with kpc-scale jets. It has long been considered that the SED of FR I radio galaxies, or even of other types of LLAGNs, could be dominated by synchrotron emission from the jet (Chiaberge et al. 1999; Yuan et al. 2002; Falcke et al. 2004). One typical example of a jet-dominated LLAGN is M87 (Shi et al. 2007; Perlman et al. 2007), where emission from the bright knots of the jet can be seen in all IRAC bands and the nuclear mid-infrared emission is primarily due to non-thermal emission from the base of the jet. Yet, unlike M87, none of the core galaxies in our sample show any sign of jet emission in the IRAC bands, and our core galaxies generally have radio luminosities lower than that of M87 by a factor of 100, but comparable infrared excess emission.

![MIPS $24\mu m$ luminosity plotted against optical B-band absolute magnitude. Symbols have the same meanings as in Figure 5 and Figure 6.[]{data-label="fig08"}](f8.ps){width="7cm"}

As shown in Table 3, for the 5 sources in our sample with detectable excess emission throughout the IRAC bands, we give the spectral indices $\alpha_{3.6-8}$ following the method described in Section 3. The spectral indices of the 5 galaxies are similar, with values $\sim$ 2, implying a common origin of the infrared excess. Such an index is too steep for a jet-dominated SED, for which the index is not expected to be significantly larger than 1.0 (Markoff et al. 2003; Perlman et al. 2007), and too flat for radiation from a RIAF, for which it lies around 3 (Yuan et al. 2003 and private communication with Yuan). A combination of these two components, which could be expected in FR I radio galaxies (Wu et al. 2007), might explain the derived indices. However, it should be noted that the radio emission characteristics of these 5 core galaxies are not uniform: while 3 of them (NGC 315, NGC 1052, NGC 4261) are FR I galaxies with large-scale jets and strong radio emission, NGC 3226 and NGC 5322 only show relatively faint compact cores and no extended radio emission (Nagar et al. 2005). It is therefore questionable why galaxies with such different radio loudness would produce similar spectral energy distributions through hybrid emission from a jet and an accretion flow. On the contrary, this index value is consistent with the typical spectral index of Seyfert nuclei (Alonso-Herrero et al. 2003), supporting a thermal origin in emission from hot dust. In addition, Barth et al.
(1999) detected polarized broad $H_\alpha$ emission in, and only in, the 3 FR I galaxies mentioned above (NGC 315, NGC 1052, NGC 4261), out of 14 LLAGNs, and all three also show considerable X-ray absorption columns ($\sim 10^{22} cm^{-2}$) (NGC 1052: Guainazzi et al. 2000; NGC 4261: Sambruna et al. 2003; NGC 315: González-Martín et al. 2006), indicating the existence of an obscuring structure and at least some contribution from thermal emission. Since these three objects have relatively higher AGN luminosities, it is possible that they more closely resemble classical AGNs than the other objects in our sample. Therefore, while we are more inclined to attribute the infrared excess of the relatively higher-luminosity AGNs to dust emission, given the evidence for a torus in some of them and their spectral indices similar to those of Seyfert 2s, the infrared emission mechanism of the fainter objects is more uncertain. Our results show that the infrared excess emission decreases together with the other AGN indicators, and the current data show no sign of a change in the infrared emission mechanism. On the other hand, although LLAGNs generally have low X-ray column densities, X-ray absorption does not have a direct connection with the infrared emission. Firstly, the X-ray-inferred column density is determined by the obscuring material along our line of sight, while the infrared emission is the integrated emission from all sources surrounding the AGN. Secondly, while both dust-free and dusty gas produce X-ray absorption, the contribution of dust-free gas is the larger of the two. The important role of dust-free gas in X-ray absorption is supported by the lower column densities inferred from reddening than from X-ray absorption (Maccacaro et al. 1982; Maiolino et al. 2001). Insignificant X-ray absorption therefore does not automatically indicate the disappearance of dust emission. Assuming a Galactic dust-to-gas ratio and the “standard” model of interstellar dust, the column density is $N_H \sim 2 \times 10^{21} \tau_V cm^{-2}$ (Elitzur 2008), so a neutral hydrogen column density of a few $10^{21} cm^{-2}$ could still cause considerable absorption in the optical/UV band and thus thermal re-emission in the infrared. After all, we should acknowledge that the current data are insufficient to draw an ultimate conclusion. At the resolution of IRAC, it is difficult to discriminate further among the different possible mechanisms producing the nuclear infrared emission. A comprehensive study of the SED might provide more information on this issue.

Conclusion
==========

We performed Spitzer IRAC observations of 36 local elliptical galaxies. 9 out of the 36 galaxies display a red core structure with nuclear infrared excess emission. The infrared excess emission appears only, and universally, in galaxies with a relatively luminous central AGN, strongly supporting a relation between the two. We also confirmed the correlation between AGN activity and optically observed circumnuclear dust, and found that this correlation also holds for the extended dust in elliptical galaxies. We found a correlation, with considerable scatter, between the luminosity of the central AGN and the excess emission, which indicates unresolved non-AGN contamination of the excess emission in the low-luminosity sources. While the specific mechanism producing the infrared emission cannot be identified with the current data, a thermal origin in hot dust is supported by infrared spectral indices similar to those of Seyfert galaxies.
In order to clarify the origin of the infrared emission in LLAGNs, a further multi-wavelength study is required, which will be presented in our next work.

ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============

We are very grateful to Tom Jarrett and Feng Yuan for their valuable advice. We are also thankful to Tao Wang and Song Huang for helpful suggestions. This work is supported by the Program for New Century Excellent Talents in University (NCET), the National Natural Science Foundation of China under grants 10878010 and 10633040, and the National Basic Research Program (973 Program No. 2007CB815405). This research has made use of NASA’s Astrophysics Data System Bibliographic Services and the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under NASA contract 1407.

[99]{} Alonso-Herrero A., Quillen A. C., Rieke G. H., et al., 2003, AJ, 125, 81 Athey A., Bregman J., Bregman J. 2002, ApJ, 571, 272 Bahcall J. N., Kirhakos S., Schneider D. P. 1996, ApJ, 457, 557 Barth A. J., Filippenko A. V., Moran E. C. 1999, ApJ, 525, 673 Boisson C., Joly M., Moultaka J., et al. 2000, A&A, 357, 850 Bregman J. N., Snider B. A., Grego L. & Cox C. V. 1998, ApJ, 499, 670 Chiaberge M., Capetti A., & Celotti A., 1999, A&A, 349, 77 Del Burgo C., Carter D., & Sikkema G. 2008, A&A, 477, 105 Donley J. L., et al. 2007, ApJ, 660, 167 Dudik R. P., Satyapal S., & Marcu, D. 2009, ApJ, 691, 1501 Elitzur M. & Asensio R. A., 2006, MNRAS, 365, 779 Elitzur M. 2008, NewAR, 52, 274 Emmering R. T., Blandford R. D., Shlosman I. 1992, ApJ, 385, 460 Falcke H., Korrding E. & Markoff S., 2004, A&A, 414, 895 Falomo R., Kotilainen J. K., Scarpa R., Treves A., 2005, A&A, 434, 469 Fazio G. G., Hora J. L., Allen L. E., et al., 2004, ApJS, 154, 10 Ferrarese L. & Merritt D., 2000, ApJ, 539, 9 Flohic H. M. L. G., Eracleous M., Chartas G., et al., 2006, ApJ, 647, 140 Fomalont E. B., Frey S., Paragi Z., 2000, ApJS, 131, 95 Gallagher S. C., Johnson K. E., Hornschemeier A. E., et al. 2008, ApJ, 673, 730 Gebhardt K., Bender R., Bower G., et al. 2000, ApJ, 539, 13 Giovannini G., Cotton W. D., Feretti L., et al., 2001, ApJ, 552, 508 González Delgado R. M., et al. AJ, 135, 747 González-Martín O., et al. 2006, A&A, 460, 45 Goudfrooij P., de Jong T., Hansen L., Norgaard-Nielsen H. U. 1994, MNRAS, 271, 833 Greene J. E., Ho L. C., Ulvestad J. S., 2006, ApJ, 636, 56 Gu Q.-S., Huang J.-S., Wilson G., Fazio G. G., 2007, ApJ, 671, 105 Guainazzi M., Oosterbroek T., Antonelli L. A., Matt G. 2000, A&A, 364, 80 Heckman T. M., 1980, A&A, 87, 152 Ho L. C., 2008, ARA&A, 46, 475 Ho L. C., Filippenko A. V., Sargent W. L. W. 1995, ApJS, 98, 477 Ho L. C., Filippenko A. V., Sargent W. L. W. 1997, ApJS, 112, 315 Ho L. C. 1999, ApJ, 516, 672 Ho L. C. 2002, ApJ, 564, 120 Ho L. C., Terashima Yuichi, Ulvestad J. S. 2003a, ApJ, 589, 783 Ho L. C., Filippenko A. V., Sargent W. L. W. 2003b, ApJ, 583, 159 Huang J.-S. et al. 2004, ApJS, 154, 44 Hutchings J. B. & Morris S. C. 1995, AJ, 109, 1541 Jones D. L., Wehrle A. E., Piner B. G., & Meier D. L. 2001, ApJ, 553, 968 Kadler M., Ros E., Lobanov A. P., et al. 2004, A&A, 426, 481 Kaneda H., Onaka T., Sakon I., Kitayama T., et al. 2008, ApJ, 684, 270 Kauffmann G. & Heckman T. M. 2005, RSPTA, 363, 621 Kellermann K.
I., Vermeulen R. C., Zensus J. A. & Cohen, M. H. 1998, AJ, 115, 1295 Knapp G. R., et al. 1989, ApJS, 70, 329 Knapp G. R., Turner E. L., Cunniffe P. E. 1985, AJ, 90, 454 Knapp G R, Gunn J E, Wynn-Williams, C G. 1992, ApJ, 399, 76 Konigl A. & Kartje J. F. 1994, ApJ, 434, 446 Kormendy J., & Richstone D. 1995, ARA&A, 33, 581 Krajnovic D. & Jaffe W. 2002, A&A, 390, 423 Lauer T. R., Ajhar E. A., Byun, Y.-I., et al. 1995, AJ, 110, 2622 Lauer T. R., Faber S. M., Gebhardt K. 2005, AJ, 129, 2138 Lauer T. R., Gebhardt K., Faber S. M. et al. 2007, ApJ, 664, 226 Lira P., Johnson R. A., Lawrence A., Cid Fernandes R. 2007, MNRAS, 382, 1552 Maccacaro T., Perola G. C. & Elvis M. 1982, ApJ, 257, 47 Macchetto F., Pastoriza M., Caon N., et al. 1996, A&A S, 120, 463 Magorrian J., Tremaine S., Richstone D., et al. 1998, AJ, 115, 2285 Maiolino R., Marconi A., Salvati M., et al. 2001, A&A, 365, 28 Maoz D, Filippenko A. V., Ho L. C., et al. 1996, ApJS, 107, 215 Maoz D, Filippenko A. V., Ho L. C. 1995, ApJ, 440, 91 Maoz D. 2007. MNRAS, 377, 1696 Markoff S., Nowak M., Corbel S., Fender R., & Falcke, H. 2003, A&A, 397, 645 Nagar N. M., Falcke H., Wilson A. S., Ulvestad J. S. 2002, A&A, 392, 53 Nagar N. M., Falcke H., Wilson A. S. 2005, A&A, 435, 521 Pahre Michael A., Ashby M. L. N., Fazio G. G., Willner S. P. 2004, ApJS, 154, 229 Pellegrini S., Baldi A., Fabbiano G., Kim D.-W. 2003, ApJ, 597, 175 Perlman E. S., Mason R. E., Packham C., et al., 2007, ApJ, 633, 808 Quataert E., di Matteo T., Narayan R. & Ho L. C., ApJ, 1999, 525, 89 Ravindranath S., Ho Luis C., Peng C. Y. et al. 2001, AJ, 122, 653 Rest A., van den Bosch Frank C., Jaffe W., et al. 2001, AJ, 121, 2431 Sage L. J., Welch G. A., Young L. M. 2007, ApJ, 657, 232 Sambruna R. M., Gliozzi M., Eracleous M., et al. 2003, ApJ, 586, 37 Shi Y., Rieke G. H., Hines D. C., Gordon K. D., Egami E., 2007, ApJ, 655, 781 Shields Joseph C. 1991, AJ, 102, 1314 Sikora M., Stawarz L., & Lasota J.-P. 2007, ApJ, 658, 815 Simoes Lopes R. D., et al. 2007 ApJ, 655, 718 Temi P., Brighenti F. & Mathews W. G. 2007a, ApJ, 660, 1215 Temi P., Brighenti F. & Mathews W. G. 2007b, ApJ, 666, 222 Temi P., Brighenti F. & Mathews W. G. 2008, ApJ, 672, 244 Temi P., Brighenti F. Mathews W. G. & Bregman J. D. 2004, ApJS, 151, 237 Terashima Yuichi, Ho L. C., Ptak A. F., et al. 2000, ApJ, 533, 729 Terashima Yuichi, Iyomoto Naoko, Ho L. C., Ptak A. F. 2002, ApJS, 139, 1 Terashima Yuichi & Wilson A. S. 2003, ApJ, 583, 145 Tran H. D., Tsvetanov Z., Ford H. C. 2001, et al., AJ, 121, 2928 van Dokkum, P. G. & Franx, M. 1995, AJ, 110, 2027 Verolme E. K., Cappellari M., Copin Y. 2002, MNRAS, 335, 517 Weedman D. W. et al. 2005, ApJ, 633, 706 Weedman D. W. et al. 2006, ApJ, 653, 101 Werner M. W., Roellig T. L., Low F. J. 2004, ApJS, 154, 1 Wiklind T., Combes F., Henkel C. 1995, A& A, 297, 643 Wu Q. -W., Yuan F., Cao, X. -W. 2007, ApJ, 669, 96 Xilouris E. M., Madden S. C., Galliano F. 2004, A&A, 416, 41 Xilouris E. M. & Papadakis I. E. 2002, A&A, 387, 441 Yuan F., Markoff S., Falcke H., Biermann P. L. 2002, A&A, 391, 139 Yuan F., Quataert E. & Narayan, R. 2003, ApJ, 598, 301 Zang Z. & Meurs E. J. A. 1999, NewA, 4, 521 Zhang, Y., Gu Q.-S., Ho L. C. 2008, A&A, 487, 177 [^1]: http://dirty.as.arizona.edu/kgordon/mips/conv\_psfs/conv\_psfs.html [^2]: http://spider.ipac.caltech.edu/staff/jarrett/irac/calibration/ [^3]: IRAC Data Handbook: http://ssc.spitzer.caltech.edu/irac/dh/
--- abstract: 'Quadrature formulas for spheres, the rotation group, and other compact, homogeneous manifolds are important in a number of applications and have been the subject of recent research. The main purpose of this paper is to study coordinate independent quadrature (or cubature) formulas associated with certain classes of positive definite and conditionally positive definite kernels that are invariant under the group action of the homogeneous manifold. In particular, we show that these formulas are accurate – optimally so in many cases –, and stable under an increasing number of nodes and in the presence of noise, provided the set $X$ of quadrature nodes is quasi-uniform. The stability results are new in all cases. In addition, we may use these quadrature formulas to obtain similar formulas for manifolds diffeomorphic to ${\mathbb{S}}^n$, oblate spheroids for instance. The weights are obtained by solving a single linear system. For ${\mathbb{S}}^2$, and the restricted thin plate spline kernel $r^2\log r$, these weights can be computed for two-thirds of a million nodes, using a preconditioned iterative technique introduced by us.' author: - 'E. Fuselier[^1], T. Hangelbroek[^2], F. J. Narcowich[^3], J. D. Ward[^4], G. B Wright[^5]' bibliography: - 'rbf.bib' title: Kernel Based Quadrature on Spheres and Other Homogeneous Spaces --- Introduction {#intro} ============ Quadrature formulas for spheres, the rotation group, and other compact, homogeneous manifolds are important in many applications and have been the subject of recent research [@Mhaskar-etal-01-1; @Narcowich-etal-06-1; @graef-kunis-potts-2009; @hesse-et-al-2010; @graef-potts-2009; @Pesenson-Geller-2011]. The main purpose of this paper is to study quadrature (or cubature) formulas associated with certain classes of positive definite and conditionally positive definite kernels that are invariant under the group action of the homogeneous manifold, and to show that these formulas are accurate and stable, provided the set $X$ of quadrature nodes is quasi-uniform. The invariance of the kernels is a key ingredient in giving simple, easy to construct linear systems that determine the weights. The weights themselves have size comparable to $1/N$, where $N$ is the number nodes in $X$. For ${\mathbb{S}}^2$, and the restricted thin plate spline kernel $r^2\log r$, these weights can be computed for very large values of $N$ using the preconditioned iterative technique described in [@FHNWW2012]. The quadrature formulas developed here are for the general setting of a compact, homogeneous, $n$-dimensional manifold ${\mathbb{M}}$ that is equipped with a group invariant Riemannian metric $g_{ij}$ and its associated invariant measure $d\mu(x)=\sqrt{\det(g_{ij}(x))}dx$. From the point of view of current applications, the two most important of these manifolds are ${\mathbb{S}}^2$ and $SO(3)$. However, other homogeneous spaces, such as Stiefel and Grassmann manifolds, are arising in applications [@absil_etal_2008; @edelman_1999], and we expect that in the future our results will be applied to them. The type of quadrature formula we will be concerned with here has the form $$\label{quadrature_formula_general} \int_{\mathbb{M}}f(x)d\mu(x) \doteq \sum_{\xi \in X}c_\xi f(\xi) =:Q(f), \ f\in C({\mathbb{M}}),$$ where the set $X\subset {\mathbb{M}}$ of centers/nodes is finite. The weights $\{c_\xi\}_{\xi\in X}$ are chosen so that quadrature operator $Q$ integrates exactly a given finite dimensional space of continuous functions, $V$. 
In the case of ${\mathbb{S}}^n$ and $SO(3)$, popular choices for $V$ are spaces of spherical harmonics [@Mhaskar-etal-01-1; @Narcowich-etal-06-1; @graef-kunis-potts-2009; @hesse-et-al-2010], and Wigner D-functions [@graef-potts-2009]. Work also has been done on compact two-point homogeneous manifolds [@Brown-Dai-05-1] and on general compact homogeneous manifolds [@Pesenson-Geller-2011], with $V$ being chosen to be a set of eigenfunctions of a manifold’s Laplace-Beltrami operator. We will use the term *polynomial quadrature* for methods that integrate such spaces exactly; this is because, for spheres and a few other spaces, $V$ consists of restrictions of harmonic polynomials. The quadrature methods developed here are *kernel* methods; they use a space $V$ consisting of linear combinations of kernels. In [@sommariva_womer2005], Sommariva and Womersley used spaces of rotationally invariant radial basis functions (RBFs) and spherical basis functions (SBFs) to derive a linear system of equations for the weights $c_\xi$. However, neither accuracy, control over the size of the weights, nor stability was addressed in [@sommariva_womer2005]. In this paper these and other issues are dealt with employing recent results developed by us in [@HNW2010; @HNW_3_2011; @FHNWW2012]. Accuracy is measured in terms of the mesh norm $h$, which is defined in section \[kernel\_interp\_quad\]. All the kernels we deal with are associated with a Sobolev space $W_2^m$, for some $m$ in ${\mathbb{N}}$. Previous error estimates for quadrature with positive definite kernels on ${\mathbb{S}}^2$ were given in [@hesse-et-al-2010]. For the general case of a homogeneous manifold, the error we obtained is ${\mathcal{O}}(h^m)$, for functions in $W_2^m$. However, on ${\mathbb{S}}^n$ or $SO(3)$, we get even better rates, in fact optimal: if a function is in $C^{2m}$, then the order is ${\mathcal{O}}(h^{2m})$. For example, the thin-plate spline kernel restricted to ${\mathbb{S}}^2$ has $m=2$, so, for a function in $C^4({\mathbb{S}}^2)$, the error would be ${\mathcal{O}}(h^4)$. In the course of studying accuracy for $Q$ in various spaces, we obtain new error estimates for interpolation/reproduction problems on two-point homogeneous manifolds. First, for a general such manifold, the estimates hold for $f\in W_2^m({\mathbb{M}})$, which is the “native space” of the kernel. Second, for spheres (new when reproduction is required) and real projective spaces, the estimates apply to $f\in W_2^\mu$, with $n/2 <\mu \le m$. Thus the estimates allow an “escape” from the native space, in the sense that they are for functions *not* smooth enough to be in that space. These quadrature formulas are stable both under an increase in the number of points and in the presence of noise. If the number of points is increased, then the norm of the quadrature operator remains uniformly bounded, as long as the level of quasi-uniformity is maintained. Thus there is no oscillatory “Runge phenomenon." To examine the effect of noise, we assume the measured function values differ from the actual ones by independent, identically distributed, zero mean random variables. Under these conditions, the standard deviation of the quadrature formula decreases as ${\mathcal{O}}(N^{-1/2})$. To illustrate how the method works, consider a positive definite SBF kernel $\phi(x\cdot y)$, $x,y\in {\mathbb{S}}^n$, and let $V = {\mathop{\mathrm{span}}}\{\phi(x\cdot \xi)\}_{\xi\in X}$. 
We want to integrate $s \in V$ exactly; that is, we require $\sum_{\xi\in X}c_\xi s(\xi) = \int_{{\mathbb{S}}^n} s(x)d\mu(x)$. Doing so results in a system of linear equations for the weights. If $A$ is the interpolation matrix with entries $A_{\xi,\eta}=\phi(\xi\cdot \eta)$, $\xi,\eta\in X$, then the vector of weights $c$ satisfies $(Ac)_\xi=\int_{{\mathbb{S}}^n} \phi(\xi \cdot x)d\mu(x)$. At this point, we encounter an apparent difficulty. We have to compute *every* one of the integrals $\int_{{\mathbb{S}}^n} \phi(\xi \cdot x)d\mu(x)$ for *every* $\xi\in X$. The rotational invariance of the SBF allows us to overcome this difficulty. Because of rotational invariance, all of the integrals are independent of $\xi$, and thus have the same value $J_0$; the system then becomes $(Ac)_\xi=J_0$. The constant $J_0$ only needs to be computed *once* for a given kernel. (For many SBFs/RBFs, values of $J_0$ are known; see [@sommariva_womer2005].) The same is true for group invariant kernels on ${\mathbb{M}}$, as we will see in section \[quadrature\] below. The point to be emphasized here is that group invariance of the kernels allows us to deal with quadrature as an interpolation problem. Without it, the problem requires computing many integrals and in fact becomes prohibitively expensive, computationally.

Numerical tests of the quadrature formulas were carried out for ${\mathbb{M}}={\mathbb{S}}^2$, in connection with the SBF kernel $\Phi(x\cdot y)=(1-x\cdot y)\log(1-x\cdot y)$, $x,y\in {\mathbb{S}}^2$. This kernel is the thin-plate spline $r^2\log r$ restricted to the sphere, and it corresponds to one of the Sobolev spaces mentioned above, namely, $W_2^2({\mathbb{S}}^2)$. For these tests, the sets of quasi-uniform nodes were generated via three different, commonly used methods: icosahedral, Fibonacci (or *phyllotaxis*), and quasi-minimum energy. The number of nodes employed varied over a substantial range, from a few thousand to two-thirds of a million. Weights corresponding to these nodes were computed using a pre-conditioning method developed in [@FHNWW2012]. The tests themselves focused on the accuracy and stability of the method, both in terms of increasing the number of nodes and adding noise. The tests, which are discussed in section \[numerics\], gave excellent results, in agreement with the theory.

There are situations where the manifold ${\mathbb{M}}$ involved is *not* a homogeneous space, but quadrature formulas can still be obtained. If ${\mathbb{M}}$ is diffeomorphic to a homogeneous space, then it is possible to obtain quadrature weights for ${\mathbb{M}}$ from the ones for the corresponding homogeneous space. In section \[manif\_diffeo\], we will show how this can be done for ${\mathbb{S}}^n$. We will then apply this to the specific case where ${\mathbb{M}}$ is an oblate spheroid (e.g., earth with flattening accounted for), which is of course diffeomorphic to ${\mathbb{S}}^2$.

The paper is organized as follows. Section \[kernel\_interp\_quad\] begins with a brief discussion of positive definite/conditionally positive definite kernels, notation, and, in section \[interpolation\], interpolation. Section \[quadrature\] contains a derivation and discussion of kernel quadrature formulas, with special emphasis on the role played by group invariance of the kernel employed in the formula. In section \[accuracy\_stability\], the questions of accuracy and stability mentioned in the introduction are taken up.
The results obtained there are aimed at invariant kernels, such as the Sobolev and polyharmonic kernels discussed in sections \[sobolev\_kernels\] and \[polyharmonic\_kernels\]. Sobolev kernels on a compact manifold are positive definite reproducing kernels for the Sobolev space $W_2^m$, $m>n/2$. In section \[sobolev\_kernels\], we study these in terms of their invariance, interpolation errors, and properties of their Lagrange functions and Lebesgue constants. Finally, in section \[kappa\_m\_quadrature\] we look at their use in quadrature formulas. Section \[polyharmonic\_kernels\] is devoted to a very important class of kernels on a compact, two-point homogeneous manifold: the polyharmonic kernels. These kernels, which may be either positive definite or conditionally positive definite, are Green’s functions for operators that are polynomials in the Laplace-Beltrami operator. On spheres, they include restricted surface splines, and on $SO(3)$ similar kernels. All of these are given in terms of simple, explicit formulas. The whole section is a self-contained discussion of these kernels, culminating in their application to quadrature formulas. The results from various numerical tests that we conducted are discussed in detail in section \[numerics\]. Finally, in section \[manif\_diffeo\] we discuss ways of using quadrature weights for a compact, homogeneous manifold ${\mathbb{M}}$ to obtain invariant, coordinate independent weights for manifolds diffeomorphic to ${\mathbb{M}}$.

Interpolation and Quadrature via Kernels {#kernel_interp_quad}
========================================

Ultimately, the spaces that we will work with will be homogeneous manifolds. However, for interpolation we need very little in the way of structure. In fact, we could take our underlying space ${\mathbb{M}}$ to be a metric space. The set $X\subset {\mathbb{M}}$ will be assumed finite. Its *mesh norm* (or *fill distance*) $h:=\sup_{x\in {\mathbb{M}}} {\mathrm{dist}}(x,X)$ measures the density of $X$ in ${\mathbb{M}}$, while the *separation radius* $q:=\frac12 \inf_{\substack{\xi,\zeta\in X\\ \xi\ne \zeta}} {\mathrm{dist}}(\xi,\zeta)$ determines the spacing of $X$. The *mesh ratio* $\rho:=h/q$ measures the uniformity of the distribution of $X$ in ${\mathbb{M}}$. We say that a continuous kernel $\kappa: {\mathbb{M}}\times{\mathbb{M}}\to {\mathbb{R}}$ is (strictly) *positive definite* on ${\mathbb{M}}$ if, for every finite subset $X\subset {\mathbb{M}}$, the matrix $A$ with entries $A_{\xi,\eta}:=\kappa(\xi,\eta)$, $\xi,\eta\in X$, is positive definite. *Conditionally (strictly) positive definite* kernels are defined with respect to a finite dimensional space $\Pi:={\mathop{\mathrm{span}}}\{\psi_k:{\mathbb{M}}\to {\mathbb{R}}\}_{k=1}^m$, where the $\psi_k$’s are linearly independent, continuous functions on ${\mathbb{M}}$. In addition, given a finite set of centers $X\subset {\mathbb{M}}$, where we let $N:=\# X$ be the cardinality of $X$, we say that $\Pi$ is *unisolvent* on $X$ if the only function $\psi\in \Pi$ for which $\psi|_X=0$ is $\psi\equiv 0$. This means that $\{\psi_k|_X\}_{k=1}^m$ is a linearly independent set in ${\mathbb{R}}^N$.
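Since the error and stability estimates below are stated in terms of $h$, $q$ and $\rho$, it may help to see how these quantities can be estimated in practice. The following is a minimal sketch for a node set on ${\mathbb{S}}^2$; the dense random evaluation grid used to approximate the supremum in the definition of $h$ is purely an illustrative device, and the function name is ours.

```python
import numpy as np

def mesh_quantities(X, n_eval=20000, seed=0):
    """Estimate the mesh norm h, separation radius q and mesh ratio rho = h/q
    for a node set X (rows = unit vectors) on the sphere S^2.

    The supremum over the manifold in the definition of h is approximated by
    maximizing over a dense random evaluation set, which is only a heuristic."""
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((n_eval, 3))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)    # points roughly uniform on S^2
    # geodesic distance dist(x, y) = arccos(x . y)
    d_YX = np.arccos(np.clip(Y @ X.T, -1.0, 1.0))    # (n_eval, N)
    h = d_YX.min(axis=1).max()                       # approximates sup_y dist(y, X)
    d_XX = np.arccos(np.clip(X @ X.T, -1.0, 1.0))
    np.fill_diagonal(d_XX, np.inf)
    q = 0.5 * d_XX.min()                             # half the minimal node spacing
    return h, q, h / q
```

With these geometric quantities in hand, we return to conditionally positive definite kernels.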
Given that $\Pi$ is unisolvent on $X$, we say that the kernel $\kappa$ is *conditionally* positive definite if for every nonzero set $\{a_\xi\in {\mathbb{R}}\}_{\xi\in X}$ such that $\sum_\xi a_\xi \psi_k(\xi)=0$, $k=1,\ldots,m$, one has $$\label{cpd_kernel} \sum_{\xi,\eta\in X}a_\xi a_\eta \kappa(\xi,\eta)>0.$$ Interpolation ------------- Positive definite and conditionally positive definite kernels can be used to interpolate a continuous function $f:{\mathbb{M}}\to {\mathbb{R}}$, given the data $f|_X$, by means of a function of the form $$\label{interpolant} s = \sum_{\xi\in X} a_\xi \kappa(\cdot,\xi) + \sum_{k=1}^m b_k\psi_k,\ \text{where }\sum_{\xi\in X} a_\xi \psi_k(\xi)=0,\ k=1,\ldots,m.$$ We will denote the space of such functions by $V_X$. In the case where the kernels are RBFs or SBFs, the space $\Pi$ is usually taken to be either the polynomials or spherical harmonics with degree less than some fixed number. We now turn to the interpolation problem. Let $\Psi_k=\psi_k|_X$, $k=1,\ldots,m$, and define the $N\times m$ matrix $\Psi=[\Psi_1\ \Psi_2 \cdots \Psi_m]$. In addition, let $a=(a_\xi)_{\xi\in X}$ and $b=(b_1 \ \cdots b_m)^T$. The constraint condition that $\sum_\xi a_\xi \psi_k(\xi)=0$ can now be stated as $\Psi^Ta=0$. Requiring that $s$ interpolate a function $f\in C({\mathbb{M}})$ on $X$ is then $f|_X=s|_X = Aa + \Psi b$, where $A_{\xi,\eta}=\kappa(\xi,\eta)$. Written in matrix form, the interpolation equations are $$\label{interp_matrix_form} \underbrace{\left(\begin{array}{cc} A&\Psi \\ \Psi^T&0_{m\times m} \end{array} \right) }_{\mathbf A} \left(\begin{array}{c} a\\ b \end{array}\right) = \left(\begin{array}{c} f|_X\\ 0_{m\times 1} \end{array}\right).$$ Using the constraint condition $\Psi^Ta=0$ and the positivity condition (\[cpd\_kernel\]), one can easily show that the matrix $\mathbf A$ on the left above is invertible. In addition, the interpolation process reproduces $\Pi$; that is, if $f\in \Pi$, then $s=f$. Finally, much is known about how well $s$ fits $f$. In many cases, the approximation is excellent (cf. [@Wendland_book] and references therein). In the sequel, we will need the *Lagrange function* centered at $\xi\in X$, $\chi_\xi\in V_X$. We define $\chi_\xi$ to be the unique function in $V_X$ that satisfies $\chi_\xi(\eta) = \delta_{\xi,\eta}$; that is, $\chi_\xi$ is $1$ when $x=\xi$ and $0$ when $x=\eta\in X$, $\eta\ne\xi$. Quadrature ---------- We will now develop our quadrature formula for a $C^\infty$, $n$-dimensional Riemannian manifold ${\mathbb{M}}$ that is a *homogeneous* space for a Lie group $\calg$ [@Warner-71-1]. This just means that $\calg$ acts *transitively* on ${\mathbb{M}}$: for two points $x,y\in{\mathbb{M}}$ there is a $\gamma\in \calg$ such that $y=\gamma x$. Equivalently, ${\mathbb{M}}$ is a left coset of $\calg$ for a closed subgroup. ${\mathbb{S}}^2$ is a homogeneous space for $SO(3)$. In fact, the Lie group $\calg$ is a homogeneous space for itself. Thus, $SO(3)$ is its own homogeneous space. Homogeneous spaces also include Stiefel manifolds, Grassmann manifolds an many others. All spheres and projective spaces belong to a special class known as *two-point* homogeneous spaces. Such spaces are characterized by the property that if two pairs of points $x,y$ and $x',y'$ satisfy ${\mathrm{dist}}(x,y) = {\mathrm{dist}}(x',y')$, then there is a group element $\gamma\in \calg$ such that $x'=\gamma x$ and $y'=\gamma y$. 
While spheres and projective spaces are compact, the class also contains noncompact spaces: ${\mathbb{R}}^n$ and certain hyperbolic spaces belong to it as well.

*Invariance under $\calg$* plays an important role here. We will assume throughout that the kernel $\kappa$ is invariant [@vilenkin1968 §I.3.4] under $\calg$. Moreover, we will take $d\mu$ to be the $\calg$-invariant measure associated with the Riemannian metric tensor for ${\mathbb{M}}$ [@vilenkin1968 §I.2.3]. Thus, for all $x,y\in {\mathbb{M}}$, $\gamma \in \calg$, and $f\in L_1({\mathbb{M}})$ we have $$\label{invariance_ker_def} \kappa(\gamma x, \gamma y) = \kappa(x,y) \quad \text{and}\quad \int_{\mathbb{M}}f(x)d\mu(x) = \int_{\mathbb{M}}f(\gamma x)d\mu(x).$$ In particular, all SBF kernels on ${\mathbb{S}}^n$, which have the form $\phi(x\cdot y)$, are invariant, as is the standard measure on ${\mathbb{S}}^n$. The following lemma is a consequence of $\kappa$ and $d\mu$ being invariant.

\[invariance\_integral\] The integral $J(y):=\int_{\mathbb{M}}\kappa(x,y)d\mu(x)$ is independent of $y$.

Take $z\in {\mathbb{M}}$ to be fixed. Because of the group action on ${\mathbb{M}}$, we may find $\gamma\in \calg$ such that $z=\gamma y$. From the invariance of $\kappa$ under $\gamma$, we have $\kappa(x,y)=\kappa(\gamma x,z)$. Hence, $J(y)=\int_{\mathbb{M}}\kappa(\gamma x,z)d\mu(x)$. However, the integral being invariant under $\calg$ then yields $$J(y)=\int_{\mathbb{M}}\kappa(\gamma x,z)d\mu(x)=\int_{\mathbb{M}}\kappa(x,z)d\mu(x)=J(z),$$ which completes the proof.

Since $J(y)$ is independent of $y$, we may drop $y$ and denote it by $J_0$, which we will do throughout the sequel. Note that $J_0$ may be $0$. In addition, we define the following quantities: $$\left\{\begin{array}{rcl} J_k&:=&\int_{\mathbb{M}}\psi_k(x)d\mu(x), \ k=1,\ldots, m; \\ [7pt] J&:=& (J_1\cdots J_m)^T;\\ \mathbf 1 &:=&1|_X. \end{array}\right.$$ We point out that, for the constant function $1$ on ${\mathbb{M}}$, $\mathbf 1 =1|_X$ is the column vector in ${\mathbb{R}}^N$ with all entries equal to 1.

The result below gives a formula for the integral of a function $s\in V_X$ and provides a system of equations that determines the quadrature weights. It follows the one given in [@sommariva_womer2005], with rotational invariance replaced by the invariance under $\calg$ proved in Lemma \[invariance\_integral\].
\[integral\_V\] Let $\kappa$ and $d\mu$ satisfy (\[invariance\_ker\_def\]) and suppose that $c$ and $d$ are the $N\times 1$ and $m\times 1$ column vectors that uniquely solve the $(N+m)\times(N+m)$ system of equations $$\label{weight_eqns} Ac+\Psi d=J_0{\mathbf 1} \ \text{and}\ \Psi^Tc=J.$$ If $s\in V_X$, then $$\label{integra_formula_V} \int_{\mathbb{M}}s(x)d\mu(x) = c^T s|_X=\sum_{\xi\in X}c_\xi s(\xi)$$ In addition, if $\chi_\xi \in V_X$ is the Lagrange function centered at $\xi$, then we also have $$\label{weight_bound} c_\xi=\int_{\mathbb{M}}\chi_\xi(x)d\mu(x) \ \text{and}\ |c_\xi| \le \|\chi_\xi\|_{L_1({\mathbb{M}})}$$ Integrating $s(x)$ from (\[interpolant\]) results in this chain of equations: $$\begin{aligned} \int_{\mathbb{M}}s(x)d\mu(x) &=& \sum_{\xi\in X}a_\xi \underbrace{\int_{\mathbb{M}}\kappa(x,\xi)d\mu(x)}_{J_0} + \sum_{k=1}^m b_k \underbrace{\int_{\mathbb{M}}\psi_k(x)d\mu(x)}_{J_k} \label{integral_s}\\ &=& J_0{\mathbf 1}^Ta+J^Tb \nonumber \\ &=& (J_0{\mathbf 1}^T\ J^T)\left(\begin{array}{cc} a\\ b \nonumber \end{array}\right).\end{aligned}$$ Using (\[interp\_matrix\_form\]), with $f|_X$ replaced by $s|_X$, together with the invertibility and self adjointness of $\mathbf A$, we obtain $$(J_0{\mathbf 1}^T\ J^T)\left(\begin{array}{cc} a\\ b \end{array}\right) = \bigg\{\underbrace{{\mathbf A}^{-1}\left(\begin{array}{cc} J_0{\mathbf 1} \\ J \end{array}\right)}_{\left(\begin{array}{cc} c\\ d \end{array}\right)}\bigg\}^T \left(\begin{array}{c} s|_X\\ 0_{m\times 1} \end{array}\right)=c^Ts|_X.$$ Multiplying $\left(\begin{array}{cc} c\\ d \end{array}\right)$ by $\mathbf A$ and writing out the equations for $c$, $d$ yields the system (\[weight\_eqns\]). Combining the two previous equations then yields (\[integra\_formula\_V\]). Moreover, from (\[integra\_formula\_V\]), with $s$ replaced by $\chi_\xi$ and the values of $s|_X$ replaced by those of $\chi_\xi$ on the set $X$, we obtain the formula for $c_\xi$ in (\[weight\_bound\]). Finally, the bound on the right in (\[weight\_bound\]) follows immediately from the integral formula for $c_\xi$. As we mentioned earlier, the quadrature formula for $f\in C({\mathbb{M}})$ is obtained by replacing $f$ with its interpolant in $V_X$. To that end, we define the linear functional $Q_{V_X}(f)$ that will play the role of our quadrature operator. \[quadrature\_def\] Let $f\in C({\mathbb{M}})$ and let $s_f\in V_X$ be the unique interpolant for $f$, so that $s_f|_X=f|_X$. Then, we define the linear functional $Q_{V_X}:C({\mathbb{M}})\to {\mathbb{R}}$ via $$\label{quadrature_functional} Q_{V_X}(f):=\int_{\mathbb{M}}s_f(x)d\mu(x)=\sum_{\xi\in X} c_\xi f(\xi),$$ where the $c_\xi$’s, the weights, are given in Proposition \[integral\_V\]. The invariance assumption on the kernel $\kappa$ produces a system with the same attractive feature as one for an SBF $\phi(x\cdot y)$. The integral $J_0$ depends only on $\kappa$ and ${\mathbb{M}}$; it is entirely independent of $X$. This is also true of the other $J_k$’s. As a result the equations (\[weight\_eqns\]) defining the weight vector $c$ only depend on $X$ through function evaluations. The integrals $J_0, J_1, \ldots,J_m$ are all known in advance and are *independent* of $X$. It is important to note that, *without* the invariance of $\kappa$, obtaining the system for $c$ would require computing integrals of the form $\int_{\mathbb{M}}\kappa(x,\xi)d\mu(x)$ for each $\xi\in X$. This follows just by looking at equation (\[integral\_s\]) in the derivation of (\[weight\_eqns\]). 
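To make the proposition concrete, the following sketch assembles and solves the system (\[weight\_eqns\]) by dense linear algebra. It is only an illustration: the kernel, the basis functions spanning $\Pi$, and the node array are supplied by the user, and the integrals $J_0$ and $J$ are assumed to be known in advance, as they are for the invariant kernels considered later. For large $N$ one would instead use the preconditioned iterative solver of [@FHNWW2012].

```python
import numpy as np

def quadrature_weights(kernel, psi, X, J0, J=()):
    """Solve (weight_eqns) for the quadrature weights c.

    kernel(x, y) : invariant (conditionally) positive definite kernel on M
    psi          : list of the m functions spanning Pi (may be empty)
    X            : (N, d) array of node coordinates
    J0, J        : precomputed integrals of the kernel and of the psi_k
    """
    N, m = len(X), len(psi)
    A = np.array([[kernel(x, y) for y in X] for x in X])       # N x N kernel matrix
    if m == 0:                                                 # strictly positive definite case
        return np.linalg.solve(A, np.full(N, J0))
    Psi = np.column_stack([[p(x) for x in X] for p in psi])    # N x m
    # (N+m) x (N+m) saddle-point matrix of the proposition
    M = np.block([[A, Psi], [Psi.T, np.zeros((m, m))]])
    rhs = np.concatenate([np.full(N, J0), np.asarray(J, dtype=float)])
    c_and_d = np.linalg.solve(M, rhs)
    return c_and_d[:N]                                         # discard the auxiliary vector d
```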
In the non-invariant case, by contrast, each integral would have to be computed for every $\xi\in X$, and the whole set would have to be recomputed whenever $X$ was changed, making the computation of $\,c\,$ numerically very expensive.

### Weights {#weights_properties}

In many cases, the weights appearing in the quadrature formula above may be interpreted as coming from simple interpolation problems. In particular, if $\kappa$ is strictly positive definite and $\Pi = \{0\}$, then the system (\[weight\_eqns\]) becomes $Ac = J_0{\mathbf 1}$. This is the set of equations for interpolating the constant function $f(x) \equiv J_0$. To obtain the interpolation problem for quadrature with $\kappa$ being merely conditionally positive definite with respect to $\Pi$, start with $\Psi^Tc=J$. This equation completely determines the orthogonal projection of $c$, $c_\parallel := Pc$, onto the range of $\Psi$: the standard normal equations give $c_\parallel = Pc=\Psi (\Psi^T\Psi)^{-1}\Psi^T c = \Psi (\Psi^T\Psi)^{-1}J$. Thus, $c_\parallel$ is known. Next, let $c_\perp :=P^\perp c$, which obviously satisfies $Pc_\perp=0$, so $\Psi^Tc_\perp=0$, and, consequently, the following system: $$\label{eq:weight_eqns_perp} Ac_\perp +\Psi d=J_0{\mathbf 1}-A\Psi (\Psi^T\Psi)^{-1}J \ \text{and}\ \Psi^Tc_\perp=0.$$ This is the interpolation problem (\[interp\_matrix\_form\]), with $f_X= J_0{\mathbf 1}-A\Psi (\Psi^T\Psi)^{-1}J$. The final weights are then $$\label{eq:weights} c=c_\perp+\underbrace{\Psi (\Psi^T\Psi)^{-1}J}_{\displaystyle{c_\parallel}}.$$ Note that (\[eq:weight\_eqns\_perp\]) may be solved for $c_\perp$ without also having to solve for $d$, as follows. Start by eliminating $d$ from (\[eq:weight\_eqns\_perp\]): multiply both sides of (\[eq:weight\_eqns\_perp\]) by $P^\perp$. Since $P^\perp c_\perp = c_\perp$ and $P^\perp \Psi=0$, we get $$\label{remove_d} P^\perp A P^\perp c_\perp =J_0P^\perp {\mathbf 1}-P^\perp A\Psi (\Psi^T\Psi)^{-1} J.$$ Because $\kappa$ is conditionally positive definite relative to $\Pi$, $P^\perp A P^\perp$ is positive definite on the orthogonal complement of the range of $\Psi$. Restricted to that space, it is invertible, and applying this inverse gives $c_\perp$.

Often, the space $\Pi$ contains the constant function; that is, $1 \in \Pi$. When this happens, the term with $J_0$ drops out of (\[remove\_d\]), which then becomes $$P^\perp A P^\perp c_\perp = -P^\perp Ac_\parallel = - P^\perp A\Psi(\Psi^T\Psi)^{-1} J.$$ This equation does not involve $J_0$; consequently, $c_\perp$ can be determined *independently* of $J_0$. There is another consequence of having $1 \in \Pi$. First, the column vector ${\mathbf{1}}$ is in the range of $\Psi$, so $P{\mathbf{1}}={\mathbf{1}}$. Let $\langle c_\perp\rangle$ be the average of $c_\perp$. Since $Pc_\perp =0$, we have $N\langle c_\perp\rangle= {\mathbf{1}}^Tc_\perp = (P{\mathbf{1}})^Tc_\perp= 0$. Second, since $1\in \Pi$, we also have that the quadrature formula is exact for it, and so $Q(1) = {\mathrm{vol}}({\mathbb{M}})= {\mathbf{1}}^Tc = N\langle c\rangle$. Using $c=c_\perp+c_\parallel$, along with a little algebra, yields $$\label{weight_averages} \langle c_\perp\rangle = 0, \ \langle c\rangle = \langle c_\parallel \rangle = \frac{{\mathrm{vol}}({\mathbb{M}})}{N}, \ \text{provided }1\in \Pi.$$

The weights in quadrature formulas are usually nonnegative. This is true for the polynomial quadrature formulas developed for spheres [@Mhaskar-etal-01-1] and for two-point homogeneous manifolds [@Brown-Dai-05-1].
Numerical experiments by Sommariva and Womersley [@sommariva_womer2005] produced negative weights for kernels formed by restricting Gaussians to ${\mathbb{S}}^2$. On the other hand, their experiments involving the thin-plate splines (cf. section \[polyharmonic\_kernels\]) restricted to ${\mathbb{S}}^2$ resulted in only positive weights. Determining for what kernels and with what restrictions on $X$ all of the weights are positive is an open problem. Accuracy and Stability of $Q_{V_X}$ {#accuracy_stability} ----------------------------------- There are two important questions that arise concerning the quadrature method that we have been discussing. First, how accurate is it? Second, how stable is it? The *accuracy* of $Q_{V_X}$ depends on how well the underlying space of functions, $V_X$ in our case, reproduces functions from the class to be integrated. Specifically, we have the following standard estimate: $$\label{quad_op_accuracy} \big|Q_{V_X}(f) - \int_{\mathbb{M}}f(x)d\mu(x)\big| \le \| s_f - f\|_{L_1({\mathbb{M}})} \le (\text{vol}({\mathbb{M}}))^{1/2} \| s_f - f\|_{L_2({\mathbb{M}})}.$$ This inequality converts estimates of the accuracy of $Q_{V_X}(f)$ to error bounds for kernel interpolation. These are known in many cases of importance. *Stability* is related to how well the quadrature formula performs under the presence of noise. Lack of stability can amplify the effect that noise in $f|_X$ will have on the value of $Q_{V_X}(f)$. Stability also relates to performance as the number of data sites increases. Standard one-dimensional equally spaced quadrature formulas that reproduce polynomials can be quite unstable, due to the well-known Runge phenomenon. A measure of stability is the $C({\mathbb{M}})$ norm of the quadrature operator. From the definition of $Q_{V_X}$ in (\[quadrature\_functional\]) and equation (\[weight\_bound\]), we have that $$\label{quad_op_norm} \| Q_{V_X}\|_{C({\mathbb{M}})} = \sum_{\xi\in X} |c_\xi| \le \sum_{\xi\in X} \|\chi_\xi\|_{L_1({\mathbb{M}})}.$$ When ${\mathbb{M}}$ is a compact manifold, this bound can be given in terms of the Lebesgue constant, $\Lambda_{V_X} = \max_{x\in {\mathbb{M}}} \sum_{\xi\in X}|\chi_\xi(x)|$. The reason is that $$\label{quad_op_norm_lebesgue} \| Q_{V_X}\|_{C({\mathbb{M}})} \le \sum_{\xi\in X} \|\chi_\xi\|_{L_1({\mathbb{M}})}\le \sum_{\xi\in X}\int_{\mathbb{M}}|\chi_\xi(x)|d\mu(x)\le \text{vol}({\mathbb{M}}) \Lambda_{V_X}.$$ Let us briefly see how noise affects $Q_{V_X}(f)$. Suppose that at each of the sites $\xi$ we measure $f(\xi)+\nu_\xi$, where $\nu_\xi$ is a zero mean random variable. Further, we suppose that the $\nu_\xi$’s are independent and identically distributed, with variance $\sigma_\nu^2$. Our quadrature formula gives us $Q_{V_X}(f+\nu)$ rather than $Q_{V_X}(f)$. Because the $\nu_\xi$’s have zero mean, we have that the mean $E\{Q_{V_X}(f+\nu)\} = Q_{V_X}(f)$. The variance of $Q_{V_X}(f+\nu)$ is thus $$\label{expectation_variance_Q} \sigma_{Q}^2 = E\{\big(Q_{V_X}(f+\nu)-Q_{V_X}(f)\big)^2\} = E\{Q_{V_X}(\nu)^2\}.$$ We can both calculate and estimate $\sigma_Q$: \[Q\_standard\_dev\_prop\] Let $\nu_\xi$ be independent, identically distributed, zero mean random variables having standard deviation $\sigma_\nu$. 
Then the standard deviation $\sigma_Q$ satisfies $$\label{Q_standard_dev} \sigma_Q^2=\sigma_\nu^2\|c\|_2^2\le \sigma_\nu^2 \|c\|_1\|c\|_\infty \le \text{\rm vol}({\mathbb{M}}) \sigma_\nu^2\,\Lambda_{V_X} \max_{\xi\in X} \|\chi_\xi\|_{L_1({\mathbb{M}})}$$ We begin by evaluating the term on the right in (\[expectation\_variance\_Q\]). to do this, we need to compute $E\{\nu_\xi\nu_\eta\}$. Since the $\nu_\xi$’s are i.i.d., we have $E\{\nu_\xi\nu_\eta\}=\sigma_\nu^2\delta_{\xi,\eta}$. It follows that $\sigma_{Q}^2=E\{Q_{V_X}(\nu)^2\}= \sigma_\nu^2\|c\|_2^2$. Moreover, $\|c\|_2^2\le \|c\|_\infty \|c\|_1$. Combining this with (\[weight\_bound\]) and the previous equation results in (\[Q\_standard\_dev\]). Sobolev Kernels {#sobolev_kernels} =============== There are two classes of kernels that we will discuss here. The first class was introduced in [@HNW2010 § 3.3], in the context of a compact $n$-dimensional Riemannian manifold[^6] ${\mathbb{M}}$ equipped with a metric $g$. This class comprises positive definite reproducing kernels for the Sobolev spaces $W^m_2({\mathbb{M}})$, as defined in [@aubin1982; @hebey1996]. For ${\mathbb{M}}$ a homogeneous space with $g$ being the invariant metric that ${\mathbb{M}}$ inherits from a Lie group $\calg$, we will show below that these kernels are invariant under the action of $\calg$. The second class, which was introduced and studied in [@HNW_3_2011], comprises polyharmonic kernels on two-point manifolds – spheres, projective spaces, which include $SO(3)$, along with a few others. These kernels are conditionally positive definite with respect to finite dimensional subspaces of eigenfunctions of the Laplace-Beltrami operator and they are invariant under appropriate transformations. We will discuss these in section \[polyharmonic\_kernels\] below. The Sobolev space $W^m_2({\mathbb{M}})$, $m\in {\mathbb{N}}$, is defined as follows. Let $\langle \cdot,\cdot\rangle_{g,x}$ be the inner product for a Riemannian metric $g$ defined on $T{\mathbb{M}}_x$, the tangent space at $x\in {\mathbb{M}}$. This inner product can also be applied to spaces of tensors at $x$. We denote by $\nabla^k$ the $k^{th}$ order covariant derivative associated with the metric $g$, and let $d\mu$ be the measure associated with $g$. For $W^m_2({\mathbb{M}})$, define the inner product $$\label{def_sn} \langle f, h\rangle_{m,{\mathbb{M}}}:=\langle f,h\rangle_{W_2^m({\mathbb{M}})}:= \sum_{k=0}^m \int_{{\mathbb{M}}} \big\langle \nabla^k f, \nabla^k h \big\rangle_{g,x} \, {\mathrm{d}}\mu(x),$$ and norm $\|f\|_{m,{\mathbb{M}}}^2 := \langle f,f\rangle_{m,{\mathbb{M}}}$, where $f,h:{\mathbb{M}}\to {\mathbb{R}}$ are assumed smooth enough for their $W^m_2$ norms to be finite. The advantage of this definition is that it yields Sobolev spaces that are coordinate independent and can also be defined on measurable regions $\Omega\subseteq {\mathbb{M}}$. Using the Sobolev embedding theorem for manifolds [@aubin1982 §2.7], one can show that if $m>n/2$ these spaces are reproducing kernel Hilbert spaces, with $\kappa_m$ being the unique, strictly positive definite reproducing kernel for $W^m_2({\mathbb{M}})$; that is, $$f(x) = \langle f(\cdot), \kappa_m(x,\cdot)\rangle_{m,{\mathbb{M}}}$$ In the remainder of this section we will discuss invariance, interpolation error estimates, Lagrange functions, Lebesgue constants, and quadrature formulas derived from $\kappa_m$. 
Invariance of $\kappa_m$ {#invariance_kappa_m} ------------------------ We now turn to a discussion of the invariance of $\kappa_m$ under the action of a diffeomorphism that is also an isometry. We will then apply this to the case of a homogeneous space. Here is what we will need. \[homogeneous\] Let ${\mathbb{M}}$ be a compact Riemannian manifold of dimension $n$ with metric $g$. If $\Phi:{\mathbb{M}}\to {\mathbb{M}}$ is a diffeomorphism that is also an isometry, then the kernel $\kappa_m$ satisfies $\kappa_m(\Phi(x),\Phi(y))=\kappa_m(x,y)$ and $d\mu(\Phi(x))=d\mu(x)$. The proof proceeds in two steps. The first is showing this. Let $f:{\mathbb{M}}\to {\mathbb{R}}$ and let $f^\Phi=f\circ \Phi$ be the pullback of $f$ by $\Phi$. Then, $$\label{pullback_invariance} \langle \nabla^k f^\Phi, \nabla^k h^\Phi \big\rangle_{g,x}= \langle \nabla^k f, \nabla^k h \big\rangle_{g,\Phi(x)}.$$ We will follow a technique used in [@Helgason_1984 Proposition 2.4, p. 246] and in [@HNW_3_2011]. Let $({\mathfrak{U}},\phi)$ be a local chart, with coordinates $u^j=\phi^j(x)$, $j=1,\ldots,n$ for $x\in {\mathfrak{U}}$. Since $\Phi$ is a diffeomorphism, $(\Phi({\mathfrak{U}}), \phi\circ \Phi^{-1})$ is also a local chart. Let $\psi=\phi\circ \Phi^{-1}$, and use the coordinates $v^j=\psi^j(y)$ for $y\in \Phi({\mathfrak{U}})$. The choice of coordinates has the effect of assigning the *same* point in ${\mathbb{R}}^n$ to $x$ and $y$, provided $y=\Phi(x)$ – i.e., $u^j(x) = v^j(y)$. Thus, relative to these coordinates the map $\Phi$ is the identity, and consequently, the two tangent vectors $(\frac{\partial}{\partial v^j})_y\in T_y{\mathbb{M}}$ and $(\frac{\partial}{\partial u^j})_x\in T_x{\mathbb{M}}$ are related via $$\left(\frac{\partial}{\partial v^j}\right)_{\Phi(x)} = d\Phi_x\left(\frac{\partial}{\partial u^j}\right)_x.$$ So far, we have only used the fact that $\Phi$ is a diffeomorphism. The map $\Phi$ being in addition an isometry then implies that $$\left\langle \frac{\partial}{\partial v^j}, \frac{\partial}{\partial v^k} \right\rangle_{\Phi(x)} = \left\langle d\Phi_x\left(\frac{\partial}{\partial u^j}\right), d\Phi_x\left(\frac{\partial}{\partial u^k}\right) \right\rangle_{\Phi(x)}= \left\langle \frac{\partial}{\partial u^j}, \frac{\partial}{\partial u^k} \right\rangle_x.$$ The expressions on the left and right are the metric tensors at $y=\Phi(x)$ and $x$; the equation implies that, as functions of $v$ and $u$, $g_{jk}(v) = g_{jk}(u)$. From this it follows that the expressions for the Christoffel symbols, covariant derivatives and various expressions formed from them also will be the same, as functions of local coordinates. In addition, note that the local forms for $f^\Phi$ and $h^\Phi$ at $u=u(x)$ are $f^\Phi\circ \phi^{-1}$ and $h^\Phi\circ \phi^{-1}$, respectively, and those for $f$ and $h$ at $v=v(y)$ are $f\circ \psi^{-1}$ and $h\circ \psi^{-1}$. At $u=u(x)$ and $v=v(y)$, $y=\Phi(x)$, $f\circ \psi^{-1}(v)=f\circ \Phi\circ \phi^{-1}(v)=f^\Phi\circ \phi^{-1}(v)$, and similarly for $h$. Again, this is functional equality in local coordinates, so matching partial derivatives are also equal. Consequently, (\[pullback\_invariance\]) holds. 
The equality of the coordinate forms of the metric $g_{jk}$, which was established above, implies invariance of the Riemannian measure: $$\label{measure_invariance} d\mu(x) = \sqrt{\det(g_{jk}(u))}d^nu = \sqrt{\det(g_{jk}(v))}d^nv = d\mu(\Phi(x)).$$ Using this we see that $$\begin{aligned} \int_{\mathbb{M}}\langle \nabla^k f^\Phi, \nabla^k h^\phi \big\rangle_{g,x}d\mu(x)&=&\int_{\mathbb{M}}\langle \nabla^k f, \nabla^k h \big\rangle_{g,\Phi(x)}d\mu(x) \nonumber \\ &=& \int_{\mathbb{M}}\langle \nabla^k f, \nabla^k h \big\rangle_{g,\Phi(x)}d\mu(\Phi(x)) \nonumber \\ &=& \int_{\mathbb{M}}\langle \nabla^k f, \nabla^k h \big\rangle_{g,y}d\mu(y) \nonumber\end{aligned}$$ Finally, from this and (\[def\_sn\]), we have invariance of the Sobolev inner product: $$\label{invariance_sn} \langle f^\Phi,h^\Phi\rangle_{m,{\mathbb{M}}}=\langle f,h\rangle_{m,{\mathbb{M}}}$$ The second step is to show that the kernel is invariant. Since $\kappa_m$ is a reproducing kernel, we have $ f(x)=\langle f(\cdot),\kappa_m(x,\cdot)\rangle_{m,{\mathbb{M}}} $. By (\[invariance\_sn\]), we also have $f(x)=\langle f^\Phi(\cdot),\kappa_m(x,\Phi(\cdot)) \rangle_{m,{\mathbb{M}}} $. Replacing $x$ by $\Phi(x)$ then yields $$f(\Phi(x))=f^\Phi(x)=\langle f^\Phi(\cdot),\kappa_m(\Phi(x),\Phi(\cdot)) \rangle_{m,{\mathbb{M}}}.$$ Finally, replacing $f$ by $f^{\Phi^{-1}}$ above gives us $$f(x)=\langle f(\cdot),\kappa_m(\Phi(x),\Phi(\cdot)) \rangle_{m,{\mathbb{M}}},$$ from which it follows that $\kappa_m(\Phi(x),\Phi(y))$ is also a reproducing kernel for $W_2^m({\mathbb{M}})$. But, reproducing kernels are unique, so $\kappa_m(\Phi(x),\Phi(y))=\kappa_m(x,y)$. Thus $\kappa_m$ is invariant. Homogeneous spaces have two properties that allow us to use the proposition just proved. First, they inherit a Riemannian metric invariant under the action of the Lie group $\calg$. Second, the action of a group element produces an isometric diffeomorphism [@Helgason_1984; @vilenkin1968]. These observations then yield this: \[kernel\_homog\_space\_invariance\] Let ${\mathbb{M}}$ be a homogeneous space for a Lie group $\calg$ and suppose that ${\mathbb{M}}$ is equipped with the invariant metric $g$ from $\calg$. Then the reproducing kernel $\kappa_m$ for $W_2^m({\mathbb{M}})$ is invariant under the action of $\gamma\in \calg$. Error estimates for interpolation via $\kappa_m$ {#error_estimates_kappa_m} ------------------------------------------------ Recall that the accuracy of the quadrature formula associated with $\kappa_m$ is directly dependent on error estimates for interpolation via $\kappa_m$. To obtain these, we will first state a theorem that provides estimates on functions with many zeros, quasi-uniformly distributed over a compact manifold. \[zeros\_lemma\] Suppose that ${\mathbb{M}}$ is a $C^\infty$, compact, $n$-dimensional manifold, that $1\le p\le \infty$, $m\in {\mathbb{N}}$, and also that $u\in W_p^m({\mathbb{M}})$. Assume that $m>n/p$ when $p>1$, and $m\ge n$ when $p=1$. Then there are constants $C_0=C_0({\mathbb{M}})$ and $C_1=C_1(m,k,{\mathbb{M}})$ such that if $u|_X=0$ and $X\subset {\mathbb{M}}$ has mesh norm $h\le C_0/m^2$, then $$\label{p_bound_W_k} \|u\|_{W_p^k({\mathbb{M}})} \le C_1h^{m-k}\|u\|_{W_p^m({\mathbb{M}})} .$$ Using this “zeros lemma” we are able obtain an estimate for $\|s_f-f\|_{L_2({\mathbb{M}})}$, provided $f\in W_2^m({\mathbb{M}})$ and $s_f$ is the interpolant for $f$. \[interp\_error\_W\_2\_manifold\] Let $m>n/2$, $f\in W_2^m({\mathbb{M}})$, and let $s_f$ be the $\kappa_m$-interpolant for $f$ from $V_X$. 
Then, with the notation from Theorem \[zeros\_lemma\], if $h\le C_0/m^2$, we have $$\label{interp_bound_W_2_manifold} \|s_f-f\|_{L_2({\mathbb{M}})} \le C_1h^m \|f\|_{W_2^m({\mathbb{M}})}.$$ Clearly $s_f - f \in W_2^m$ and $(s_f - f)|_X=s_f|_X - f|_X=0$. Applying Theorem \[zeros\_lemma\] to $s_f - f$ with $k=0$ yields $\| s_f-f \|_{L_2({\mathbb{M}})} \le C_1h^m \| s_f - f \|_{W_2^m({\mathbb{M}})}$. Since the space $W_2^m({\mathbb{M}})$ is the reproducing Hilbert space for $\kappa_m$, the interpolant $s_f$ is the orthogonal projection of $f$ onto $V_X$, and so it minimizes $\| g-f \|_{W_2^m({\mathbb{M}})}$ among all $g \in V_X$. Thus, taking $g=0$ yields $\| s_f-f \|_{W_2^m({\mathbb{M}})}\le \| 0 -f \|_{W_2^m({\mathbb{M}})} = \| f \|_{W_2^m({\mathbb{M}})}$, from which the inequality (\[interp\_bound\_W\_2\_manifold\]) follows immediately. The bounds in (\[interp\_bound\_W\_2\_manifold\]) hold whether or not ${\mathbb{M}}$ is homogeneous. In one respect, the bounds are not as strong as we would like. The assumption that $f$ is in the reproducing kernel space $W_2^m$ precludes estimates for less smooth $f$, in $W_2^k({\mathbb{M}})$, with $k<m$. Stronger bounds do hold in important cases, though. For example, in section \[polyharmonic\_kernels\], we will see that the stronger result for $f\in W_2^k$ holds for the class of “polyharmonic” kernels, where ${\mathbb{M}}$ can be a sphere or some other two-point homogeneous space. We conjecture that, at least for $\kappa_m$, the stronger result holds for general $C^\infty$ compact Riemannian manifolds.

Lagrange functions and Lebesgue constants {#lagrange_lebesgu_kappa_m}
-----------------------------------------

The Lagrange functions $\{\chi_\xi\}_{\xi\in X}$ associated with $\kappa_m$ have remarkable properties. For quasi-uniform sets of centers, Lebesgue constants are bounded independently of the number of points and the $L_1$ norms of the Lagrange functions are nicely controlled. Specifically, we have the result below, which holds for a general $C^\infty$ metric $g$. \[bdd\_lebesgue\_sob\_ker\] Let ${\mathbb{M}}$ be a compact Riemannian manifold of dimension $n$, and assume $m>n/2$. For a quasi-uniform set $X\subset {\mathbb{M}}$, with mesh ratio $h/q \le \rho $, there exist constants $C_{\mathbb{M}}$ and $C_{{\mathbb{M}},m}$ such that if $h\le C_{\mathbb{M}}/m^2$, then the Lebesgue constant $\Lambda_{V_X} = \max_{x\in {\mathbb{M}}} \sum_{\xi\in X}|\chi_\xi(x)|$ associated with $\kappa_m$ satisfies $$\Lambda_{V_X}\le C_{{\mathbb{M}},m}\rho^n.$$ In addition, we have this uniform bound on the $L_1$-norm of the Lagrange functions: $$\max_{\xi\in X} \|\chi_\xi\|_{L_1({\mathbb{M}})} \le C_{\rho,m,{\mathbb{M}}}q^n.$$ In the statement of the theorem used above, we have made explicit the $\rho$ dependence of the bound on $\Lambda_{V_X}$ found in the proof of [@HNW2010 Theorem 4.6]. We have also adapted the notation there to that used here. The bound on $\|\chi_\xi\|_{L_1({\mathbb{M}})}$ follows from [@HNSW_2_2011 Proposition 3.6], with $s=\chi_\xi$ and $p=1$. In that proposition, the coefficients $A_{p,\eta}=q^{n/p}\delta_{\xi,\eta}$ and so $\|A_{p,\cdot}\|_p=q^{n/p}$. For $p=1$, $\|A_{1,\cdot}\|_1=q^n$, and so $\|\chi_\xi\|_{L_1({\mathbb{M}})} \le C_{\rho,m} q^n$.

Kernel quadrature via $\kappa_m$ {#kappa_m_quadrature}
--------------------------------

The positive definite Sobolev kernel $\kappa_m$ is invariant under the action of $\calg$, by Corollary \[kernel\_homog\_space\_invariance\]. This is the bare minimum requirement for a kernel to be able to give rise to a computable quadrature formula.
The other properties of $\kappa_m$ established in the previous sections give us the result below concerning accuracy and stability. \[accuracy\_stabiliy\_kappa\_m\] Let $X\subset {\mathbb{M}}$ be a finite set having mesh ratio $\rho_X\le \rho$. Take $A=(\kappa_m(\xi,\eta))|_{\xi,\eta\in X}$ and suppose $f\in W_2^m({\mathbb{M}})$. Then the vector of weights in $Q_{V_X}$ is $c=J_0 A^{-1}{\mathbf 1}$, the error for $Q_{V_X}$ satisfies $|Q_{V_X}(f) - \int_{{\mathbb{M}}} fd\mu| \le Ch^m \|f\|_{W_2^m({\mathbb{M}})}$, and the norm of $Q_{V_X}$ is bounded by $\|Q_{V_X}\|_{C({\mathbb{M}})} \le C_{{\mathbb{M}},m}\rho^n$. Finally, the standard deviation defined in Proposition \[Q\_standard\_dev\_prop\] satisfies the bound $\sigma_Q\le C_{\rho,m,{\mathbb{M}}}\sigma_\nu h^{n/2}$. The formula for the weights is a consequence of two things. First, by Corollary \[kernel\_homog\_space\_invariance\] the kernel is invariant and so by Lemma \[invariance\_integral\] the integral $\int_{\mathbb{M}}\kappa_m(x,\cdot)d\mu(x)$ is a constant, namely $J_0$. Second, since the kernel is positive definite, Proposition \[integral\_V\] provides the desired formula for the weights. The error estimate is a consequence of (\[quad\_op\_accuracy\]) and Proposition \[interp\_error\_W\_2\_manifold\], and the norm estimate follows from (\[quad\_op\_norm\_lebesgue\]) and Proposition \[bdd\_lebesgue\_sob\_ker\]. Finally, the bound on $\sigma_Q$ is a consequence of Proposition \[Q\_standard\_dev\_prop\], Proposition \[bdd\_lebesgue\_sob\_ker\], and of the fact that $\rho^n q^n=h^n$. There are two significant implications of this result. The first is just what was noted in section \[quadrature\]; namely, the weights are obtained by directly solving a linear system of equations. The second is that the measures of accuracy and stability hold for any $X$ with mesh ratio less than a fixed $\rho$. There are several drawbacks. In the case of a *general* homogeneous space ${\mathbb{M}}$, the formulas for kernels $\kappa_m$ are not yet explicitly known. This may be less of a problem when specific cases come up in applications. Also, the error estimates, which provide the chief measure of accuracy, hold for $f$ in the space $W_2^m({\mathbb{M}})$. We would like to “escape” from this *native space* (reproducing kernel space) and have estimates for less smooth $f$. As we shall see in the next section, the situation is much improved in the case of spheres, projective spaces, and other two-point homogeneous manifolds. In the case of ${\mathbb{S}}^2$, not only do we have the requisite kernels, but in important cases we can also give fast algorithms for obtaining the weights (cf. section \[numerics\]).

Polyharmonic Kernels {#polyharmonic_kernels}
====================

For spheres, $SO(3)$, and other two-point homogeneous spaces (cf. [@Helgason_1984 pgs. 167 & 177] for a list), one can use polyharmonic kernels, which are related to Green’s functions for certain differential operators. These include restrictions of thin-plate splines [@HNW_3_2011], which are useful because they are given via explicit formulas. Many of these kernels are conditionally positive definite, rather than positive definite. The differential operators are polynomials in the Laplace-Beltrami operator.
Since, on a compact Riemannian manifold $-\Delta$ is a self adjoint operator with a countable sequence of nonnegative eigenvalues $\lambda_j < \lambda_{j+1}$ having $+\infty$ as the only accumulation point, we can express a polyharmonic kernel in terms of the associated eigenfunctions $-\Delta \phi_{j,s} = \lambda_j \phi_{j,s}$, $s=1,\ldots,d_j$, $d_j$ being the multiplicity of $\lambda_j$. To make this clear, we will need some notation. Let $m\in {\mathbb{N}}$ such that $m>n/2$ and let $\pi_m\in \Pi_m({\mathbb{R}})$ be of the form $\pi_m(x) = \sum_{\nu=0}^mc_\nu x^\nu$, where $c_m>0$ and let ${\mathcal{L}}_m$ be the $2m$-order differential operator given by ${\mathcal{L}}_m = \pi_m(-\Delta)$. We define ${\mathcal{J}}\subset {\mathbb{N}}$ to be a finite set that includes all $j$ for which the eigenvalue $\pi_m(\lambda_j)$ of ${\mathcal{L}}_m$ satisfies $\pi_m(\lambda_j)\le 0$. (In addition to this finite set, ${\mathcal{J}}$ may also include a finite number $j$’s for which $\pi_m(\lambda_j)>0$.) We say that the kernel $\kappa: {\mathbb{M}}\times{\mathbb{M}}\to {\mathbb{R}}$ is *polyharmonic* if it has the eigenfunction expansion $$\label{polyharmonic_def} \kappa(x,y):=\sum_{j=0}^\infty \tilde \kappa_j\bigg(\sum_{s=1}^{d_j}\phi_{j,s}(x)\phi_{j,s}(y)\bigg), \quad \text{where } \tilde \kappa_j:= \left\{ \begin{aligned} \pi_m(\lambda_j)^{-1},\ & j\not\in{\mathcal{J}}, \\ \text{arbitrary, } & j\in{\mathcal{J}}. \end{aligned} \right.$$ The kernel $\kappa$ is conditionally positive definite with respect to the space $\Pi_{\mathcal{J}}={\mathop{\mathrm{span}}}\{\phi_{j,s}:j\in {\mathcal{J}},\ 1\le s\le d_j\}$. In the sequel, we will need eigenspaces, Laplace-Beltrami operators and so on for compact two-point homogenous manifolds. Without further comment, we will use results taken from the excellent summary given in [@Brown-Dai-05-1]. This paper gives original references for these results. Polyharmonic kernels are special cases of *zonal* functions, which are invariant kernels that depend only on the geodesic distance $d(x,y)$ between the variables $x$ and $y$; $d$ is normalized so that the diameter of ${\mathbb{M}}$ is $\pi$ — i.e., $0\le d(x,y)\le \pi$. To see that polyharmonic kernels are zonal functions, we need the addition theorem on two-point homogeneous manifolds. If $P_\ell^{(\alpha,\beta)}$ denotes the $\ell^{th}$ degree Jacobi polynomial, normalized so that $P_\ell^{(\alpha,\beta)}(1) = \frac{\Gamma(\ell+\alpha+1)}{\Gamma(\ell+1)\Gamma(\alpha+1)}$, then this theorem states that for any orthonormal basis $\{\phi_{j,s}\}_{s=1}^{d_j}$ of the eigenspace corresponding to $\lambda_j$ we have $$\sum_{s=1}^{d_j}\phi_{j,s}(x)\phi_{j,s}(y) = c_{\varepsilon j} P_{\varepsilon j}^{(\frac{n-2}{2},\beta)}(\cos(\varepsilon^{-1}d(x,y))), \quad c_\ell:=\frac{(2\ell+\frac{n}{2}+\beta)\Gamma(\beta+1) \Gamma(\ell+\frac{n}{2}+\beta)}{\Gamma(\beta+\frac{n+2}{2}) \Gamma(\ell+\beta+1)},$$ where the parameters $\varepsilon$ and $\beta$, and of course $\lambda_j$, depend on the manifold. These are listed in the table below. 
In summary, every polyharmonic kernel has the form $\kappa(x,y) = \Phi(\cos(\varepsilon^{-1}d(x,y)))$, where $$\label{poly_basis_function} \Phi(t) = \sum_{j=0}^\infty \tilde \kappa_j c_{\varepsilon j}P_{\varepsilon j}^{(\frac{n-2}{2},\beta)}(t), \ -1\le t\le 1.$$

  **Manifold**                 $\varepsilon$   $\beta$               $\lambda_j$
  ---------------------------- --------------- --------------------- ----------------------
  ${\mathbb{S}}^n$             $1$             $\frac{n-2}{2}$       $j(j+n-1)$
  ${\mathbb{RP}}^n$            2               $\frac{n-2}{2}$       $2j(2j+n-1)$
  ${\mathbb{C}}\mathbb{P}^n$   1               0                     $j(j+\frac{n}{2})$
  Quaternion $\mathbb{P}^n$    $1$             $1$                   $j(j+\frac{n+2}{2})$
  Cayley $\mathbb{P}^{16}$     1               3                     $j(j+12)$

  : Parameters for compact, two-point homogeneous manifolds.

An important class of polyharmonic kernels on ${\mathbb{S}}^n$ are the surface (thin-plate) splines restricted to the sphere. The surface splines on ${\mathbb{R}}^n$ are defined in [@Wendland_book Section 8.3]; their $\tilde \kappa_j$’s are computed in [@Baxter-Hubbert-2001]. Both the kernel and the $\tilde \kappa_j$ are given below in terms of the parameter $s=m-n/2$; they are also normalized to the conventions used here (see also [@mhaskar-etal-2010]). The first formula is used for odd $n$ and the second for even $n$. These kernels are conditionally positive definite of order $m$. $$\label{TPS} \left. \begin{array}{l} \displaystyle{ \Phi_s(t)= \left\{ \begin{array}{cl} (-1)^{s+\frac12}(1-t)^s, & s=m-\frac{n}{2} \in {\mathbb{N}}+\frac12\\[5pt] (-1)^{s+1}(1-t)^s\log(1-t), & s=m-\frac{n}{2}\in{\mathbb{N}}. \end{array} \right.}\\[18pt] \tilde \kappa_j= C_{s,n}\frac{\Gamma(j-s)}{\Gamma(j+s+n)}, \ \text{where, for even }n, \ j>s. \end{array} \right\}$$ where the factor $C_{s,n}$ is given by $$C_{s,n}:=2^{s+n}\pi^{\frac{n}{2}}\Gamma(s+1)\Gamma(s+\frac{n}{2}) \left\{ \begin{array}{ll} \frac{\sin(\pi s)}{\pi}, & s=m-\frac{n}{2} \in {\mathbb{N}}+\frac12\\[5pt] 1, & s\in{\mathbb{N}}. \end{array} \right.$$ For the group $SO(3)\, (={\mathbb{RP}}^3)$, the following family of polyharmonic kernels, which are similar to those above, was discussed in [@Hangelbroek-Schmidt-2011]: $$\label{def_so3} \kappa(x,y) = \left(\sin\left( \frac{\omega(y^{-1}x)}{2}\right)\right)^{2m-3},\ m\ge 2.$$ Here $ \omega(z)$ is the rotational angle of $z\in SO(3)$. Adjusting the normalization given in [@Hangelbroek-Schmidt-2011 Lemma 2] to that used here, the polynomial $\pi(x)$ associated with $\kappa$ is $\pi(x) = \prod_{\nu=0}^{m-1}(x+1-4\nu^2)$. We remark that derivation of the kernels given in [@Hangelbroek-Schmidt-2011] used the theory of group representations along with a generalized addition formula that holds for any compact homogeneous manifold (see [@Gine-75]). For two-point homogeneous manifolds, the Sobolev kernels $\kappa_m$ are in fact polyharmonic. The reason is that $\kappa_m$ is the Green’s function for the linear operator ${\mathcal{L}}_m = \sum_{k=0}^m {\nabla^k}^\ast\nabla^k$. However, by [@HNW_3_2011 Lemmas 4.2 & 4.3], ${\mathcal{L}}_m = \pi_m(-\Delta)$, where $\pi_m(x) = x^m +\sum_{\nu=0}^{m-1}c_\nu x^\nu$, so it has to have the form (\[polyharmonic\_def\]). Also, it is easy to show that $\pi_m(\lambda_j)>0$ for $j=0, 1, \ldots$, and that $\tilde\kappa_{m,j} = \pi_m(\lambda_j)^{-1}\sim \lambda_j^{-m}$ for $j$ large. There are two reproducing kernel Hilbert spaces associated with a polyharmonic kernel $\kappa$. We will take $\tilde\kappa_j>0$ for $j\in {\mathcal{J}}$. These spaces are often called “native spaces,” and are denoted by $\caln_\kappa$ and $\caln_{\kappa,{\mathcal{J}}}$.
Consider the inner products $$\langle f,g\rangle_\kappa = \sum_{j=0}^\infty\sum_{s=1}^{d_j}\frac{\hat f_{j,s}\overline{ \hat g}_{j,s}}{\tilde \kappa_j} \ \text{and}\ \langle f,g\rangle_{\kappa,{\mathcal{J}}}= \sum_{j\not\in {\mathcal{J}}}\sum_{s=1}^{d_j}\frac{\hat f_{j,s}\overline{ \hat g}_{j,s}}{\tilde \kappa_j}.$$ The first of these is the inner product of the reproducing kernel Hilbert space for $\kappa$ itself, its native space $\caln_\kappa$, and the second is the semi-inner product for the Hilbert space modulo $\Pi_{\mathcal{J}}$, $\caln_{\kappa,{\mathcal{J}}}$. It is important to note that the norms for $W_2^m$ and $\caln_\kappa$ are equivalent, $$\label{kernel_equiv} \|f\|_\kappa \approx \|f\|_{W_2^m({\mathbb{M}})}.$$ This follows easily from $\tilde \kappa_j \sim c_m^{-1} \lambda_j^{-m}\sim c_m^{-1}\tilde \kappa_{m,j}$, for $j$ large.

Interpolation with polyharmonic kernels
---------------------------------------

If we choose $\tilde \kappa_j > 0$ for $j\in{\mathcal{J}}$, the kernel will be positive definite, and so we can interpolate any $f\in C({\mathbb{M}})$. The form of the interpolant will be $$\label{standard_interp} I_Xf = \sum_{\xi\in X}a_\xi\kappa(x,\xi),$$ with the coefficients being obtained by inverting $f|_X = Aa$, where $A = [\kappa(\xi,\eta)]_{\xi,\eta\in X}$ and the components of $a$ are the $a_\xi$’s. Let’s look at error estimates, given that $f\in W_2^m({\mathbb{M}})$. We begin with the observation that (\[kernel\_equiv\]) and the usual minimization properties of $\|\cdot \|_\kappa$ imply that $$\| I_Xf - f\|_{W_2^m({\mathbb{M}})} \le C_1\| I_Xf - f\|_\kappa \le C\| f\|_\kappa \le C_2\|f\|_{W_2^m({\mathbb{M}})}.$$ This inequality was the key to proving Proposition \[interp\_error\_W\_2\_manifold\], where the reproducing kernel Hilbert space was itself $W_2^m({\mathbb{M}})$. Hence, repeating the proof of that proposition, with the inequality changed appropriately, we obtain the estimate $$\label{general_polyharm_error} \| I_Xf - f\|_{L_2({\mathbb{M}})} \le Ch^m\| f \|_{W_2^m({\mathbb{M}})}, \ f\in W_2^m({\mathbb{M}}),$$ which holds for a polyharmonic kernel (\[polyharmonic\_def\]), provided the conditions on $X$ in Proposition \[interp\_error\_W\_2\_manifold\] are satisfied. There are no restrictions on the two-point homogeneous manifold ${\mathbb{M}}$. There are, however, restrictions on the smoothness of $f$ – namely, $f$ must be at least as smooth as $\kappa$. Put another way, $f$ has to be in the native space $\caln_\kappa$. In the case of ${\mathbb{S}}^n$ or ${\mathbb{RP}}^n$, it is possible to remove this restriction and “escape” from the native space of $\kappa$. We will first deal with ${\mathbb{S}}^n$. From (\[poly\_basis\_function\]), we see that polyharmonic kernels on ${\mathbb{S}}^n$ are *spherical basis functions* (SBFs), provided coefficients with $j\in {\mathcal{J}}$ are taken to be positive. Moreover, $\tilde \kappa_j \sim \lambda_j^{-m}$ for $j$ large. We may then apply [@Narcowich-etal-07-1 Theorem 5.5], with $\phi, \tau,\beta, \mu \rightarrow \Phi, m, \mu, 0$, to obtain the estimate below. (Note that for an integer $k$, the space $H_k({\mathbb{S}}^n)$ in [@Narcowich-etal-07-1] is $W_2^k({\mathbb{S}}^n)$ here.) This result applies to functions *not* smooth enough to be in $W_2^m$. \[sphere\_escape\] Let $\kappa$ be a polyharmonic kernel of the form (\[poly\_basis\_function\]), with $\deg(\pi(x))=m$, and with $\tilde \kappa_j$, $j\in {\mathcal{J}}$, chosen to be positive.
Assume that the conditions on $X\subset {\mathbb{S}}^n$ in Proposition \[interp\_error\_W\_2\_manifold\] are satisfied. If $\mu$ is an integer satisfying $m\ge \mu>n/2$ and if $f\in W_2^\mu({\mathbb{S}}^n)$, then, provided $h$ is sufficiently small, $$\label{sphere_escape_est} \| I_X f -f\|_{L_2({\mathbb{S}}^n)} \le Ch^\mu \| f \|_{W_2^\mu({\mathbb{S}}^n)}.$$ The $n$-dimensional, real projective space, ${\mathbb{RP}}^n$, is the sphere ${\mathbb{S}}^n$ with antipodal points identified. Thus each $x\in {\mathbb{RP}}^n$ corresponds to $\{x,-x\}$ on ${\mathbb{S}}^n$. We will use this correspondence to lift the entire problem to ${\mathbb{S}}^n$, which is an idea used in [@graef-2011; @Hangelbroek-Schmidt-2011] in connection with approximation on $SO(3)$. As we mentioned earlier, the distance $d(x,y) = d_{{\mathbb{RP}}^n}(x,y)$ has been normalized so that the diameter of ${\mathbb{RP}}^n$ is $\pi$, rather than the $\pi/2$ one would expect from its being regarded as a hemisphere of ${\mathbb{S}}^n$. In fact, the two distances are proportional to each other. The natural geodesic distance on projective space is simply the angle between two lines passing through $\{x,-x\}$ and $\{y,-y\}$, with $x,y\in {\mathbb{S}}^n$, which is just $\theta = \arccos(|x\cdot y|)$, $0\le \theta\le \pi/2$. The distance $d_{{\mathbb{RP}}^n}$, in which the diameter of ${\mathbb{M}}$ is $\pi$, satisfies $d_{{\mathbb{RP}}^n}(x,y) =2\arccos(|x\cdot y|)$. If we use this in (\[poly\_basis\_function\]), then $t = |x\cdot y|$ and $\kappa(x,y)=\Phi(|x\cdot y|)$. One can take this farther. In the case of ${\mathbb{RP}}^n$, the series representation for $\Phi$ is $$\Phi(t) = \sum_{j=0}^\infty \tilde \kappa_j c_{2 j}P_{2j}^{(\frac{n-2}{2},\frac{n-2}{2})}(t).$$ Since the Jacobi polynomial $P_\ell^{(\frac{n-2}{2},\frac{n-2}{2})}(t)$ is an even or odd function of $t$, depending on whether $\ell$ is even or odd, and since the series for $\Phi$ contains only polynomials of even degree, it follows that $\Phi(-t)=\Phi(t)$. As a consequence we have that $\kappa(x,y) =\Phi(x\cdot y)$. Thus, if we regard the variables $x$ and $y$ to be points on ${\mathbb{S}}^n$, the kernel $\kappa$ is even in both variables, and is nonnegative definite on ${\mathbb{S}}^n$. Our aim now is to lift $\kappa$ to a polyharmonic kernel $\kappa^*$ on ${\mathbb{S}}^n$, with a view toward lifting the whole interpolation problem to ${\mathbb{S}}^n$. To begin, note that the eigenvalues for $-\Delta_{{\mathbb{S}}^n}$ have the form $\lambda^*_j = j(j+n-1)$ and so those of ${\mathbb{RP}}^n$ may be written as $\lambda_j = \lambda^*_{2j}$. Next, let $\pi(x)\in \Pi_m$ be the polynomial associated with the kernel $\kappa$ in (\[polyharmonic\_def\]) and let $\tilde \kappa^*_j=\pi(\lambda^*_j)^{-1}$, except where $\pi(\lambda^*_j) \le 0$. For these we only require that the corresponding $\tilde \kappa^*_j>0$. Define the function $$\Phi^*(t) = \sum_{j=0}^\infty \tilde \kappa^*_jc_jP_j^{(\frac{n-2}{2},\frac{n-2}{2})}(t),$$ which is associated with the polyharmonic kernel $\kappa^*(x,y) =\Phi^*(x\cdot y)$ on ${\mathbb{S}}^n$. In constructing $\Phi^*$ we have simply added an odd function to $\Phi$. Thus, $$\Phi(t) = \frac12 \big(\Phi^*(t) + \Phi^*(-t) \big).$$ Start with the centers $X$ on ${\mathbb{RP}}^n$. Each center in $X$ corresponds to $\{\xi,-\xi\}$ on ${\mathbb{S}}^n$, so we may lift $X$ to $X^*=\{\pm \xi \in {\mathbb{S}}^n: \{\xi,-\xi\}\in X \}$.
Following the proof in [@graef-2011 Theorem 1], we may show that $q_{X^*}=\frac12 q_X$ and $h_{X^*}=\frac12 h_X$, where $d_{{\mathbb{RP}}^n}$ is used for $q$ and $h$ in $X$. In addition, we may, and will, identify each $f\in C({\mathbb{RP}}^n)$ with an even function in $C({\mathbb{S}}^n)$. Now, interpolate an even $f$ on ${\mathbb{S}}^n$ at the centers in $X^*$. This gives us $$\label{S_d_interpolant} I_{X^*}f(x) = \sum_{\zeta\in X^*} a^*_\zeta \kappa^*(x,\zeta).$$ Note that $\kappa^*(-x,\zeta) = \Phi^*((-x)\cdot \zeta) = \Phi^*(x\cdot (-\zeta)) = \kappa^*(x,-\zeta)$. Since both $\zeta$ and $-\zeta$ are in $X^*$, it follows that $I_{X^*}f(-x) = \sum_{\zeta\in X^*} a^*_{-\zeta} \kappa^*(x,\zeta)$. Moreover, $I_{X^*}f(-\xi)=f(-\xi)=f(\xi) =I_{X^*}f(\xi)$. Since interpolation is unique, and the two linear combinations of kernels agree on $X^*$, they are equal, and so we have $I_{X^*}f(-x)=I_{X^*}f(x)$ and $a^*_{-\zeta}=a^*_\zeta$. Because $I_{X^*}f$ is even, we have, from (\[S\_d\_interpolant\]), that $I_{X^*}f(x) =\sum_{\zeta\in X^*} a^*_\zeta \frac12 \big(\kappa^*(x,\zeta)+\kappa^*(x,-\zeta)\big)$. Since $\kappa^*(x,\zeta) =\Phi^*(x\cdot\zeta)$, we have $\kappa^*(x,\zeta) + \kappa^*(x,-\zeta) =2\kappa(x,\zeta)$. Consequently, $$I_{X^*}f(x) =\sum_{\zeta\in X^*} a^*_\zeta \kappa(x,\zeta)=\sum_{\zeta\in X}a_\zeta \kappa(x,\zeta),$$ where $a_\zeta = a^*_\zeta+a^*_{-\zeta}=2a^*_\zeta$. The sum on the right above is the interpolant to $f$ on ${\mathbb{RP}}^n$, $I_Xf(x)$. Thus, $I_{X^*}f(x) = I_Xf(x)$. \[projective\_escape\] Let $\kappa$ be a polyharmonic kernel of the form (\[poly\_basis\_function\]), with $\deg(\pi(x))=m$, and with $\tilde \kappa_j$, $j\in {\mathcal{J}}$, chosen to be positive. Assume that the conditions on $X\subset {\mathbb{RP}}^n$ in Proposition \[interp\_error\_W\_2\_manifold\] are satisfied. If $\mu$ is an integer satisfying $m\ge \mu>n/2$ and if $f\in W_2^\mu({\mathbb{RP}}^n)$, then, provided $h$ is sufficiently small, $$\label{projective_escape_est} \| I_X f -f\|_{L_2({\mathbb{RP}}^n)} \le Ch^\mu \| f \|_{W_2^\mu({\mathbb{RP}}^n)}.$$ The metric $ds_{{\mathbb{RP}}^n}$ we are using for ${\mathbb{RP}}^n$ is twice the metric inherited from the sphere; that is, $ds_{{\mathbb{RP}}^n}=2ds_{{\mathbb{S}}^n}$. From the point of view of integration, it is easy to see that $d\mu_{{\mathbb{RP}}^n} = 2^{n/2}d\mu_{{\mathbb{S}}^n}$. Thus various integrals and norms are merely changed by $n$-dependent constant multiples – e.g., $\|I_Xf - f\|_{L_2({\mathbb{RP}}^n)}=2^{1-n/2}\|I_{X^*}f - f\|_{L_2({\mathbb{S}}^n)}$. By Proposition \[sphere\_escape\], $$\|I_{X^*}f - f\|_{L_2({\mathbb{S}}^n)}\le C_1h_{X^*}^\mu \| f \|_{W_2^\mu({\mathbb{S}}^n)}\le C_2 h_X^\mu \| f \|_{W_2^\mu({\mathbb{RP}}^n)}.$$ The inequality (\[projective\_escape\_est\]) then follows immediately from this and the remarks above.

Error estimates for interpolation reproducing $\Pi_{\mathcal{J}}$
-----------------------------------------------------------------

The interpolation operator associated with $\kappa$ that reproduces $\Pi_{\mathcal{J}}= {\mathop{\mathrm{span}}}\{\phi_{j,s}:j\in {\mathcal{J}},\ 1\le s\le d_j\}$ is $$I_{X,{\mathcal{J}}}f = \sum_{\xi\in X}a_{\xi,{\mathcal{J}}}\kappa(x,\xi)+p_{\mathcal{J}}, \ p_{\mathcal{J}}\in \Pi_{\mathcal{J}}, \ \text{ and } \sum_{\xi\in X}a_{\xi,{\mathcal{J}}}\phi_{j,s}(\xi) = 0,\ j\in {\mathcal{J}}.$$ We wish to estimate the $L_2$-norm of $I_{X,{\mathcal{J}}}f-f$.
To do that, we will need to obtain various equations relating $I_{X,{\mathcal{J}}}f$, $I_Xf$, and $p_{\mathcal{J}}$. (Since the choice of $\tilde \kappa_j$, $j\in {\mathcal{J}}$, obviously has no effect on $I_{X,{\mathcal{J}}}$, we can and will assume that $\tilde \kappa_j=1$, $j\in {\mathcal{J}}$.) Let $P_{\mathcal{J}}$ be the $L_2$ orthogonal projection onto $\Pi_{\mathcal{J}}$. Note that the interpolant $I_{X,{\mathcal{J}}}f= \sum_{\xi\in X}a_{\xi,{\mathcal{J}}}\kappa(x,\xi)+p_{\mathcal{J}}$ consists of two terms. It is easy to show that the first term belongs to $(\Pi_{\mathcal{J}})^\perp$ and, of course, $p_{\mathcal{J}}\in \Pi_{\mathcal{J}}$. Consequently, $$\label{projection_error} P_{\mathcal{J}}(I_{X,{\mathcal{J}}}f-f)= p_{\mathcal{J}}-P_{\mathcal{J}}f.$$ Next, because both interpolate $f$, the difference $\sum_{\xi\in X}(a_\xi - a_{\xi,{\mathcal{J}}})\kappa(x,\xi)$ interpolates $p_{\mathcal{J}}$. This implies that $$I_Xf - I_{X,{\mathcal{J}}}f = I_Xp_{\mathcal{J}}-p_{\mathcal{J}}.$$ From this, we have $$\label{first_bnd} \| I_{X,{\mathcal{J}}}f -f\|_{L_2({\mathbb{M}})} \le \|I_Xf - f\|_{L_2({\mathbb{M}})} + \|p_{\mathcal{J}}- I_X p_{\mathcal{J}}\|_{L_2({\mathbb{M}})}.$$ Since $(p_{\mathcal{J}}- I_X p_{\mathcal{J}})|_X = 0$, the second term on the right can be estimated via Theorem \[zeros\_lemma\], provided $h$ is sufficiently small. Making the estimate yields this bound: $$\|p_{\mathcal{J}}- I_X p_{\mathcal{J}}\|_{L_2({\mathbb{M}})} \le Ch^m \|p_{\mathcal{J}}- I_X p_{\mathcal{J}}\|_{W_2^m({\mathbb{M}})}.$$ In addition, by (\[kernel\_equiv\]), the norm induced by $\kappa$ is equivalent to the $W_2^m({\mathbb{M}})$ norm, and thus we have that $\|p_{\mathcal{J}}- I_X p_{\mathcal{J}}\|_{W_2^m({\mathbb{M}})}\le C\|p_{\mathcal{J}}- I_X p_{\mathcal{J}}\|_\kappa\le C\|p_{\mathcal{J}}\|_\kappa$, where the rightmost inequality follows from the minimization properties of $I_Xp_{\mathcal{J}}$ in the norm $\|\cdot\|_\kappa$. From the definition of the $\kappa$ norm, where we have assumed $\tilde \kappa_j=1$, $j\in {\mathcal{J}}$, it is easy to see that $\|p_{\mathcal{J}}\|_\kappa = \|p_{\mathcal{J}}\|_{L_2({\mathbb{M}})}$. Combining the various inequalities then yields $$\|p_{\mathcal{J}}- I_X p_{\mathcal{J}}\|_{L_2({\mathbb{M}})} \le Ch^m \|p_{\mathcal{J}}\|_{L_2({\mathbb{M}})}.$$ Using this on the right in (\[first\_bnd\]) gives us $$\label{second_bnd} \| I_{X,{\mathcal{J}}}f -f\|_{L_2({\mathbb{M}})} \le \|I_Xf - f\|_{L_2({\mathbb{M}})} + Ch^m \|p_{\mathcal{J}}\|_{L_2({\mathbb{M}})}.$$ Furthermore, employing (\[projection\_error\]) in conjunction with this inequality, we see that $$\| I_{X,{\mathcal{J}}}f -f\|_{L_2({\mathbb{M}})} \le \|I_Xf - f\|_{L_2({\mathbb{M}})} + Ch^m \big(\|I_{X,{\mathcal{J}}}f-f\|_{L_2({\mathbb{M}})}+ \|P_{\mathcal{J}}f\|_{L_2({\mathbb{M}})}\big).$$ Choosing $h$ so small that $Ch^m<\frac12$ and then manipulating the expression above, we obtain this result. \[final\_bnd\_poly\_reprod\] Let $f\in L_2({\mathbb{M}})$. For all $h$ sufficiently small, we have $$\label{third_bnd} \| I_{X,{\mathcal{J}}}f -f\|_{L_2({\mathbb{M}})} < 2\|I_Xf - f\|_{L_2({\mathbb{M}})} + Ch^m \|P_{\mathcal{J}}f\|_{L_2({\mathbb{M}})}.$$ We now arrive at the error estimates we seek. \[interp\_error\_reproducion\] Let ${\mathbb{M}}$ be a two-point homogeneous manifold and let $\kappa$ be a polyharmonic kernel of the form (\[poly\_basis\_function\]), with $\deg(\pi(x))=m$. Assume that the conditions on $X\subset {\mathbb{M}}$ in Proposition \[interp\_error\_W\_2\_manifold\] are satisfied.
If $\mu$ is an integer satisfying $m\ge \mu>n/2$, then, provided $h$ is sufficiently small, $$\label{interp_error_reproduce} \| I_{X,{\mathcal{J}}} f -f\|_{L_2({\mathbb{M}})} \le \left\{ \begin{array}{cl} Ch^m \| f \|_{W_2^m({\mathbb{M}})}, & \text{\rm all } {\mathbb{M}}, \ \text{and } f\in W_2^m({\mathbb{M}}), \\ [7pt] Ch^\mu \| f \|_{W_2^\mu({\mathbb{M}})}, & {\mathbb{M}}= {\mathbb{S}}^n, {\mathbb{RP}}^n, \text{and }f\in W_2^\mu({\mathbb{M}}). \end{array} \right.$$ In (\[third\_bnd\]), we have $\|P_{\mathcal{J}}f\|_{L_2({\mathbb{M}})} \le \| f\|_{L_2({\mathbb{M}})} \le \| f\|_{W_2^\mu({\mathbb{M}})}$; in addition, the various bounds on $\| I_X f -f\|_{L_2({\mathbb{M}})}$ follow from (\[general\_polyharm\_error\]), (\[sphere\_escape\_est\]), and (\[projective\_escape\_est\]). Combining these yields (\[interp\_error\_reproduce\]). ### Optimal convergence rates for interpolation via surface splines {#superconvergence} For spheres, the interpolants constructed from the restricted thin-plate splines defined in (\[TPS\]) converge at double the rates discussed above – namely, ${\mathcal{O}}(h^{2m})$ rather than ${\mathcal{O}}(h^{m})$ –, provided $f$ is smooth enough. This also applies in $SO(3)$ for the kernels given in (\[def\_so3\]). The spaces $\Pi_{\mathcal{J}}$ being reproduced here are, for ${\mathbb{S}}^n$, the spherical harmonics of degree $m-1$ or less; and for $SO(3)$, the Wigner D-functions for representations of the rotation group, again having order $m-1$ or less. The precise result is stated below. \[sphere\_result\] Suppose $m>n/2$ and let $I_{X,{\mathcal{J}}}$ denote the interpolation operator corresponding to the restricted surface spline defined in (\[TPS\]) for $s=m-n/2$. Then, there is a constant $C$ depending on $\rho, m$ and $n$ such that, for a sufficiently dense set $X \subset {\mathbb{S}}^n$, and for $f\in C^{2m}({\mathbb{S}}^n)$, the following holds: $$\|I_{X,{\mathcal{J}}}f - f\|_{\infty} \le C h^{2m} \|f\|_{C^{2m}({\mathbb{S}}^n)},$$ Similarly, for the kernels in (\[def\_so3\]), if $f\in C^{2m}(SO(3))$, then $\| I_{X,{\mathcal{J}}}f - f\|_{\infty} \le C h^{2m} \|f\|_{C^{2m}(SO(3))}$. Lagrange functions and Lebesgue constants {#lagrange-functions-and-lebesgue-constants} ----------------------------------------- If $\chi_\xi(x)$ is the Lagrange function associated with a kernel $\kappa$ and a space $\Pi_{\mathcal{J}}$, then, by (\[weight\_bound\]), the weights in the quadrature formula have the form $c_\xi = \int_{\mathbb{M}}\chi_\xi(x)d\mu(x)$, $\xi\in X$. Our aim is to use properties of $\chi_\xi$ to obtain bounds on these weights. Before we do this, we will need the following decay estimates for $\chi_\xi$. \[general\_decay\] Suppose that ${\mathbb{M}}$ is an $n$-dimensional compact, two-point homogeneous manifold and that $\kappa$ is a polyharmonic kernel, with $\deg(\pi(x))=m$, where $m>n/2$. There exist positive constants $h_0$, $\nu$ and $C$, depending only on $m$, ${\mathbb{M}}$ and the operator ${\mathcal{L}}_m=\pi(-\Delta)$ so that if the set of centers $X$ is quasi-uniform with mesh ratio $\rho$ and has mesh norm $h\le h_0$, then the Lagrange functions for interpolation by $\kappa$ with auxiliary space $\Pi_{{\mathcal{J}}}$ satisfy $$\label{general_pointwise_lagrange_bound} |\chi_{\xi}(x)| \le C \rho^{m-n/2} \max\left(\exp\left(- \frac{\nu}{h} d(x,\xi) \right), h^{2m}\right).$$ There are two results that we want to obtain. 
First, we want to estimate the size of each weight in terms of the mesh norm $h\,$; and second, we want to estimate $\sum_{\xi\in X}|c_\xi|$. We only need to estimate $\|\chi_\xi\|_{L_1({\mathbb{M}})}$, since $|c_\xi|\le \|\chi_\xi\|_{L_1({\mathbb{M}})}$. Our approach is to divide the manifold into a ball $B=B_{\xi,R_h}$, having radius $R_h$ and center $\xi$, and its complement $B^\complement$. The radius $R_h$ is the “break-even” distance in (\[general\_pointwise\_lagrange\_bound\]); it is obtained by solving $\exp\left(- \frac{\nu}{h} R_h \right)= h^{2m}$ for $R_h$. The result is $R_h= \frac{2m}{\nu}h |\log(h)|$. By (\[general\_pointwise\_lagrange\_bound\]), we obtain $$\int_{B_0^\complement}\big| \chi_\xi(x)\big|d\mu(x) \le C\rho^{m-n/2}\mu({\mathbb{M}})h^{2m}.$$ Next, again by (\[general\_pointwise\_lagrange\_bound\]), we have that $$\int_B |\chi_\xi(x)| d\mu(x) \le C'\rho^{m-n/2}\int_0^{R_h}e^{-r\nu/h}r^{n-1}dr <C'\rho^{m-n/2}\underbrace{\int_0^\infty e^{-r\nu/h}r^{n-1}dr}_{(n-1)!(h/\nu)^n}$$ Consequently, $ |c_\xi|\le \|\chi_\xi\|_{L_1({\mathbb{M}})}\le C\rho^{m-n/2}(h/\nu)^n(1+h^{2m-n}\nu^n) $. Because $2m>n$, for $h$ small enough, it follows that $$\label{simple_weight_bound} |c_\xi|\le \|\chi_\xi\|_{L_1({\mathbb{M}})} \le C'\rho^{m-n/2}h^n,$$ where $C'=C'({\mathbb{M}},\kappa,{\mathcal{J}})$. The next result concerns the boundedness of the Lebesgue constant, which played a significant role in section \[accuracy\_stability\] in the analysis of the stability of the quadrature operator. \[polyharmonic\_lebesgue\] Let the notation be the same as that in Proposition \[general\_decay\]. If $h\le h_0$, then the Lebesgue constant, $\Lambda_{X,{\mathcal{J}},\kappa} = \max_{x\in {\mathbb{M}}} \sum_{\xi\in X} |\chi_{\xi}(x)|$, associated with $X$, ${\mathcal{J}}$, and $\kappa$, satisfies the bound $\Lambda_{X,{\mathcal{J}},\kappa}\le C$, where $C$ depends only on $m$, $\rho$, and ${\mathbb{M}}$. Quadrature via polyharmonic kernels {#polyharmonic_kappa_quadrature} ----------------------------------- In this section, we will employ the various properties of polyharmonic kernels, which are of course invariant, to obtain results concerning accuracy and stability of the associated quadrature formulas. \[accuracy\_stabiliy\_polyharmonic\] Suppose that ${\mathbb{M}}$ is an $n$-dimensional compact, two-point homogeneous manifold and that $\kappa$ is a polyharmonic kernel, with $\deg(\pi(x))=m$, where $m>n/2$. Let $X\subset {\mathbb{M}}$ be a finite set having mesh ratio $\rho_X\le \rho$, $\Pi_{\mathcal{J}}= {\mathop{\mathrm{span}}}\{\phi_{j,s}:j\in {\mathcal{J}}, \, 1\le s \le d_j\}$, and $Q_{V_X}$ be the corresponding quadrature operator given in Definition \[quadrature\_def\] . The norm of $Q_{V_X}$ is bounded, $\|Q_{V_X}\|_{C({\mathbb{M}})} \le \mu({\mathbb{M}})\Lambda_{X,{\mathcal{J}},\kappa}\le C_{m,\rho,{\mathbb{M}}}$, and the error satisfies the estimates $$\label{quad_error_reproduce} \big| Q_{V_X}(f) - \int_{{\mathbb{M}}} fd\mu \big| \le \left\{ \begin{array}{cl} Ch^m \| f \|_{W_2^m({\mathbb{M}})}, & \text{\rm all } {\mathbb{M}}, \ \text{and } f\in W_2^m({\mathbb{M}}), \\ [7pt] Ch^\mu \| f \|_{W_2^\mu({\mathbb{M}})}, & {\mathbb{M}}= {\mathbb{S}}^n, {\mathbb{RP}}^n, \text{and }f\in W_2^\mu({\mathbb{M}}), \ n/2<\mu\le m. \end{array} \right.$$ Finally, the standard deviation $\sigma_Q$ from Proposition \[Q\_standard\_dev\_prop\] satisfies $\sigma_Q\le C\sigma_\nu h^{n/2}$, where $C=C(\rho,m,{\mathbb{M}},{\mathcal{J}})$. 
The norm estimate follows from (\[quad\_op\_norm\_lebesgue\]) and Proposition \[polyharmonic\_lebesgue\]. The error estimate is a consequence of (\[quad\_op\_accuracy\]) and Theorem \[interp\_error\_reproducion\]. Finally, the bound on $\sigma_Q$ is a consequence of Proposition \[Q\_standard\_dev\_prop\], Proposition \[polyharmonic\_lebesgue\], and the bound on $\|\chi_\xi\|$ in (\[simple\_weight\_bound\]). At present, the most important compact two-point homogeneous manifolds are spheres and projective spaces (especially ${\mathbb{S}}^2$ and $SO(3)$). As we discussed in section \[superconvergence\], the restricted thin-plate splines (\[TPS\]) on ${\mathbb{S}}^n$ and similar kernels (\[def\_so3\]) on $SO(3)$ give interpolants with optimal convergence for smooth target functions. This also is reflected in the accuracy of the corresponding quadrature formulas: \[TPS\_quadrature\] Let $X$, $\rho_X$, $\rho$, $m$, $\Pi_{\mathcal{J}}$ be as in Theorem \[accuracy\_stabiliy\_polyharmonic\]. Take ${\mathbb{M}}={\mathbb{S}}^n$ or $SO(3)$ and $Q$ to be the quadrature operator corresponding to the restricted surface spline defined in (\[TPS\]) or to that for the polyharmonic kernel in (\[def\_so3\]), respectively. If $X \subset {\mathbb{M}}$ is sufficiently dense, then there is a constant $C=C(m,n,\rho)$ such that for all $f\in C^{2m}({\mathbb{M}})$, $$\big|Q(f) - \int_{{\mathbb{M}}} f d\mu \big| \le C h^{2m} \|f\|_{C^{2m}({\mathbb{M}})},$$ in addition to the bounds on $\|Q\|$ and $\sigma_Q$ from Theorem \[accuracy\_stabiliy\_polyharmonic\] holding. Because of applications in physical sciences and engineering, ${\mathbb{S}}^2$ is undoubtedly the most important of the manifolds treated here. In the next section, we give some numerical examples for ${\mathbb{S}}^2$ and the $m=2$ surface spline $\Phi(t) = (1-t)\log(1-t)$ (i.e., the thin-plate spline $r^2\log(r^2)$ restricted to ${\mathbb{S}}^2$) that validate the above theory.

Numerical results for ${\mathbb{S}}^2$ {#numerics}
======================================

We begin with a brief overview of how the quadrature weights can be computed in an efficient manner for the $m=2$ surface spline $\Phi(t) = (1-t)\log(1-t)$ using the local Lagrange preconditioner developed in [@FHNWW2012]. This is followed by a description of the nodes used in the numerical experiments and some properties of the resulting surface spline quadrature weights and their stability. Finally, we give some results validating the error estimates from the previous section.

Computing the quadrature weights
--------------------------------

The $m=2$ restricted surface spline kernel is conditionally positive definite of order 1 and the finite dimensional subspace $\Pi$ associated with it consists of all spherical harmonics of degree $\leq 1$, i.e. $\Pi: = \text{span}\{Y_{0,0}, Y_{1,0}, Y_{1,1}, Y_{1,2}\}$, where $Y_{\ell,k}$ is the degree $\ell$ and order $k$ spherical harmonic and $0\le k\le 2\ell$. Given a set $X = \{x_j\}_{j=1}^{N}$ of distinct nodes on ${\mathbb{S}}^2$, the quadrature weights $c$ for this kernel can be computed by first solving a linear system for an auxiliary vector $c_{\perp}$ and then recovering $c$ from it. Here, $A_{i,j} = (1-x_i \cdot x_j)\log(1-x_i\cdot x_j)$, $i,j=1,\ldots,N$, and $\Psi$ is the $N$-by-$4$ matrix with columns $\Psi_{i,1} = Y_{0,0}(x_i)$ and $\Psi_{i,k+2} = Y_{1,k}(x_i)$, for $i=1,\cdots,N$, $k=0,1,2$. Additionally, $J=\begin{bmatrix}4\pi & 0 & 0 & 0\end{bmatrix}^T$ and $J_0 = 2\pi(4\log(2)-1)$.
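For modest $N$, the weights can also be sanity-checked without the two-step, preconditioned approach described next: imposing exactness of the quadrature on the interpolation space leads to the dense saddle-point conditions $Ac+\Psi d = J_0\mathbf{1}$, $\Psi^T c = J$, which can be solved directly. The Python sketch below does exactly that; it is only an illustration, not the method used in the experiments. The node generator, the normalization $Y_{0,0}\equiv 1$ and $Y_{1,k}$ taken as the Cartesian coordinates (chosen so that the column integrals match $J=[4\pi,0,0,0]^T$), and the ${\mathcal{O}}(N^3)$ dense solve are all assumptions made for the sketch.

```python
import numpy as np

def fibonacci_nodes(n):
    """Golden-angle ("Fibonacci") points on S^2: a standard construction,
    not necessarily identical to the Fibonacci nodes used in the experiments."""
    i = np.arange(n) + 0.5
    z = 1.0 - 2.0 * i / n                      # heights in (-1, 1)
    az = np.pi * (1.0 + np.sqrt(5.0)) * i      # golden-angle longitudes
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(az), r * np.sin(az), z])

def surface_spline_weights(X):
    """Quadrature weights for the m = 2 restricted surface spline
    Phi(t) = (1 - t) log(1 - t) with Pi = span{Y00, Y10, Y11, Y12}.
    Solves the exactness conditions  A c + Psi d = J0 * 1,  Psi^T c = J
    by a direct dense solve (illustration only)."""
    N = X.shape[0]
    t = np.clip(X @ X.T, -1.0, 1.0)
    w = 1.0 - t
    A = np.zeros((N, N))
    off = w > 0.0                              # diagonal: (1 - t)log(1 - t) -> 0 as t -> 1
    A[off] = w[off] * np.log(w[off])
    Psi = np.column_stack([np.ones(N), X])     # Y00 = 1; Y1k taken as x, y, z
    J = np.array([4.0 * np.pi, 0.0, 0.0, 0.0])           # exact integrals of Psi's columns
    J0 = 2.0 * np.pi * (4.0 * np.log(2.0) - 1.0)          # constant J_0 quoted in the text;
    # shifting J0 by a constant only changes the auxiliary d, not c,
    # since the constant function lies in the span of Psi's columns.
    K = np.block([[A, Psi], [Psi.T, np.zeros((4, 4))]])
    rhs = np.concatenate([J0 * np.ones(N), J])
    return np.linalg.solve(K, rhs)[:N]

if __name__ == "__main__":
    c = surface_spline_weights(fibonacci_nodes(2501))
    print(c.sum() - 4.0 * np.pi)   # near zero: constants are integrated exactly
    print(c.min(), c.max())        # weights observed to be positive and of size ~ 1/N
```

With this kind of direct solve one can reproduce, at small scale, the qualitative behavior reported below: the weights sum to $4\pi$ (constants lie in $\Pi$) and are positive and comparable in size to $1/N$.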
Since $\Psi$ only has 4 columns, computing the term $\Psi(\Psi^T \Psi)^{-1}J$ can be done rapidly using, for example, a QR decomposition. Thus, the bulk of the computational effort in computing the quadrature weights is in solving the large dense system for $c_{\perp}$. Since the matrix $A$ is dense, direct methods cannot be realistically applied for large $N$, and one must then resort to iterative methods. However, for iterative methods to be useful, one must apply an effective preconditioner to the system. In [@FHNWW2012], we developed a powerful preconditioner for this system based on *local Lagrange functions* and combined it with the generalized minimum residual (GMRES) iterative method [@SaadSchultz]. The basic idea of the preconditioner is, for every node $x_j\in X$, to compute the surface spline interpolation weights for a small subset of nodes about $x_j$ consisting of its $p = M(\log N)^2$ nearest neighbors, where $M$ is a suitably chosen constant. The data for each interpolant is taken to be cardinal about $x_j$. This is similar to the preconditioner used in [@Faul-Powell-99-1] for interpolation on a 2-D plane; however, in that study the number of nodes in the local interpolants did not grow with $N$ and it was observed that the preconditioner broke down as $N$ increased. As demonstrated in [@FHNWW2012], by allowing the number of nodes in the local interpolants to grow very slowly with $N$, the preconditioner remained effective and the total number of iterations required by GMRES to reach a desired tolerance did not increase with $N$. We refer the reader to [@FHNWW2012] for complete details on the construction of the preconditioner. In the numerical results that follow, we used the preconditioned GMRES technique from [@FHNWW2012] with $p = 2 \lceil (\log N)^2\rceil$ and a relative tolerance of $10^{-12}$ to solve for $c_{\perp}$. We then used $c_{\perp}$ to find $c$. Table \[tbl:iterations\] lists the number of GMRES iterations required to compute $c_{\perp}$ for three different quasi-uniform node families, which are described in the next section. We see from the table that the number of iterations remains fairly constant as $N$ grows for all three families of nodes.

  ---------- ------------ ---------- ------------ ---------- ------------
      N       Iterations      N       Iterations      N       Iterations
   $2562$        $8$        $2501$       $9$        $2500$       $9$
   $10242$       $7$       $10001$       $8$       $10000$       $8$
   $23042$       $7$       $22501$      $11$       $22500$       $7$
   $40962$       $7$       $40001$       $8$       $40000$       $8$
   $92162$       $6$       $62501$      $10$       $62500$       $7$
   $163842$      $7$       $90001$      $10$       $90000$       $7$
   $256002$      $6$       $160001$      $8$       $160000$      $7$
   $655362$      $6$       $250001$     $10$       $250000$      $7$
  ---------- ------------ ---------- ------------ ---------- ------------

  : Number of GMRES iterations required to compute $c_{\perp}$ using the preconditioned iterative method developed in [@FHNWW2012] for the $m=2$ surface spline, for the icosahedral (left pair of columns), Fibonacci (middle pair), and quasi-minimum energy (right pair) node families. The quadrature weights $c$ are then computed from $c_{\perp}$. In all cases, the relative tolerance of the GMRES method was set to $10^{-12}$.\[tbl:iterations\]

We conclude by noting that as part of the iterative method, matrix-vector products involving $A$ must be computed. Since $A$ is dense, this requires ${\mathcal{O}}(N^2)$ operations per matrix-vector product. In the computations performed for this study, these products were computed directly, making the overall cost of the weight computation ${\mathcal{O}}(N^2)$. In a follow-up study we will explore the use of fast, approximate matrix-vector products using the algorithm described in [@Keiner:2006:FSR:1152729.1152732].
By using this algorithm, it may be possible to reduce the total cost of computing the weights (or a surface spline interpolant) to ${\mathcal{O}}(N\log N)$. Nodes, weights, and stability ----------------------------- We consider three quasi-uniform families of nodes for the numerical experiments. The first is the icosahedral nodes, which are obtained from successive refinement of the 20 spherical triangular faces formed from the icosahedron. The second are the Fibonacci (or *phyllotaxis*) nodes, which mimic certain plant behavior in nature (see, for example, [@Gonzalez:2010] and the references therein). The third are the quasi-minimum energy nodes, which are obtained by arranging the nodes so that their Riesz energy is near minimal [@HarinSaff04]. In the examples below, a power of 3 was used in the Riesz energy and the nodes were generated using the technique described in [@BorodachovHardinSaff:2012]. The mesh norm for all three of these families satisfies $h \sim \frac{1}{\sqrt{N}}$, where $N$ is the total number of nodes. The mesh ratio $\rho$ stays roughly constant for the Fibonacci and quasi-minimum energy nodes as $N$ increases. For the icosahedral points $\rho$ grows slowly with $N$ since the spacing of the nodes decreases faster towards the vertices of the triangles of the base icosahedron than at the centers [@Saff-Kuijlaars-97-1]. However, in the numerical examples that follow, this increase seems to be of little concern. All three of these families of nodes are quite popular in applications; see, for example [@Giraldo:1997; @StuhnePeltier:1999; @Ringler:2000GeodesicGrids; @Majewski:2002GME] for the icosahedral nodes, [@SwinbankPurser:2006; @SlobbeSimonsKlees:2012; @HuttigKai:2008] for the Fibonacci nodes, and [@WrightFlyerYuen; @flyer_wright2009; @FlyerLehtoBlaiseWrightStCyr2012; @SWFK2012] for the quasi-minimum energy nodes. Quadrature over these nodes also plays an important role in applications, for example, for computing the mass of a certain quantity, the energy for a certain process, or the spectral decomposition of some data. It should be noted that previous studies have been devoted to developing quadrature formulas and error estimates for icosahedral [@atkinson2012spherical Ch. 5] and Fibonacci [@HannayNye:2004] nodes. However, these results rely on the specific construction of the node sets and cannot be applied to more general quasi-uniformly distributed nodes such as the quasi-minimum energy nodes. [cc]{} ![Visualization of the different families of nodes used in the numerical experiments and the corresponding quadrature weights for the $m=2$ spherical spline. The nodes are plotted on ${\mathbb{S}}^2$ using an orthographic projection about the north pole. Each node has been given a color corresponding to the value of the quadrature weight for that node.\[fig:quad\_wghts\]](QuadWghtsIcosN23042.pdf "fig:"){width="48.00000%"} & ![Visualization of the different families of nodes used in the numerical experiments and the corresponding quadrature weights for the $m=2$ spherical spline. The nodes are plotted on ${\mathbb{S}}^2$ using an orthographic projection about the north pole. 
Each node has been given a color corresponding to the value of the quadrature weight for that node.\[fig:quad\_wghts\]](QuadWghtsFibN22501.pdf "fig:"){width="48.00000%"}\ (a) Icosahedral nodes and weights, $N=23042$ & (b) Fibonacci nodes and weights, $N=22501$\ \ Figures \[fig:quad\_wghts\] (a)–(c) show examples of the different families of nodes and provide a visualization of the values of the corresponding quadrature weights for the $m=2$ surface spline. The geometric pattern of the icosahedral nodes in part (a) of this figure are clearly reflected in the values of the corresponding quadrature weights. This is also true of the Fibonacci nodes in part (b), which have a slight clustering near their seed value (in this case the north pole), but then are quasi-uniformly distributed. There are no clear patterns for the minimum energy nodes in part (c) of Figure \[fig:quad\_wghts\], as these are not distributed in a discernible pattern. Looking at the color bars in each of the plots we see that the range of values of the weights is similar between the different node families and comparable to $1/N$, and are also positive. ![Estimate of the standard deviation $\sigma_Q$ in for the different families of nodes. The estimate is based on sample sets of 500 $N$-point quadratures of different independent, identically, distributed, zero mean (quasi) random data with a standard deviation of 1. This confirms the stability estimate in the last part of Theorem \[accuracy\_stabiliy\_polyharmonic\]. \[fig:stability\]](StandardDeviationEstimate.pdf){width="48.00000%"} To estimate $\sigma_Q$ in , and hence the stability of the quadrature weights in the presence of noise, we performed the following experiments. For each family of nodes, a value of $N$ was selected and then a sample set was generated consisting of 500 values of the $N$-point quadrature of different independent, identically, distributed, zero mean (quasi) random data. The standard deviation of the sample sets was then computed to estimate $\sigma_Q$. The results are plotted in Figure \[fig:stability\] as a function of $N$ for each of the three families of nodes. Comparing the results to the dashed line on the plot, we see that the estimated $\sigma_Q$ decreases like ${\mathcal{O}}(N^{-1/2})$, or like ${\mathcal{O}}(h)$ since $h \sim N^{-1/2}$ for these families of nodes. This is in perfect agreement with the rate predicted by the last part of Theorem \[accuracy\_stabiliy\_polyharmonic\] for the surface splines. Convergence results ------------------- Two target functions of different smoothness were used to test the error estimates of Theorem \[accuracy\_stabiliy\_polyharmonic\] and Corollary \[TPS\_quadrature\]. The target functions were chosen so that the Funk-Hecke formula (see, for example,[@atkinson2012spherical §2.5]) could be used to determine their exact integral over ${\mathbb{S}}^2$. Letting $x,x_c\in {\mathbb{S}}^2$ and $g$ be a zonal kernel, i.e. $g(x,x_ci) = g(x\cdot x_c)$, such that $g \in L^{1}[-1,1]$, the Funk-Hecke formula gives the following result: $$\begin{aligned} \int_{{\mathbb{S}}^2} g(x \cdot x_c)Y_{\ell,k}(x) d\mu(x) = \frac{4\pi a_{\ell}}{2\ell+1} Y_{\ell,k}(x_c), \label{eq:funk_hecke}\end{aligned}$$ where $Y_{\ell,k}$ is any degree $\ell$, order $k$ spherical harmonic, and $a_{\ell}$ is the $\ell$th coefficient in the Legendre expansion of $g(t)$. 
The following two kernels were used in constructing the target functions: $$\begin{aligned} g_1(t) &= -(2 - 2t)^{1/4},\\ g_2(t) &= \frac{1-\varepsilon^2}{(1 + \varepsilon^2 - 2\varepsilon t)^{\frac32}}\quad (0 < \varepsilon < 1),\end{aligned}$$ which are known as the *potential spline* kernel of order $1/4$ and the *Poisson* kernel, respectively. The Legendre expansion coefficients for these kernels are given as follows (see [@Baxter-Hubbert-2001]): $$\begin{aligned} g_1:\quad a_{\ell} &= \frac{(-1)^{\ell+1}\sqrt{2}(\Gamma\left(\frac54\right))^2(2\ell+1)}{\Gamma\left(\frac54 - \ell\right)\Gamma\left(\frac94+\ell\right)},\\ g_2:\quad a_{\ell} &= (2\ell+1)\varepsilon^{\ell}.\end{aligned}$$ The smoothness of these kernels is of course determined by the decay rate of the Legendre coefficients $a_{\ell}$. For $g_1$, we have $a_{\ell} \sim \ell^{-3/2}$, which means $g_1$ belongs to every Sobolev space $W_2^{\mu}({\mathbb{S}}^2)$ with $\mu < \frac52$. For $g_2$, the Legendre coefficients decay exponentially fast, which means $g_2\in C^{\infty}({\mathbb{S}}^2)$; in fact, $g_2$ is analytic.

![Target integrands used to test the error estimates. (a) $f_1(x)$ from (\[eq:rough\_target\]) and (b) $f_2(x)$ from (\[eq:smooth\_target\]).\[fig:target\_integrands\]](RoughTarget.pdf){width="48.00000%"} ![](SmoothTarget.pdf){width="48.00000%"}

Using the above results, we define the following two target integrands: $$\begin{aligned} f_1(x) &= \sum_{k=1}^{41} \text{sign}(Y_{20,k}(x_c))Y_{20,k}(x)g_1(x\cdot x_c), \label{eq:rough_target}\\ f_2(x) &= \sum_{k=1}^{41} \text{sign}(Y_{20,k}(x_c))Y_{20,k}(x)g_2(x\cdot x_c), \label{eq:smooth_target}\end{aligned}$$ where $x_c=(\cos(-2.0281)\sin(0.76102),\sin(-2.0281)\cos(0.76102),\sin(0.76102))$ and $\varepsilon = 2/3$ for $g_2$; see Figure \[fig:target\_integrands\] (a) and (b) for plots of these respective functions. Integrating these functions over ${\mathbb{S}}^2$ and applying the Funk-Hecke formula gives $$\begin{aligned} \int_{{\mathbb{S}}^2} f_1(x) d\mu(x) &= 0.014830900415995, \\ \int_{{\mathbb{S}}^2} f_2(x) d\mu(x) &= 0.032409262543520.\end{aligned}$$ Both $f_1$ and $f_2$ inherit their smoothness directly from $g_1$ and $g_2$, respectively. Thus, $f_1$ belongs to every Sobolev space $W_2^{\mu}({\mathbb{S}}^2)$ with $\mu < \frac52$ and $f_2\in C^{\infty}({\mathbb{S}}^2)$. Tables \[tbl:rough\_err\] and \[tbl:smooth\_err\] display the relative errors in the $N$-point quadrature of $f_1$ and $f_2$, respectively, for the different families of nodes, while Figures \[fig:target\_err\](a) and (b) display these respective results graphically on a log-log scale.
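Before discussing those results, a quick, independent numerical check of (\[eq:funk\_hecke\]) and of the quoted coefficients $a_\ell=(2\ell+1)\varepsilon^\ell$ for the Poisson kernel may be useful. The Python sketch below is not part of the reported experiments: it uses SciPy's complex spherical harmonics (the identity holds for any degree-$\ell$ harmonic, real or complex), an arbitrarily chosen evaluation point $x_c$, and a simple tensor-product rule on ${\mathbb{S}}^2$ in place of the kernel quadrature under study.

```python
import numpy as np
from scipy.special import sph_harm

# Tensor-product rule on S^2: Gauss-Legendre in u = cos(polar), uniform in azimuth.
nu, nphi = 200, 400
u, wu = np.polynomial.legendre.leggauss(nu)
phi = 2.0 * np.pi * np.arange(nphi) / nphi
U, PHI = np.meshgrid(u, phi, indexing="ij")
W = np.outer(wu, np.full(nphi, 2.0 * np.pi / nphi))   # rule weights; W.sum() == 4*pi
THETA = np.arccos(U)                                  # polar angles of the rule's points

eps, ell, k = 2.0 / 3.0, 20, 5      # Poisson-kernel parameter; degree and order (arbitrary)
theta_c, phi_c = 0.81, -2.03        # an arbitrary point x_c (polar, azimuth), not the paper's

# Cosine of the angle between each rule point and x_c (spherical law of cosines).
cosgamma = (np.cos(THETA) * np.cos(theta_c)
            + np.sin(THETA) * np.sin(theta_c) * np.cos(PHI - phi_c))

g2 = (1.0 - eps**2) / (1.0 + eps**2 - 2.0 * eps * cosgamma) ** 1.5
Y = sph_harm(k, ell, PHI, THETA)    # SciPy's order: (order, degree, azimuth, polar)

lhs = np.sum(W * g2 * Y)                                          # quadrature of the LHS
rhs = 4.0 * np.pi * eps**ell * sph_harm(k, ell, phi_c, theta_c)   # 4*pi*a_l/(2l+1) * Y(x_c)
print(abs(lhs - rhs) / abs(rhs))    # relative difference; should be very small
```

The rule used here easily resolves a degree-20 harmonic against the analytic Poisson kernel, so the two sides should agree to high accuracy, confirming both the identity and the stated coefficients.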
Focusing first on the results for $f_1$ in Figure \[fig:target\_err\](a) and comparing the results to the included dashed line, it is clear that for all three families the error is decreasing approximately like ${\mathcal{O}}(N^{-1.25})$. Since $h \sim N^{-1/2}$ for these families of nodes, this observed rate of decrease in the error is approximately ${\mathcal{O}}(h^{2.5})$, which is precisely the rate predicted by the estimates in Theorem \[accuracy\_stabiliy\_polyharmonic\] for functions with $f_1$’s smoothness. Doing a similar comparison of the results for $f_2$ in Figure \[fig:target\_err\](b), we see that the errors associated with the icosahedral nodes very clearly decrease like ${\mathcal{O}}(N^{-2})$. The results for the other two nodes are not as clear, but for larger values of $N$ the errors do appear to be decreasing approximately like ${\mathcal{O}}(N^{-2})$. Again, because of the relationship between $h$ and $N$ for these nodes, the errors for $f_2$ are thus decreasing approximately like ${\mathcal{O}}(h^4)$. This is the expected rate from Corollary \[TPS\_quadrature\] since $f_2$ is infinitely smooth and we have used the $m=2$ surface spline. ---------- ----------------------- ---------- ----------------------- ---------- ----------------------- N Rel. Error N Rel. Error N Rel. Error $2562$ $1.926\times 10^{-1}$ $2501$ $5.112\times 10^{-3}$ $2500$ $3.048\times 10^{-2}$ $10242$ $3.533\times 10^{-2}$ $10001$ $5.549\times 10^{-3}$ $10000$ $6.848\times 10^{-2}$ $23042$ $1.286\times 10^{-2}$ $22501$ $1.770\times 10^{-3}$ $22500$ $2.480\times 10^{-2}$ $40962$ $6.268\times 10^{-3}$ $40001$ $1.040\times 10^{-3}$ $40000$ $1.217\times 10^{-2}$ $92162$ $2.273\times 10^{-3}$ $62501$ $6.460\times 10^{-4}$ $62500$ $6.989\times 10^{-3}$ $163842$ $1.107\times 10^{-3}$ $90001$ $4.068\times 10^{-4}$ $90000$ $4.393\times 10^{-3}$ $256002$ $6.330\times 10^{-4}$ $160001$ $1.844\times 10^{-4}$ $160000$ $2.083\times 10^{-3}$ $655362$ $1.957\times 10^{-4}$ $250001$ $1.085\times 10^{-4}$ $250000$ $1.228\times 10^{-3}$ ---------- ----------------------- ---------- ----------------------- ---------- ----------------------- : Relative error in the $N$-point quadrature of the “rough” target function $f_1$ in for the different families of nodes.\[tbl:rough\_err\] ---------- ----------------------- ---------- ----------------------- ---------- ----------------------- N Rel. Error N Rel. Error N Rel. 
Error $2562$ $3.358\times 10^{-2}$ $2501$ $1.045\times 10^{-4}$ $2500$ $6.951\times 10^{-2}$ $10242$ $1.888\times 10^{-3}$ $10001$ $4.690\times 10^{-5}$ $10000$ $5.932\times 10^{-4}$ $23042$ $3.642\times 10^{-4}$ $22501$ $3.189\times 10^{-6}$ $22500$ $1.077\times 10^{-4}$ $40962$ $1.143\times 10^{-4}$ $40001$ $7.437\times 10^{-6}$ $40000$ $2.730\times 10^{-5}$ $92162$ $2.245\times 10^{-5}$ $62501$ $1.805\times 10^{-6}$ $62500$ $8.276\times 10^{-6}$ $163842$ $7.098\times 10^{-6}$ $90001$ $9.204\times 10^{-7}$ $90000$ $6.944\times 10^{-6}$ $256002$ $2.897\times 10^{-6}$ $160001$ $3.009\times 10^{-7}$ $160000$ $1.250\times 10^{-6}$ $655362$ $4.433\times 10^{-7}$ $250001$ $1.411\times 10^{-7}$ $250000$ $5.414\times 10^{-7}$ ---------- ----------------------- ---------- ----------------------- ---------- ----------------------- : Relative error in the $N$-point quadrature of the “smooth” target function $f_2$ in for the different families of nodes.\[tbl:smooth\_err\] --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Relative errors in the $N$-point quadratures of (a) $f_1$ and (b) $f_2$ for the three different families of nodes. The mesh-norm for all three families of nodes satisfies $h\sim N^{-1/2}$, so that the dashed lines in the figures indicate convergence like (a) ${\mathcal{O}}(h^{2.5})$ and (b) ${\mathcal{O}}(h^4)$. These are the predicted convergence rates from Theorem \[accuracy\_stabiliy\_polyharmonic\] and Corollary \[TPS\_quadrature\], respectively. \[fig:target\_err\]](ErrRoughTarget.pdf "fig:"){width="48.00000%"} ![Relative errors in the $N$-point quadratures of (a) $f_1$ and (b) $f_2$ for the three different families of nodes. The mesh-norm for all three families of nodes satisfies $h\sim N^{-1/2}$, so that the dashed lines in the figures indicate convergence like (a) ${\mathcal{O}}(h^{2.5})$ and (b) ${\mathcal{O}}(h^4)$. These are the predicted convergence rates from Theorem \[accuracy\_stabiliy\_polyharmonic\] and Corollary \[TPS\_quadrature\], respectively. \[fig:target\_err\]](ErrSmoothTarget.pdf "fig:"){width="48.00000%"} \(a) Rough target, $f_1$. \(b) Smooth target, $f_2$. 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- We conclude by noting that the nodes and quadrature weights used in the numerical experiments above are available for download from [@WrightQuadWeights]. Quadrature on Manifolds Diffeomorphic to Homogeneous Spaces {#manif_diffeo} =========================================================== Numerical integration of functions defined on smooth surfaces that are diffeomorphic to ${\mathbb{S}}^2$ arise in a number of applications. For example, the shape of the earth, as well as the other planets, is a “flattened sphere” – i.e., an oblate spheroid, which is diffeomorphic to a sphere. To numerically compute integrals over the earth’s surface then requires quadrature formulas over oblate spheroids. In addition, the need for numerical integration over various surfaces also arises in boundary element formulations of continuum problems in ${\mathbb{R}}^3$ [@atkinson2012spherical Ch. 6]. In this section, we discuss quadrature in a more general context. We will show how to use quadrature weights for a homogeneous manifold ${\mathbb{M}}$ to obtain an invariant, coordinate independent quadrature formula for a smooth Riemannian manifold ${\mathbb{L}}$ diffeomorphic to ${\mathbb{M}}$. Of course, the case in which ${\mathbb{L}}$ is an oblate spheroid and ${\mathbb{M}}={\mathbb{S}}^2$ is of special interest. Recall that a $C^\infty$ manifold ${\mathbb{L}}$ is diffeomorphic to ${\mathbb{M}}$ if there is a $C^\infty$ bijection $F:{\mathbb{M}}\to {\mathbb{L}}$. Let $g_{ij}$ be the Riemannian metric for ${\mathbb{M}}$. Suppose that ${\mathbb{L}}$ has the metric $h_{ij}$. Consider a local chart $(U,\phi:U\to {\mathbb{R}}^n)$ near $p_0\in {\mathbb{M}}$. Using this chart we have coordinates $x=(x^1,\ldots,x^n) = \phi(p)$ and a parametrization $p=\phi^{-1}(x)$. We can use $(U,\phi)$ to produce a local chart $(V,\psi:V \to {\mathbb{R}}^n)$ near $q_0=F(p_0) \in {\mathbb{L}}$. Simply let $V=F(U) \subset {\mathbb{L}}$ and $\psi = \phi\circ F^{-1}$. The coordinates for $V$ are thus $x = \psi(q)$, and so the two metrics $g_{ij}$ and $h_{ij}$ can be expressed in terms of the same set of coordinates. 
In these coordinates, the volume elements are given by $$d\mu_{{\mathbb{M}}}(x) = \sqrt{\det(g_{ij}(x))}\,dx^1\cdots dx^n \ \text{and} \ d\mu_{\mathbb{L}}(x) = \sqrt{\det(h_{ij}(x))}\,dx^1\cdots dx^n.$$ It follows that $$\label{volume_relation} d\mu_{\mathbb{L}}(x) = \underbrace{\sqrt{\frac{\det(h_{ij}(x))}{\det(g_{ij}(x)) }}}_{\displaystyle{w(x)}} d\mu_{{\mathbb{M}}}(x).$$ Suppose that we now make a change of coordinates, from $x=\phi(p)$ to new coordinates $y=\varphi(p)$, or $x=x(y)$. Let $x'(y)$ be the Jacobian matrix for the transformation, and let $J(y)=\det(x'(y))$. In $y$ coordinates, the metrics are $\tilde g(y)=(x'(y))^Tg(x(y))x'(y)$ and $\tilde h(y)=(x'(y))^T h(x(y))x'(y)$. Consequently, we have that $$\det(\tilde g_{ij}(y)) = J(y)^2 \det(g_{ij}(x(y))) \ \text{and} \ \det(\tilde h_{ij}(y)) = J(y)^2 \det(h_{ij}(x(y))),$$ and, furthermore, that $$w(x(y))=\sqrt{\frac{\det(h_{ij}(x(y)))}{\det(g_{ij}(x(y))) }} = \sqrt{\frac{\det(\tilde h_{ij}(y))}{\det(\tilde g_{ij}(y)) }} = \tilde w(y).$$ This means that $w\circ \phi(p) = \tilde w\circ \psi(p)=:W(p)$ is a scalar invariant that is independent of the choice of coordinates. In terms of integrals, we have $$\label{intLL_intM} \int_{{\mathbb{L}}} f(q) d\mu_{\mathbb{L}}(q) = \int_{{\mathbb{M}}} f\circ F(p) W(p) d\mu_{{\mathbb{M}}}(p).$$ An invariant, coordinate independent quadrature formula for ${\mathbb{L}}$ can be obtained from the one for ${\mathbb{M}}$. We have the following result. \[quad\_diffeo\] Let $X$ denote the set of centers on ${\mathbb{L}}$ and let $X'=F^{-1}(X)$ be the corresponding set on ${\mathbb{M}}$. Suppose that we have a quadrature formula for ${\mathbb{M}}$ with weights $\{C_{\xi'}\}_{\xi'\in X'}$. Then we have the following quadrature formula for ${\mathbb{L}}$, $$\label{quad_diffeo_formula} Q_{\mathbb{L}}(f) := \sum_{\xi \in X} f(\xi)c_\xi,\ \text{where }c_\xi := W(F^{-1}(\xi))C_{F^{-1}(\xi)}.$$ Applying the quadrature formula for the homogeneous manifold ${\mathbb{M}}$ to the integral on the right-hand side of (\[intLL\_intM\]) yields $$Q_{{\mathbb{M}}}\big(f\circ F(p) W(p)\big) = \sum_{\xi'\in X'} f\circ F(\xi') W(\xi')C_{\xi'}.$$ Since $\xi' = F^{-1}(\xi)$, we have $f\circ F(\xi') = f(\xi)$ and $W(\xi')=W(F^{-1}(\xi))$. Taking $c_\xi = W(F^{-1}(\xi))C_{F^{-1}(\xi)}$ we obtain (\[quad\_diffeo\_formula\]). #### Oblate spheroid Consider the oblate spheroid ${\mathbb{L}}$, $x^2+y^2+z^2/a^2=1$, where $0<a<1$, and the 2-sphere ${\mathbb{S}}^2(={\mathbb{M}})$, $X^2+Y^2+Z^2=1$. The diffeomorphism between the two manifolds is $F(X,Y,Z) := (X,Y,aZ)$; that is, $(x,y,z) = (X,Y,aZ)$. Our aim is to find the scale factor $W$. Since the end result will be coordinate independent, we choose to work in spherical coordinates $(\theta,\phi)$, where $\theta$ is the colatitude (polar angle) and $\phi$ is the longitude on ${\mathbb{S}}^2$. (The north pole is $(0,0,1)$.) Obviously, for ${\mathbb{L}}$ we have $x = \sin \theta \cos \phi$, $y=\sin \theta \sin \phi$, $z = a \cos\theta$. The metric for the sphere is $dS^2 = d\theta^2 + \sin^2(\theta)d\phi^2$. The metric for ${\mathbb{L}}$ is the Euclidean metric $dx^2+dy^2+dz^2$ on ${\mathbb{R}}^3$ restricted to ${\mathbb{L}}$. 
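A short symbolic sketch (assuming SymPy) of this determinant-ratio computation for the two parametrizations just introduced; its output should agree with the closed-form factor derived in the next paragraph:

```python
import sympy as sp

theta, phi, a = sp.symbols('theta phi a', positive=True)

# parametrizations of S^2 and of the oblate spheroid L, with (x, y, z) = (X, Y, aZ)
sphere   = sp.Matrix([sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), sp.cos(theta)])
spheroid = sp.Matrix([sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), a*sp.cos(theta)])

def induced_metric(r):
    J = r.jacobian([theta, phi])   # columns are the coordinate tangent vectors
    return sp.simplify(J.T * J)    # first fundamental form in (theta, phi) coordinates

g = induced_metric(sphere)         # metric of S^2
h = induced_metric(spheroid)       # metric of L in the same coordinates

w = sp.simplify(sp.sqrt(h.det() / g.det()))   # scale factor w = sqrt(det h / det g)
print(w)                           # equivalent to sqrt(a**2 + (1 - a**2)*cos(theta)**2)
```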
Making a straightforward calculation, one can show that the metric for ${\mathbb{L}}$ is $$ds^2 = (\cos^2 \theta +a^2\sin^2\theta )d\theta ^2 +\sin^2 \theta d\phi^2 = \big(a^2 + (1-a^2)\cos^2 \theta\big) d\theta^2 +\sin^2 \theta d\phi^2.$$ Consequently, the volume element for ${\mathbb{L}}$ is $$d\mu_{\mathbb{L}}= \sqrt{a^2 + (1-a^2)\cos^2 \theta\,}\sin \theta \,d\theta d\phi = \underbrace{\sqrt{a^2 + (1-a^2)\cos^2 \theta\,}}_{w(\theta,\phi)}\,d\mu_{{\mathbb{S}}^2}$$ To put this in invariant form on ${\mathbb{S}}^2$, we use $Z=\cos\theta$ to obtain $W(X,Y,Z)=\sqrt{a^2 + (1-a^2)Z^2\,}$. Pulling this back to ${\mathbb{L}}$, we have $W\circ F^{-1}(x,y,z)= \sqrt{a^2 + (a^{-2}-1)z^2\,}$. The weights for $Q_{\mathbb{L}}$ are thus $$c_\xi = \sqrt{a^2 + (a^{-2}-1)\xi_z^2\,}\,C_{(\xi_x,\xi_y, \xi_z/a)}.$$ As we mentioned above, the earth is approximately an oblate spheroid. The parameter $a$ is the ratio of the polar radius to the equatorial radius. The *flattening* of the earth is $f = 1 -a$, and has the approximate value $f\approx 1/300$ (cf. Earth Fact Sheet [@NASA-fact-sheet-2008]). From this, we get that $a\approx 299/300$, and so $W\circ F^{-1}(x,y,z) \approx \sqrt{0.993+ 0.007z^2}$. This factor varies between $0.9967$ and $1$, about $0.3\%$. Even so, it could affect the accuracy of the quadrature formula for functions with large values near the equator. As an aside, Jupiter has $f\approx 1/15$ (cf. Jupiter Fact Sheet (ellipticity) [@NASA-fact-sheet-2008]), and for it the change in $W\circ F^{-1}$ would be a hefty $7\%$. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Prof. Doug Hardin from Vanderbilt University for providing us with code for generating the quasi-minimum energy points used in the numerical examples based on the technique described in [@BorodachovHardinSaff:2012]. [^1]: Department of Mathematics, High Point University, High Point, NC 27262, USA. [^2]: Department of Mathematics, University of Hawaii, Honolulu, HI 96822, USA. Research supported by by grant DMS-1232409 from the National Science Foundation. [^3]: Department of Mathematics, Texas A&M University, College Station, TX 77843, USA. Research supported by grant DMS-1211566 from the National Science Foundation. [^4]: Department of Mathematics, Texas A&M University, College Station, TX 77843, USA. Research supported by grant DMS-1211566 from the National Science Foundation. [^5]: Department of Mathematics, Boise State University, Boise, ID 83725, USA. Research supported by grants DMS-0934581and DMS-1160379 from the National Science Foundation. [^6]: See [@HNW2010] for a discussion of the Riemannian geometry involved, including metrics, tensors, covariant derivatives, *etc*.
{ "pile_set_name": "ArXiv" }
--- abstract: | In this paper, we study the polynomial stability of the analytical solution and the convergence of the semi-implicit Euler method for non-linear stochastic pantograph differential equations. Firstly, sufficient conditions for solutions to decay at a polynomial rate in the mean-square and almost sure senses are obtained. Secondly, the consistence and convergence of this method are proved. Furthermore, the orders of consistence (in the sense of average and mean-square) and of convergence are given, respectively. [**keywords.**]{} non-linear stochastic pantograph differential equations; polynomial stability; semi-implicit Euler method; consistence; convergence author: - | M.H.Song,[^1]  Y.L.Lu,  M.Z.Liu\ [^2] title: '**[Stability of analytical solutions and convergence of numerical methods for non-linear stochastic pantograph differential equations]{}**' --- Introduction ============ Stochastic pantograph differential equations (SPDEs) arise widely in control, biology, neural networks, finance, etc. Asymptotic stability of analytical solutions has received considerable attention in the literature for both deterministic and stochastic functional differential equations. In particular, there is a substantial literature on stability with non-exponential decay rates of the solutions (see [@cd][@kx][@mz][@ab]). One important type of non-exponential decay is polynomial asymptotic stability, which means that the rate of decay is controlled by a polynomial function in the mean-square or almost sure sense. This type of stability has been studied in [@jg1], [@jg2] and [@jg3]. Buckwar and Appleby consider the polynomial stability of the one-dimensional linear stochastic pantograph differential equation in [@eb], where sufficient conditions for the polynomial asymptotic property are given. The convergence of numerical methods is another crucial property of stochastic differential equations. Recently, many researchers have devoted attention to stochastic delay differential equations. Mao Wei [@mw] gave sufficient conditions for convergence of the semi-implicit Euler method for variable delay differential equations driven by a Poisson random jump measure. Fan studied the approximate solution of linear stochastic pantograph differential equations with the Razumikhin technique in [@fan]. Baker and Buckwar, in [@bb], also investigated linear stochastic pantograph differential equations, and sufficient conditions for the convergence of the semi-implicit Euler method are obtained there. This article investigates non-linear stochastic pantograph differential equations. It is organised as follows. In Section 2, we introduce the necessary notations and results on pantograph differential equations. Sufficient conditions for the polynomial stability of analytical solutions in the mean-square and almost sure senses are given in Section 3. Finally, both the consistence and convergence of the semi-implicit Euler method are proved, and the corresponding orders are obtained. Preliminary notations and properties of pantograph differential equations ========================================================================= Throughout this paper, unless otherwise specified, the following notations are used. If $x,y$ are vectors, the inner product of $x,y$ is denoted by $\langle x,y \rangle=x^Ty$, and $|x|$ denotes the Euclidean norm of $x\in R^d$. We use $A^T$ to denote the transpose of a vector or matrix $A$, and if $A$ is a matrix, its trace norm is denoted by $|A|=\sqrt{\mathrm{trace}(A^TA)}$. 
Let $(\Omega,P,\mathcal{F})$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\geq0}$ satisfying the usual conditions. Let $0<q<1$, $x(t)=(x_1(t),x_2(t),\cdots,x_d(t))^T\in R^d$ and $B(t)=(B_1(t),B_2(t),\cdots,B_m(t))^T$ be a $m$-dimensional Brownian motion. Suppose $h(x)$ be a function, the derivative of $h(x)$ is defined as $D^+h(x)$, i.e $$\label{2.1} D^+h(x)=\limsup\limits_{\delta\rightarrow0}\frac{h(x+\delta)-h(x)}{\delta}.$$ Before studying the stochastic pantograph differential equations, the properties of deterministic pantograph differential equations are firstly introduced. The equation has the following form: $$\label{2.2} \left\{ \begin{array}{l}\ x^{\prime}(t)=\bar{a}x(t)+\bar{b}x(qt) \\x(0)=x_0. \\ \end{array} \right.$$ [@eb] Assume $x(t)$ be the solution of (2.2), $x_0>0$, if $\bar{a}<0$, then there exists a constant number $C$ such that $$\label{2.3} \limsup\limits_{t\rightarrow\infty}\frac{|x(t)|}{t^\alpha}=C|x_0|,$$ where $\alpha\in R$ and $\bar{a}+\bar{b}q^\alpha=0$.  If there exists a constant number $C>0$, such that $$\label{2.4} |x(t)|\leq C|x(0)|t^\alpha , ~t\geq0,$$ then (2.3) holds. [@eb] Assume $x(t)$ be the solution of equation (2.2), $x_0>0$. If $\bar{b}>0$, $p(t):R^{+}\rightarrow R^+$ is a non-negative continuous function satisfying $$\label{2.5} D^+p(t)\leq\bar{a}p(t)+\bar{b}p(qt), ~t\geq0,$$ and $0<p(0)\leq x(0)$, then $p(t)\leq x(t)$ for all $t\geq0$. Furthermore, if $\bar{a}<0$, and $p(t)$ satisfys (2.5), then there exists a constant $C>0$ such that $$\label{2.6} p(t)\leq Cp(0)t^\alpha , t\geq0,$$ where $\bar{a}+\bar{b}q^{\alpha}=0$. Polynomial stability of analytical solutions for non-linear stochastic pantograph differential equations ======================================================================================================== In this section, we consider the following equation $$\label{3.1} \left\{ \begin{array}{l}\ dx(t)=f(t,x(t),x(qt))dt+g(t,x(t),x(qt))dB(t),~0\leq t\leq T \\ x(0)=x_0. \\ \end{array} \right.$$ where $f:[0,T]\times R^d\times R^d \rightarrow R^d$, $g:[0,T]\times R^d\times R^d\rightarrow R^{d\times m}$, $f(t,0,0)=0$, $g(t,0,0)=0$, and $E|x_0|^2<\infty$. It is easy to see there exists zero solution for (\[3.1\]). The zero solution for (\[3.1\]) is said to be mean-square polynomial stable, if there exists a constant number $\alpha<0$, such that $$\label{3.2} \limsup\limits_{t\rightarrow\infty}\frac{logE|x(t)|^2}{logt}\leq\alpha,$$ where $x(t)$ is the solution for (\[3.1\]) with any initial value $x(0)=x_0$. The zero solution of (3.1) is said to be almost surely polynomial stable, if there exists a constant number $\alpha<0$, such that $$\label{3.3} \limsup\limits_{t\rightarrow\infty}\frac{log|x(t)|}{logt}\leq\alpha~~a.s.,$$ where $x(t)$ is the solution of (\[3.1\]) with any initial value $x(0)=x_0$.  
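Before turning to the stochastic case, the polynomial rate in the deterministic lemmas quoted above can be seen numerically. A minimal sketch (assuming NumPy), with illustrative coefficients $\bar a=-2$, $\bar b=1$, $q=1/2$ (not taken from the paper), so that $\bar a+\bar b q^{\alpha}=0$ gives $\alpha=-1$ and $t\,x(t)$ should settle near a constant:

```python
import numpy as np

a_bar, b_bar, q = -2.0, 1.0, 0.5                 # illustrative choice with a_bar < 0, b_bar > 0
alpha = np.log(-a_bar / b_bar) / np.log(q)       # root of a_bar + b_bar*q**alpha = 0 (here alpha = -1)

h, T = 1.0e-3, 200.0
t = np.arange(0.0, T + h, h)
x = np.empty_like(t)
x[0] = 1.0

# forward Euler for x'(t) = a_bar*x(t) + b_bar*x(q*t); x(q*t_n) interpolated on the grid (q*t_n <= t_n)
for n in range(len(t) - 1):
    s = q * n
    j = int(s)
    frac = s - j                                 # q*t_n lies between grid points j and j+1 (both <= n)
    x_delay = x[j] if frac == 0.0 else (1.0 - frac) * x[j] + frac * x[j + 1]
    x[n + 1] = x[n] + h * (a_bar * x[n] + b_bar * x_delay)

for T_check in (10, 50, 100, 200):
    i = int(round(T_check / h))
    print(T_check, t[i] ** (-alpha) * x[i])      # t * x(t) should approach a constant, cf. the first lemma
```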
Assume there exist real numbers $a $, $b>0, c>0 ,d>0$, such that the coefficients $f$ and $g$ satisfy $$\langle x_1-x_2,f(t,x_1,y)-f(t,x_2,y)\rangle\leq a|x_1-x_2|^2,~~\forall x_1,x_2,y\in R^d, t\geq0;$$ $$|f(t,x,y_1)-f(t,x,y_2)|\leq b|y_1-y_2|,~~\forall x,y_1,y_2\in R^d, t\geq0;$$ $$|g(t,x_1,y)-g(t,x_2,y)|\leq c|x_1-x_2|,~~\forall x_1,x_2,y\in R^d, t\geq0;$$ $$|g(t,x,y_1)-g(t,x,y_2)|\leq d|y_1-y_2|,~~\forall x,y_1,y_2\in R^d, t\geq0.$$ According to $f(t,0,0)=0$, $g(t,0,0)=0$ and assumption \[3.3\], we can estimate $$\langle x,f(t,x,y)\rangle\leq (a+\frac{b}{2})|x|^2+\frac{b}{2}|y|^2,~~~|g(t,x,y)|^2\leq 2c^2|x|^2+2d^2|y|^2.$$ Suppose that $x(t)$ is the solution of (\[3.1\]), if the coefficients $f,g$ satisfy assumption \[3.3\], and $a+b+c^2+d^2<0$, then the zero solution of (\[3.1\]) is mean-square polynomial stable. [**Proof.**]{} According to definition 3.1, we just need to prove that there exist constant number $C_1$ and $\alpha<0$, such that $E|x(t)|^2\leq C_1t^\alpha$. Itô formula shows that $$E(|x(t)|^2)=E(|x_0|^2)+E(\int^t_0[2\langle x(s),f(s,x(s),x(qs))\rangle+|g(s,x(s),x(qs))|^2]{\rm d}s).$$     Let $Y(t)=E(|x(t)|^2)$, then for any $t\geq0$, $t+h\geq0$, we have $$\begin{split} Y(t+h)-Y(t)&\leq 2E(\int^{t+h}_t[(a+\frac{1}{2}b)|x(s)|^2+\frac{1}{2}b|x(qs)|^2]{\rm d}s)+2E(\int^{t+h}_t[c^2|x(s)|^2+d^2|x(qs)|^2]{\rm d}s)\\ &\leq (2a+b+2c^2)\int^{t+h}_tY(s){\rm d}s+(b+2d^2)\int^{t+h}_tY(qs){\rm d}s. \end{split}$$ Due to $$\begin{split} Y(t)=\limsup\limits_{h\rightarrow0}\frac{\int^{t+h}_0Y(s){\rm d}s-\int^t_0Y(s){\rm d}s}{h},\\ Y(qt)=\limsup\limits_{h\rightarrow0}\frac{\int^{t+h}_0Y(qs){\rm d}s-\int^t_0Y(qs){\rm d}s}{h}, \end{split}$$ thus $$D^+Y(t)\leq (2a+b+2c^2)Y(t)+(b+2d^2)Y(qt).$$ Note that $2a+b+2c^2<0$, $b+2d^2>0$, by Lemma 2.3, there exists $C_1$ and $\alpha\in R$, such that $$E(|x(t)|^2)=Y(t)\leq C_1Y(0)t^\alpha=C_1E(|x_0|^2)t^\alpha,$$ where $\alpha$ satisfies $2a+b+2c^2+(b+2d^2)q^\alpha=0$. According to $a+b+c^2+d^2<0$, we can know $\alpha<0$, the theorem is proved. Suppose that $x(t)$ is the solution of (\[3.1\]), if the coefficients $f,g$ satisfy assumption \[3.3\], and $2a+b+2c^2+(b+2d^2)/q<0$, then the zero solution of (\[3.1\]) is almost surely polynomial stable. [**Proof.**]{} According to $2a+b+2c^2+(b+2d^2)/q<0$, we know that $a+b+c^2+d^2<0$ and $E(|x(t)|^2)\leq C_1E(|x(0)|^2)t^\alpha$, where $\alpha<-1$. By Itô formula and assumption \[3.3\], one can show that for any $n-1\leq t\leq n$, $$\label{3.7} \begin{split} E(\sup\limits _{n-1\leq t\leq n}|x(t)|^2)&\leq E(|x(n-1)|^2)+(b+2d^2)E\int^n_{n-1}|x(qs)|^2{\rm d}s+2c^2E\int^n_{n-1}|x(s)|^2{\rm d}s \\ &~~~+2E(\sup\limits_{n-1\leq t\leq n}\int^t_{n-1}x^T(s)\cdot g(s,x(s),x(qs)){\rm d}B(s)). \end{split}$$ According to Burholder-Davis-Gundy inequations, it is easy to show that $$\label{3.11} \begin{split} &~~~~2E(\sup\limits_{n-1\leq t\leq n}\int^t_{n-1} x(s)^{T}\cdot g(s,x(s),x(qs)){\rm d}B(s)) \\ &\leq 8\sqrt{2}E(\int^n_{n-1}| x(s)|^2|g(s,x(s),x(qs))|^2{\rm d}s)^\frac{1}{2} \\ &\leq 8\sqrt{2}E(\sup\limits_{n-1\leq t\leq n}| x(t)|^2\int^n_{n-1}|g(s,x(s),x(qs))|^2{\rm d}s)^\frac{1}{2} \\ &\leq E(\frac{1}{2}\sup\limits_{n-1\leq t\leq n}| x(t)|^2+64\int^n_{n-1}|g(s,x(s),x(qs))|^2{\rm d}s) \\ &\leq \frac{1}{2}E(\sup\limits_{n-1\leq t\leq n}| x(t)|^2)+64E(\int^n_{n-1}(c^2|x(s)|^2+d^2|x(qs)|^2){\rm d}s). 
\end{split}$$ Substituting (\[3.11\]) into (\[3.7\]), then $$\begin{split} E(\sup\limits_{n-1\leq t\leq n}|x(t)|^2)\leq& 2C_1E(|x_0|^2)(n-1)^\alpha +132c^2C_1E(|x_0|^2)\int^n_{n-1}s^\alpha{\rm d}s \\ &+(2b+132d^2)C_1E(|x_0|^2)q^\alpha\int^n_{n-1}s^\alpha{\rm d}s. \end{split}$$ It is easy to know $\int_{n-1}^{n}s^{\alpha}{\rm d}s\leq (n-1)^{\alpha}\max(1,2^{\alpha})$. So $$E(\sup\limits_{n-1\leq t\leq n}|x(t)|^2)\leq \tilde{C}(n-1)^\alpha.$$ where $\widetilde{C}=[2+132c^2+(2b+132d^2)q^\alpha]C_1E(|x_0|^2)\max(1,2^\alpha)$. By Markov’s inequations, for any $\varepsilon>0$, it is not difficult to show $$\begin{split} &P(\sup\limits_{n-1\leq t\leq n}|x(t)|^2(n-1)^{-1-\alpha-\varepsilon}\geq\gamma) \\ \leq& \frac{1}{\gamma}\frac{1}{(n-1)^{1+\varepsilon}}\frac{1}{(n-1)^\alpha}E(\sup\limits_{n-1\leq t\leq n}|x(t)|^2) \\ \leq& \frac{1}{\gamma}\frac{1}{(n-1)^{1+\varepsilon}}\widetilde{C}. \end{split}$$ By using Borel-Cantelli lemma, the following limit can be achieved $$\limsup\limits_{n\rightarrow\infty}\sup\limits_{n-1\leq t\leq n}|x(t)|^2(n-1)^{-1-\alpha-\varepsilon}=0~~~a.s..$$ Note that for any $t>0$, there exists $n(t)$ such that $n(t)-1\leq t\leq n(t)$, and $$\lim\limits_{t\rightarrow\infty}\frac{t}{n(t)-1}=1.$$ Hence $$\begin{split} \limsup\limits_{t\rightarrow\infty}|x(t)|^2t^{-1-\alpha-\varepsilon}&\leq \limsup\limits_{t\rightarrow\infty}(\sup\limits_{n(t)-1\leq s\leq n(t)}|x(s)|^2(n(t)-1)^{-1-\alpha-\varepsilon})\limsup\limits_{t\rightarrow\infty}\frac{t^{-1-\alpha-\varepsilon}}{(n(t)-1)^{-1-\alpha-\varepsilon}}) \\ &=\limsup\limits_{t\rightarrow\infty}(\sup\limits_{n-1\leq s\leq n}|x(s)|^2(n-1)^{-1-\alpha-\varepsilon})\limsup\limits_{t\rightarrow\infty}\frac{t^{-1-\alpha-\varepsilon}}{(n(t)-1)^{-1-\alpha-\varepsilon}})~~~a.s. \\ &=0. \end{split}$$ Due to the arbitrary of $\varepsilon$, this can imply $$\limsup\limits_{t\rightarrow\infty}\frac{log|x(t)|}{logt}\leq \frac{1+\alpha}{2}.$$ Consistence and convergence of the semi-implicit method ======================================================== In this section, we will employ the semi-implicit Euler methods to solve the equation (\[3.1\]). We define a family of meshes with fixed step-size on the interval $[0,T]$, i.e. $$\label{4.1} T_N=\{t_0,t_1,t_2,\cdots,t_N\},~~~~~t_n=nh,~n=0,1,2,\cdots,N,~h=\frac{T}{N}<1.$$ Since the points $qt_n$ will probably not be included in $T_N$, so we need another non-uniform mesh which consists of all the points $t_n$ and $qt_n$. Let $$\label{4.2} S_{N'}=\{0=t_0=s_0,s_1,s_2,\cdots,s_{N'}=T\}.$$ For any $l$, we have $s_l=t_n$ or $s_l=qt_m$, where $t_n, t_m\in T_N$. We can also display any $s_l\in S_{N'}$ with $t_n<s_l\leq t_{n+1}$ by $$s_l=t_n+\zeta h,~~~~~~\zeta\in (0,1], ~~t_n,t_{n+1}\in T_N.$$ In this paper, we denote by $y(t_n)$ the approximation of $x(t_n)$ at the point $t_n\in T_N$, and $y(qt_n)$ the approximation of $x(qt_n)$ at the point $qt_n\in S_{N'}$, then the semi-implicit Euler method is given by $$\label{4.3} y(t_{n+1})=y(t_n)+h[(1-\theta)f(t_n,y(t_n),y(qt_n))+\theta f(t_{n+1},y(t_{n+1}),y(qt_{n+1}))]+g(t_n,y(t_n),y(qt_n))\triangle B_n,$$ where $y(t_0)=x_0$, $n=0,1,2,\cdots,N-1$, $\triangle B_n=B(t_{n+1})-B(t_n)$, $\theta\in [0,1]$. Here we require $y(t_n)$ to be $\mathcal{F}_{t_n}$-measurable at the point $t_n,n=0,1,\cdots,N$. We can also express (\[4.3\]) equivalently as $$\label{4.4} \begin{split} y(t_{n+1})=y(t_n)&+\int_{t_n}^{t_{n+1}}[(1-\theta)f(t_n,y(t_n),y(qt_n))+\theta f(t_{n+1},y(t_{n+1}),y(qt_{n+1}))]{\rm d}t \\ &+\int_{t_n}^{t_{n+1}}g(t_n,y(t_n),y(qt_n)){\rm d}B(t). 
\end{split}$$ Note that we can’t express $y(s_l)$ which equals to $y(qt_{m})$ in(\[4.4\]), $s_l\in S_{N'}$, so we need a continuous extension that permits the evaluation of $y(s_l)$at any point $s_l=t_n+\zeta h\in S_{N'}$, $\zeta\in (0,1]$, so we define $$\label{4.5} \begin{split} y(s_l)=y(t_n)&+\int_{t_n}^{t_n+\zeta h}[(1-\theta)f(t_n,y(t_n),y(qt_n))+\theta f(t_{n+1},y(t_{n+1}),y(qt_{n+1}))]{\rm d}t \\ &+\int_{t_n}^{t_n+\zeta h}g(t_n,y(t_n),y(qt_n)){\rm d}B(t). \end{split}$$ For any $\theta$ given, $t_n\in T_n$, $\zeta \in (0,1]$, the local truncation error of semi-implicit Euler method for (\[3.1\]) can be denoted by $\delta_h(t_n,\zeta)$, $$\label{4.6} \begin{split} \delta_h(t_n,\zeta)=& x(t_n+\zeta h)-\{x(t_n)+(1-\theta)\int_{t_n}^{t_n+\zeta h}f(t_n,x(t_n),x(qt_n)){\rm d}t \\ &+\theta\int_{t_n}^{t_n+\zeta h}f(t_{n+1},x(t_{n+1}),x(qt_{n+1})){\rm d}t \\ &+\int_{t_n}^{t_n+\zeta h}g(t_n,x(t_n),x(qt_n)){\rm d}B(t)\}. \end{split}$$ \(i) The semi-implicit Euler method is called to be consistent with order $p_1$ in average sense, if there exist constant $C>0$ and $p_1$, which are independent of step size $h$, such that $$\label{4.7} \max\limits_{0\leq n\leq N-1}\sup\limits_{\zeta\in (0,1]}|E(\delta_h(t_n,\zeta))|\leq Ch^{p_1}~~as~h\rightarrow0.$$ \(ii) The semi-implicit Euler method is called to be consistent with order $p_2$ in the sense of mean-square, if there exist constant $C$ and $p_2$, which are independent of step size $h$, such that $$\label{4.8} \max\limits_{0\leq n\leq N-1}\sup\limits_{\zeta\in (0,1]}(E(|\delta_h(t_n,\zeta)|^2))^{\frac{1}{2}}\leq Ch^{p_2}~~as~ h\rightarrow0.$$ For any $\theta$ given, $t_n\in T_n$, $\zeta \in (0,1]$, the global error of semi-implicit Euler method can be denoted by $\epsilon(s_l)$ $$\label{4.9} \epsilon(s_l)=\epsilon(t_n+\zeta h)=x(t_n+\zeta h)-y(t_n+\zeta h).$$ The semi-implicit Euler method is called to be convergent with order $p$, if there exist constant $C$ and $p$, which are independent of step size $h$, such that $$\label{4.10} \max\limits_{s_l\in S_{N'}}(E(|\epsilon(s_l)|^2))^\frac{1}{2}\leq Ch^p~~as ~h\rightarrow0.$$ [@wc] Assume that there exists a positive constant $K$ such that\ (i)(Lipschitz condition) For all $t\in [0,T]$, $x_1,x_2,y_1,y_2\in R^d$, $$|f(t,x_1,y_1)-f(t,x_2,y_2)|^2\vee|g(t,x_1,y_1)-g(t,x_2,y_2)|^2\leq K(|x_1-x_2|^2+|y_1-y_2|^2),$$ (ii)(Linear growth condition) For all $(t,x,y)\in [0,T]\times R^d\times R^d$, $$|f(t,x,y)|^2\vee|g(t,x,y)|^2\leq K(1+|x|^2+|y|^2).$$ Then there exists a unique solution $x(t)$ to (\[3.1\]), and $E(\sup\limits_{0\leq t\leq T}|x(t)|^2)\leq M$.  Due to Lipschiz condition and $f(t,0,0)=0,~g(t,0,0)=0$, it is not difficult to know $|f(t,x,y)|^2\leq K(|x|^2+|y|^2)$ and $|g(t,x,y)|^2\leq K(|x|^2+|y|^2).$  Under the Lipschitz condition, the semi-implicit Euler method for equation (\[3.1\]) is consistent (i) with order 1.5 in average sense; (ii) with order 1 in mean-square sense. [**Proof.**]{} (i) For the equation (\[3.1\]) and the semi-implicit method (\[4.3\]), the local truncation error takes the special form: $$\label{4.11} \begin{split} \delta_h(t_n,\zeta)=&(1-\theta)\int_{t_n}^{t_n+\zeta h}(f(t,x(t),x(qt))-f(t_n,x(t_n),x(qt_n))){\rm d}t \\ &+\theta\int_{t_n}^{t_n+\zeta h}(f(t,x(t),x(qt))-f(t_{n+1},x(t_{n+1}),x(qt_{n+1}))){\rm d}t \\ &+\int_{t_n}^{t_n+\zeta h}(g(t,x(t),x(qt))-g(t_n,x(t_n),x(qt_n))){\rm d}B(t), \end{split}$$ for $n=0,1,2,\cdots,N$. we will frequently make use of Höder inequality in the next content. 
Note that $E(|x|)\leq (E(|x|^2))^{\frac{1}{2}}$, so taking expectation and absolute both sides of the equation above, we can estimate $$\label{4.12} \begin{split} |E(\delta_h(t_n,\zeta))|\leq&(1-\theta)\int_{t_n}^{t_n+\zeta h}E(|f(t,x(t),x(qt))-f(t_n,x(t_n),x(qt_n))|){\rm d}t \\ &+\theta\int_{t_n}^{t_n+\zeta h}E(|f(t,x(t),x(qt))-f(t_{n+1},x(t_{n+1}),x(qt_{n+1}))|){\rm d}t \\ \leq&(1-\theta)K^{\frac{1}{2}}\int_{t_n}^{t_n+\zeta h}[E(|x(t)-x(t_n)|^2)+E(|x(qt)-x(qt_n)|^2)]^{\frac{1}{2}}{\rm d}t \\ &+\theta K^{\frac{1}{2}}\int_{t_n}^{t_n+\zeta h}[E(|x(t)-x(t_{n+1})|^2)+E(|x(qt)-x(qt_{n+1})|^2)]^{\frac{1}{2}}{\rm d}t. \end{split}$$ By Lemma \[4.3\], $h<1$ and the integral we can obtain $$E(|x(t)-x(t_n)|^2)\leq(2K(t-t_n)+2K)E(\int_{t_n}^t(|x(t)|^2+|x(qt)|^2){\rm d}t) \leq8KMh.$$ In the same way, we can compute $E(|x(qt)-x(qt_n)|^2)\leq 4KMq(1+q)h$, $E(|x(t)-x(t_{n+1})|^2)\leq 8KMh$ and $E(|x(qt)-x(qt_{n+1})|^2)\leq 4KMq(1+q)h$. Thus $$\label{4.13} |E(\delta_h(t_n,\zeta))|\leq \int_{t_n}^{t_n+\zeta h}C_1h^{\frac{1}{2}}{\rm d}t=C_1h^{\frac{3}{2}}\zeta\leq C_1h^{\frac{3}{2}}$$ where $C_1=K(8M+4Mq(1+q))^{\frac{1}{2}}$. This implies $$\max\limits_{0\leq t_n\leq N-1}\sup\limits_{\zeta\in (0,1]}|E(\delta_h(t_n,\zeta))|\leq C_1h^{\frac{3}{2}}.$$ (ii) According to the definition of $\delta_h(t_n,\zeta)$, the following inequality holds. $$\begin{split} E(|\delta_h(t_n,\zeta)|^2)\leq&3(1-\theta)^2E(|\int_{t_n}^{t_n+\zeta h}(f(t,x(t),x(qt))-f(t_n,x(t_n),x(qt_n))){\rm d}t|^2)\\ &+3\theta^2E(|\int_{t_n}^{t_n+\zeta h}(f(t,x(t),x(qt))-f(t_{n+1},x(t_{n+1}),x(qt_{n+1}))){\rm d}t|^2)\\ &+3E(|\int_{t_n}^{t_n+\zeta h}(g(t,x(t),x(qt))-g(t_n,x(t_n),x(qt_n))){\rm d}B(t)|^2) \end{split}$$ $$\label{4.14} \begin{split} \leq&3K((1-\theta)^2\zeta h+1)E(\int_{t_n}^{t_n+\zeta h}(|x(t)-x(t_n)|^2+|x(qt)-x(qt_n)|^2){\rm d}t) \\ &+3K\theta^2\zeta hE(\int_{t_n}^{t_n+\zeta h}(|x(t)-x(t_{n+1})|^2+|x(qt)-x(qt_{n+1})|^2){\rm d}t) \\ \leq&C_2h^2, \end{split}$$ where $C_2=24K^2M(\theta^2-\theta+1)(q^2+q+2)$. Let $C_3=\sqrt{C_2}$, then $$\max\limits_{0\leq t_n\leq N-1}\sup\limits_{\zeta\in (0,1]}(E(|\delta_h(t_n,\zeta)|^2))^{\frac{1}{2}}\leq C_3h.$$  Under Lipschitz condition, the semi-implicit Euler method for problem (\[3.1\]) is convergent with order 0.5. [**Proof.**]{} For any $s_l=t_n+\zeta h\in S_{N'}$, set $$\label{4.15} \begin{split} \nu_{h}(t_n,\zeta)=&(1-\theta)\int_{t_n}^{t_n+\zeta h}(f(t_n,x(t_n),x(qt_n))-f(t_n,y(t_n),y(qt_n))){\rm d}t \\ &+\theta\int_{t_{n}}^{t_n+\zeta h}(f(t_{n+1},x(t_{n+1}),x(qt_{n+1}))-f(t_{n+1},y(t_{n+1}),y(qt_{n+1}))){\rm d}t \\ &+\int_{t_n}^{t_n+\zeta h}(g(t_n,x(t_n),x(qt_n))-g(t_n,y(t_n),y(qt_n))){\rm d}B(t). \\ \end{split}$$ Then $$\label{4.16} \epsilon(s_l)=x(t_n+\zeta h)-y(s_l)=\epsilon(t_n)+\delta_h(t_n,\zeta)+ \nu_{h}(t_n,\zeta).$$ Squaring both sides of the equation above, employing the conditional expectation with respect to the $\sigma$-algebra $\mathcal{F}_0$, and taking absolute values, we get $$\label{4.17} \begin{split} E(|\epsilon(s_l)|^2|\mathcal{F}_0)\leq&E(|\epsilon(t_n)|^2|\mathcal{F}_0)+E(|\delta_h(t_n,\zeta)|^2|\mathcal{F}_0)+E(|\nu_{h}(t_n,\zeta)|^2|\mathcal{F}_0) \\ &+2E(|\epsilon(t_n)|\cdot|\delta_h(t_n,\zeta)||\mathcal{F}_0)+2E(|\epsilon(t_n)|\cdot|\nu_h(t_n,\zeta)||\mathcal{F}_0) \\ &+2E(|\delta_h(t_n,\zeta)|\cdot|\nu_h(t_n,\zeta)||\mathcal{F}_0) \quad \quad\quad\quad \quad\quad\quad \quad\quad\quad \quad \quad a.s. \\ =&A_1+A_2+A_3+A_4+A_5+A_6. \end{split}$$ Next we will estimate the six terms in (\[4.14\]). 
For the term $A_2$, by (\[4.14\]) we have $$A_2=E(|\delta_h(t_n,\zeta)|^2|\mathcal{F}_0)=E(E(|\delta_h(t_n,\zeta)|^2|\mathcal{F}_n)|\mathcal{F}_0)\leq C_2h^2.$$ For $A_3$ in (\[4.17\]), we obtain $$\begin{split} A_3\leq&3(1-\theta)^2K\zeta hE(\int_{t_n}^{t_n+\zeta h}(|x(t_n)-y(t_n)|^2+|x(qt_n)-y(qt_n)|^2){\rm d}t|\mathcal{F}_0) \\ &+3\theta^2K\zeta hE(\int_{t_{n}}^{t_n+\zeta h}(|x(t_{n+1})-y(t_{n+1})|^2+|x(qt_{n+1})-y(qt_{n+1})|^2{\rm d}t|\mathcal{F}_0) \\ &+3KE(\int_{t_n}^{t_n+\zeta h}(|x(t_n)-y(t_n)|^2+|x(qt_n)-y(qt_n)|^2){\rm d}t|\mathcal{F}_0) \\ =&3K\zeta h((1-\theta)^2\zeta h+1)E(|\epsilon(t_n)|^2|\mathcal{F}_0)+3K\zeta h((1-\theta)^2\zeta h+1)E(|\epsilon(qt_n)|^2|\mathcal{F}_0) \\ &+3\theta^2K\zeta^2h^2E(|\epsilon(t_{n+1})|^2|\mathcal{F}_0)+3\theta^2K\zeta^2h^2E(|\epsilon(qt_{n+1})|^2|\mathcal{F}_0). \end{split}$$ We estimate $A_4$, $$\begin{split} A_4\leq 2(E(|E(\delta_h(t_n,\zeta))|^2|\mathcal{F}_0))^\frac{1}{2}\cdot (E(|\epsilon(t_n)|^2|\mathcal{F}_0))^\frac{1}{2} \leq C_1^{2}h^{2}+hE(|\epsilon(t_n)|^2|\mathcal{F}_0). \end{split}$$ In the same way, we can see $$\begin{split} A_5\leq &(2-\theta)K^{\frac{1}{2}}\zeta hE(|\epsilon(t_n)|^2|\mathcal{F}_0)+(1-\theta)K^{\frac{1}{2}}\zeta hE(|\epsilon(qt_n)|^2|\mathcal{F}_0) \\ &+\theta K^{\frac{1}{2}}\zeta hE(|\epsilon(t_{n+1})|^2|\mathcal{F}_0)+\theta K^{\frac{1}{2}}\zeta hE(|\epsilon(qt_{n+1})|^2|\mathcal{F}_0), \end{split}$$ and $$\begin{split} A_6\leq&2(E(|\nu_h(t_n,\zeta)|^2|\mathcal{F}_0))^{\frac{1}{2}}\cdot(E(|\delta_h(t_n,\zeta)|^2|\mathcal{F}_0))^{\frac{1}{2}} \\ \leq&C_2h^2+3K\zeta h((1-\theta)^2\zeta h+1)E(|\epsilon(t_n)|^2|\mathcal{F}_0) \\ &+3\theta^2K\zeta^2 h^2E(|\epsilon(t_{n+1})|^2|\mathcal{F}_0) \\ &+3K\zeta h((1-\theta)^2\zeta h+1)E(|\epsilon(qt_n)|^2|\mathcal{F}_0) \\ &+3\theta^2K\zeta^2 h^2E(|\epsilon(qt_{n+1})|^2|\mathcal{F}_0). \end{split}$$ Combining these results, we can compute $$\label{4.18} \begin{split} E(|\epsilon(t_n+\zeta h)|^2|\mathcal{F}_0)\leq& (1+6K\zeta h((1-\theta)^2\zeta h+1)+h+(2-\theta)K^{\frac{1}{2}}\zeta h)E(|\epsilon(t_n)|^2|\mathcal{F}_0) \\ &+(6\theta^2K\zeta^2h^2+\theta K^{\frac{1}{2}}\zeta h)E(|\epsilon(t_{n+1})|^2|\mathcal{F}_0) \\ &+(6K\zeta h((1-\theta)^2\zeta h+1)+(1-\theta)K^{\frac{1}{2}}\zeta h)E(|\epsilon(qt_n)|^2|\mathcal{F}_0) \\ &+(6\theta^2K\zeta^2h^2+\theta K^{\frac{1}{2}}\zeta h)E(|\epsilon(qt_{n+1})|^2|\mathcal{F}_0)+(2C_2+C_1^2)h^2. \end{split}$$ Set $R_0=0$, $R_n=\max\limits_{0\leq i< n}\sup\limits_{\zeta\in (0,1]}E(|\epsilon(t_i+\zeta h)|^2|\mathcal{F}_0)$, then $$E(|\epsilon(t_n)|^2|\mathcal{F}_0)\leq R_n, ~~ E(|\epsilon(qt_n)|^2|\mathcal{F}_0)\leq R_n.$$ In (\[4.18\]), we need to calculate $E(|\epsilon(qt_{n+1})|^2|\mathcal{F}_0)$, which depends on either $t_n<qt_{n+1}<t_{n+1}$ or $qt_{n+1}<t_n$. Case 1: If $t_n<qt_{n+1}<t_{n+1}$, then $E(|\epsilon(qt_{n+1})|^2|\mathcal{F}_0)\leq R_{n+1}$. According to (\[4.18\]), we can see $$\begin{split} E(|\epsilon(t_n+\zeta h)|^2|\mathcal{F}_0)\leq&(1+12(1-\theta)^2Kh^2+12Kh+h+(3-2\theta)K^{\frac{1}{2}}h)R_n\\ &+(12\theta^2Kh^2+2\theta K^{\frac{1}{2}}h)R_{n+1}. \end{split}$$ So $$\begin{split} R_{n+1}=&\max\limits_{0\leq i< n+1}\sup\limits_{\zeta\in (0,1]}E(|\epsilon(t_i+\zeta h)|^2|\mathcal{F}_0) \\ \leq&(1+12(1-\theta)^2Kh^2+12Kh+h+(3-2\theta)K^{\frac{1}{2}}h)R_n \\ &+(12\theta^2Kh^2+2\theta K^{\frac{1}{2}}h)R_{n+1}+(2C_2+C_1^2)h^2. \end{split}$$ There is $h_0=\frac{\sqrt{13}-1}{12}K^{-\frac{1}{2}}$, such that $1-12\theta^2Kh^2-2\theta K^{\frac{1}{2}}h>0$ when $0<h<h_0$. 
Therefore $$R_{n+1}\leq(1+h\frac{1+12(1-\theta)^2K+12K+3K^{\frac{1}{2}}+12\theta^2K}{1-12\theta^2Kh^2-2\theta K^{\frac{1}{2}}h})R_n+\frac{2C_2+C_1^2}{1-12\theta^2Kh^2-2\theta K^{\frac{1}{2}}h}h^2.$$ Case 2: If $qt_{n+1}<t_n$, then $E(|\epsilon(qt_{n+1})|^2|\mathcal{F}_0)\leq R_n$. In the same way as case 1, we can get $$R_{n+1}\leq(1+h\frac{1+12(1-\theta)^2K+12K+3K^{\frac{1}{2}}+12\theta^2K}{1-6\theta^2Kh^2-\theta K^{\frac{1}{2}}h})R_n+\frac{2C_2+C_1^2}{1-6\theta^2Kh^2-\theta K^{\frac{1}{2}}h}h^2,$$ when $0<h<h_1=\frac{1}{3}K^{-\frac{1}{2}}.$ Now take $0<L<1$, which is independent of $h$, such that $12\theta^2Kh^2+2\theta K^{\frac{1}{2}}h<L$, and set $$M(\theta)=\frac{1+12(1-\theta)^2K+12K+3K^{\frac{1}{2}}+12\theta^2K}{1-L} ,~~C(\theta)=\frac{2C_2+C_1^2}{1-L}.$$ Then combining case 1 and case 2, the $R_{n+1}$ satisfies $$\begin{split} R_{n+1}\leq&(1+hM(\theta))R_n+C(\theta)h^2\leq(1+hM(\theta))R_{n-1}+(1+hM(\theta))C(\theta)h^2+C(\theta)h^2 \\ \leq&\cdots\leq(1+hM(\theta))^{n+1}R_0+C(\theta)h^2\sum_{i=0}^{n}(1+hM(\theta))^{i}\leq \frac{(1+hM(\theta))^{n+1}-1}{M(\theta)}C(\theta)h. \end{split}$$ The expression above indicates that $$E(|\epsilon(s_l)|^2|\mathcal{F}_0)\leq R_{n+1}\leq \frac{e^{TM(\theta)}-1}{M(\theta)}C(\theta)h,$$ for any $s_l=t_n+\zeta h\in S_{N'}$ holds, where $t_n\in T_N$, $\zeta\in (0,1]$. By the definition of convergence, we can show $$\max\limits_{s_l\in S_{N'}}(E(|\epsilon(s_l)|^2|\mathcal{F}_0))^{\frac{1}{2}}\leq \sqrt{\frac{C(\theta)(e^{TM(\theta)}-1)}{M(\theta)}}h^{\frac{1}{2}}$$ The theorem is proved. \ [99]{} Appleby, J.A.D., Berkolaiko, G., Rodkina, A., Non-exponential stability and decay rates in nonlinear stochastic difference equations with unbounded noise, Stochastics: An International Journal of Probability and Stochastics Processes, 81(2009), 99-127. Appleby, J.A.D., Buckwar, E., Sufficient conditions for polynomial asymptotic behaviour of the stochastic pantograph equation, Stochastic Anal, 2003. Appleby, J.A.D., Mackey, D., Almost sure polynomial asymptotic stability of stochastic difference equations, Journal of Mathematical Sciences, 149(2008), 1629-1647. Appleby, J.A.D., Mackey, D., Polynomial Asymptotic Stability of Damped Stochastic Differential Equations, Electronic Journal Qualitative Therory of Differntial Equations. 2(2004), 1-33. Baker, C.T.H., Buckwar, E., Continuous $\theta$-Methods for the Stochastic Pantograph Equation, Electronic Transactions on Numerical Analysis, 11(2000), 131-151. Carr, J., Dyson, J., The functional differential equation $y'(x)=ay(\lambda x)+by(x)$, Proc. Roy. Soc. Edinburgh Sect. A. 74(1974), 165-174. Fan, Z.C., Liu, M.Z., Cao, W.R., Existence and uniqueness of the solutions and convergence of semi-implicit Euler methods for stochastic pantograph equations, J. Math. Anal. Appl. 325 (2007) 1142-1159. Fan, Z.C., Song, M.H., Liu, M.Z., The $\alpha$th moment stability for the stochastic pantograph equation,Journal of Computational and Applied Mathematics, 233(2009), 109-120. Liu, K., Mao, X.R., Large time decay behavior of dynamical equations with random perturbation features, Stochastic Analysis and Applications, 19(2001), 295-327. Liu, M.Z., Yang, Z.W., G.D.Hu, Asymptotical stability of the numerical methods with the constant stepsize for the pantograph equation, BIT, 45(2005), 743-759. Mao, W., Convergence analysis of semi-implicit Euler methods for solving stochastic equations with variable delays and random Jump magnitudes, Journal of Computational and Applied Mathematics, 235(2011), 2569-2580. 
Tsoi, A.H., Zhang, B., Weak exponential stability of stochastic differential equations, Stochastic Analysis and Applications, 15(1997), 643-649. [^1]: Corresponding author. Email: songmh@lsec.cc.ac.cn, luyulan2013@163.com, mzliu@hit.edu.cn [^2]: This work is supported by the NSF of P.R. China (No.11071050)
{ "pile_set_name": "ArXiv" }
--- abstract: 'This is a paper that aims to interpret the cardinality of a set in terms of Baire Category, *i.e.* how many closed nowhere dense sets can be deleted from a set before the set itself becomes negligible. To do this, natural tree-theoretic structures such as the Baire topology are introduced, and the Baire Category Theorem is extended to a statement that an $\aleph$-sequentially complete binary tree representation of a Hausdorff topological space that has a clopen base of cardinality $\aleph$ and no isolated or discrete points is not the union of $<\aleph+1$-many nowhere dense subsets for cardinal $\aleph\ge\aleph_{0}$, where an $\aleph$-sequentially complete topological space is a space where every function $f:\aleph\rightarrow\{0,1\}$ is such that $(\forall x)(x\in f\rightarrow x\in\in X)\rightarrow(f\in X)$. It is shown that if $\aleph<\left|X\right|\le2^{\aleph}$ for $\left|X\right|$ the cardinality of a set $X$, then it is possible to force $\left|X\right|-\aleph\times\left|X\right|\ne\emptyset$ by deleting a dense sequence of $\aleph$ specially selected clopen sets, while if any dense sequence of $\aleph+1$ clopen sets is deleted then $\left|X\right|-(\aleph+1)\times\left|X\right|=\emptyset$. This gives rise to an alternative definition of cardinality as the number of basic clopen sets (intervals in fact) that need to be deleted from a set to force an empty remainder. This alternative definition of cardinality is consistent with and follows from the Generalized Continuum Hypothesis (GCH), which is shown by exhibiting two models of set theory, one an outer (modal) model, the other an inner, generalized metric model with an information minimization principle.' address: 'Dr. Andrew Powell, Honorary Senior Research Fellow, Institute for Security Science and Technology, Level 2 Admin Office Central Library, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom' author: - Andrew Powell title: 'Topology, Cardinality, Metric Spaces and GCH' --- Introduction ============ This paper is experimental in the sense that there is very little recent relevant literature on the subject of this paper, and the paper has not been peer reviewed. For these reasons, please email the author if you find any errors or if any arguments lack clarity.\ \ In this paper a natural topology is outlined on the natural numbers, real numbers and sets higher up in the von Neumann cumulative hierarchy of pure sets[^1], which leads to a change in the definition of the cardinality of a set in order to support the view that cardinality measures how many topologically negligible sets can be deleted from a set before the set itself becomes negligible. It is then shown that there are models of set theory in which the change of definition of cardinality can be performed (which is exactly when the Generalized Continuum Hypothesis, GCH, holds).\ \ Before we begin with the development of a natural topology, it is worth noting some assumptions about the universe of sets. Firstly, we identify the set of subsets of a set $X$ of cardinality $\aleph$ with the set of binary sequences of length $\aleph$, called *binary $\aleph$-sequences*, which are functions $f:\aleph\rightarrow\{0,1\}$, and members of the functions $\langle\alpha,b\rangle$ are called *nodes*. 
This is possible by fixing an enumeration of $X$, $\langle x_{\alpha}:\alpha<\aleph\rangle$ (by the Axiom of Choice), and for any subset $Y\subseteq X$ forming the binary *$\aleph$-*sequence $\langle b_{\alpha}:(x_{\alpha}\in Y\rightarrow b_{\alpha}=1)\vee(x_{\alpha}\notin Y\rightarrow b_{\alpha}=0)\rangle$. Thus a subset of $X$ can be identified with a binary $\aleph$-sequence, and a set of subsets of $X$ can be identified with a set of binary $\aleph$-sequences. It is natural to think about any set as a *tree* of binary $\aleph$-sequences for some cardinal $\aleph$, where subtrees may *split* from a given $\aleph$-sequence at a given node, if we allow a tree to include the degenerate case where all members of the set are subsets of a single branch of the tree (*i.e*. the tree is a line). It is an obvious but important fact that a tree formed by binary $\aleph$-sequencs is a *binary tree*, *i.e*. a tree in which every node has at most two successor nodes.\ \ Representation by binary trees also suggests a property of sets that will appear throughout this paper, namely the property of a set $X$ corresponding to every binary $\aleph$-sequence through the tree representing a member of $X$. This property is a kind of completeness, but is in general weaker than compactness (unless $\aleph\le\aleph_{0}$). It is called *$\aleph$-sequential completeness*. Logically *$\aleph$*-sequential completeness has the form $(\forall f:\aleph\rightarrow\{0,1\})((\forall x)(x\in f\rightarrow x\in\in X)\rightarrow(f\in X))$, where $x\in\in y$ is defined as $(\exists z)(x\in z\wedge z\in y)$. Like completeness in a metric space, $\aleph$-sequential completeness does correspond to a generalized metric condition. *$\aleph$-*sequential completeness is also a closure condition, but it is stronger than closure because closure depends on which sets are defined to be open. It is in fact a form of absolute closure[^2], because the closure does not depend on the embedding space.[^3]\ \ We can also note that by the same argument as above any set can be considered as a (possibly infinitely long) binary sequence. An ordinal number, $\alpha$, can be coded (non-uniquely) as a constant sequence of 1s of length $\alpha$, but in order to associate $\alpha<\aleph$ with a unique binary $\aleph$-sequence, $\alpha$ is represented as an initial sequence of $\alpha$ 1s followed by a terminal sequence of 0s. $x\subseteq y$ if $x_{\alpha}\le y_{\alpha}$ for all $\alpha<\aleph$ where $x_{\alpha}$ and $y_{\alpha}$ are binary representations at position $\alpha$ in a $\aleph$-sequence of sets $x$ and $y$.\ \ It should be apparent that the universe of sets can be regarded as a binary sequence representation of the von Neumann hierarchy of pure sets, $V_{0}=\emptyset$, $V_{\alpha+1}=\{x:x\subseteq V_{\alpha}\}$ and $V_{\lambda}=\bigcup_{\alpha<\lambda}V_{\alpha}$ for $\lambda$ a limit ordinal. Binary ** $\aleph$-sequences first appear in $V_{\alpha}$ for some ordinal $\alpha$, and if $\aleph$ is an infinite cardinal then $\alpha>\omega$, where $\omega$ is the least ordinal of cardinality $\aleph_{0}$. A natural topology of the natural numbers ========================================= Consider a topology on the natural numbers with closed sets of the form $u_{n}=\{m\in N:m>n\}$ as well as *$\slashed{O}$* and *N*, where $N$ is the set of natural numbers. 
These sets are closed sets because $\bigcap_{m<i<n}u_{i}=u_{n}$, $\bigcap_{m<i<\omega}u_{i}=\slashed{O}$, $u_{n}\cap N=u_{n}$, $u_{n}\cap\slashed{O}=\slashed{O}$ and $N\cap\slashed{O}=\slashed{O}$ for natural numbers $m,\:n$ and $n>m+1$ where both appear in the same formula. No new closed sets are introduced by taking finite unions of closed sets, *i.e.* $\bigcup_{i\in\{m_{0},\dots,m_{n}\}}u_{i}=u_{m_{0}}$ if $m_{0}\le\ldots\le m_{n}$ for $m_{i}\in N$. Rephrasing these statements, it is easy to see that $u_{j}\subset u_{i}$ if $j>i$ and for any natural numbers $m,\:n>m+1$, $\bigcap_{m<i<n}u_{i}\neq\slashed{O}$ and $\bigcap_{m<i<\omega}u_{i}=\slashed{O}$. Define open sets to be $d_{n}=N-u_{n}$ and $\slashed{O}$ and $N$. Then we see $N-\bigcup_{m<i<n}d_{i}\ne\slashed{O}$ and $N-\bigcup_{m<i<\omega}d_{i}=\slashed{O}$ for any natural numbers $m,\:n>m+1$. If we note $\mid d_{n+1}\mid-\mid d_{n}\mid=1$, then we have $\left|N\right|\ne\sum_{i=m}^{n}1$ and $\left|N\right|=\sum_{i=m}^{\omega}1$, or in cardinality terms $\aleph_{0}\ne n-m$ and $\aleph_{0}=\mid\omega\mid\times1$. These results are not surprising, but they are worth rephrasing: if we remove any finite set of open sets (not including $N$) from $N$ we have a non-empty remainder, and if we remove any infinite set of open sets from $N$ we have an empty remainder. This shows that in this topology you cannot force[^4] $\aleph_{0}$ to be finite and you cannot force $\aleph_{0}\ne\mid\omega\mid$.\ \ We can state this as: In a natural topology on the set of natural numbers, $N$, with closed sets of the form $u_{n}=\{m\in N:m>n\}$ and *$\slashed{O}$* and *$N$,* you cannot force $\aleph_{0}$ to be finite and you cannot force $\aleph_{0}\ne\mid\omega\mid$. It is possible to reverse the roles of the open and closed sets, but essentially the same topology arises. If, however, (other than closed and open sets $\emptyset$ and $N$) closed sets have the form $u_{n}=\{m\in N:m\le n\}$ and open sets have the form $d_{n}=\{m\in N:m<n\}$, then all open sets are also closed, and all closed sets are also open because $m\le n$ if and only if $m<n+1$. But then each set of the form $\{n\}$, where $n\in N$, and $\slashed{O}$ and $N$ are clopen (open and closed). It follows that $N$ has the discrete topology.
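A tiny finite illustration of the first of these observations (assuming Python; truncating $N$ at a bound $M$ is purely an artefact of the illustration): removing the open sets $d_{n}$ for finitely many $n$ leaves a non-empty remainder, while removing every $d_{n}$ up to the bound leaves nothing.

```python
M = 1000                                  # finite stand-in for omega (illustrative truncation only)
N = set(range(M))

def d(n):                                 # open set d_n = N - u_n = {m in N : m <= n}
    return set(range(min(n + 1, M)))

after_finitely_many = N - set().union(*(d(n) for n in range(50)))
after_all           = N - set().union(*(d(n) for n in range(M)))

print(len(after_finitely_many) > 0)       # True: finitely many d_n never exhaust N
print(len(after_all) == 0)                # True: the union of all d_n (up to the truncation) exhausts N
```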
We need to add as closed sets *$\slashed{O}$* and *X* and sets of the form $\{x\}$ for $x\in X$ because $\bigcap_{1<i<\omega}u_{n_{i}}(x_{i})=\{x\}$ is possible for some sequence of closed sets $\langle u_{n_{i}}(x_{i}):1<i<\omega\rangle$.[^6] The topology can be thought of in terms of trees: if each member of $X$ is a binary sequence $\omega\rightarrow\{0,1\}$, *i.e.* a binary *$\omega$-sequence*, then every $x\in X$ is a *branch* of the tree and $u_{n}(x)$ is a subtree that splits from $x$ at a particular node of the binary sequence $x$, *i.e.* at $\langle n,b\rangle$ for natural number $n$ and $b\in\{0,1\}$. It is possible for a point to be represented by two branches in the case where a binary sequence is eventually constant. For example, $0111\ldots$ might represented by $1000\ldots$ as well. But since there are countably many such double representations, the use of a tree representation is appropriate for studying uncountable sets of real numbers. In the following treatment all *isolated* members of $X$, $x\in X$ which have a highest node $nd$ such that all other $y\in X$ split from $x$ at or below $nd$, are deleted to simplify the exposition. As there are at most countably many isolated members of $X$ because there are countably nodes of type $nd$, isolated points are well-understood from a cardinality perspective. Moreover, since it is possible for a point to become isolated if other isolated points are removed, by transfinite induction up to a countably infinite ordinal (deleting any $x\in X$ that is covered only by isolated points of order $<\alpha$ at limit ordinal $\alpha$), we can delete all isolated points and leave either the empty set or a dense-in-itself kernel[^7] of the set.\ \ It is also possible to remove closed sets of the form $u_{n}(x)$ in a way which is a generalized version of the construction of a Cantor ternary set. Fix an enumeration of countably infinitely many $x\in X$, say $\langle x_{\alpha<\omega}\rangle$, treated as branches of binary sequences of length $\omega$, which are dense in $X$, *i.e.* every $x\in X$ is covered by an $\omega$-sequence of $x_{\alpha}$’s[^8]. Then for any finite ordinal $\alpha$ there is a highest node of height $n(\alpha)$ at which $x_{\beta<\alpha}$ split from $x_{\alpha}$ (where $n(0)$ is the lowest node from which some branch splits from $x_{0}$); then proceed along the branch $x_{\alpha}$ $r(\alpha)>0$ nodes from which some $u_{n(\alpha)+r(\alpha)}(x_{\alpha})$ splits, and delete any branches that coincide with the terminal segment of $x_{\alpha}$ from nodes of height $r(\alpha)+1$ onwards. If $x_{\alpha}$ has already been deleted then do nothing. For reference we will call this branch deletion construction from set $Y\subseteq X$ $cntr(Y;x_{\alpha})$. Finally, at ordinal $\omega$ take the intersection of all stages of the construction $\alpha<\omega$. We can write the construction $X_{0}=X$, $X_{\alpha+1}=cntr(X_{\alpha}x_{\alpha})$ for $\alpha<\omega$ and $X_{\omega}=\bigcap_{\alpha<\omega}X_{\alpha}$.\ \ ![image](1_Users_andrewpowell_Downloads_diagram.png)\ \ *Figure 1:* *Subtrees (clopen intervals) deleted from a binary tree representation of a set. 
The construction proceeds $r(\alpha)$ nodes that have branches splitting from them (one empty node is skipped in the diagram) them where $x_{\alpha-1}$ splits from $x_{\alpha}$ and deletes nodes $r(\alpha)+1$ and higher of $x_{\alpha}$.*\ \ The density of the sequence $\langle x_{\alpha<\omega}\rangle$ in $X$ ensures that $X_{\alpha<\omega}\ne\emptyset$ because each non-empty closed set $u_{n(\alpha)+m}(x_{\alpha})$ for $1\le m\le r(\alpha)$ will contain some $x_{\beta>\alpha}.$ The resulting Cantor sets, $X_{\omega}(\langle x_{i<\omega}\rangle)$, are closed and nowhere dense.[^9] The reason why the Cantor sets are closed is that $u_{n}(x_{\alpha})$ are closed and open (clopen) since $u_{n}(x_{\alpha})$ contains all of its limits points in $X$, and $X-u_{n}(x_{\alpha})$ contains all of its limit points (so both are clopen), and the Cantor sets constructed at ordinal $\omega$ have the form $X-\bigcup_{\alpha<\omega}u_{n(\alpha)+r(\alpha)}(x_{\alpha})$, $i.e.$ the complement of an open set.[^10] In this paper $u_{n}(x)$ are known as *clopen intervals*. To see that a resulting Cantor set is nowhere dense, note that each clopen interval is a maximally dense subset of itself, and since each clopen interval in the tree has a clopen interval deleted from it because the sequence $\langle x_{i<\omega}:x_{i}\in X_{\alpha}\rangle$ is dense in $X_{\alpha}$, no subset of the Cantor set is dense in the tree.\ \ While the tree model of a set of real numbers, $X$, is a strong visual construction, there is a case where the model is not applicable, namely where all $x\in X$ cover a single *$\omega$*-sequence. In this case, there are no clopen intervals splitting from the single *$\omega$*-sequence. The members of $X$ will either have a finite length, written $x_{n}$ for $n\in\omega$, or be an $\omega$-sequence, $x_{\omega}$. Each $\{x_{n}\}$ is a closed set because $\{x_{n}\}$ contains its limit points, since it contains only one point, and $X-\{x_{n}\}$ is closed as its closure is $X-\{x_{n}\}$. Hence $\{x_{n}\}$ is clopen. $\{x_{\omega}\}$ is also closed because it contains its limit point, but is not open (as $X-\{x_{\omega}\}$ has $x_{\omega}$ as a limit). But given that all $\{x_{n}\}$ are isolated because they are discrete sets, we can remove them, and leave the set $\{x_{\omega}\}$ or the empty set. If $\{x_{\omega}\}$ exists, then it too is isolated because it is now a clopen and therefore is a discrete set (as its complement is the empty set). As $\left|X\right|\le\aleph_{0}$ and $X$ comprises isolated points, this case has been sufficiently characterized from a cardinality perspective.\ \ We can state this as: In the Baire topology on a subset of the set of real numbers, X, that comprises a set of binary sequences with a countable basis of clopen intervals and no discrete or isolated points, the Cantor sets $X_{\omega}$ constructed from $X$ and a dense $\omega$-sequence $\langle x_{\alpha<\omega}\in X\rangle$ by $X_{0}=X$, $X_{\alpha+1}=cntr(X_{\alpha};x_{\alpha})$ for $\alpha<\omega$ and $X_{\omega}=\bigcap_{\alpha<\omega}X_{\alpha}$ are closed and nowhere dense. In terms of cardinality, note that $X_{\omega}$ can be empty, but if $X$ is an uncountable *sequentially complete*[^11] set of real numbers (in the sense that every *$\omega$*-sequence through the tree is a member of the set), then $X$ has cardinality $2^{\aleph_{0}}$. This is so because every uncountable set of binary sequences must cover an infinite binary tree (because each node is covered by a binary sequence). 
Remove all isolated binary sequences. Then each binary sequence must split at an arbitrarily high node (*i.e.* into 0 and 1), since otherwise the binary sequence would be an isolated point; and by sequential completeness, the tree created is isomorphic[^12] to the set of all binary $\omega$-sequences, $2^{\omega}$, which has cardinality $2^{\aleph_{0}}$. The subtree generated by the closed nowhere dense set construction, $X_{\omega}$, also has cardinality $2^{\aleph_{0}}$, as can be seen by labelling the remaining $u_{n}(x_{\alpha})$ 1,2 *et seq* and the deleted subtrees 0 and noting that nested $u_{n}(x_{\alpha})$ give rise to sequences that can be labelled using $\omega$-sequences that do not contain 0. We can conclude by sequential completeness that each sequence is a member of $X$. In cardinality terms we have $2^{\aleph_{0}}-\aleph_{0}\times2^{\aleph_{0}}=2^{\aleph_{0}}\ne\slashed{O}$.\ \ ![image](2_Users_andrewpowell_Downloads_complete_tree.png)\ \ *Figure 2: A tree representation of a set of binary sequences. The bold line is a path through the tree. A set is sequentially complete if every path through the tree is a member of the set.*\ \ We can state this as: In a sequentially complete Baire topology on a set of real numbers, $X$, that comprises a sequentially complete set of binary sequences with a countable basis of clopen intervals and no discrete or isolated points, the Cantor sets have cardinality $2^{\aleph_{0}}$ and the process of deleting $\omega$ clopen intervals gives rise to the equation $2^{\aleph_{0}}-\aleph_{0}\times2^{\aleph_{0}}=2^{\aleph_{0}}\ne\slashed{O}$. On the other hand, because there are only countably infinitely many clopen intervals (since there are only countably infinitely many nodes from which clopen intervals split from a branch), if we were to delete $\aleph_{1}$ clopen intervals in a dense way[^13] the empty set would result. This may seem meaningless, but we can say that every $\omega_{1}$-sequence of Cantor sets, $C_{\alpha}$, such that $C_{\beta}\subset C_{\gamma}$ if $\beta>\gamma$, has a terminal segment of empty sets, *i.e.* $C_{\delta}=\emptyset$ for all $\delta>\beta$ for some countable ordinal $\beta$.[^14] In cardinality terms we have $2^{\aleph_{0}}-\aleph_{1}\times2^{\aleph_{0}}=\slashed{O}$. Moreover, although we can force $2^{\aleph_{0}}-\aleph_{0}\times2^{\aleph_{0}}=\slashed{O}$ (delete any $\omega$-sequence of all clopen intervals $\subset X$ from $X$), we cannot force $2^{\aleph_{0}}-\aleph_{1}\times2^{\aleph_{0}}\ne\slashed{O}$, as the deletion of any dense uncountable sequence of clopen intervals will result in an empty remainder.[^15]\ \ We can state this as: \[thm:In-a-sequentially\]In a sequentially complete Baire topology on a set of real numbers, X, that comprises a sequentially complete set of binary sequences with a countable basis of clopen intervals and no discrete or isolated points, if $\aleph_{1}$ clopen intervals are deleted in a dense way, then the empty set results, i.e. $2^{\aleph_{0}}-\aleph_{1}\times2^{\aleph_{0}}=\slashed{O}$. There is also a connection between this topology and the Baire Category Theorem for compact[^16] Hausdorff[^17] topological spaces, *i.e.* that a compact Hausdorff topological space is not the union of countably many closed nowhere dense subsets.
A topological space that comprises a sequentially complete set of binary sequences with countably infinitely many clopen basis sets $u_{n}(x)$[^18] and no discrete or isolated points, $2^{\omega}$ for short[^19], is compact and Hausdorff.[^20] In cardinality terms the Baire Category Theorem implies that $2^{\aleph_{0}}\ne\aleph_{0}\times2^{\aleph_{0}}$ given that each closed nowhere dense set in the compact Hausdorff topological space $2^{\omega}$ has cardinality $2^{\aleph_{0}}$.\ \ We can state this as: In a Hausdorff topological space, X, that comprises a sequentially complete set of binary sequences with a countable clopen base and no discrete or isolated points, X is not the union of countably many nowhere dense subsets. It is also worth noting that we do not need to start with a sequentially complete set $X$ with the Baire topology[^21]. If $X$ is not sequentially complete, contains no sequentially complete clopen interval, has a dense-in-itself subset and has cardinality $\aleph_{0}<c\le2^{\aleph_{0}}$ such that all clopen sets have cardinality $c$ (removing all clopen intervals of cardinality $<c$ if necessary), then by removing clopen intervals in a dense way following Theorem \[thm:In-a-sequentially\] we see that $c-c\times\aleph_{1}=\slashed{O}$. It is in fact possible using the Cantor construction $X_{0}=X$, $X_{\alpha+1}=cntr(X_{\alpha};x_{\alpha})$ for $\alpha<\omega$ and $X_{\omega}=\bigcap_{\alpha<\omega}X_{\alpha}$ to construct an $X_{\omega}$ which contains any given $x\in X$ by choosing the set $\langle x_{\alpha<\omega}\rangle$ of $\omega$-sequences to be deleted such that $x_{\alpha<\omega}\in X$, $x_{\alpha}\ne x$ and $\langle x_{\alpha<\omega}\rangle$ is dense in $X$,[^22] and by modifying $cntr$ to increase the value of $r(\alpha)$ so that $x\in X_{\alpha}$ for all $\alpha<\omega$ and therefore $x\in X_{\omega}$ by definition.\ \ If we consider that each clopen interval is divided into $r>1$ disjoint clopen intervals and one clopen interval is deleted, we can write $U_{\alpha}=\bigcup_{0\le m\le r(\alpha)}U_{\alpha,m}$ and set $U_{\alpha+1}=U_{\alpha,m}$ for any choice of $m$ such that $1\le m\le r(\alpha)$, where the $U_{\alpha,m}$ are clopen intervals, $U_{\alpha,0}$ is deleted because $x_{\alpha}$ is a branch in $U_{\alpha,0}$, $U_{1}=X$ and $U_{\omega}=\bigcap_{\alpha<\omega}U_{\alpha}$, for natural numbers $\alpha,\:m,\:r(\alpha)$. Then if $U_{\alpha}$ preserves $y_{\alpha}\in U_{\alpha}$, *i.e.* $y_{\alpha}\in U_{\omega}$, and $y_{\alpha}\in U_{\alpha,m}$ for some $m>0$, there are points $y_{\alpha,s}\in U_{\alpha,s}$ with $y_{\alpha,s}\ne y_{\alpha}$ for all $1\le s\le r(\alpha)$ with $s\ne m$. We require that $U_{\alpha}$ is constructed to include a clopen interval around the branch $x_{\alpha}\in U_{\alpha,0}$ (which will be deleted), to preserve $\bigcup_{1\le m<\alpha}\{y_{m}\}$ and to have $y_{\alpha}\in U_{\alpha,m}$ for some $m>0$. This requirement can be met by selecting $y_{1}\in U_{1}$ such that $y_{1}\ne x_{\beta}$ for any $\beta<\omega$ and constructing $U_{\alpha+1}$ and $y_{\alpha+1}$ as follows, given a clopen interval $U_{\alpha}$ and $y_{\alpha}\in U_{\alpha}$ which is preserved, $i.e.$ $y_{\alpha}\in U_{\omega}$:[^23]\ \ \ \ \ Since each $y_{\alpha+1}$ is preserved by the same construction as was used for $y_{\alpha}$, we see that each splitting of a clopen interval into $r>1$ clopen intervals preserves an additional $r-1$ points of $X$. It follows that it is possible to construct $X_{\omega}$ from $X$, by means of the closed nowhere dense set construction, which contains a dense-in-itself subset of cardinality $\ge\aleph_{0}$. That is, it is possible to force $c-c\times\aleph_{0}\ne\slashed{O}$.
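The splitting step just described can be illustrated at finite depth. The sketch below is our own reading of one way to realise it (the helper names `split_along` and `preserve_step`, and the choice of witnesses by `min`, are assumptions, not definitions from the text): a finite clopen interval is split into the $r$ sibling intervals that leave the branch $x_{\alpha}$ at successive nodes plus the piece that still follows $x_{\alpha}$; the latter (which contains $x_{\alpha}$) is deleted, the sibling containing the preserved point $y_{\alpha}$ becomes $U_{\alpha+1}$, and one witness is recorded in each remaining sibling, which is where the extra $r-1$ preserved points come from.

```python
def common_prefix(U):
    """Common prefix of a finite clopen interval, given as a set of equal-length bit strings."""
    first = next(iter(U))
    n = 0
    while all(len(y) > n and y[n] == first[n] for y in U):
        n += 1
    return first[:n]

def split_along(U, x, r):
    """Split U into the r sibling intervals that leave the branch x at successive nodes,
    plus the tail piece that still follows x after r nodes (the piece to be deleted)."""
    p = len(common_prefix(U))
    siblings = [{y for y in U if y[:p + m] == x[:p + m] and y[p + m] != x[p + m]}
                for m in range(r)]
    tail = {y for y in U if y[:p + r] == x[:p + r]}
    return siblings, tail

def preserve_step(U, x, y, r):
    """One splitting step: delete the piece containing x, keep the sibling containing the
    preserved point y as the next interval, and record one witness in each other sibling."""
    siblings, tail = split_along(U, x, r)
    assert y not in tail, "the preserved point must not lie in the deleted piece"
    next_U = next(S for S in siblings if y in S)
    witnesses = [min(S) for S in siblings if S and y not in S]
    return next_U, witnesses

U0 = {format(i, '06b') for i in range(64)}          # the whole depth-6 tree as one interval
U1, extra = preserve_step(U0, x='000000', y='010101', r=3)
print(len(U1), extra)                               # 16 branches kept, r-1 = 2 extra witnesses
```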
![image](3_Users_andrewpowell_Downloads_Forcing.png)\ \ *Figure 3: An example of how a descending sequence of clopen intervals can be forced to contain one point of a set $X$ per node of a decomposition of $X$ into clopen intervals.*\ \ We can state this as: In a Baire topology of an uncountable set of real numbers, X, that comprises a set of binary sequences with a countable clopen base and no discrete or isolated points, it is always possible to construct a Cantor set $X_{\omega}$ from $X$ which contains a dense-in-itself subset of cardinality $\ge\aleph_{0}$. That is, it is possible to force $c-c\times\aleph_{0}\ne\slashed{O}$. But deleting $\aleph_{1}$ clopen intervals in a dense way results in the empty set, i.e. $c-c\times\aleph_{1}=\slashed{O}$.

A natural topology of sets of higher order
==========================================

The Baire topology can be defined in the case of higher order sets in the same way as for real numbers, with the difference that clopen intervals, $u_{\alpha}(x)$, can split from a branch at any ordinal $\alpha<\aleph$ rather than $\alpha<\omega$. The decision to allow splits that occur at nodes of infinite height has the consequence that the Baire topology on $X$ is not equivalent to a product topology, which is in turn equivalent to allowing only clopen sets that split from a branch of height $n<\omega$ in the case of a finite base for the product[^24].\ \ If $X$ has a dense-in-itself kernel[^25], then it is possible to construct closed nowhere dense sets by removing clopen intervals in the same manner as in the case of sets of real numbers, deleting a $\aleph$-sequence $S=\langle x_{\beta<\aleph}\in X\rangle$ that is dense in $X$ by means of the construction $cntr(Y;x_{\beta}):=Y-u_{\beta}(x_{\beta})$, where $u_{\beta}(x_{\beta})=\{y:(y)_{n(\beta)+r(\beta)+1}\ne(x_{\beta})_{n(\beta)+r(\beta)+1}\wedge(\forall\gamma\le n(\beta)+r(\beta))[(y)_{\gamma}=(x_{\beta})_{\gamma}]\}$. $n(\beta)$ is the supremum of the nodes where $x_{\beta}$ splits from the $x_{\delta<\beta}$ and the offset $r(\beta)>0$ is any ordinal $r(\beta)<\aleph$ (as in the case of the real numbers, skipping over empty nodes). It follows that we can construct a sequence $X_{0}=X$, $X_{\delta+1}=cntr(X_{\delta};x_{\delta})$ for $\delta<\aleph$ and $X_{\lambda}=\bigcap_{\delta<\lambda}X_{\delta}$ for limit ordinal $\lambda\le\aleph$. We claim that $X_{\aleph}$ is a closed nowhere dense set, which follows because the construction results in sets of the form $X-\bigcup_{\beta<\aleph}u_{\beta}(x_{\beta})$, $i.e.$ the complement of an open set, and any clopen interval will have a clopen interval deleted from it (since the members of $S$ that remain in $X_{\alpha}$ are dense in $X_{\alpha}$). Finally, we note that if $X$ has a linear rather than tree representation in terms of binary $\aleph$-sequences, just as in the case of the real numbers we can remove isolated points by (transfinite) induction, starting at the initial member of the linear order, and proceeding until all members of $X$ have become isolated.
In this case $X$ has cardinality $\aleph$.\ \ We can state this as: In the Baire topology of a set of binary $\aleph$-sequences, X, with a basis of clopen intervals of cardinality $\aleph$ and no discrete or isolated points, the Cantor sets $X_{\aleph}$ constructed from $X$ and a dense $\aleph$-sequence $\langle x_{\beta<\aleph}\in X\rangle$ by $X_{0}=X$, $X_{\delta+1}=cntr(X_{\delta};x_{\delta})$ for $\delta<\aleph$ and $X_{\lambda}=\bigcap_{\delta<\lambda}X_{\delta}$ for limit ordinal $\lambda\le\aleph$ are closed and nowhere dense. In the same way as in the case of the real numbers it is possible to force $\left|X_{\aleph}\right|\ge\aleph$ by applying the closed nowhere dense set construction to $X$, which has a dense-in-itself subset and has cardinality $\aleph<c\le2^{\aleph}$ such that all clopen sets have cardinality $c$ (removing all clopen intervals of cardinality $<c$ if necessary), to construct an $X_{\aleph}$ which contains any given $x\in X$ by choosing the set $\langle x_{\alpha<\aleph}\rangle$ of $\aleph$-sequences to be deleted such that $x_{\alpha<\aleph}\in X,$ $x_{\alpha}\ne x$ and $\langle x_{\alpha<\aleph}\rangle$ is dense in $X$,[^26] and by modifying $cntr$ to increase the value of $r(\beta)$ so that $x\in X_{\alpha}$ for all $\alpha<\aleph$ and therefore $x\in X_{\aleph}$ by definition.\ \ If we consider that each clopen interval is divided into $r>1$ disjoint clopen intervals and one clopen interval is deleted, we can write $U_{\alpha}=\bigcup_{0\le\beta\le r(\alpha)}U_{\alpha,\beta}$ and $U_{\alpha+1}=U_{\alpha,\beta}$ for any choice of $\beta$ such that $1\le\beta\le r(\alpha)$, where the $U_{\alpha,\beta}$ are clopen intervals, $U_{\alpha,0}$ is deleted because $x_{\alpha}$ is a branch in $U_{\alpha,0}$, $U_{1}=X$ and $U_{\lambda}=\bigcap_{\beta<\lambda}U_{\beta}$, for ordinal numbers $\alpha,\:\beta,\:r(\alpha)<\aleph$ and $\lambda$ a limit ordinal. Then if $U_{\alpha}$ preserves $y_{\alpha}\in U_{\alpha}$, *i.e.* $y_{\alpha}\in U_{\aleph}$, and $y_{\alpha}\in U_{\alpha,\beta}$ for some ordinal number $\beta>0$, there are points $y_{\alpha,\gamma}\in U_{\alpha,\gamma}$ with $y_{\alpha,\gamma}\ne y_{\alpha}$ for all $1\le\gamma\le r(\alpha)$ with $\gamma\ne\beta$. We require that $U_{\alpha}$ is constructed to include a clopen set around the branch $x_{\alpha}$ in $U_{\alpha,0}$ (which will be deleted), to preserve $\bigcup_{\gamma<\alpha}\{y_{\gamma}\}$ and to have $y_{\alpha}\in U_{\alpha,\beta}$ for some $\beta>0$. This requirement can be met by selecting $y_{1}\in U_{1}$ such that $y_{1}\ne x_{\alpha}$ for any $\alpha<\aleph$ and by constructing $U_{\alpha+1}$ and $y_{\alpha+1}$ and $U_{\lambda}$ and $y_{\lambda}$ for limit ordinal $\lambda<\aleph$ as follows, given a clopen interval $U_{\alpha}$ and $y_{\alpha}\in U_{\alpha}$ which is preserved, $i.e.$ $y_{\alpha}\in U_{\aleph}$, or clopen intervals $U_{\beta<\lambda}$ and $y_{\beta}\in U_{\beta}$ in the case of limit ordinal $\lambda$.\ \ \ \ \ Since each $y_{\alpha}$ can be preserved by the same construction as was used for $y_{\beta<\alpha}$, we see that each splitting of a clopen interval into $r>1$ clopen intervals for $r<\aleph$ preserves an additional $r-1$ points of $X$, and all of these points are preserved at limit ordinals (as represented by all possible values of $U_{\lambda}$ for limit ordinals $\lambda\le\aleph$). It follows that an $X_{\aleph}$ generated from $X$ by the closed nowhere dense set construction can be made to contain a dense-in-itself subset of cardinality $\ge\aleph$.
It follows that if $\aleph<\left|X\right|\le2^{\aleph}$ then it is possible to force $\left|X\right|-\aleph\times\left|X\right|\ne\emptyset$, while if a dense sequence of $\aleph+1$ clopen intervals is deleted then $\left|X\right|-(\aleph+1)\times\left|X\right|=\emptyset$.\ \ ![image](4_Users_andrewpowell_Downloads_Forcing_inf.png)\ \ *Figure 4: An example of how a descending sequence of clopen intervals has clopen intervals from some limit ordinal onwards, at the point where no clopen intervals have yet been deleted in the construction.*\ \ We can state this as: \[thm:Baire\]In a Baire topology of a set of binary $\aleph$-sequences, X, such that $\aleph<\left|X\right|\le2^{\aleph}$ with a basis of clopen intervals of cardinality $\aleph$ and no discrete or isolated points, it is always possible to construct a Cantor set $X_{\aleph}$ from $X$, which contains a dense-in-itself subset of cardinality $\ge\aleph$. That is, if $\aleph<\left|X\right|\le2^{\aleph}$ then it is possible to force $\left|X\right|-\aleph\times\left|X\right|\ne\emptyset$, while if a dense sequence of $\aleph+1$ clopen intervals is deleted then $\left|X\right|-(\aleph+1)\times\left|X\right|=\emptyset$. If $X$ is $\aleph$-sequentially complete and therefore has a base of clopen intervals which are $\aleph$-sequentially complete, *i.e.* all paths of length $\aleph$ through the interval are members of the interval, then we can claim that it is possible to force $2^{\aleph}-\aleph\times2^{\aleph}=2^{\aleph}$, because the same labelling technique can be used on clopen intervals as in the case of the real numbers (all deleted clopen intervals being labelled 0), and we can note that all $\aleph$-sequences of ordinal labels $\alpha$ with $0<\alpha<\aleph$ correspond to members of $X$ by $\aleph$-sequential completeness, and that the cardinality of $\aleph^{\aleph}$ is $2^{\aleph}$. By transfinite induction for $\alpha<\aleph$ with the hypothesis that all clopen intervals $\subseteq X_{\alpha}$ have cardinality $2^{\aleph}$, at stage $\alpha+1$ $X_{\alpha}$ will be split into $>1$ and $<\aleph$ clopen intervals with a label $\ne0$, each of which by the induction hypothesis has cardinality $2^{\aleph}$, so $X_{\alpha+1}$, as the union of these sets, will also have cardinality $2^{\aleph}.$ For a limit ordinal $\lambda$, all clopen intervals with label $0$ can be deleted, and for the clopen intervals remaining $\aleph$-sequential completeness can be applied to the paths between labels formed at successor stages $\alpha<\lambda$ to show that $X_{\lambda}$ has cardinality $2^{\aleph}$. The latter observation relies on the fact that a strictly descending $\aleph$-sequence of non-empty clopen intervals defines a single point or branch $x\in X$, and therefore a descending $\alpha$-sequence of clopen intervals (for $\alpha<\aleph$) can be identified with an initial segment of $x$ of length $\alpha$.\ \ We can state this as: In a $\aleph$-sequentially complete Baire topology of a set of binary $\aleph$-sequences, X, with a basis of clopen intervals of cardinality $\aleph$ and no discrete or isolated points, the Cantor sets have cardinality $2^{\aleph}$ and the process of deleting $\aleph$ clopen intervals gives rise to the equation $2^{\aleph}-\aleph\times2^{\aleph}=2^{\aleph}$.
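For reference, the cardinal identity $\aleph^{\aleph}=2^{\aleph}$ used in the argument above is the standard computation (assuming the Axiom of Choice, as throughout this paper):

$$2^{\aleph}\le\aleph^{\aleph}\le\left(2^{\aleph}\right)^{\aleph}=2^{\aleph\times\aleph}=2^{\aleph}.$$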
The Baire Category Theorem can also be generalized to the statement that in a $\aleph$-sequentially complete Hausdorff topological space, $X$, that comprises a $\aleph$-sequentially complete set of binary $\aleph$-sequences with a clopen base of cardinality $\aleph$ and with no discrete or isolated points, $X$ is not the union of $<\aleph+1$-many nowhere dense subsets for $\aleph\ge\aleph_{0}$.[^27] It is worth noting that a $\aleph$-sequentially complete Hausdorff space $X$ that comprises a $\aleph$-sequentially complete set of binary $\aleph$-sequences with a clopen base of cardinality $\aleph$ is neither compact [^28] nor metrizable [^29] for $\aleph>\aleph_{0}$, but there is a generalized metric function that can be used.\ \ In [@key-2] R. Kopperman showed that it is possible to replace the set of real numbers in the definition of a metric space with a commutative semi-group[^30], and for every topology to find a suitable commutative semi-group for which a metric can be introduced to the topological space (which may not be symmetric or separate distinct members of the topological space). Let $\langle2^{\aleph},\oplus\rangle$ be a structure defined as follows. If $2^{\aleph}$ is the set of functions $\aleph\rightarrow2$ and $a,\,b\in2^{\aleph}$, *i.e.* are binary $\aleph$-sequences, then treat $a$ and $b$ as $\aleph$-sequences of real numbers in the range $[0,\infty)$, $\langle a_{1},...,a_{\alpha<\aleph},\ldots\rangle$ and $\langle b_{1},...,b_{\alpha<\aleph},\ldots\rangle$ for real numbers $a_{\alpha},b_{\alpha}\in[0,\infty)$, and define $a\oplus b$ as the $\aleph$-sequence $\langle a_{1}+b_{1},...,a_{\alpha<\aleph}+b_{\alpha<\aleph},\ldots\rangle$.\ \ Let us denote the clopen interval of binary $\aleph$-sequences from $a$ to $b$ (where $a\le b$) by $([a,b])[\aleph]$ and the half-open interval from $a$ up to but excluding $b$ (where $a<b$) by $[a,b)[\aleph]$. Let us now define $d(x,y)$ for binary $\aleph$-sequences of the form $\langle x_{1},...,x_{\alpha<\aleph},\ldots\rangle$, where each real number $x_{\alpha}\in([0,1])[\omega]$, by $d(x,x):=0$ and $d(x,y):=1_{\alpha(x,y)}$, *i.e.* where there is a 1 only in the $\alpha$-th digit of an $\aleph$-sequence of binary $\omega$-sequences with a binary point (a real number) and 0 for all other digits, and *$\alpha$* is a successor ordinal that is the height of the lowest node where $(x)_{\alpha}\ne(y)_{\alpha}$. This is an unambiguous definition because each $x_{\alpha}\in([0,1])$ can be represented as a real number with 0 in front of the binary point (because $1.000\ldots$ can also be written $0.111\ldots$). We can thus skip the $0.$ before the binary point in the real number representation uniquely identifying the height of the lowest node where $(x)_{\alpha}\ne(y)_{\alpha}$. In practice we will leave the binary point in place for clarity. Surprisingly we have $d(x,y)\le\frac{1}{2}=\langle0.1,0,0,0,\ldots\rangle$ because the first node after $0.$ is the first node at which $x$ and $y$ can differ. On the other hand $d(x,y)\oplus d(y,z)\le1$, and a sum of $n$ such distances for natural number $n$ is bounded by $n/2$.\ \ We can show that if $x$ and $y$ are binary $\aleph$-sequences in the clopen interval $([0,1])[\aleph]$ then $\langle([0,1])[\aleph],d\rangle$ forms a metric space.[^31] We have $d(x,y)=d(y,x)$, $d(x,y)\ge0$ and $d(x,y)=0\rightarrow x=y$ immediately from the definition of $d$ and the fact that all real numbers in $x$ and $y$ start with $0.$ We also have $d(x,y)\oplus d(y,z)\ge d(x,z)$ because:
1.  If $\alpha(x,z)>\alpha(x,y)$ then $\alpha(y,z)=\alpha(x,y)$, and $d(x,y)\oplus d(y,z)=1_{\alpha(x,y)}+1_{\alpha(y,z)}=1_{\alpha(x,y)-1}>1_{\alpha(x,y)}>1_{\alpha(x,z)}$.

2.  If $\alpha(x,z)<\alpha(x,y)$ then $\alpha(y,z)=\alpha(x,z)$, and $d(x,y)\oplus d(y,z)=1_{\alpha(x,y)}+1_{\alpha(y,z)}>1_{\alpha(x,z)}$.

3.  If $\alpha(x,z)=\alpha(x,y)$ then we have $d(x,y)\oplus d(y,z)=1_{\alpha(x,y)}+1_{\alpha(y,z)}>1_{\alpha(x,z)}$.

4.  If $x=y$ then $\alpha(x,z)=\alpha(y,z)$, $d(x,y)=0$ and $d(x,y)\oplus d(y,z)=0+1_{\alpha(y,z)}=1_{\alpha(x,z)}$; if $y=z$ then $\alpha(x,y)=\alpha(x,z)$, $d(y,z)=0$ and $d(x,y)\oplus d(y,z)=1_{\alpha(x,y)}+0=1_{\alpha(x,z)}$; and if $x=z$ then $\alpha(x,y)=\alpha(y,z)$, $d(x,z)=0$ and $d(x,y)\oplus d(y,z)=1_{\alpha(x,y)}+1_{\alpha(y,z)}>0$.

We can define the clopen intervals in the Baire topology as the sets $\{y:d(x,y)=1_{\alpha}\}$ for a given $x$ and successor ordinal $\alpha$.\ \ ![image](5_Users_andrewpowell_Downloads_Gen_metric.png)\ \ *Figure 5: Diagrams showing the different cases in the generalized metric of $([0,1])[\aleph]$.*\ \ The clopen interval $([0,1])[\aleph]$ was chosen for simplicity, and it has the closure property $d(x,y)\oplus d(y,z)\in([0,1])[\aleph]$ if $x,y,z\in([0,1])[\aleph]$; but exactly the same generalized metric works on the interval $[0,\infty)[\aleph]$. Any binary real number can be padded with 0s in front of the binary point if necessary to have a prefix of the same length as any other binary real number, and all binary digits in the prefix are treated as negative whole number offsets from the binary point. For example, to calculate $d(x,y)$ where $x=11.000\ldots$ and $y=100.000\ldots$, the prefix of $x$ can be padded to $011$ and $d(x,y)=100.000\ldots$, which has its 1 at position $-3$ with respect to the binary point. It is true that $d(x,y)\oplus d(y,z)\in[0,\infty)[\aleph]$ if $x,y,z\in[0,\infty)[\aleph]$, but of course $[0,\infty)[\aleph]$ is not closed under upward limits, $i.e.$ $d(x,y)\rightarrow\infty$ if $x$ is fixed and $y\rightarrow\infty$ or *vice versa*, and $\infty\notin[0,\infty)$. The clopen interval $([0,1])[\aleph]$ is therefore a better representation of the set of all binary $\aleph$-sequences.
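A finite truncation of the generalized metric $d$ can be computed directly. In the sketch below (an illustration only: points of $([0,1])[\aleph]$ are truncated to two real coordinates of eight binary digits each, and the names `first_diff`, `d` and `oplus` are ours, not notation from the text), $d(x,y)$ is represented by one `Fraction` per coordinate with a single non-zero entry, so that coordinatewise addition stands in for $\oplus$ and lexicographic comparison reproduces the comparisons used in the triangle-inequality cases above.

```python
from fractions import Fraction

def first_diff(x, y):
    """(coordinate index, digit index) of the lowest node where x and y differ, or None."""
    for i, (a, b) in enumerate(zip(x, y)):
        for j, (da, db) in enumerate(zip(a, b)):
            if da != db:
                return i, j
    return None

def d(x, y):
    """Finite-depth stand-in for d(x, y): a single 1 in the binary place of the first
    differing node, stored as one Fraction per real coordinate."""
    out = [Fraction(0)] * len(x)
    pos = first_diff(x, y)
    if pos is not None:
        i, j = pos
        out[i] = Fraction(1, 2 ** (j + 1))
    return out

def oplus(u, v):
    """Coordinatewise addition, the semigroup operation of <2^aleph, oplus>."""
    return [a + b for a, b in zip(u, v)]

# Three truncated points with two real coordinates each, written as binary digits after '0.'
x = ['01010000', '11000000']
y = ['01010000', '11010000']
z = ['01100000', '00000000']

lhs, rhs = d(x, z), oplus(d(x, y), d(y, z))
print(lhs, rhs, lhs <= rhs)     # d(x,z) <= d(x,y) (+) d(y,z), compared lexicographically
```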
We should be clear that for $\aleph>\aleph_{0}$ the generalized metric space is not compact. The reason is that, as we have seen, it is possible to have an $\omega$-sequence of non-empty clopen and totally bounded intervals $\langle X_{\alpha<\omega}\rangle$ such that $X_{\alpha}\subseteq X_{\beta}$ if $\beta\leq\alpha<\omega$ and $\bigcap_{\beta<\omega}X_{\beta}=\emptyset$. But the following statements are true in a generalized metric space. If $y$ is a limit point of non-empty $\bigcap_{\alpha<\aleph}X_{\alpha}$, where $\langle X_{\alpha<\aleph}\rangle$ is an $\aleph$-sequence of non-empty clopen intervals such that $X_{\beta}\subset X_{\gamma}$ if $\gamma<\beta$, then $y\in\bigcap_{\alpha<\aleph}X_{\alpha}$.[^32] Moreover, as noted in Footnote \[fn:Construct-a-cover\] and as can be seen from the proof of the generalized Baire Category Theorem below, in a $\aleph$-sequentially complete Hausdorff space every strictly descending nested $\aleph$-sequence of non-empty clopen intervals converges to exactly one point. Furthermore, the compactness condition can be replaced by a generalized compactness condition in a $\aleph$-sequentially complete Hausdorff topological space called *$<\aleph$-compactness:* if for every $\beta<\aleph$ $\bigcap_{\alpha<\beta}X_{\alpha}\ne\emptyset$ then $\bigcap_{\alpha<\aleph}X_{\alpha}\ne\emptyset$ for any strictly descending $\aleph$-sequence of non-empty clopen intervals $\langle X_{\alpha<\aleph}\rangle$, *i.e.* $X_{\beta}\subset X_{\gamma}$ if ordinal $\gamma<\beta$. It is therefore true that a $\aleph$-sequentially complete $\langle2^{\aleph},\oplus\rangle$-generalized metric space is a $<\aleph$-compact topological space (*i.e.* a topological space such that each closed set satisfies the $<\aleph$-compactness condition).\ \ The proof of the generalized Baire Category Theorem proceeds as follows (broadly following [@key-4] Proposition 3.8, p. 213, for the case $\aleph=\aleph_{0}$). Let us suppose for contradiction that $X=\bigcup_{\alpha<\aleph}C_{\alpha}$ for closed nowhere dense sets $C_{\alpha}$. We claim we can find a $\aleph$-sequence of descending non-empty closed sets $\langle D_{\alpha}:\alpha<\aleph\rangle$ such that $D_{0}\subseteq X$, $D_{\beta}\subseteq D_{\alpha}$ if $\alpha<\beta$ and $C_{\alpha}\cap D_{\alpha+1}=\slashed{O}$. This is possible because for every non-empty open set $O$, $O-C_{\alpha}$ is a non-empty open set, as $O$ has a non-empty interior and $C_{\alpha}$ has an empty interior. We choose $D_{0}\subseteq X$ to be a clopen interval (since the space has a clopen base), $D_{\alpha+1}\subseteq D_{\alpha}-C_{\alpha}$ to be a clopen interval (as a clopen subset of the non-empty interior of $D_{\alpha}-C_{\alpha}$) and $D_{\lambda}:=\bigcap_{\alpha<\lambda}D_{\alpha}$ for limit ordinals $\lambda$. We can see that $D_{\alpha<\aleph}\ne\slashed{O}$ because at limit ordinals, $\lambda$, the node from which the clopen interval $D_{\lambda}$ splits from some branch $x\in X$[^33] has an ordinal which is the limit of an $\alpha$-sequence for $\alpha<\aleph$ of ordinals $<\aleph$ (because the branches are of length $\aleph$), and thus the node has an ordinal $<\aleph$ (by the Axiom of Choice). As $D_{\alpha<\aleph}$ can be viewed as a clopen interval splitting from a branch, and that clopen interval is then split at some branch in the interval at higher ordinals, it follows by $\aleph$-sequential completeness that $D_{\aleph}$ can be identified with a set containing a single $\aleph$-sequence in $2^{\aleph}$, *i.e.* a set containing a single point. Using this observation we have $\bigcap_{\alpha<\aleph}D_{\alpha}=\{x\}$ for some $x\in2^{\aleph}$, and since $D_{0}\subseteq X$, $x\in X.$ However, as $C_{\alpha}\cap D_{\alpha+1}=\slashed{O}$, $x\notin\bigcup_{\alpha<\aleph}C_{\alpha}$. Hence $X\ne\bigcup_{\alpha<\aleph}C_{\alpha}$, as was to be proved.\ \ We can state these results as: (Generalized Baire Category Theorem) In a $\aleph$-sequentially complete Hausdorff topological space, X, that comprises a $\aleph$-sequentially complete set of binary $\aleph$-sequences with a clopen base of cardinality $\aleph$ and with no discrete or isolated points, X is not the union of $<\aleph+1$-many nowhere dense subsets for $\aleph\ge\aleph_{0}$.
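For the classical case $\aleph=\aleph_{0}$ the nested-interval argument in this proof can be sketched as a program. The fragment below is our own finite rendering (the names `baire_point` and `make_avoider` are assumptions): each closed nowhere dense set $C_{n}$ is supplied as a function that, given a finite prefix (standing for the clopen interval $D_{n}$), returns an extension whose clopen interval misses $C_{n}$, which is exactly what nowhere density provides; iterating yields a prefix of a point outside every $C_{n}$.

```python
def baire_point(avoiders, depth):
    """Construct (a finite prefix of) a point avoiding each closed nowhere dense set,
    mirroring the choice of D_0 ⊇ D_1 ⊇ ... in the proof above."""
    prefix = ''                      # D_0: the whole space
    for avoid in avoiders:
        prefix = avoid(prefix)       # D_{n+1}: a clopen interval inside D_n missing C_n
    return (prefix + '0' * depth)[:depth]   # extend arbitrarily to the required depth

# Example: C_n is the single branch 1^n 0 0 0 ... (a closed nowhere dense set).
# To miss it, it suffices to extend the current prefix by one bit chosen to disagree.
def make_avoider(n):
    branch = '1' * n + '0' * 1000
    def avoid(prefix):
        k = len(prefix)
        bit = '1' if branch[k] == '0' else '0'   # disagree with C_n at the next node
        return prefix + bit
    return avoid

avoiders = [make_avoider(n) for n in range(10)]
print(baire_point(avoiders, depth=12))   # a prefix of a point outside every C_n
```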
\[thm:A–sequentially-complete\]A $\aleph$-sequentially complete Hausdorff topological space that comprises a $\aleph$-sequentially complete set of binary $\aleph$-sequences is not compact and not metrizable for $\aleph>\aleph_{0}$, but it is possible to use a generalized metric and every strictly descending $\aleph$-sequence of non-empty clopen intervals converges to exactly one point.

A Modal Model of Set Theory
===========================

We have seen from Theorem 10 that in a $\aleph$-sequentially complete Hausdorff topological space, $X$, that has a clopen base of cardinality $\aleph$ and with no discrete or isolated points, *X* is not the union of $<\aleph+1$-many nowhere dense subsets for $\aleph\ge\aleph_{0}$. But is it the case that $X$ is the union of $\aleph+1$ closed nowhere dense sets if the cardinality of $X>\aleph$? The answer is that this result is possible because it can be forced if the $\aleph+1$ closed nowhere dense sets are dense in $X$, but the forcing is quite natural. [@key-3] provides a clear explanation of set-theoretic forcing. It will be seen that the result is independent of Zermelo-Fraenkel set theory with the Axiom of Choice (ZFC). The result can be seen by means of the following construction.\ \ If we represent members of a $\aleph$-sequentially complete Hausdorff topological space, $X$, as $<\aleph+1$-sequences, we can define $X(\langle x_{1},\ldots,x_{\alpha<\aleph+1}\rangle;\langle y_{1},\ldots,y_{\beta<\aleph+1}\rangle)$ as the generalized Cantor (*i.e.* closed nowhere dense) set that results from the construction in Theorem 8 that preserves the members of $\langle x_{1},\ldots,x_{\alpha<\aleph+1}\rangle$ and deletes the members of $\langle y_{1},\ldots,y_{\beta<\aleph+1}\rangle$, where each $x_{\gamma\le\alpha},\:y_{\gamma\le\beta}\in X$ and $x_{\gamma}\ne y_{\delta}$ for all $\gamma\le\alpha,\:\delta\le\beta$. Note that the choice of $x_{\gamma}$ depends on $y_{\delta\le\gamma}$. Consider a $\aleph+1$-sequence $X_{\alpha<\aleph+1}(\langle x_{1},\ldots,x_{\alpha}\rangle;\langle y_{1},\ldots,y_{\alpha}\rangle)$ of $\aleph$-sequentially complete closed nowhere dense sets, which is possible because it is always possible to cover a branch of $\aleph$ nodes with disjoint sets of branches with $\aleph$ members. Then we can see that $\bigcup_{\alpha<\aleph+1}X_{\alpha}(\langle x_{1},\ldots,x_{\alpha}\rangle;\langle y_{1},\ldots,y_{\alpha}\rangle)$ is dense in $X$ (because $\langle x_{1},\ldots,x_{\aleph}\rangle\cup\langle y_{1},\ldots,y_{\aleph}\rangle$ is dense in $X$) and it is possible for $\langle x_{1},\ldots,x_{<\aleph+1}\rangle$ and $\langle y_{1},\ldots,y_{<\aleph+1}\rangle$ to each have $\aleph+1$ members if the cardinality of $X>\aleph$. While it is not in general true that the union of $\aleph+1$ closed nowhere dense sets, $X_{\alpha<\aleph+1}$, that are dense in $X$ is $X$ (because there may exist a $\aleph$-sequence which is covered by $\aleph$-sequences, is in $X$, but is not in the union), it is true in a natural model of $X.$ That model is a transitive outer model of cardinality $\aleph+1$ (see for example [@key-3] in the case of countable transitive outer models), in which the forcing partial order consists of functions $f:\aleph+1\rightarrow\{Y:Y\subseteq X\}$, where $f_{\beta}=X_{\beta}$, where as above each $X_{\beta}$ is a function of $\langle x_{1},\ldots,x_{\beta}\rangle$, $\langle y_{1},\ldots,y_{\beta}\rangle$ and of the function $r:\beta\rightarrow\beta$ used to control the preservation of $\langle x_{1},\ldots,x_{\beta}\rangle$ and the deletion of $\langle y_{1},\ldots,y_{\beta}\rangle$. Now in the following keep $r$ fixed.
To see that $f$ defines a partial ordering, note that $f_{\gamma}\subseteq f_{\alpha}$ for $\aleph+1>\alpha>\gamma$ since $X_{\gamma}\subseteq X_{\alpha}$ for $\langle x_{1},\ldots,x_{\alpha}\rangle$ extending $\langle x_{1},\ldots,x_{\gamma}\rangle$[^34].\ $F=\bigcup_{\beta<\aleph+1}f_{\beta}$ is a function because if: $$F(\langle x_{1},\ldots,x_{<\aleph+1}\rangle;\langle y_{1},\ldots,y_{\aleph}\rangle)\ne F(\langle w_{1},\ldots,w_{<\aleph+1}\rangle;\langle z_{1},\ldots,z_{\aleph}\rangle)$$ then it follows that: $$f_{\beta}(\langle x_{1},\ldots,x_{\gamma<\beta}\rangle;\langle y_{1},\ldots,y_{\aleph}\rangle)\ne f_{\beta}(\langle w_{1},\ldots,w_{\gamma<\beta}\rangle;\langle z_{1},\ldots,z_{\aleph}\rangle)$$ for some $\beta<\aleph+1$ by definition of union, and there is a correspondence (possibly many to one) between $\langle x_{1},\ldots,x_{\beta}\rangle$ and $X_{\beta}$. This implies that: $$\langle x_{1},\ldots,x_{\gamma<\beta};y_{1},\ldots,y_{\aleph}\rangle\ne\langle w_{1},\ldots,w_{\gamma<\beta};z_{1},\ldots,z_{\aleph}\rangle$$ since $f_{\beta}$ is a function and hence: $$\langle x_{1},\ldots,x_{\gamma<\aleph+1};y_{1},\ldots,y_{\aleph}\rangle\ne\langle w_{1},\ldots,w_{\gamma<\aleph+1};z_{1},\ldots,z_{\aleph}\rangle.$$\ $F$ is onto $X-\bigcup_{\beta<\aleph}\{y_{\beta}\}$ because if $x\ne y_{\beta<\aleph}$ and $x\notin ran(F)$ for $x\in X$ then for some $\gamma<\aleph+1$ we can add $x$ to be preserved by $X_{\gamma}$ and all $X_{\alpha>\gamma}$ for $\alpha<\aleph+1$. Since the same argument works for $\langle y_{1},\ldots,y_{\aleph}\rangle;\langle x_{1},\ldots,x_{\alpha<\aleph+1}\rangle$ with $G=\bigcup_{\beta<\aleph+1}g_{\beta}$, showing $G$ is onto $X-\bigcup_{\alpha<\aleph}\{x_{\alpha}\}$, we see that $F\cup G$ is a function onto $X$. This model is natural because it is completely described by binary $\aleph+1$-sequences that can be instantiated and that control membership of the closed nowhere dense sets.\ \ To see that this result is independent of ZFC, we note that the function $F\cup G$ is a function from a set of cardinality $\aleph+1$ onto a set of cardinality $2^{\aleph}$ (*i.e.* from $\aleph+1$ onto $X$). Hence $\aleph+1\ge2^{\aleph}.$ Since $2^{\aleph}\ge\aleph+1$ by Cantor’s theorem, GCH follows. Conversely, if GCH is true, then $X$ is the union of $2^{\aleph}=\aleph+1$ closed nowhere dense sets, namely the singletons $\{x\}$ for $x\in X$, and hence $X$ is the union of $\aleph+1$ closed nowhere dense sets.\ \ We may state this result as: (Not provable in ZFC, equivalent to GCH) In a $\aleph$-sequentially complete Hausdorff topological space, X, that comprises a $\aleph$-sequentially complete set of binary $\aleph$-sequences with a clopen base of cardinality $\aleph$ and with no discrete or isolated points, X is the union of $\aleph+1$-many nowhere dense subsets for $\aleph\ge\aleph_{0}$. The construction showing that $X$ is the union of $\le\aleph+1$ closed nowhere dense sets is naturally carried out in $V_{\alpha}$ for some $\alpha>o(\aleph_{0})=\omega$, and uses the following argument: if a counterexample could be produced, the construction could be applied to the counterexample, showing that the counterexample would not be an actual counterexample. It is natural to think of these constructions as taking place in a modal model of ZFC (such as the S4 modal model of [@key-7]).
As a reminder, an S4-modal model of set theory is a 4-tuple $\langle G,R,D^{G},F\rangle$, where $G$ is a set of forcing conditions, $R$ is a reflexive and transitive relation on $G$, $D^{G}$ is the domain of sets corresponding to $G$ and $F$ is a mapping from forcing conditions to sets of quantifier-free sentences in set theory with constants in $D^{G}$, such that $F$ can be extended to all sentences in set theory with constants in $D^{G}$ by means of the forcing relation $\Vdash$. We have $p\Vdash A$ if $A\in F(p)$, $p\Vdash\neg X$ if $p\nVdash X$, $p\Vdash X\wedge Y$ if $p\Vdash X$ and $p\Vdash Y$, $p\Vdash X\vee Y$ if $p\Vdash X$ or $p\Vdash Y$, $p\Vdash(\exists x)P(x)$ if $p\Vdash P(d)$ for some $d\in D^{G}$, $p\Vdash(\forall x)P(x)$ if $p\Vdash P(d)$ for all $d\in D^{G}$, $p\Vdash\Square X$ if $q\Vdash X$ for every $q\in G$ such that $R(p,q)$, and $p\Vdash\lozenge X$ if $q\Vdash X$ for some $q\in G$ such that $R(p,q)$. In a modal model a sentence of set theory $X$ is translated to a sentence of modal set theory, written $\llbracket X\rrbracket$, by induction: $\llbracket A\rrbracket=\Square\lozenge A$ for atomic $A$, $\llbracket\neg X\rrbracket=\Square\lozenge\neg\llbracket X\rrbracket$, $\llbracket X\wedge Y\rrbracket=\Square\lozenge(\llbracket X\rrbracket\wedge\llbracket Y\rrbracket)$, $\llbracket X\vee Y\rrbracket=\Square\lozenge(\llbracket X\rrbracket\vee\llbracket Y\rrbracket)$, $\llbracket(\exists x)P(x)\rrbracket=\Square\lozenge(\exists x)\llbracket P(x)\rrbracket$, and $\llbracket(\forall x)P(x)\rrbracket=\Square\lozenge(\forall x)\llbracket P(x)\rrbracket$, and it is proven that the translation of every instance of an axiom of $ZFC$ is true for each forcing condition of the model. The model that we have constructed is then $\langle\{x_{\alpha},y_{\alpha},X_{\alpha}:\alpha<\aleph+1\},\subseteq,2^{\aleph},F:\{x_{\alpha},y_{\alpha},X_{\alpha}\}\rightarrow\{x_{\alpha\le\aleph}\in X_{\beta\ge\alpha},y_{\alpha}\notin X_{\beta\ge\alpha}\}\rangle$, where the $\{x_{\alpha},y_{\alpha},X_{\alpha}\}$ are as described in Theorem \[thm:Baire\].

A Generalized Metric Model of Set Theory
========================================

We can also use the fact (see Theorem \[thm:A–sequentially-complete\]) that any initial segment of $V$, $V_{\alpha}=2^{\aleph}$ for some cardinal $\aleph$, can be considered as a $\langle2^{\aleph},\oplus\rangle$-generalized metric space with the Baire topology on any set $X\subseteq2^{\aleph}$ comprising binary $\aleph$-sequences (or equivalently $\aleph$-sequences of real numbers). That is to say, for infinite $\alpha$ and $\aleph$, $V_{\alpha}$ can be represented as a clopen interval $([0,1])[\aleph]$ of binary $\aleph$-sequences for some length $\aleph$. As we have seen, if all real numbers in the $\aleph$-sequence start with the same number ($0.$ in the case of $([0,1])[\aleph]$) then all binary $\aleph$-sequences can still be represented (by ignoring the constant number before the binary point). It is therefore reasonable to represent $V_{\alpha}$ as a clopen interval $([0,1])[\aleph]$ of binary $\aleph$-sequences for some length $\aleph$. That $V_{\alpha}$ is a generalized metric space does not alter what sets exist, as those sets will be sets of binary $\aleph$-sequences (for example most will not be $\aleph$-sequentially complete); the constraint of being a generalized metric space only determines how far apart points in the space are.\ \ It is then possible to decide the membership of $X$ in $<\aleph+1$ steps by enumeration as follows.
Consider a clopen interval, $([0,1])[\aleph]$, which is linearly ordered lexicographically, *i.e.* $z<y$ if $(\exists\alpha<\aleph)[(z_{\alpha}<y_{\alpha})\wedge(\forall\beta<\alpha)(z_{\beta}=y_{\beta})]$, where $w_{\alpha}$ is the $\alpha$-th binary member of the $\aleph$-sequence $w$, and assume that each binary $\aleph$-sequence $z$ in $2^{\aleph}$ is marked with 1 or 0 depending on whether $z\in X$ or not, which is decidable only if you find the location of $z$ in the interval. The latter assumption reflects the fact that when you search for $x$ in a linearly ordered set it is either present in its place in the order (when $x\in X$) or it is not (when $x\notin X)$. Before we begin the construction, we will need the ability to divide a binary $\aleph$-sequence (of a $\aleph$-sequence of real numbers) by 2. This is just standard binary division by 2 with carries to the right if necessary.\ \ To start the construction, bisect the interval to give a point $m=\langle0.1,0,0,0,\ldots\rangle$. Now set $r:=m$. If the midpoint $r=x$ then we can decide whether $x\in X$ or $x\notin X$ and stop. Otherwise test whether $x<r.$ If $x<r$ then consider the clopen interval $([0,r])[\aleph]$; and if $x>r$ consider the clopen interval $([r,1])[\aleph]$. Iterate the bisection construction as follows[^35]: $cl_{1}=([0,1])[\aleph]$, $cl_{\alpha+1}=Bi(cl_{\alpha};x)$ and $cl_{\lambda}=\bigcap_{\alpha<\lambda}cl_{\alpha}$ for limit ordinal $\lambda$ (which is the unique maximal clopen interval $\subseteq([0,1])[\aleph]$ such that for all $z\in cl_{\lambda}$ the initial $\lambda$-sequence of $z$ is $x[\lambda]:=\langle x_{\alpha}:\alpha<\lambda\rangle$), *i.e.* $([x[\lambda]\parallel\langle0,0,0,\ldots\rangle,x[\lambda]\parallel\langle1,1,1,\ldots\rangle])$, where $\parallel$ is concatenation, $\langle0,0,0,\ldots\rangle$ and $\langle1,1,1,\ldots\rangle$ are $\aleph$-sequences that stand for $\aleph$ concatenated $\omega$-sequences $\langle0.0,0,0,\ldots\rangle$ and $\langle0.1,1,1,\ldots\rangle$ respectively, $Bi(([a,b])[\aleph];x)=([a,r])[\aleph]$ and $x_{\alpha}=0$ if $([a,b])[\aleph]=cl_{\alpha}$ and $x<r$ for the midpoint $r=a+(b-a)/2$, $Bi(([a,b])[\aleph];x)=([r,b])[\aleph]$ and $x_{\alpha}=1$ if $([a,b])[\aleph]=cl_{\alpha}$ and $x>r$, and the iteration stops if $x=r$ (and one can decide whether $r\in X$). It is clear that the construction will terminate in $\le\aleph$ steps, as a nested sequence of clopen intervals can only comprise $\aleph$ members, as that is how many bits there are in the single binary $\aleph$-sequence in any non-empty intersection of a nested sequence of clopen intervals. If $x\in X$ has not been confirmed in $<\aleph$ steps, then at the $\aleph$-th step $cl_{\aleph}=([x,x])=\{x\}$, and at ordinal step $\aleph+1$ (*i.e.* 1 after $\aleph$) we can then decide whether $x\in X$ given that $x$ has been located.\ \ ![image](6_Users_andrewpowell_Downloads_NestedInterval.png)\ \ *Figure 6: The hierarchy of clopen intervals of $\aleph$-sequences produced by iterated bisection of a clopen interval.*
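The bisection procedure above can be run at finite depth for real numbers with finitely many binary digits. The sketch below is illustrative only (the names `locate` and `marks`, and the use of `Fraction`, are our own); it reproduces the interval-halving bookkeeping of $Bi$ and consults the membership mark once the query point has been pinned down, but it does not model the transfinite step at limit ordinals.

```python
from fractions import Fraction

def locate(x, marks, max_steps=64):
    """Locate x in [0,1] by repeated bisection, then read off its membership mark.
    `marks` maps located points to True/False (the '1 or 0' marking in the text) and is
    only consulted once x has been pinned down.  Returns (is_member, bisection_steps)."""
    a, b = Fraction(0), Fraction(1)
    for step in range(1, max_steps + 1):
        r = a + (b - a) / 2              # the midpoint, dividing the interval by 2
        if x == r:
            return marks.get(x, False), step
        if x < r:
            b = r                        # Bi(([a,b]); x) = ([a,r]) when x < r
        else:
            a = r                        # Bi(([a,b]); x) = ([r,b]) when x > r
    return marks.get(x, False), max_steps    # x located to within 2**-max_steps

# Example: X = {1/4, 5/8}, query x = 5/8 (= 0.101 in binary).
marks = {Fraction(1, 4): True, Fraction(5, 8): True}
print(locate(Fraction(5, 8), marks))     # (True, 3): located after three bisections
```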
But the condition that $x\in X$ can be decided by enumeration in cardinal $<\aleph+1$ steps is equivalent to GCH, as can be shown as follows on the assumption that a function which decides $x\in X$ for all $x$ has $\aleph+1$ bits. This assumption follows from a principle of information minimization since any function $f$ that decides $x\in X$ cannot contain $\le\aleph$ bits, as $f$ could then be represented by some binary sequence of length $\le\aleph$, a potential member of $X$. We can express this by means of a diagonal function $d(\left\lceil y\right\rceil ):=1-\left\lceil y\right\rceil (\left\lceil y\right\rceil )$ for $\left\lceil y\right\rceil $ a $\aleph$-bit code for a function of $\aleph$ bits, and note that we get a contradiction if we put $d:=\left\lceil y\right\rceil $ unless the number of bits in $d$ is greater than the number of bits in $\left\lceil y\right\rceil $. It follows from a principle of information minimization that $f$ also contains $\aleph+1$ bits of information if $\aleph$ is an infinite cardinal because, if we consider a generic $\aleph$-sequence $x$, then there are many ordinals $\alpha$ of the same cardinality as $\aleph$, and $\aleph+1$ bits would suffice to decide whether $x\in X$ or not by considering the least upper bound of $\alpha$ for generic sequences of length $\alpha$ with $\aleph\le\alpha<\aleph+1$ that could be used to decide $x\in X.$

\[thm:GCH-is-equivalent\](Not provable in ZFC) GCH is equivalent to[^36] the assertion that the amount of information needed to decide the relation $x\in X$ by an interleaved enumeration of $X$ or $2^{\aleph}-X$ is $<\aleph+1$, for any given binary $\aleph$-sequence $x$ of length at most cardinal $\aleph\ge\aleph_{0}$ and any $X$ of cardinality $\le2^{\aleph}$.

Assume that:

a)  $\emptyset\subseteq X\subseteq2^{\aleph}$,

b)  *$X$* has cardinality $\aleph<c<2^{\aleph}$,

c)  Any $x\in X$ is expressed as a binary sequence of length at most cardinal $\aleph\ge\aleph_{0}$, and

d)  The amount of information needed to decide the relation $x\in X$ by an interleaved enumeration of $X$ or $2^{\aleph}-X$ is $<\aleph+1$.

The proof is summarized in the tables below, where a $\checked$ means that the option is possible and $\times$ means that the option is impossible.

                Enumerate $X$               Enumerate $2^{\aleph}-X$
  ------------- --------------------------- ----------------------------
  $x\in X$      $<c$ $\checked$             $2^{\aleph}$ $\times$
  $x\notin X$   $c$ $\times$                $<2^{\aleph}$ $\checked$

*Table 1: The number of steps to decide $x\in X$ by enumeration*

  $<c$                        Proof Ref.   $c$                        Proof Ref.
  --------------------------- ------------ -------------------------- ------------
  $\aleph+1<c$ $\times$       1            $\aleph+1<c$ $\times$      4
  $\aleph+1=c$ $\checked$     2            $\aleph+1=c$ $\times$      5
  $\aleph+1>c$ $\times$       3            $\aleph+1>c$ $\times$      3

  $<2^{\aleph}$                          Proof Ref.   $2^{\aleph}$              Proof Ref.
  -------------------------------------- ------------ ------------------------- ------------
  **$\aleph+1<2^{\aleph}$** $\times$     1            $c<2^{\aleph}$ $\times$   8
  **$\aleph+1=2^{\aleph}$** $\checked$   6            $c<2^{\aleph}$ $\times$   8
  **$\aleph+1>2^{\aleph}$** $\times$     7            $c<2^{\aleph}$ $\times$   8

*Table 2: The possible cardinal relationships for the number of steps in Table 1 and proof references*

Proof references:\
1. $x\in X$ would almost always be decided in $\ge\aleph+1$ bits for a given enumeration of $X$, contradicting assumption d).\
2. $\aleph+1=c$ is consistent with assumption d), as $x\in X$ would be decided in $<c=\aleph+1$ steps by enumeration.\
3. $\aleph+1>c$ contradicts assumption b) $\aleph<c$, as there would be a cardinal strictly between $\aleph$ and $\aleph+1$.\
4. $x\in X$ would almost always be decided in $>\aleph+1$ bits for a given enumeration of $X$, contradicting assumption d).\
5. $\aleph+1=c$ implies that $\aleph+1$ bits are needed to decide $x\in X$ by enumerating all of $X$, which contradicts assumption d).\
6. **$\aleph+1=2^{\aleph}$** is consistent with assumption d), as $x\in X$ would be decided in $<2^{\aleph}=\aleph+1$ steps by enumeration.\
7. **$\aleph+1>2^{\aleph}$** contradicts Cantor’s theorem that $\aleph+1\le2^{\aleph}$.\
8. $c<\left|2^{\aleph}-X\right|=2^{\aleph}$ and therefore $x\in X$ could always be decided in $<2^{\aleph}$ steps by enumeration of $X$.\
We can conclude that if $x\in X$ then $c=\aleph+1$ and if $x\notin X$ then $\aleph+1=2^{\aleph}$. Using predicate logic[^37] we can conclude $(\exists x)(x\in X)\rightarrow c=\aleph+1$ and $(\exists x)(x\in2^{\aleph}-X)\rightarrow\aleph+1=2^{\aleph}$. Since both *X* and $2^{\aleph}-X$ are not empty we can conclude that $c=\aleph+1=2^{\aleph}$, which contradicts the assumption that $c<2^{\aleph}$. GCH then follows.\ \ Conversely, assume GCH. Then if $x\in X$, by GCH $x$ will be enumerated in $<\left|X\right|\le2^{\aleph}=\aleph+1$ steps, while if $x\notin X$ then $x$ will be enumerated in $<\left|2^{\aleph}-X\right|=2^{\aleph}=\aleph+1$ steps. In either case $x\in X$ can be decided by enumeration in $<\aleph+1$ steps, *i.e.* in $<\aleph+1$ bits. What this result shows is that if the class of all pure sets $V$ is considered to be a hierarchy of $\langle2^{\aleph},\oplus\rangle$-generalized metric spaces, then GCH holds based on a principle of information minimization. It is of course not true that the class of all pure sets in $V$ needs to be a hierarchy of $\langle2^{\aleph},\oplus\rangle$-generalized metric spaces, but it is a natural construction of $V$ based on a natural topology of sets.

Alternative Definition of Cardinality
=====================================

Having shown that there are models of ZFC in which Theorems 10 and 12 are true, we can now redefine cardinality to reflect cardinality in this model (in which GCH is true). In terms of the normal definition of cardinality, $2^{\aleph}-(\aleph+1)\times2^{\aleph}=\slashed{O}$ is not surprising; it simply says that $2^{\aleph}-\max(\aleph+1,2^{\aleph})=2^{\aleph}-2^{\aleph}=\slashed{O}$. However, the fact that $\aleph+1$ is the least cardinal number with the property that $2^{\aleph}-(\aleph+1)\times2^{\aleph}=\slashed{O}$ is forced in the Baire topology suggests a modification to the definition of cardinal number. Intuitively, the idea is that iterating the closed nowhere dense set construction in a dense way on a nowhere dense set will produce ever sparser nowhere dense sets, but the deletion of $\aleph+1$ such nowhere dense sets in a dense way results in the empty set. Cardinality in these terms then measures how sparse a set can be before it ceases to exist. Or, in the spirit of the Baire Category Theorem, cardinality measures how many negligible sets you need to add together before a non-negligible set is formed.\ \ This argument suggests a change of definition of cardinality: a set *X* (represented as a tree *T*) has cardinality $\aleph$ if $\aleph$ is the largest cardinal such that every dense (in the sense that every non-empty open set of the tree has non-empty intersection with the sequence), non-repeating sequence of (clopen) splitting subtrees of *T* of length $\aleph$ has an empty remainder after removal of the $\aleph$ subtrees in this sequence from *T*, while it is possible to construct a non-empty remainder after removal of any subsequence of length $<\aleph$. This construction is always possible because the number of splitting subtrees is the same as the number of nodes and the branch length, and it is always possible to delete $<\aleph$-many paths (*i.e.*
$\aleph$-sequences that are not branches) or branches in a way that leaves a dense sequence of splitting subtrees.[^38] This is so by the result in Section 4 that any set $X$ with a dense-in-itself subset and of cardinality $\aleph<c\le2^{\aleph}$ has, under the Baire topology, the property that $c-c\times(\aleph+1)=\slashed{O}$ and it is possible to force $c-c\times\aleph\ne\slashed{O}$ using the closed nowhere dense set construction.\ \ In logical terms the change in definition of cardinality can be stated as follows:

-   $\left|T\right|=\alpha\leftrightarrow(P(T,\alpha)\wedge(\forall\gamma:Card(\gamma))(\gamma>\alpha\rightarrow\neg P(T,\gamma)))$, where

-   $\alpha$ is a cardinal, $Card(\alpha)$

-   *T* is a binary tree with a root

-   $P(T,\beta):=(\forall\langle u_{\eta<\beta}\rangle:S(T,\langle u_{\eta<\beta}\rangle))(\bigcap_{\eta<\beta}u_{\eta}=\slashed{O})\wedge(\forall\gamma<\beta)$\
    $(\exists\langle u_{\delta<\gamma}\rangle:S(T,\langle u_{\delta<\gamma}\rangle))(\bigcap_{\delta<\gamma}u_{\delta}\neq\slashed{O})$

-   $S(T,\langle u_{\eta<\xi}\rangle):=NR(\langle u_{\eta<\xi}\rangle)\wedge D(T,\langle u_{\eta<\xi}\rangle)\wedge C(T,\langle u_{\eta<\xi}\rangle)$

-   $NR(\langle u_{\eta<\xi}\rangle):=(\forall\theta<\xi)(\forall\lambda<\xi)(\forall u_{\theta}\in\langle u_{\eta<\xi}\rangle)(\forall u_{\lambda}\in\langle u_{\eta<\xi}\rangle)$\
    $(u_{\theta}=u_{\lambda}\rightarrow\theta=\lambda)$ [non-repeating sequence]

-   $D(T,\langle u_{\eta<\xi}\rangle):=(\exists\theta\leq\alpha)(\forall w_{\delta}\in\langle w_{\eta<\theta}\rangle:B(T,\langle w_{\eta<\theta}\rangle))(\exists u_{\kappa}\in\langle u_{\eta<\xi}\rangle)(w_{\delta}\cap u_{\kappa}\neq\slashed{O})$ [dense sequence]

-   $C(T,\langle u_{\eta<\xi}\rangle):=(\forall\rho<\xi)(u_{\rho}\neq\slashed{O}\wedge Clopensplit(T,u_{\rho}))$ [sequence of non-empty clopen splitting subtrees]

-   $B(T,\langle u_{\eta<\xi}\rangle):=C(T,\langle u_{\eta<\xi}\rangle)\wedge(T=\bigcup_{\delta<\xi}u_{\delta})$ [clopen basis for tree *T*]

-   $Clopensplit(T,u):=(\exists x)(\exists\beta:Ord(\beta))(u=\{y\in T:y\ne x\wedge(\forall\gamma<\beta)(x_{\gamma}=y_{\gamma})\})$

-   $Ord(\alpha):=(\alpha=\emptyset\vee(\exists\beta<\alpha)(Ord(\beta)\wedge\alpha=\beta\cup\{\beta\})\vee(\alpha=\bigcup_{Ord(\beta):\beta<\alpha}\beta))$

-   $Card(\alpha):=Ord(\alpha)\wedge(\neg\exists\beta<\alpha)[Ord(\beta)\wedge(\exists f:\beta\rightarrow\alpha)Sur(f)]$

-   $Sur(f):=Fn(f:X\rightarrow Y)\wedge(\forall y)(\exists x)(y=f(x))$

-   $Fn(f:X\rightarrow Y):=(\forall x\in X)(\exists y\in Y)(y=f(x))\wedge(\forall x_{1}\in X)(\forall x_{2}\in X)[x_{1}=x_{2}\rightarrow f(x_{1})=f(x_{2})]$

In the case of sets of natural numbers, a number $n$ can be represented by a sequence $\langle1,\dots,1,0,\ldots\rangle$, *i.e.* $n$ 1s and then a terminal $\omega$-sequence of 0s. Removal of a dense sequence of clopen subtrees is visually the removal of all of the terminal $\omega$-sequences of 0s (and of course the initial sequences of 1s), because in the discrete topology each set $\{n\}$, *i.e.* each branch $\langle1,\dots,1,0,\ldots\rangle$, is itself a clopen splitting subtree. Thus $\aleph_{0}-\aleph\times\aleph_{0}=\slashed{O}$ has solution $\aleph=\aleph_{0}$; and $N$ has cardinality $\aleph_{0}$ and a finite set with $n$ members has cardinality $n$, as before.\ \ It can be seen that according to the modified definition of cardinality $2^{\aleph}=\aleph+1$ for all cardinals $\aleph\ge\aleph_{0}$, and there are no cardinals $\aleph+1<\beth<2^{\aleph}$, *i.e.* that the Generalized Continuum Hypothesis is true in the sense of the new definition of cardinality.
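The natural-number case just described can be checked mechanically at finite depth (the constant `DEPTH` and the names `nat` and `clopen_split` are our own, and the truncation to `DEPTH` bits is the only simplification): each $\{n\}$ is the clopen subtree that splits from the all-ones branch at node $n$, and deleting these subtrees one after another exhausts the whole set, the finite shadow of $\aleph_{0}-\aleph_{0}\times\aleph_{0}=\slashed{O}$.

```python
DEPTH = 12

def nat(n):
    """The tree representation of the natural number n used in the text:
    n ones followed by zeros, truncated to DEPTH bits."""
    return ('1' * n + '0' * DEPTH)[:DEPTH]

def clopen_split(k):
    """The clopen subtree that splits from the all-ones branch at node k, restricted to
    the representations of natural numbers: branches agreeing with 111... on the first
    k bits and then turning to 0, which is exactly the singleton {nat(k)}."""
    return {nat(n) for n in range(DEPTH) if nat(n)[:k] == '1' * k and nat(n)[k] == '0'}

N = {nat(n) for n in range(DEPTH)}
stage = set(N)
for k in range(DEPTH):
    stage -= clopen_split(k)     # delete the k-th clopen subtree
print(len(N), len(stage))        # all of N is gone only after the whole dense sequence
```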
Conclusions
===========

There is a natural way to measure the size of sets, which is given by how many clopen intervals need to be deleted in a dense way from a binary tree of binary $\aleph$-sequences before the empty set results (or a countable set of isolated points). This definition of cardinality works well for a set universe which satisfies the Axiom of Choice, when all sets of size $\le2^{\aleph}$ are sets of binary $\aleph$-sequences. The price to be paid for the use of a Baire topology (in which clopen intervals exist) is that the Baire topology is pathological in several respects: clopen sets are totally disconnected[^39] by definition, and Baire topological spaces, such as the clopen interval $([0,1])[\aleph]$, which comprise a set of binary $\aleph$-sequences, are not compact or metrizable (for $\aleph>\aleph_{0}$). That said, since all sets can be regarded as $\aleph$-tuples of real numbers, there is a natural generalized metric, and in a $\aleph$-sequentially complete topological space every strictly decreasing nested $\aleph$-sequence of clopen intervals has a single point as its intersection. The class of sets is then quite well behaved under the assumption of the Axiom of Choice, although this good behaviour does not extend to the properties of sets that can be created using the Axiom of Choice (see [@key-5] for a selection of such sets).

J.L. Bell & M. Machover, *A Course in Mathematical Logic*, North-Holland, Amsterdam, 1977.

F. Hausdorff, *Set Theory*, Chelsea, New York, 1957. Translated by J.R. Aumann et al. from *Grundzüge der Mengenlehre*, Veit, Leipzig, third edition 1937.

R. Kopperman, “All Topologies Come From Generalized Metrics”, *The American Mathematical Monthly* 95.2, pp. 89-97, 1988.

T. Jech, *Set Theory: The Third Millennium Edition, Revised and Expanded*, Springer, New York, 2002.

K. Kunen, *Set Theory: An Introduction to the Independence Proofs*, North-Holland, Amsterdam, 1980.

A. Levy, *Basic Set Theory*, Springer, Berlin, 1979. Reprinted by Dover Publications, New York, 2002.

C.-T. Liu, “Absolutely Closed Spaces”, *Transactions of the American Mathematical Society*, Vol. 130, No. 1, pp. 86-104, 1968.

A.W. Miller, “Special Subsets of the Real Line” in *Handbook of Set-Theoretic Topology*, eds. K. Kunen and J.E. Vaughan, Elsevier, Amsterdam, pp. 203-228, 1984.

J.R. Shoenfield, “The Axioms of Set Theory” in *Handbook of Mathematical Logic*, ed. J. Barwise, North-Holland, Amsterdam, pp. 321-344, 1984.

R. M. Smullyan & M. Fitting, *Set Theory and the Continuum Problem*, Dover, New York, 2009.

L.A. Steen & J.A. Seebach, *Counterexamples in Topology*, Springer-Verlag, New York, 1978. Reprinted by Dover Publications, New York, 1996.

[^1]: [@key-6] is used as the standard reference for motivating the axioms of set theory (Zermelo-Fraenkel set theory with the Axiom of Choice) and [@key-2-0] is the standard reference for developments in Zermelo-Fraenkel set theory.

[^2]: [@key-4-0] is a reference for a notion of *absolute closure*, defined as “A Hausdorff space $X$ is called absolutely closed if $X$ is closed in every Hausdorff space in which it is imbedded” (see [@key-4-0] Definition 1.1).

[^3]: *$\aleph$-sequential completeness* is not the same as *compactness* (because for $\aleph>\aleph_{0}$ there are infinite covers without finite subcovers) or *sequential compactness* (because $\aleph$-sequences are longer than $\aleph_{0}$-sequences for $\aleph>\aleph_{0}$).
In particular $\aleph$-sequential completeness is not the same as the Stone-Čech compactification, because $\aleph$-sequentially complete Baire spaces, unlike Stone spaces over an infinite power set, are not compact for $\aleph>\aleph_{0}$, and Stone spaces over an infinite power set are larger ($=2^{2^{\aleph}}$) and richer than the power set with an $\aleph$-sequentially complete Baire topology (cardinality $=2^{\aleph}$); see [@key-1-0] Theorem 4.3, p. 143 & p. 146. [@key-8] and the online $\pi$-base resource give an excellent view of the landscape of the topological properties of topological spaces. [^4]: “Force” is used in its everyday sense, *i.e.* it is possible with some effort to do something. The powerful mathematical notion of forcing, which amounts to giving a set a property by adding a consistent set of finitely specified conditions, is related to developments later in this paper. (See [@key-3] for a clear introduction to set-theoretic forcing). [^5]: See [@key-4] s. 2.1, p. 222, for example where the finite (binary for definiteness) sequence $s$ is the initial segment of $x$. The case of binary sequences gives rise to the Cantor topology. [^6]: An easy way to see that $\bigcap_{1<i<\omega}u_{n_{i}}(x_{i})=\{x\}$ is possible is to use the tree approach in the main text: at the $i$-th split in the binary tree that represents $X$ choose a subtree which contains a member of $u_{n_{i}}(x_{i})$, choosing one subtree (using the Axiom of Choice, or choosing the one that starts with 0 if you want to avoid the Axiom of Choice) if there is a choice. Then the sequence of choices defines a path $x$ which may be in $\bigcap_{1<i<\omega}u_{n_{i}}(x_{i})$, and will be if $x\in X$ because $x\in u_{n_{i}}(x_{i})$ for each $1<i<\omega$. Conversely, for any $x\in X$ it is always possible to construct a sequence of $u_{n_{i}}(x_{i})$ such that $\bigcap_{1<i<\omega}u_{n_{i}}(x_{i})=\{x\}$, by choosing $u_{n_{i}}(x_{i})$ such that $x\in u_{n_{i}}(x_{i})$ at each split. [^7]: A set is *dense-in-itself* if it contains no isolated points. The word “kernel” indicates that the isolated points have been removed and that a non-empty set remains. The construction is from [@key-1] p. 198. [^8]: This definition is equivalent to the standard definition of every open neighbourhood of $x$ having non-empty intersection with the dense set. [^9]: If you proceed along the branch $x_{\alpha}$ by exactly $1$ node, then the intersection will form a single branch, *i.e.* contain exactly one point. [^10]: It is worth noting that $\bigcup u_{n}(x_{\alpha})$ is not in general closed, because all $u_{n}(x_{\alpha})$ could split from a common sequence that is $\notin X$. [^11]: Sequential completeness is not a topological notion, as the set of real numbers with the standard open interval topology is homeomorphic to the open interval *$(0,1)$* with the same open interval topology, but the set of real numbers is sequentially complete and *$(0,1)$* is not, because the constant 0 and constant 1 *$\omega$*-sequences are sequences through the tree but 0 and 1 are not members of $(0,1)$. Sequential completeness is a metric notion in general, but in the case of the Baire topology it is also set-theoretic (whether any given sequence is a member of a set) and tree-theoretic (whether a sequence is covered by other sequences that split from it at a node implies that the sequence is a member of the set).
There is little difference in practice because a Baire space is metrizable with, for example, the metric $d(x,x)=0$ and $d(x,y):=2^{-n}$ where *n* is the height of the lowest node such that $(x)_{n}\ne(y)_{n}$. [^12]: That is, there is a one-to-one mapping of $X$ onto $2^{\omega}$ that preserves the branch structure. [^13]: That is, there is no non-empty subset of $X$ that does not have a clopen interval deleted from it. If there were some clopen interval which were not subject to deletion, then the empty set would not result. [^14]: It is always possible to re-order any countably infinite set as a total ordering of order type $\alpha$ for any $\alpha<\omega_{1}$, but if there were a strictly decreasing nested sequence of Cantor sets of length $\aleph_{1}$ then as at least one clopen interval is deleted at each step, $X$ would have at least $\aleph_{1}$ clopen intervals, which is false. Likewise a tree with a path of length $\omega_{1}$ with $\omega_{1}$ clopen intervals splitting from it would define a set of clopen intervals of cardinality $\aleph_{1}$, which does not exist; and thus no tree which has a path of length $\omega_{1}$ with $\omega_{1}$ clopen intervals splitting from it represents a set of real numbers. [^15]: If the denseness condition were removed, it would be possible to delete the same clopen interval $\aleph_{1}$ times, or rather delete it once and then do nothing $\aleph_{1}$ times. [^16]: A topological space $X$ is *compact* if for every set of subsets $M\subseteq N$ such that $\bigcup M=X$ there is a finite set of subsets $L\subseteq M$ such that $\bigcup L=X$. A topological space $X$ is locally compact if every $x\in X$ has some open set $U$ and some compact set $C$ such that $x\in U\subseteq C$, [^17]: A topological space is *Hausdorff* if there are disjoint neighbourhoods around any two distinct points, *i.e.* $Hausdorff(\langle x,N\rangle):=(\forall x\in X)(\forall y\in X)(\exists Y\in N)(\exists Z\in N)(x\neq y\rightarrow Y\cap Z=\slashed{O})$. [^18]: A space that has a base of countably many open sets is called *second-countable*. [^19]: A standard Baire space, $\omega^{\omega}$, is sequentially complete, but it is not compact nor sequentially compact, in essence because it is too wide: it has unbounded sequences of branches and an infinite cover comprising those branches and subtrees that split from them that does not have a finite subcover. The notation $2^{\omega}$ reflects the fact that the topological space is actually a Cantor space, *i.e.* $[0,1]$ with the Baire topology. A Cantor space is compact (as a product of a compact set, namely $2=\{0,1\}$. [^20]: \[fn:To-show-compactness\]To show compactness from first principles, proceed using a Heine-Borel construction. Assume that a topological space that comprises a sequentially complete set of binary sequences with countably infinitely many clopen basis sets and no discrete or isolated points, $X$, is not compact, *i.e.* there exists an infinite cover of open sets $\{C_{i<\alpha}:\alpha\ge\omega\}$ without a finite open subcover. Then subdivide the set underlying of $X$, $E$ say, into two disjoint clopen intervals $E_{1}$ and $E_{2}$ (which exists since $X$ has a clopen basis and the complement of any clopen set is clopen) and iterate the process. At least one clopen interval in each subdivision will not be compact. 
Because *$E$* is a sequentially complete and has a countably infinite basis, if a nested sequence $E_{N}$ consists of closed non-empty sets, where *N* is an $\omega$-sequence of finite binary sequences such that if $m\in N$ and $n>m$ and $n\in N$ then *m* is a subsequence of *n*, the subdivision process $\bigcap_{n\in N}E_{N}$ will result in a non-empty set. In fact, because $N$ defines a unique point, $\bigcap_{n\in N}E_{N}$ contains exactly one point in $E,$$L$. Now every point in $E$ will be a member of at least one open set, $C_{j}$, in the cover, otherwise $E$ would not be covered. But $L\in E_{n}\subseteq C_{j}$ for some $n\in N$ where $E_{n}\in E_{N}$ since an open set $C_{j}$ such that $L\in C_{j}$ will include a clopen interval $E_{n}$ such that $L\in E_{n}$ because the space has a basis of clopen intervals. By construction any clopen interval will split from $E$ depending only on its first $n$ binary digits for some natural number $n$. Thus if an $E_{N}$ were a nested sequence of clopen intervals that are not compact, then we would have $L\in E_{m}\subset E_{n}\subseteq C_{j}$ for all $m>n$ where $E_{m}\in E_{N}$ for any $E_{m}$ whose members agree with members of $C_{j}$ on the first $n$ binary digits, which means that $\{C_{j}\}$ is a single (*i.e.* finite) cover for $E_{m>n}$, contradiction. To show the space is Hausdorff, note that any two distinct branches *c* and *d* will split from one another at a certain node, $nd=\langle n,b\rangle$ where $b\in\{0,1\}$: a $u_{n}(d_{\alpha})$ that includes *c* and all branches that split from *c* after node *n* will have a disjoint union with a $u_{n}(d_{\alpha})$ that includes *d* and all branches that split from *d* after node *n*. Hence the space is Hausdorff. [^21]: If $X$ is not compact or sequentially complete, the Baire Category Theorem does not apply. [^22]: There are at least $\aleph_{0}$ such $\omega$-sequences because if there were a finite number, then some clopen interval would be sequentially complete. [^23]: The rate of growth of $r(n)$ depends on the height of the splitting node of $x_{n}$ and $y_{n}$, which could be set arbitrarily high. [^24]: In this case for a product topology a finite sequence of bounded finite sets (*i.e.* an initial finite $n$-ary sequence for some natural number $n$) will define the topology. [^25]: If $X$ does not have a dense-in-itself kernel then $\left|X\right|\le\aleph$. [^26]: There are at least $\aleph$ such $\aleph$-sequences because if there were $<\aleph$, then some clopen interval will be $\aleph-$sequentially complete. [^27]: See for example [@key-4] Proposition 3.8 p. 213 for the case $\aleph=\aleph_{0}$. [^28]: \[fn:Construct-a-cover\]Construct a cover $Z$ of $X$ as follows. Fix a branch $x\in X$ and add to $Z$ all disjoint clopen intervals that split from $x$. Then add to $Z$ a clopen interval that splits from $y\in X$ such that $y\ne x$ at a node of index $>\aleph_{0}$ . $Z$ has no finite open subcover if the base of $X$ has cardinality $\aleph>\aleph_{0}$ because there are at least $\aleph_{0}$ disjoint clopen intervals in the cover such that removal of any one such set would not result in a cover of $X$. A corollary is that there is a descending $\aleph$-sequence of clopen sets $\langle x_{\alpha<\aleph}\rangle$ (complements of clopen intervals) such that all finite intersections of $X_{\alpha}$ are non-empty while $\bigcap_{\beta<\aleph}X_{\beta}=\emptyset$. 
The failure of compactness means that the topological space cannot be characterized by convergent ultrafilters, but it possible nevertheless to characterize $X$ by the set of strictly descending $\aleph$-sequences of clopen intervals converging to a point $x$, and in fact a generalized local compactness condition does hold for any $\aleph$-sequentially complete Hausdorff topological space that has a clopen base of cardinality $\aleph$: if for every $\beta<\aleph$ $\bigcap_{\alpha<\beta}F_{\alpha}\ne\emptyset$ then $\bigcap_{\alpha<\aleph}F_{\alpha}\ne\emptyset$ for any strictly descending $\aleph-$sequence of non-empty clopen intervals $F_{\alpha}$, *i.e.* $F_{\beta}\subset F_{\gamma}$ if ordinal $\gamma<\beta$. This follows by following the branch from which successive nested clopen intervals split. [^29]: \[fn:By-the-Nagana-Smirnov\]By the Nagata-Smirnov metrization theorem $2^{\aleph}$ for $\aleph>\aleph_{0}$ is not metrizable as it is Hausdorff and regular (since any two points can be separated by clopen neighbourhood), but does not have a countable locally finite base (since there are uncountably many clopen intervals and if every member of $2^{\aleph}$ is only a member of finitely many clopen intervals, the family of clopen intervals in the base is uncountable). [^30]: A semi-group is defined like a group but may lack an inverse operation to the group operation. [^31]: In fact $([0,1])[\aleph]$ is also an ultrametric space as $max(d(x,y),d(y,z))\ge d(x,z)$, see Figure 5. [^32]: If $\bigcap_{\beta<\aleph}X_{\beta}\ne\emptyset$ and $y\in\overline{\bigcap_{\beta<\aleph}X_{\beta}}-\bigcap_{\beta<\aleph}X_{\beta}$ then $y\notin X_{\alpha}$ for some $\alpha<\aleph$, and since $X_{\alpha}$ is clopen, $y\notin\overline{X_{\alpha}}$ and hence $y\notin\bigcap_{\beta<\aleph}\overline{X_{\beta}}$. Since $\overline{\bigcap_{\beta<\aleph}X_{\beta}}\subseteq\bigcap_{\beta<\aleph}\overline{X_{\beta}}$ by definition of closure, it follows that $y\notin\overline{\bigcap_{\beta<\aleph}X_{\beta}}$, contradiction. [^33]: In fact the $\aleph-$sequence of initial segments from which nested clopen intervals split defines a branch that is in $D_{\aleph}$. [^34]: The $\aleph+1$-sequence of sets $\langle X_{\alpha<\aleph+1}\rangle$ must be eventually constant $\subset X$ in $<\aleph+1$ steps; otherwise the preservation of $\aleph+1$ members of $X$ will result in X. [^35]: The clopen intervals are not subsets of $X$ in general but are subsets of $2^{\aleph}$. [^36]: Strictly the inference from the information limitation principle to GCH is probabilistic (true almost always) in cardinality terms rather than logically necessary. [^37]: Existential elimination: for example, assume $(\exists x)(x\in X)$ and $(\forall x)(x\in X\rightarrow c=\aleph+1)$, then if $c\neq\aleph+1$ then by contraposition $(\forall x)(x\notin X)$ and hence $\neg(\exists x)(x\in X)$, contradiction; hence $c=\aleph+1$. [^38]: The condition to delete $<\aleph-$many branches ensures that a tree of cardinality $\aleph$ does not have cardinality $\aleph+1$ [^39]: A topological space $X$ is *totally disconnected* if for every two points $x,y\in X$ such that $x\ne y$ there are disjoint open sets $O_{1}$ and $O_{2}$ such that $x\in O_{1}$, $y\in O_{2}$ and $O_{1}\cup O_{2}=X$.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present a simple experiment that allows advanced undergraduates to learn the principles and applications of spectroscopy. The technique, known as [*acoustic resonance spectroscopy*]{}, is applied to study a vibrating rod. The setup includes electromagnetic-acoustic transducers, an audio amplifier and a vector network analyzer. Typical results of compressional, torsional and bending waves are analyzed and compared with analytical results.' address: - '$^1$ Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México, P. O. Box 48-3, 62251 Cuernavaca, Morelos, México' - '$^2$ Departamento de Ciencias Básicas, Universidad Autónoma Metropolitana-Azcapotzalco, Av. San Pablo 180, Col. Reynosa Tamaulipas, 02200 México D. F., México' - '$^3$ Departamento de Ingeniería Eléctrica, Universidad Autónoma Metropolitana-Iztapalapa, A. P. 55-534, 09340 México D. F., Mexico' author: - 'J A Franco-Villafañe$^1$, E Flores-Olmedo$^2$, G Báez$^2$, O Gandarilla-Carrillo$^3$ and R. A. Méndez-Sánchez$^1$' title: Acoustic resonance spectroscopy for the advanced undergraduate laboratory --- Introduction {#Sec:Intro} ============ Spectroscopy is a broad experimental technique in physics with many applications. Historically, spectroscopy was crucial to the development of Quantum Mechanics, one of the fundamental pillars of physics. Spectroscopy nowadays is a tool used in research laboratories across the world in Physics, Chemistry and Biology, so it deserves to be introduced to students of physics at an advanced stage of their education. For this reason we have designed an experiment, for an advanced undergraduate –third year– physics laboratory, based on acoustic resonance spectroscopy (ARS). This spectroscopy involves scattering of acoustical and mechanical waves; the former refers to longitudinal (pressure) waves while the latter also includes transverse waves, among others. Compared with other spectroscopic techniques, the ARS is a non-destructive technique that requires minimum sample preparation. The sample used here is a uniform rod of circular cross section that can easily be replaced by beams or plates, either uniform or structured for some specific purpose. Another advantage is that the ARS results can be analyzed deeply and compared with theoretical predictions. In this paper acoustic resonance spectroscopy is presented with a very simple system: a vibrating rod. In the next section we present the resonant response theory of torsional waves in a rod. In Section \[Sec:Experimental\] we present the ARS as well as the experimental setup with a description of the transducers used. A comparison between theory and experiment is performed in Sec. \[Sec:Comparison\]. A brief conclusion follows. Theory: Resonances with losses for torsional waves in rods {#Sec:Theory} ========================================================== The vibrating rod at low frequencies is one of the simplest cases for studying elastic systems. In this regime elastic rods can vibrate in three different ways: compressional, torsional and bending [@Morietal; @Graff; @Rossing]; an illustrative animation of these kinds of vibrations can be found in [@MendezWeb]. At low amplitudes, and for rods with circular cross section, it is possible to study separately those different kinds of waves. In what follows we will present the theory for torsional vibrations in rods that satisfy the wave equation.
The compressional waves, in a first approximation, satisfy also the wave equation while bending (also called flexural) waves satisfy a fourth order partial differential equation [@Graff; @Landau1]. The theory developed below can be easily adapted, without major effort, to compressional and bending waves. ![Uniform rod with circular cross-section. In the zoom the angle of twist $\Phi$ is defined. The arrows indicate the torque.[]{data-label="Fig:torsionalwaves"}](F1){width="0.6\columnwidth"} The torsional vibrations in rods with circular uniform cross-section (see figure \[Fig:torsionalwaves\]) satisfy $$\label{Eq:linearwaves} \frac{\partial ^2 \Phi}{\partial z^2}-\frac{1}{v^2} \frac{\partial^2 \Phi}{\partial t^2}=0, \label{wavequation}$$ where $\Phi$ is the angle of twist, $v=\sqrt{G/\rho}$ is the speed of the torsional waves with $G$ the shear modulus and $\rho$ the density of the rod. [**To obtain the results for compressional waves, one has to change $\Phi$ by the longitudinal displacement $u$ in the previous equation and use the speed of compressional waves $\sqrt{E/\rho}$ with $E$ the Young modulus**]{}. Since the rod is free at one of its ends ($z=0$), it satisfies the following boundary condition $$\label{Eq:BoundaryCero} \left. \frac{\partial \Phi}{\partial z}\right|_{z=0}=0.$$ At the other end of the rod ($z=L$), a sinusoidal excitation of intensity $F_0$ and angular frequency $\omega$ is applied, $$\label{Eq:force} \left. \frac{\partial \Phi}{\partial z}\right|_{z=L}= F_0 \exp\left(\rmi\omega t\right),$$ where $F_0$ is the ratio between the applied torque and the torsional rigidity [@Graff]. Using separation of variables $$\Phi(z,t)=\phi(z) \exp\left(\rmi\omega t\right),$$ equation (\[Eq:linearwaves\]) can be written as $$\begin{aligned} \frac{\rmd^2 \phi}{\rmd z^2} + \frac{\omega^2}{v^2} \phi = 0,\end{aligned}$$ with solution $$\phi(z)=a \exp\left(\rmi k z\right)+ b \exp\left(-\rmi k z\right).$$ Here $k=\omega/v$ is the wavenumber and $\phi(z)$ is the time independent angle of twist. The constants $a$ and $b$ can be evaluated as follows. From the boundary condition (\[Eq:BoundaryCero\]) one gets $$\left.\frac{\rmd\phi}{\rmd z}\right|_{z=0}=\rmi k (a-b)=0,$$ i.e., $a=b$. Moreover, using (\[Eq:force\]) one gets $$\label{Eq:solution} \phi(z)=-\frac{F_0}{k \sin (kL)}\cos (k z).$$ From the last equation it is possible to see that the angle of twist $\phi(z)$ goes to infinity when $\sin(k L)=0$, or well when $k L=n \pi$, $n\in\mathbb{Z}$. This yields an infinite number of solutions, $k_n=n \pi/L$, that correspond to the normal mode frequencies $$\label{Eq:ResonantFreq} f_n=\frac{nv}{2L},\quad n=1,2,3,\dots$$ To avoid the indeterminacy in the response (\[Eq:solution\]) it is usual to include some absorption (losses) in a phenomenological way. This can be done by adding an imaginary part to the wavenumber: $k=k_{\mathrm{R}}+\rmi k_{\mathrm{I}}$ where $k_{\mathrm{R}}$ and $k_{\mathrm{I}}$ are the real and imaginary parts of $k$, respectively. The imaginary part of the wavenumber is a parameter that fix the intensity of the absorption. [**In general $k_{\mathrm{I}}$ depends on the frequency in a very complicated way [@Crocker] and cannot be taken as a constant for the complete frequency range.**]{} In consequence the angle of twist becomes complex. 
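For readers who wish to check these expressions numerically before moving on to the response with absorption, the following minimal Python sketch evaluates the normal-mode frequencies of equation (\[Eq:ResonantFreq\]) for both torsional and compressional waves. The rod length corresponds to the one used in the experiment described below, while the density and elastic moduli are nominal handbook values for aluminium rather than measured ones.

```python
# Minimal sketch: normal-mode frequencies f_n = n*v/(2*L) of a free-free rod.
# The material constants are nominal values for aluminium (assumed, not measured).
import numpy as np

L = 1.0         # rod length in m (as in the experimental section)
rho = 2.7e3     # density in kg/m^3 (nominal)
E = 69e9        # Young's modulus in Pa (nominal)
G = 26e9        # shear modulus in Pa (nominal)

v_comp = np.sqrt(E / rho)   # speed of compressional waves
v_tors = np.sqrt(G / rho)   # speed of torsional waves

for n in range(1, 6):
    f_comp = n * v_comp / (2 * L)   # compressional mode frequency in Hz
    f_tors = n * v_tors / (2 * L)   # torsional mode frequency in Hz
    print(f"n={n}: compressional {f_comp:7.0f} Hz, torsional {f_tors:7.0f} Hz")
```

The resulting frequencies, of a few kHz for the lowest modes, set the range that the excitation and detection electronics must cover.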
The response of the rod with absorption is then $$\label{angleoftwist} \phi(z) = -\frac{F_0}{k}\left[ \frac{\cos(k_{\mathrm{R}}z)\cosh(k_{\mathrm{I}}z)-\rmi\sin(k_{\mathrm{R}}z)\sinh(k_{\mathrm{I}}z)} {\sin(k_{\mathrm{R}}L)\cosh(k_{\mathrm{I}}L)+\rmi\cos(k_{\mathrm{R}}L)\sinh(k_{\mathrm{I}}L)} \right].$$ For $z=0$ the intensity of the response is $$\label{ImUz0} | \phi(z=0) |^2=\frac{F_{0}^2}{(k_{\mathrm{R}}^2+k_{\mathrm{I}}^2)[\sin^2(k_{\mathrm{R}}L)+\sinh^2(k_{\mathrm{I}}L)]}.$$ Now we will show that equation (\[ImUz0\]), near the resonances, can be written in a Breit-Wigner (Lorentzian) form. This regime is called the [*isolated resonances regime*]{}. When $k_{\mathrm{R}}L=n\pi+\delta$ with $\delta\ll1$, i.e., close to the resonances, and for small absorption, i.e., $k_{\mathrm{I}}L\ll1$, equation (\[ImUz0\]) reduces to $$\label{Eq:Lorentzian} |\phi(z=0)|^2\approx A_n \frac{\Gamma/2}{\left(f-f_n\right)^2+\left(\Gamma/2\right)^2},$$ where $A_n=vF^2_0/2k_{\mathrm{I}}n^2\pi^3$ and $\Gamma=k_{\mathrm{I}}v/\pi$. Here, $f_{n}$ and $\Gamma$ are the center and width of the resonance, respectively. A plot of $|\phi(z=0)|^2$ as a function of $k_{\mathrm{R}}L$ is given in figure \[Fig.ImaginaryPart\](a) for two different values of the absorption intensity $k_{\mathrm{I}}$. As can be seen in this figure, the intensity of the response shows peaks at $k_{\mathrm{R}}L=n\pi$, with $n=1,2,3,...$, which correspond to the normal modes. These peaks are usually called resonances. As can be seen in the figure, the height of the peaks decreases quadratically as a function of $k_{\mathrm{R}}L$. Additionally, one can appreciate in the same figure that the width of the resonant peaks increases with $k_{\mathrm{I}}$. [**This is only valid when the absorption parameter $k_{\mathrm{I}}$ is constant and if $k_{\mathrm{I}}L \ll 1$; these assumptions are only true locally around the resonant frequencies. Thus, each resonant mode has a different $k_{\mathrm{I}}$.**]{} [**As we will see in the next section, the detector measures the acceleration of the metal surface. To obtain it, the expression (\[angleoftwist\]) for the time independent angle of twist $\phi$ should be multiplied by $\omega^2$. Notice that the Breit-Wigner result for the resonances (\[Eq:Lorentzian\]) is still valid since it is assumed that the resonances are narrow ($\delta\ll1$) and thus $f\approx f_n$.**]{} In order to evaluate the phase of the resonance we write the angle of twist in polar form as $\phi(z=0)=|\phi(z=0)|\exp(\rmi\theta)$ where $$\label{Eq:phase} \tan\theta=-\frac{k_{\mathrm{R}}\cos(k_{\mathrm{R}}L)\sinh(k_{\mathrm{I}}L)+k_{\mathrm{I}}\sin(k_{\mathrm{R}}L)\cosh(k_{\mathrm{I}}L)}{k_{\mathrm{R}}\sin(k_{\mathrm{R}}L)\cosh(k_{\mathrm{I}}L)-k_{\mathrm{I}}\cos(k_{\mathrm{R}}L)\sinh(k_{\mathrm{I}}L)}.$$ A plot of the phase $\theta$ as a function of $k_{\mathrm{R}}L$ is given in figure \[Fig.ImaginaryPart\](b) for two different values of $k_{\mathrm{I}}$. Again, for isolated resonances, it is possible to obtain a simple expression for the phase: $$\label{Eq:phaseLorentzian} \theta\approx\arctan\left(\frac{\Gamma/2}{f-f_{n}}\right).$$ From this equation one can see that the phase has a change of $\pi$ for each resonance. ![(Color online) a) Intensity $|\phi(z=0)|^2/(F_0 L)^2$ of the angle of twist, equation (\[ImUz0\]), and b) phase $\theta$ of the angle of twist, equation (\[Eq:phase\]), as a function of $k_{\mathrm{R}}L$. The continuous (black) lines correspond to $k_{I}L=0.1$ and the dashed (red) lines correspond to $k_{I}L=0.4$.
[]{data-label="Fig.ImaginaryPart"}](F2){width="0.95\columnwidth"} Experimental measurement of resonances in rods {#Sec:Experimental} ============================================== To measure the resonances of an aluminum rod we propose to use the experimental setup depicted in figure \[Fig.ExperimentalSetup\]. As we will see below, to excite and detect the vibrations, two electromagnetic-acoustic transducers (EMATs) are used [@Morietal; @Rossing; @Simpson]. The key equipment of the experimental setup of figure \[Fig.ExperimentalSetup\] is the vector network analyzer (VNA). A sinusoidal signal is generated by the VNA (ANRITSU model MS4630B) and sent to a high-fidelity Cerwin-Vega! audio amplifier model CV900. The amplified signal is sent to the EMAT exciter located very close to one end ($z=L$) of the rod. The exciter generates a sinusoidal torque on the aluminum rod, see equation (\[Eq:force\]), which produces torsional waves of frequency $f$. A second EMAT measures the response at the free end of the rod ($z=0$) and its signal is directly sent to the VNA. Although the range of frequency of the VNA can be swept from $10$ Hz to $300$ MHz, we work in a shorter range since the audio amplifier works only from $5$ Hz to $60$ kHz. [**The rod is supported by two nylon threads which are located at the nodes for the lower modes. The effect of this support is very small and decreases considerably for higher modes. To measure an spectrum, without missing resonances, it is important that both transducers are located close to the ends of the rod since the free boundary conditions guarantee a maximum amplitude of the vibration there. Notice that moving the detector along the rod will give a measurement of the normal mode wave amplitude.** ]{} The EMATs can be built easily with coils and permanent magnets; changing the orientation of the coil and the magnet, with respect to the rod axis, different kinds of waves can be selectively excited or detected [@Morietal]. The EMATs are also invertible, i.e., they can be used as exciters or detectors. The configuration of the EMATs to excite (detect) the different kind of waves in rods is shown in figure \[Fig.EMATSconfig\]. [**Heuristically the EMAT, as an exciter, operates as follows**]{}: a variable current $I(t)$ of frequency $f$ in the coil generates a magnetic field oscillating with the same frequency. When a metal surface is near to the coil, due to Faraday’s induction law, eddy currents are produced on the metal. These currents interact with the EMAT’s permanent magnet through the Lorentz force. In this way the surface is attracted and repelled at frequency $f$ without mechanical contact. [**Heuristically**]{} the EMAT, as a detector, works as follows: when a vibrating metallic surface is close to the EMAT’s permanent magnet, the change of the flux of the magnetic field produces eddy currents on the metal [**proportional to the speed of the metal**]{}. These currents generate a magnetic field which induce a emf, [**proportional to the derivative of these currents**]{}, on the EMAT’s coil detector. [**Thus the detector measures the acceleration of the metal surface. Notice that for each normal mode this is proportional to the amplitude of vibration since the frequency is almost constant.**]{} [**In our experiment the EMAT exciter, on the one hand, has a cylindrical neodymium magnet of diameter 12 mm, height 12 mm and 12000 G of residual induction and has a coil with 100 turns and diameter of 40 mm and height of 28 mm. The magnet (enameled) wire was the No. 
14 AWG. The EMAT detector, on the other hand, also has a cylindrical neodymium magnet of diameter 4 mm, height 4 mm and 12000 G of residual induction. It has a coil with 400 turns, of diameter 10 mm and height 10 mm. The magnet wire in this case was the No. 32 AWG. With these characteristics of the EMATs, the power pumped into the EMAT exciter was of the order of 65 W while the typical signal measured with the EMAT detector was of the order of 200 mV.** ]{} [**The exciter, as well as the detector, has a finite size. Therefore, the interaction with the rod is not only at one point, as in the theoretical model of section 2, but in a finite region. This only affects the results when the wavelength is of the order of the size of the EMATs; this corresponds to modes with $n\gtrsim 100$; this yields a frequency that exceeds the maximum operating frequency of the amplifier.**]{} ![Experimental setup used to measure the resonant response of the aluminum rod. The EMAT (lower left corner) is configured to measure torsional waves. [**The rod is supported by two nylon threads.**]{}[]{data-label="Fig.ExperimentalSetup"}](F3){width="0.6\columnwidth"} ![(Color online) EMATs exciter/detector configurations to measure the different kinds of waves in rods: (a) compressional, (b) bending and (c) torsional.[]{data-label="Fig.EMATSconfig"}](F4) The measurements made with the VNA can be transferred to the computer using the floppy unit or through a direct connection using the GPIB or RS-232 ports. The VNA allows one to measure both the intensity and the phase of the response. ![Measured spectrum using the setup of figure \[Fig.ExperimentalSetup\] and the EMAT configurations of figure \[Fig.EMATSconfig\]. The aluminum rod has a length $L=1$ m and a circular cross section with diameter $D=1.27$ cm. Here (a), (b) and (c) correspond to compressional, bending and torsional spectra, respectively.[]{data-label="Fig.WideSpectrum"}](F5){width="0.7\columnwidth"} ![(Color online) Measured resonances for the same rod of figure \[Fig.WideSpectrum\]. The upper panels give [**the square of the acceleration**]{} and the lower panels the phase $\theta$ of the angle of twist. The solid (red) lines correspond to Lorentzian fits (see equation \[Eq:Lorentzian\]) with parameters: (a) compressional $\Gamma=0.39$ Hz and $f_{r}=2514.6$ Hz; (b) bending $\Gamma=0.49$ Hz and $f_{r}=512.2$ Hz; and (c) torsional $\Gamma=0.09$ Hz and $f_{r}=3134.6$ Hz. Here $f_{r}$ and $\Gamma$ are the center and width of the Lorentzian fit, respectively.[]{data-label="Fig.ResonantPeaks"}](F6){width="0.7\columnwidth"} Comparison Between Theory and Experiment {#Sec:Comparison} ======================================== ![(Color online) Resonant frequencies measured with the VNA and the EMATs for the same rod of figure \[Fig.ResonantPeaks\] as a function of the number of modes. The results for compressional waves are given by the squares (black), for torsional waves by the circles (blue) and for bending waves by the triangles (red). The solid and dashed lines correspond to a least-squares fit for the compressional and torsional results, respectively.[]{data-label="Fig.ResonantFrequencies"}](F7){width="0.6\columnwidth"} In figure \[Fig.WideSpectrum\] we show the measured spectra, with the EMATs in the different configurations of figure \[Fig.EMATSconfig\]. As can be seen, the different kinds of vibrations are selected by the EMATs since different peaks appear for different configurations.
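Before discussing the spectra in detail, we note that the Lorentzian parameters quoted in figure \[Fig.ResonantPeaks\] can be reproduced with standard least-squares tools. The sketch below is only illustrative: it fits equation (\[Eq:Lorentzian\]) to a synthetic torsional peak generated with the values of figure \[Fig.ResonantPeaks\]; with real data, the frequency and intensity arrays would be those exported from the VNA.

```python
# Minimal sketch: fit the Breit-Wigner (Lorentzian) line shape of Sec. 2 to a
# resonance peak.  A synthetic torsional peak is generated with the values
# quoted in the figure (f_r = 3134.6 Hz, Gamma = 0.09 Hz); with real data,
# f and y would be the arrays exported from the VNA.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, A, f_r, Gamma):
    """Breit-Wigner line shape with centre f_r and width Gamma."""
    return A * (Gamma / 2) / ((f - f_r) ** 2 + (Gamma / 2) ** 2)

rng = np.random.default_rng(0)
f = np.linspace(3134.0, 3135.2, 400)               # frequency sweep in Hz
y = lorentzian(f, 1.0, 3134.6, 0.09)               # "measured" intensity
y += 0.02 * y.max() * rng.standard_normal(f.size)  # add some noise

Gamma0 = 0.1                                        # rough initial guesses
p0 = [y.max() * Gamma0 / 2, f[np.argmax(y)], Gamma0]
(A_fit, fr_fit, G_fit), _ = curve_fit(lorentzian, f, y, p0=p0)
print(f"f_r = {fr_fit:.2f} Hz, Gamma = {abs(G_fit):.3f} Hz")

# The wave speed then follows from a linear fit of the resonance centres
# against the mode number n, since f_n = n*v/(2*L) implies v = 2*L*(slope).
```

With widths as small as those found here, a sufficiently fine frequency step around each resonance is needed for the fit to be stable.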
Only a very small compressional peak, around $5$ kHz, appears in the bending spectrum. The appearance of this peak is not due to an incorrect alignment of the coil or magnet, but to a real physical effect called the lateral inertia correction [@Morietal; @Graff]. Three measured peaks, corresponding to compressional, torsional and bending resonances, are given in detail in figure \[Fig.ResonantPeaks\]. Apart from [**the square of the acceleration (proportional to the intensity $|\phi(z=0)|^2$)**]{}, the corresponding phase $\theta(z=0)$ is given in the same figure. Lorentzian fits, see (\[Eq:Lorentzian\]), for the resonant peaks are also shown. The absorption parameter $k_{\mathrm{I}}$ can be obtained from the width of the fitted resonances. As can be seen in figure \[Fig.ResonantPeaks\], the compressional and bending resonances are wider than the torsional ones. This is expected since the coupling of the torsional wave with the air is very small due to the circular symmetry of the cross-sectional area of the rod and the fact that shear waves cannot travel in air. As expected, the phase of the response shows a change of $\pi$ for each resonance. The velocity $v$ of the compressional and torsional waves can be obtained by measuring several resonances since $f_{n}$ is linear in $n$, and the slope is $v/2L$. In figure \[Fig.ResonantFrequencies\] the centers of the resonances as a function of $n$ for torsional, compressional and bending waves are plotted. As can be seen in this figure, the bending waves are not linear with $n$; this means that they are dispersive [@Graff; @Landau1]. The slopes of the curves for compressional and torsional waves yield wave speeds of $5\,025\pm5$ m/s and $3\,135\pm1$ m/s, respectively. [**Notice that the uncertainty in the compressional wave velocity is larger than that of the torsional waves; this is due to the Rayleigh or lateral inertia effect (see ref. [@Graff]).**]{} [**Also, from these velocities, some physical parameters of the aluminium rod can be calculated. Using a value of $\rho=2722\pm21$ kg/m$^3$ we obtained the Young’s modulus $E=68.6\pm0.3$ GPa from the compressional velocity and the shear modulus $G=26.7\pm0.2$ GPa from the torsional velocity. Those values agree with those found in the literature [@Crandall].**]{} Conclusions {#Sec:Conclusions} =========== We have introduced an experiment, for the advanced physics laboratory, which permits undergraduates to learn the basic principles of spectroscopy. It also allows the students to compare experimental results with theoretical predictions since very simple systems, such as a vibrating rod, can be studied in both ways. The technique, called acoustic resonance spectroscopy, allows the students to measure the resonance curve and the phase of the response. The possibilities of this setup in undergraduate laboratory courses are several since the rod can be substituted by arbitrary and more complicated elastic systems. Apart from being an experiment in which non-destructive testing can be performed, the apparatus can be used with great success to show quantitatively to the students several interesting physical phenomena, such as the emergence of bands in periodic systems [@JASA1] and the wave amplitudes in plates with regular or irregular shapes [@JFV], among many others. [**In fact, the effect of the absorption can also be studied by covering part of the rod with an absorbing foam; the width was found to increase by two orders of magnitude.
We should mention that the experimental setup has already been used successfully in three advanced physics laboratory courses.**]{} We thank A. Morales and E. Basurto for their help in the experimental measurements. We also thank X. A. Méndez-Báez, A. Salas-Brito and A. Arreola for useful comments. This work was supported by DGAPA-UNAM project IN11131 and by CONACYT project 79613. References {#references .unnumbered} ========== [10]{} Morales A, Gutiérrez L and Flores J 2001 [*Am. J. Phys.*]{} [**69**]{} 517. Graff K F 1975 [*Wave Motion in Elastic Solids*]{} (NY: Dover) p 142. Rossing T D and Russell D A 1990 [*Am. J. Phys.*]{} [**58**]{} 1153. An illustrative animation of vibrations in rods is given in <http://www.fis.unam.mx/~mendez/animations.html> Landau L D and Lifshitz E M 1976 [*Theory of Elasticity*]{} (Oxford: Pergamon) p 115. Crocker M J 1998 [*Handbook of Acoustics*]{} (NY: John Wiley & Sons) p 681. Simpson H M and Wolfe P J 1975 [*Am. J. Phys.*]{} [**43**]{} 506. Crandall S H, Dahl N C and Lardner T J 1978 [*An Introduction to the Mechanics of Solids*]{} (Boston: McGraw-Hill). Morales A, Flores J, Gutiérrez L and Méndez-Sánchez R A 2002 [*J. Acoust. Soc. Am.*]{} [**112**]{} 1961-1967. Flores J 2007 [*Eur. Phys. Jour. Special Topics*]{} [**145**]{} 63-75.
{ "pile_set_name": "ArXiv" }
--- abstract: 'Machine learning techniques, specifically the $k$-nearest neighbour algorithm applied to optical band colours, have had some success in predicting photometric redshifts of quasi-stellar objects (QSOs): Although the mean of differences between the spectroscopic and photometric redshifts, $\Delta z$, is close to zero, the distribution of these differences remains wide and distinctly non-Gaussian. As per our previous empirical estimate of photometric redshifts, we find that the predictions can be significantly improved by adding colours from other wavebands, namely the near-infrared and ultraviolet. Self-testing this, by using half of the 33643 strong QSO sample to train the algorithm, results in a significantly narrower spread in $\Delta z$ for the remaining half of the sample. Using the whole QSO sample to train the algorithm, the same set of magnitudes return a similar spread in $\Delta z$ for a sample of radio sources (quasars). Although the matching coincidence is relatively low (739 of the 3663 sources having photometry in the relevant bands), this is still significantly larger than from the empirical method (2%) and thus may provide a method with which to obtain redshifts for the vast number of continuum radio sources expected to be detected with the next generation of large radio telescopes.' author: - | S. J. Curran[^1]\ School of Chemical and Physical Sciences, Victoria University of Wellington, PO Box 600, Wellington 6140, New Zealand date: 'Accepted —. Received —; in original form —' title: 'QSO photometric redshifts from SDSS, WISE and GALEX colours' --- \[firstpage\] [techniques: photometric – methods: statistical – galaxies: active – galaxies: photometry – infrared: galaxies – ultraviolet: galaxies]{} Introduction {#intro} ============ There is currently much interest in developing reliable photometry-based redshifts for distant active galactic nuclei (@lnp18 and references therein). Much of this is driven by the large number of sources expected to be detected through continuum surveys with the next generation of telescopes, such as the [*Australian Square Kilometre Array Pathfinder*]{} (ASKAP, @jtb+08), the [*LOw-Frequency ARray*]{} (LOFAR, @vwg+13) and the [*extended Röntgen Survey with an Imaging Telescope Array*]{} (eROSITA, @sal14).[^2] For example, the [*Evolutionary Map of the Universe*]{} (EMU, @nha+11) on the ASKAP is expected to yield 70 million radio sources. Being able to add a third coordinate, even statistically, would significantly increase the scientific value of these surveys. Obtaining the spectroscopic redshift ($z_{\rm spec}$) for each source would be impractical thus the need for “quick and easy” photometric redshifts ($z_{\rm phot}$). Furthermore, extinction by intervening dust can make optical spectroscopy difficult [@cwm+06], whereas high redshift sources, sufficiently luminous to yield a reliable spectroscopic redshift, ionise all of the neutral gas within the host galaxy [@cw12; @chj+19], biasing against the detection of gas-rich objects [@msc+15]. Ideally, the redshifts for radio sources would be obtained from the radio photometric properties, although this has proven to be elusive [@maj15; @nsl+19], due to their relatively featureless spectral energy distributions (SEDs). 
There has, however, been considerable success applying machine learning methods of optical-band magnitudes, specifically with the $k$-nearest neighbour (kNN) algorithm which compares the Euclidean distance between a datum and its $k$ nearest neighbours in a feature space [@bbm+08], typically the $u- g$, $g - r$, $r - i$ and $ i - z$ colours of the [*Sloan Digital Sky Survey*]{} (SDSS).[^3] However, as noted by @cm19, this method fails at $z\gapp2$, giving a non-Gaussian (fat-tailed) distribution of $\Delta z \equiv z_{\rm spec} - z_{\rm phot}$ [@rws+01; @wrs+04; @mhp+12; @hdzz16]. Finding an empirical relationship between the ratio of two colours and the redshift, @cm19 obtain a near-Gaussian distribution of $\Delta z$, via a [*redshift dependent*]{} colour ratio, the approximate redshift first being estimated from a near-infrared magnitude (see also @wwf+16 [@gas+18] and references therein). Thus, different combinations of observed-frame colours are required in order to yield a useful photometric redshift. We therefore suspect that the breakdown in the kNN method is due to the exclusive use of optical (SDSS) photometry and find that the redshift predictions can be significantly improved with the addition of photometry from other bands, which we address in this letter. Analysis and results ==================== The sample {#sec:samp} ---------- From the SDSS Data Release 12 (DR12, @aaa+15), we extracted the first 33643 QSOs with accurate spectroscopic redshifts ($\delta z/z<0.01$), which span the magnitude range $r = 14.667 - 22.618$. We then used the source coordinates to obtain the nearest source within a 6 arc-second search radius in the [*NASA/IPAC Extragalactic Database*]{} (NED), which usually resulted in a single match. As well as obtaining the specific flux densities, we used the NED names to query the [ *Wide-Field Infrared Survey Explorer*]{} (WISE, @wem+10) and the [*Two Micron All Sky Survey*]{} (2MASS, @scs+06) databases. For each of the bands[^4], the photometric points which fell within $\Delta\log_{10}\nu=\pm0.05$ of the central frequency of the band were averaged, with this then being converted to a magnitude. QSO colours and photometric redshifts ------------------------------------- ### SDSS colours We start by using the standard $u- g$, $g - r$, $r - i$ and $ i - z$ colours [@rws+01; @bbm+08], including also the $r$ magnitude as a feature in the algorithm [@hdzz16]. Of the sample of 33643, 33166 have all five SDSS magnitudes, giving a 98% matching coincidence. We trained the model on half of the sample, finding that $k\approx10$ nearest neighbours minimised the standard deviation, $\sigma$, from the $z_{\rm phot} = z_{\rm spec}$ line (Fig. \[big\], top left). In the top panel of the figure, we see a similar distribution to those obtained by @rws+01 [@wrs+04; @hdzz16], with two dense groups of outliers within $z_{\rm spec}\lapp2$ and $z_{\rm phot}\lapp2$. The data also exhibit the deviation from $z_{\rm phot} = z_{\rm spec}$ at $z_{\rm spec}\gapp2$, contributing to the wide wings in the $\Delta z$ distribution (@rws+01 [@wrs+04; @mhp+12; @prm+15; @rmp+15; @hdzz16; @cm19]). ### WISE colours {#sec:wise} As stated in Sect. \[intro\], given the dependence of the observed-frame bands on redshift, a single set of colours would not be expected to yield accurate photometric redshifts over a wide range. 
Specifically, @cm19 find that it is the [*rest-frame*]{} $(U-K)/(W2-FUV)$ colour ratio which correlates strongly with redshift, corresponding to the [ *observed-frame*]{} ratios $(I-W2)/(W3-U)$ at $1\leq z \leq3$ and $(I-W2.5)/(W4-R)$ at $z>3$.[^5] They postulate that this dependence results from a decrease in the rest-frame $U-K$ colour coupled with an increase in the $W2 - FUV$ colour as the luminosity increases, which gives a proxy for the redshift via the Malmquist bias. It is therefore apparent that in order to accurately determine a large range of photometric redshifts, other bands beyond the optical must be invoked. The method of @cm19 requires nine different magnitude measures from a number of disparate surveys, resulting in photometric redshifts being obtained for $<34$% of the sources.[^6] The WISE database provides two near-infrared (NIR, $W1$ & $W2$) and two mid-infrared (MIR, $W3$ & $W4$) magnitudes from a single survey and overlaps well with the sources in the SDSS database (e.g. @lhs16 [@slj+16; @wwf+16]). Adding the $z-W1$, $W1-W2$, $W2-W3$ and $W3-W4$ colours to the algorithm, we obtain considerably better photometric redshifts, with greatly reduced wings in the $\Delta z$ distribution (Fig. \[big\], top middle). Using sources for which all four WISE magnitudes are available yields a 72% matching coincidence, with the missing sources due to the objects being undetected in the $W3$ and $W4$ bands. The matching coincidence can be increased to 95% by using the $W1$ and $W2$ magnitudes only[^7], which returns a similar result to using all of the WISE magnitudes (Fig. \[big\], top right). This could be due to the $W3$ and $W4$ magnitudes probing a relatively featureless region of the SED (see Sect. \[disc\])[^8], which does not contribute significantly to the model. ### GALEX colours In addition to the infrared, the ultraviolet (UV) colours may also be used to improve upon the optical data alone: @bbm+08 combined the SDSS colours with the near-ultraviolet ($NUV$, $\lambda = 227$ nm) and far-ultraviolet ($FUV$, $\lambda = 153$ nm) bands of the [*Galaxy Evolution Explorer*]{} [@mfs+05] to obtain a standard deviation of $\sigma = 0.34$ from 11149 QSOs. The GALEX database was also queried as part of the photometry search (Sect. \[sec:samp\]) and, adding $FUV-NUV$ and $NUV-u$ to the SDSS colours in the algorithm (Fig. \[big\], bottom left), we obtain a similar standard deviation as @bbm+08. This is a significant improvement over the SDSS colours alone and a slight improvement over the SDSS+WISE colours. However, due to the Lyman break (see Sect. \[disc\]), the signal in the GALEX data drops significantly at high redshift, particularly at $z\gapp3$ (see Fig. \[SED\]), which will limit the GALEX photometry’s usefulness at high redshift. Adding the WISE to the GALEX colours (Fig. \[big\], bottom middle), we see further improvement although at the cost of the matching coincidence falling to 59%. As before, using the $W1$ and $W2$ magnitudes only gives a similar result while retaining a 78% coincidence (Fig. \[big\], bottom right). Discussion {#disc} ========== By adding the WISE to the SDSS colours in the kNN algorithm, we obtain significantly better photometric redshifts than from using the SDSS alone (cf. @rws+01 [@wrs+04; @mhp+12; @hdzz16]). Furthermore, this is without the need to filter out outliers (red sources, e.g. @rws+01), nor the visual inspection of images prior to their inclusion in the algorithm (e.g. @mhp+12). 
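As a concrete illustration of the procedure described above, the following sketch shows how the kNN estimate can be set up with scikit-learn for the SDSS + $W1$ & $W2$ combination. The column names are hypothetical placeholders for a catalogue of matched photometry and spectroscopic redshifts, and the sketch is a minimal outline rather than the exact pipeline used here.

```python
# Minimal sketch of the kNN photometric redshift estimate for the
# SDSS + W1 & W2 combination.  The catalogue columns (u, g, r, i, z, W1, W2,
# z_spec) are hypothetical placeholders for a table of matched photometry.
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor

cat = pd.read_csv("qso_photometry.csv")            # one row per QSO

features = np.column_stack([
    cat["u"] - cat["g"], cat["g"] - cat["r"],      # SDSS colours
    cat["r"] - cat["i"], cat["i"] - cat["z"],
    cat["r"],                                      # r magnitude as an extra feature
    cat["z"] - cat["W1"], cat["W1"] - cat["W2"],   # optical-NIR and NIR colours
])
z_spec = cat["z_spec"].to_numpy()

half = len(cat) // 2                               # train on one half of the sample
knn = KNeighborsRegressor(n_neighbors=10)          # k ~ 10 nearest neighbours
knn.fit(features[:half], z_spec[:half])
z_phot = knn.predict(features[half:])

dz = z_spec[half:] - z_phot
print("mean(dz) = %.3f, sigma(dz) = %.2f" % (dz.mean(), dz.std()))
for t in (0.1, 0.2, 0.5):
    print("|dz| < %.1f: %.1f%%" % (t, 100 * np.mean(np.abs(dz) < t)))
```

The fractions printed in the last loop correspond to the quantities listed in the tables below.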
Even just the addition of the two NIR ($W1$ & $W2$) magnitudes leads to significant improvement, which can be further enhanced with the addition of the GALEX magnitudes (Table \[tab:SDSS\]). ---------------------------- ------- ---------- ---------- ---------- -- Algorithm $n$ $\pm0.1$ $\pm0.2$ $\pm0.5$ SDSS 16583 37.1 56.4 77.4 SDSS + WISE 12029 43.9 69.4 90.5 SDSS + $W1$ & $W2$ 16035 47.7 71.3 90.2 SDSS + GALEX 13598 48.5 72.1 91.0 SDSS + GALEX 13598 48.5 72.1 91.0 SDSS + $W1$ & $W2$ + GALEX 13180 50.5 75.1 94.1 ---------------------------- ------- ---------- ---------- ---------- -- : Summary of the algorithm performances for the SDSS sample. $n$ gives the matching coincidence of sources (out of 16850) and is followed by the percentage of photometric redshifts which lie within $\Delta z$ of the spectroscopic value. \[tab:SDSS\] In Fig. \[SED\] we show the mean spectral energy distributions of the sample at various redshifts, where we see the inflection at $\lambda\approx1$ $\mu$m, as the NIR emission from heated dust transitions to the optical emission from the accretion disk (@rob96 and references therein). The excess at $\lambda\gapp1$ $\mu$m from the warm dust is also apparent (e.g. @bfl+19), although only the shape of the $JHK$ profile shows any redshift dependence. An increase in the peak frequency, which would counter the redshift of the profile, would be expected as the luminosity (redshift) increases due to the higher peak temperature of a modified blackbody (@cd19 and references therein). The addition of the NIR bands permits the inclusion of these features, explaining why the $K$ ($\lambda=2.2$ $\mu$m) and $W2$ bands are crucial to the empirical model [@cm19]. Another feature is the Lyman break, where UV photons are absorbed by intervening hydrogen. This is also apparent in Fig. \[SED\], particularly for the mean $z=4$ SED, where the first drop in flux at $\lambda_{\rm rest}=0.1216$ $\mu$m ($\lambda_{\rm obs}=0.61$ $\mu$m) is due to Lyman- absorption and the second drop due to ionisation by $\lambda_{\rm rest}=0.0912$ $\mu$m ($\lambda_{\rm obs}=0.46$ $\mu$m) photons. The addition of the GALEX bands permits the inclusion of this feature at low redshift, although, the steep drop in flux results in a loss of signal at high redshift. Thus, the incorporation of upper limits to the UV fluxes into the algorithm would be required in order to fully utilise this feature. As stated in Sect. \[intro\], a photometric redshift estimate based only upon the source magnitudes will prove invaluable to large surveys of redshifted continuum sources. Of particular interest are radio sources, in the era of the [*Square Kilometre Array*]{} and its pathfinders. Given that, in practice, the detections from blind radio surveys will then be checked/re-observed for photometry in other bands, we wish to test the algorithm on an independent radio selected sample which has been followed up for spectroscopic redshifts, rather than an optical sample (SDSS), which has been cross-matched with radio sources. We therefore use the [*Optical Characteristics of Astrometric Radio Sources*]{} (OCARS) catalogue of [*Very Long Baseline Interferometry*]{} astrometry sources, a sample of flat spectrum radio sources (quasars) observed over five radio bands (spanning 2–27 GHz)[^9], with $S$-band flux densities ranging from 15 mJy to 4.0 Jy [@mab+09]. 
Of these, 3663 have spectroscopic redshifts [@mal18], with 36% having the full SDSS photometry (Table \[tab:OCARS\]).[^10] ---------------------------- ------ ---------- ---------- ---------- -- Algorithm $n$ $\pm0.1$ $\pm0.2$ $\pm0.5$ SDSS 1320 27.1 44.4 66.9 SDSS + WISE 1007 37.3 56.2 80.6 SDSS + $W1$ & $W2$ 1187 38.8 59.9 79.3 SDSS + GALEX 810 41.7 61.6 83.3 SDSS + WISE + GALEX 676 44.4 63.9 86.8 SDSS + $W1$ & $W2$ + GALEX 739 47.9 68.9 89.3 ---------------------------- ------ ---------- ---------- ---------- -- : As Table \[tab:SDSS\], but for the OCARS sample (out of 3663 sources). \[tab:OCARS\] Since our aim is to predict the photometric redshifts of a radio selected sample, with no a priori knowledge of the spectroscopic redshifts, we train the algorithm on the full SDSS sample. The model is then applied to the OCARS sources with the relevant photometry, as well as spectroscopic redshifts (in order to test the results). In Fig. \[OCARS\_big\], we see different distributions than from the SDSS photometric redshifts (Fig. \[big\]), which is confirmed by the spread in magnitudes (Fig. \[r-z\], top), possibly due to the wider range of redshifts of the OCARS sample (Fig. \[r-z\], bottom). Nevertheless, training the algorithm on the SDSS sample gives photometric redshifts which are nearly as accurate as for the SDSS sources themselves (Fig. \[big\]) when all three surveys are used (Fig. \[OCARS\_big\], right), with the percentages within $\Delta z$ just being slightly lower (Table \[tab:OCARS\]). Conclusions =========== A rapid automated method of obtaining source redshifts would vastly increase the scientific value of large surveys of radio continuum sources. One method which has shown much promise is the $k$-nearest neighbour algorithm which utilises SDSS colours. For a sample of 33643 QSOs from the SDSS, 98% have all five magnitudes. Using these sources to train half of the data, we obtain a mean of $\mu_{\Delta z}\approx0$ for $z_{\rm phot} - z_{\rm spec}$ for the remaining half. However, in common with other studies which apply the kNN algorithm to the SDSS colours [@rws+01; @wrs+04; @mhp+12; @hdzz16], the $\Delta z$ distribution is wide-tailed with $\sigma_{\Delta z}=0.52$. Including the WISE colours in the algorithm significantly narrows the $\Delta z$ distribution ($\sigma_{\Delta z}=0.38$), which is further improved with the addition of the GALEX colours ($\sigma_{\Delta z}=0.31$). Without the need for visual inspection nor the filtering out of outliers, the spread is comparable to other studies (using similar and differing methods) which include NIR photometry [@bbm+08; @bmh+12; @bcd+13; @ywf+17; @dbw+18; @sih18]. Although inclusion of both the WISE and GALEX photometry significantly improves the photometric redshifts, we find that exclusion of the WISE $W3$ & $W4$ magnitudes has no effect, due to the FIR range of the SED being relatively featureless. The $W1$ & $W2$ magnitudes are, however, crucial since there is an excess in the NIR due to emission dust heated by the AGN. Furthermore, the UV segment of the SED exhibits the Lyman break, which becomes apparent in the optical band at redshifts of $z\gapp3$. Thus, the inclusion of the GALEX photometry has the potential to distinguish high redshift sources, although the steep drop in signal limits its usefulness. For photometric redshifts obtained empirically from the observed source colours, @cm19 found that different combinations of observed-frame magnitudes were required for different redshift regimes. 
This method gave $\sigma_{\Delta z}=0.34$, although the numerous magnitude measurements required resulted in a low matching coincidence. Adding the GALEX ultraviolet magnitudes to the SDSS and WISE colours in the kNN algorithm, we can surpass the empirical method ($\sigma_{\Delta z}=0.31$) while retaining a large matching coincidence. Training the algorithm on the entire SDSS sample, the SDSS + $W1$ & $W2$ + GALEX magnitude combination gives $\mu_{\Delta z}=0.001$ and $\sigma_{\Delta z}=0.36$ when applied a sample of 3663 radio selected sources. This, again, markedly outperforms the algorithm trained on the SDSS colours alone ($\mu_{\Delta z}=-0.12$ and $\sigma_{\Delta z}=0.70$), although the matching coincidence is relatively low (20%) due to the requirement of all three surveys. This, however, is still significantly higher than that obtained from the empirical method (2%, @cm19).[^11] Thus, we confirm that the addition of other wavebands, in particular the two WISE near-infrared ($W1$ & $W2$) and two GALEX ultraviolet bands, can significantly improve the photometric redshifts obtained from the $k$-nearest neighbour algorithm. Acknowledgements {#acknowledgements .unnumbered} ================ I wish to thank the anonymous referee whose feedback helped significantly improve the manuscript. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration and NASA’s Astrophysics Data System Bibliographic Service. Funding for the SDSS has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. GALEX is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. [39]{} natexlab\#1[\#1]{} S. [et al.]{}, 2015, ApJS, 219, 12 T. T. [et al.]{}, 2017, ApJ, 850, 66 N. M., [Brunner]{} R. J., [Myers]{} A. D., [Strand]{} N. E., [Alberts]{} S. L., [Tcheng]{} D., 2008, ApJ, 683, 12 F., [Fabbian]{} G., [Lapi]{} A., [Gonzalez-Nuevo]{} J., [Gilli]{} R., [Baccigalupi]{} C., 2019, ApJ, 871, 136 J. [et al.]{}, 2012, ApJ, 749, 41 M., [Cavuoti]{} S., [D’Abrusco]{} R., [Longo]{} G., [Mercurio]{} A., 2013, ApJ, 772, 140 M. J. I., [Jarrett]{} T. H., [Cluver]{} M. E., 2014, PASA, 31, e049 Curran S. J., Duchesne S. W., 2019, A&A, 627, A93 Curran S. J., [Hunstead]{} R. W., [Johnston]{} H. M., [Whiting]{} M. T., [Sadler]{} E. M., [Allison]{} J. R., [Athreya]{} R., 2019, MNRAS, 484, 1182 Curran S. J., Moss J. P., 2019, A&A, 629, A56 Curran S. J., Whiting M. T., 2012, ApJ, 759, 117 S. J., [Whiting]{} M. T., [Murphy]{} M. T., [Webb]{} J. K., [Longmore]{} S. N., [Pihlstr[ö]{}m]{} Y. 
M., [Athreya]{} R., [Blake]{} C., 2006, MNRAS, 371, 431 K. J. [et al.]{}, 2018, MNRAS, 473, 2655 M., [Allison]{} J. R., [Sadler]{} E. M., [Moss]{} V. A., [Jarrett]{} T. H., 2017, MNRAS, submitted (arXiv:1709.08634) B., [Ding]{} H.-P., [Zhang]{} Y.-X., [Zhao]{} Y.-H., 2016, Research in Astronomy and Astrophysics, 16, 74 S. [et al.]{}, 2008, Experimental Astronomy, 22, 151 D., [Hogg]{} D. W., [Schlegel]{} D. J., 2016, AJ, 151, 36 K. J., [Norris]{} R. P., [Park]{} L. A. F., 2019, PASP, 131, 108003 C. [et al.]{}, 2009, IERS Technical Note, 35, 1 N., [Hewett]{} P. C., [P[é]{}roux]{} C., [Nestor]{} D. B., [Wisotzki]{} L., 2012, MNRAS, 424, 2876 Majic R. A. M., Curran S. J., 2015, [Radio Photometric Redshifts: Estimating radio source redshifts from their spectral energy distributions]{}. Tech. rep., Victoria University of Wellington Z., 2018, ApJS, 239, 20 D. C. [et al.]{}, 2005, ApJ, 619, L1 R., [Sadler]{} E. M., [Curran]{} S., 2015, Advancing Astrophysics with the Square Kilometre Array (AASKA14), 134 R. P. [et al.]{}, 2011, PASA, 28, 215 R. P. [et al.]{}, 2019, PASP, 131, 108004 C. M. [et al.]{}, 2015, ApJ, 811, 95 G. T. [et al.]{}, 2015, ApJS, 219, 39 G. T. [et al.]{}, 2001, AJ, 122, 1151 Robson I., 1996, Active Galactic Nuclei. John Wiley & Sons, Chichester S. [et al.]{}, 2016, ApJS, 227, 2 M., 2014, in IAU Symposium, Vol. 304, Multiwavelength AGN Surveys and Studies, [Mickaelian]{} A. M., [Sanders]{} D. B., eds., pp. 421–421 M., [Ilbert]{} O., [Hoyle]{} B., 2019, Nature Astronomy, 3, 212 M. F. [et al.]{}, 2006, AJ, 131, 1163 M. P. [et al.]{}, 2013, A&A, 556, A2 F. [et al.]{}, 2016, ApJ, 819, 24 M. A. [et al.]{}, 2004, ApJS, 155, 243 E. L. [et al.]{}, 2010, AJ, 140, 1868 Q. [et al.]{}, 2017, AJ, 154 \[lastpage\] [^1]: Stephen.Curran@vuw.ac.nz [^2]: See, for example, @asu+17 for photometric redshift estimates of X-ray selected samples. [^3]: There are numerous other methods used to obtain the photometric redshifts, for example template fitting of the SEDs (e.g. @dbw+18). See @sih18 for an overview. [^4]: $u$ ($\lambda = 354$ nm), $g$ ($478$ nm), $r$ ($623$ nm), $i$ ($763$ nm), $z$ ($913$ nm) and $W1$ ($3.39$ $\mu$m), $W2$ ($4.65$ $\mu$m), $W3$ ($11.2$ $\mu$m), $W4$ ($22.8$ $\mu$m, @bjc14). [^5]: @cm19 refer to the $\lambda=8.0$ $\mu$m magnitude ([*Spitzer Space Telescope*]{}), located between $W2$ and $W3$, as $W2.5$. [^6]: The matching coincidence is 34% for the observed-frame $W3\cap W2 \cap\, I\cap U$ magnitudes alone, which falls to 2% for all of the magnitudes required to cover the whole redshift range. [^7]: The 5% of undetected sources are due to confusion in the NIR around the SDSS source. [^8]: There are silicate features at $\lambda = 10$ and 18 $\mu$m, but these cannot be detected at the coarse spectral resolutions considered here. [^9]: $S$-band (2–4 GHz), $C$-band (4–8 GHz), $X$-band (8–12 GHz), $U$-band (12–18 GHz) and $K$-band (18–27 GHz). [^10]: 48% of OCARS sources have at least one SDSS magnitude match, which is expected as the former covers the whole sky and the latter is restricted to northern declinations. [^11]: Which requires the $W4,W3,W2,K,I,R,U,FUV$ and 8 $\mu$m magnitudes.
{ "pile_set_name": "ArXiv" }
--- abstract: 'A key feature of intelligent behavior is the ability to learn abstract strategies that transfer to unfamiliar problems. Therefore, we present a novel architecture, based on memory-augmented networks, that is inspired by the von Neumann and Harvard architectures of modern computers. This architecture enables the learning of abstract algorithmic solutions via Evolution Strategies in a reinforcement learning setting. Applied to Sokoban, sliding block puzzle and robotic manipulation tasks, we show that the architecture can learn algorithmic solutions with strong generalization and abstraction: scaling to arbitrary task configurations and complexities, and being independent of both the data representation and the task domain.' author: - | Daniel Tanneberg$^*$, Elmar Rueckert$^{\dag*}$, Jan Peters$^{*\ddag}$\ $^*$Intelligent Autonomous Systems, Technische Universität Darmstadt\ $^\dag$Institute for Robotics and Cognitive Systems, Universität zu Lübeck\ $^\ddag$Robot Learning Group, Max-Planck Institute for Intelligent Systems\ bibliography: - 'iclr2020.bib' title: Learning Algorithmic Solutions to Symbolic Planning Tasks with a Neural Computer --- Introduction ============ \[sec:intro\] Transferring solution strategies from one problem to another is a crucial ability for intelligent behavior [@silver2013lifelong]. Current learning systems can learn a multitude of specialized tasks, but extracting the underlying structure of the solution for effective transfer is an open research problem [@taylor2009transfer]. Abstraction is key to enable these transfers [@tenenbaum2011grow] and the concept of *algorithms* in computer science is an ideal example for such transferable abstract strategies. An algorithm is a sequence of instructions, which solves a given problem when executed, independent of the specific instantiation of the problem. For example, consider the task of sorting a set of objects. The algorithmic solution, specified as the sequence of instructions, is able to sort any number of arbitrary classes of objects in any order, e.g., toys by color, waste by type, or numbers by value, by using *the same sequence of instructions*, as long as the features and compare operations defining the order are specified. Learning such structured, abstract strategies enables the transfer to new domains and representations [@tenenbaum2011grow]. Moreover, abstract strategies as algorithms have built-in generalization capabilities to new task configurations and complexities. Here, we present a novel architecture for learning abstract strategies in the form of algorithmic solutions. Based on the Differential Neural Computer [@graves2016hybrid] and inspired by the von Neumann and Harvard architectures of modern computers, the architectures modular structure allows for straightforward transfer by reusing learned modules instead of relearning, prior knowledge can be included, and the behavior of the modules can be examined and interpreted. Moreover, the individual modules of the architecture can be learned with different learning settings and strategies – or be hardcoded if applicable – allowing to split the overall task into easier subproblems, contrary to the end-to-end learning philosophy of most deep learning architectures. 
Building on memory-augmented neural networks [@graves2016hybrid; @neelakantan2016neural; @weston2014memory; @joulin2015inferring], we propose a flexible architecture for learning abstract strategies as algorithmic solutions and show the learning and transferring of such in symbolic planning tasks. ![image](figure_01.pdf){width="99.00000%"} The Problem of Learning Algorithmic Solutions --------------------------------------------- \[sec:problem\_statement\] We investigate the problem of learning algorithmic solutions which are characterized by three requirements: **R1** – generalization to different and unseen task configurations and task complexities, **R2** – independence of the data representation, and **R3** – independence of the task domain. Picking up the sorting algorithm example again, R1 represents the ability to sort lists of arbitrary length and initial order, while R2 and R3 represent the abstract nature of the solution. This abstraction enables the algorithm, for example, to sort a list of binary numbers while being trained only on hexadecimal numbers (R2). Furthermore, the algorithm trained on numbers is able to sort lists of strings (R3). If R1 – R3 are fulfilled, the algorithmic solution does not need to be retrained or adapted to solve unforeseen task instantiations – only the data specific operations need to be adjusted. Research on learning algorithms typically focuses on identifying algorithmic generated patterns or solving *algorithmic problems* [@neelakantan2016neural; @zaremba2014learning; @kaiser2016neural; @kaiser2016can], less on finding *algorithmic solutions* [@joulin2015inferring; @zaremba2016learning] fulfilling the three discussed requirements R1 – R3. While R1 is typically tackled, as it represents the overall goal of generalization in machine learning, the abstraction abilities from R2 and R3 are missing. Additionally, most algorithms require a form of feedback, using computed intermediate results from one computational step in subsequent steps, and a variable number of computational steps to solve a problem instance. Thus, it is necessary to be able to cope with varying numbers of steps and determining when to stop, in contrast to using a fixed number of steps [@neelakantan2016neural; @sukhbaatar2015end], making the learning problem more challenging in addition. A crucial feature for algorithms is the ability to save and retrieve data. Therefore, augmenting neural networks with different forms of external memory, e.g., matrices, stacks, tapes or grids, to increase their expressiveness and to separate computation from memory, especially in long time dependencies setups, is an active research direction [@graves2016hybrid; @weston2014memory; @joulin2015inferring; @zaremba2016learning; @sukhbaatar2015end; @kumar2016ask] with earlier work in the field of grammar learning [@das1992learning; @mozer1993connectionist; @zeng1994discrete]. These memory-augmented networks improve performance on a variety of tasks like reasoning and inference in natural language [@graves2016hybrid; @weston2014memory; @sukhbaatar2015end; @kumar2016ask], learning of simple algorithms and algorithmic patterns [@joulin2015inferring; @zaremba2016learning; @graves2014neural], and navigation tasks [@wayne2018unsupervised]. **The contribution** of this paper is a novel modular architecture building on a memory-augmented neural network (DNC [@graves2016hybrid]) for learning algorithmic solutions in a reinforcement learning setting. 
We show that the learned solutions fulfill all three requirements R1 – R3 for an algorithmic solution and that the architecture can process a variable number of computational steps. A Neural Computer Architecture for Algorithmic Solutions ======================================================== \[sec:architecture\_overview\] In this section, we introduce the novel modular architecture for learning algorithmic solutions, shown in Figure \[fig:nhc\]. The architecture builds on the Differentiable Neural Computer (DNC) [@graves2016hybrid] and its modular design is inspired by modern computer architectures, related to [@neelakantan2016neural; @weston2014memory]. The DNC augments a controller neural network with a differentiable autoassociative external memory to separate computation from memory, as memorization is usually done in the network's weights. The controller network learns to write information to and read information from that memory by emitting an interface vector which is mapped onto different vectors by linear transformations. These vectors control the read and write operations of the memory, called read and write heads. For writing and reading, multiple attention mechanisms are employed, including content lookup, temporal linkage and memory allocation. Due to the design of the interface and the attention mechanisms, the DNC is independent of the memory size and fully differentiable, allowing gradient-based end-to-end learning. **Our architecture.** In order to learn algorithmic solutions, the computations need to be *decoupled* from the specific data and task. To enable such data and task independent computations, we propose multiple alterations and extensions to the DNC, inspired by modern computer architectures. First, information flow is divided into two streams, data and control. This separation allows data-representation-dependent manipulations to be disentangled from data-independent algorithmic instructions. Due to this separation, the *algorithmic modules* need to be extended to include two memories, a data and a computational memory. The data memory stores and retrieves the data stream, whereas the computational memory works on information generated by the control signal flow through the learnable controller and memory transformations. The two memories are coupled, operating on the *same* locations, and these locations are determined by the computational memory, and hence by the control stream. As with the DNC, multiple read and write heads can be used. In our experiments, one read and two write heads are used, with one write head constrained to the previously read location. In contrast to the DNC, but in line with the computer architecture-inspired design and the goal of learning deterministic algorithms, writing and reading use hard attention instead of soft attention. Hard attention means that only one memory location can be written to and read from (unique addresses), instead of a weighted average over all locations as with soft attention. We also employed an additional attention mechanism for reading, called *usage linkage*, similar to the temporal linkage of the DNC, but instead of capturing temporal relations, it captures *usage* relations, i.e., the relation between the written memory location and the previously read location. With both linkages in two directions and the content lookup, the model has five attention mechanisms for reading. 
While the final read memory location is determined by a weighted combination of these attentions (see *attention* in Figure \[fig:task\_example\] in the Appendix), each attention mechanism itself uses hard decisions, returning only one memory location. See Appendix \[app:evaluation\] for the effect of the introduced modifications and extensions. For computing the actual solution, operating only on the control stream is not enough, as the model still needs to manipulate the data. Therefore, we added several modules operating on the data stream, inspired by the architecture of computers. In particular, an Input, Transform~D~, ALU (*arithmetic logic unit*) and Output module were added (more details in Section \[sec:data\_modules\]). These modules manipulate the data, steered by the algorithmic modules. The full architecture is shown in Figure \[fig:nhc\]. As algorithms typically involve recursive or iterative data manipulation, the model receives its own output as input in the next computation step, making the whole architecture an output-input model. With all aforementioned extensions, algorithmic solutions fulfilling R1 – R3 can be learned. The Algorithmic Modules ----------------------- The algorithmic modules consist of the *C*ontroller, the *M*emory and the *T*ransform~*C*~ module and build the core of the model. These modules learn the algorithmic solution operating on the control stream. With $t$ as the current computational step and $c$ as the control stream (see Figure \[fig:nhc\]), the input-output of the modules are $C(c_{i,t}, c_{m,t-1}, c_{f,t-1}, c_{a,t-1}, c_{o,t-1}) \longmapsto c_{c,t}$ , $M(c_{i,t}, c_{c,t}) \longmapsto c_{m,t}, d_{m,t}$ and $T_C(c_{c,t}, c_{m,t}, c_{i,t}) \longmapsto c_{f,t}$. The algorithmic modules are based on the DNC with the alterations and extensions described before. Next we discuss how these algorithmic modules can be learned before looking into the data-dependent modules. ### Learning of the Algorithmic Modules \[sec:lam\] Learning the algorithmic modules, and hence the algorithmic solution, is done in a reinforcement learning setting using Natural Evolution Strategies (NES) [@wierstra2014natural]. NES is a blackbox optimizer that does not require differentiable models, giving more freedom to the model design, e.g., the hard attention mechanisms are not differentiable. NES updates a search distribution of the parameters to be learned by following the natural gradient towards regions of higher fitness using a population of offsprings (altered parameters) for exploration. Let $\theta$ be the parameters to be learned and using an isotropic multivariate Gaussian search distribution with fixed variance $\sigma^2$, the stochastic natural gradient at iteration $t$ is given by $$\begin{aligned} \vspace{-5pt} \nabla_{\theta_{t}} \mathbb{E}_{\epsilon \sim N(0, I)} \left[ u (\theta_t + \sigma\epsilon) \right] \approx \frac{1}{P\sigma} \sum_{i=1}^{P} u(\theta_t^i)\epsilon_i \nonumber \ , \vspace{-5pt}\end{aligned}$$ where $P$ is the population size and $u(\cdot)$ is the rank transformed fitness [@wierstra2014natural]. The parameters are updated by $$\begin{aligned} \vspace{-5pt} \theta_{t+1} = \theta_t + \frac{\alpha}{P\sigma} \sum_{i=1}^{P} u(\theta_t^i)\epsilon_i \nonumber \ , \vspace{-5pt}\end{aligned}$$ with learning rate $\alpha$. 
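A minimal sketch of the NES update above, assuming an isotropic Gaussian search distribution with fixed $\sigma$ and a simple centered-rank transformation of the fitness (the paper uses the rank transformation of Wierstra et al. and, later, a gini-coefficient-based ranking; the fitness function `evaluate` is a placeholder):

```python
import numpy as np

def centered_ranks(fitness):
    """Simple rank transform u(.) mapped to [-0.5, 0.5]; an assumed stand-in
    for the utility/rank transformations used in NES."""
    ranks = np.empty_like(fitness)
    ranks[np.argsort(fitness)] = np.arange(len(fitness))
    return ranks / (len(fitness) - 1) - 0.5

def nes_step(theta, evaluate, alpha=0.01, sigma=0.1, population=20):
    """One NES iteration: perturb theta, rank the fitness of the offsprings,
    and follow the estimated natural-gradient direction."""
    eps = np.random.randn(population, theta.size)            # epsilon_i ~ N(0, I)
    fitness = np.array([evaluate(theta + sigma * e) for e in eps])
    u = centered_ranks(fitness)                              # rank-transformed fitness
    grad = (u[:, None] * eps).sum(axis=0) / (population * sigma)
    return theta + alpha * grad                              # theta_{t+1}

# toy usage: maximize -||theta||^2 as a placeholder for the model's fitness
theta = np.random.randn(5)
for _ in range(100):
    theta = nes_step(theta, lambda p: -np.sum(p ** 2))
```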
Recent research showed that NES and related approaches like Random Search [@mania2018] are powerful alternatives in reinforcement learning that are easier to implement and scale, perform better with sparse rewards and credit assignment over long time scales, and have fewer hyperparameters [@salimans2017evolution]. For robustness and learning efficiency, weight decay for regularization [@krogh1992simple] and automatic restarts of runs stuck in local optima are used as in [@wierstra2014natural]. This restarting can be seen as another level of evolution, where some lineages die out. Another way of dealing with early converged or stuck lineages is to add intrinsic motivation signals like novelty, which help the search to be attracted by another local optimum, as in NSRA-ES [@conti2018improving]. In our experiments, however, we found that within our setting, restarting – or having an additional *survival of the fittest* on the lineages – was more effective; see Appendix \[app:evaluation\] for a comparison. The algorithmic solutions are learned in a curriculum learning setup [@bengio2009curriculum] with sampling from old lessons [@zaremba2014learning] to prevent unlearning and to foster generalization. Furthermore, we created *bad memories*, a learning from mistakes strategy, similar to the idea of AdaBoost [@freund1997decision], which samples previously failed tasks to encourage focusing on the hard tasks. This can also be seen as a form of experience replay [@mnih2015human; @lin1992self], but only using the task configurations, the initial input to the model, not the full generated sequence. Bad memories were developed for training the data-dependent modules to ensure their robustness and $100\%$ accuracy, which is crucial for learning algorithmic solutions. If the individual modules do not have $100\%$ accuracy, no stable algorithmic solution can be learned even if the algorithmic modules are doing the correct computations. For example, if one module has an accuracy of $99\%$, the $1\%$ error prevents learning an algorithmic solution that works *always*. This problem is compounded because the proposed model is an output-input architecture that works over multiple computation steps using its own output as the new input – meaning the overall accuracy drops to $36.6\%$ ($0.99^{100}$) for $100$ computation steps. Therefore, using the bad memories strategy, and thus focusing on the mistakes, helps significantly in achieving robust results when learning the modules, enabling the learning of algorithmic solutions. While the bad memories strategy was crucial to achieve $100\%$ robustness when training the data-dependent modules, the effect on learning the algorithmic solutions was less significant (see Appendix \[app:evaluation\] for an evaluation). Data-dependent Modules ---------------------- \[sec:data\_modules\] The data-dependent modules (Input, ALU, Transform~D~ and Output) are responsible for all operations that involve direct data contact, such as receiving the input data from the *outside* or manipulating a data word with an operation chosen by the algorithmic modules. Thus, these modules need to be learned or designed for a specific data representation and task. However, as all modules only have to perform a certain subtask, these modules are typically easier to train. 
As learning the algorithmic modules via NES does not rely on gradients and due to the information flow split, the data-dependent modules can be instantiated *arbitrarily*, e.g., can have non-differentiable parts, do not need to be neural networks or can be hardcoded. Therefore, prior knowledge can be incorporated by implementing it directly into these modules. The modular design facilitates the transfer of learned modules, e.g., using the same algorithmic solution in a new domain without retraining the algorithmic modules or learning a new algorithm within the same domain without retraining the data modules. Next the general functionality of the modules will be explained. **The Input module** is the interface to the *external world* and responsible for data preprocessing. Therefore, it receives the external input data and the data from the previous computational step. It sends data to the memory and control signals to the subsequent modules with information about the presented data or the state of the algorithm – formally as $I(d_{e,t}, d_{o,t-1}) \longmapsto c_{i,t} , d_{i,t} $ . **The ALU module** performs the basic operations which the architecture can use to modify data. Therefore, it receives the data and a control signal indicating which operation to apply and outputs the modified data and control signals about the operation – $A(c_{f,t}, d_{f,t}) \longmapsto c_{a,t}, d_{a,t}$. As in many applications the basic operations only modify a part of the data and to reduce the complexity of the ALU, a **Transform~D~ module** extracts the *relevant* part from the data beforehand – $T_D(d_{m,t}) \longmapsto d_{f,t}$ – or just transfers the unmodified data if no transformation is required for the task. **The Output module** combines the result of the data manipulation operation from the ALU module and the data before the manipulation. It inserts the local change done by the ALU into the original data word – $O(c_{a,t}, d_{a,t}, d_{m,t}) \longmapsto c_{o,t}, d_{o,t}$. As before with the Transformation module, depending on the task, the Output module can also be designed to just pass on the received data. Experiments =========== \[sec:exp\] We investigate the learning of symbolic planning tasks, where task complexity is measured as the number of computational steps required to solve a task, i.e., the size of the corresponding search tree (see Figure \[fig:search\_trees\]). Learning is done in the Sokoban domain, whereas the generalization and abstraction requirements R1 – R3 are shown by transferring to (1) longer planning tasks, (2) bigger Sokoban worlds, (3) a different data representation, and (4) two different task domains – sliding block puzzle and robotic manipulation. In Sokoban, an agent interacts in a grid world with four actions – moving up, right, down or left. Therefore, the ALU can perform four operations and additionally a `nop` operation that leaves the given configuration unchanged. The world contains empty spaces that can be entered, walls that block movement and boxes that can be pushed onto empty space. A task is given by a start configuration of the world and the desired goal configuration. For learning, we use a world of size $6\times6$ that is enclosed by walls. A world is represented with binary vectors and four-dimensional one-hot encodings for each position, resulting in $144$-dimensional data words. The configuration of each world – inner walls, boxes and agent position – is sampled randomly. 
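Such a world representation might be encoded as in the following sketch (our own illustrative layout: each of the $6\times6=36$ cells gets a 4-dimensional one-hot vector for empty / wall / box / agent, giving 144-dimensional data words; the exact channel ordering used by the authors is not specified here):

```python
import numpy as np

CELL_TYPES = {"empty": 0, "wall": 1, "box": 2, "agent": 3}   # assumed ordering

def encode_world(grid):
    """Encode a 6x6 Sokoban grid of cell-type strings as a 144-dim binary vector."""
    one_hot = np.zeros((6, 6, 4), dtype=np.float32)
    for r in range(6):
        for c in range(6):
            one_hot[r, c, CELL_TYPES[grid[r][c]]] = 1.0
    return one_hot.reshape(-1)                               # 6 * 6 * 4 = 144

# example: an otherwise empty world with the agent in a corner and one box
grid = [["empty"] * 6 for _ in range(6)]
grid[0][0] = "agent"
grid[2][3] = "box"
assert encode_world(grid).shape == (144,)
```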
Each world is generated by sampling uniformly the number of additional inner walls from $[0,2]$ and boxes from $[1,5]$. The positions of these walls, boxes and the position of the agent are sampled uniformly from the empty spaces. An example task and the learned solution is shown in the Appendix in Figure \[fig:task\_example\] – the penguin is the agent, icebergs are boxes, iceblocks are walls and water is empty space. Algorithmic Modules ------------------- In the experiments we use a feedforward neural network as Controller with a layer size of $16$ neurons and `tanh` activation. The Transform~C~ is a linear layer projecting its $27$-dimensional input onto the $5$ operations of the ALU using `leaky-ReLU` activation and one-hot encoding. The computational memory has a word size of $8$ bit, the Input module generates $3$ control signals ($2$ for Learning to Search), and the ALU and Output module control signal feedback is not used here. Thus, the input to the Controller consists of $16$ control signals and in total there are about $1600$ parameters. ### Learning of the Data-dependent Modules All data-dependent modules are trained in a supervised setting and consist of feedforward networks. They optimize a cross entropy loss using Adam [@kingma2015adam] on a mini-batch size of $20$. To improve their generalization and robustness, the bad memories mechanism described in Section \[sec:lam\] is used with a buffer size of $200$ and $50\%$ of the samples within a mini-batch are sampled from that. The following task-dependent instantiations of the data-dependent modules are examples used for the Sokoban domain. **The Input module** learns an equality function using differential rectifier units as inductive bias [@weyde2018feed] and consists of a feedforward network with $10$ hidden units and `leaky-ReLU` activation. Using the learned binary equality signal $I_{e,t}$ at step $t$, it produces three binary control signals according to $c_{i,t}^{[1]} = (1 - I_{e,t}) - c_{i,t-1}^{[2]} , \ c_{i,t}^{[2]} = I_{e,t} + c_{i,t-1}^{[2]} , \ \text{and} \ c_{i,t}^{[3]} = I_{e,t}c_{i,t-1}^{[2]} \ $, indicating the different phases of the algorithm. For the Learning to Search experiment only the first two signals are used. **The Transform~D~ module** extracts a different view on the data, if required by the ALU, as described in Section \[sec:data\_modules\]. Here, it consists of a feedforward network with $500$ hidden neurons and uses `leaky-ReLU` activation. For the Sokoban domain, the actions that the agent can take – and therefore the operation the ALU can apply – only change the world locally. Thus, the Transform~D~ module extracts a local observation of the world $d_f$, i.e., the agent and the two adjacent locations in all four directions, as these are the only locations where an action can produce a change. **The ALU module** receives the data view extracted by Transform~D~ and the control signal from Transform~C~, that encodes the operation to apply. It learns to apply the operations, i.e., it learns an action model by learning preconditions and effects, and outputs the (potential) local change together with a control signal indicating if the action changed the world or not. The local change is encoded as the direction of the change and the three according spaces. The module consists of two feedforward networks, one for the control signal $c_a$ and one for applying the actions producing the manipulated data $d_a$. 
The learned $c_a$ is used to gate between the output of the action network and the unchanged data input. The control network has two hidden layers with sizes $[64,64]$, the action network has hidden layers with $[128,64]$ neurons and both use `leaky-ReLU` activations. **The Output module** inserts the (locally) changed data from the ALU into the data stream. It receives the data from the memory $d_m$, and the data $d_a$ and control stream $c_a$ from the ALU. It consists of two feedforward networks for learning the data $d_o$ and the control signal $c_o$ stream. The control network has two hidden layers with sizes $[500,250]$, the data network has hidden layers with $[500,500]$ neurons and both use `leaky-ReLU` activations. The control signal $c_o$ is used for gating between the data with the inserted change and the original data $d_m$. To ensure that the Output module uses the manipulated data of the ALU and is not learning to manipulate the data itself, it is constrained to learn a binary mask that indicates where the change needs to be inserted. This binary mask indicates for each position in $d_a$ where to insert it in $d_m$ and can be seen as a structured prediction problem. Note that the training data only consists of data and control signals; the true binary mask is not known. Learning Algorithmic Solutions ------------------------------ \[sec:learn\_alg\_sols\] We investigate the learning of two algorithms, (1) a search algorithm and (2) a search-based planning algorithm. The data-dependent modules do not need to be retrained for the different algorithms. To evaluate whether the learned strategy is an abstract *algorithmic solution*, we show that it fulfills the three requirements R1 – R3 discussed in Section \[sec:problem\_statement\]. ### Learning to Search {#sec:learning_to_search} In the first task, the model has to learn breadth-first-search to find the desired goal configuration. For that purpose, the initial input to the model is the start and goal configuration and subsequent inputs are the goal configuration and the output of the model from the previous computation step. To solve the task, the model has to learn to produce the correct search tree and to recognize that the goal configuration has been reached by choosing the `nop` operation at the correct computation step. For the curriculum learning, the levels are defined as the number of nodes from the search tree that have to be fully explored, e.g., for Level $1$, up to five correct computation steps have to be performed on the initial configuration; for Level $3$ the initial configuration as well as the two subsequently found configurations need to be fully explored (see Figure \[fig:search\_trees\](a)). This requires up to $13$ correct computational steps. Curriculum levels are specified up to Level $21$, which involves up to $85$ correct computation steps to be solved. An additional Level $22$ is activated afterwards that consists of new samples from all $21$ levels for evaluation. To prevent unlearning of previous levels, $20\%$ of the samples in the mini-batch are sampled uniformly from previous levels. As in [@wierstra2014natural] we use restarting, but here the run automatically restarts if the maximum fitness of a level is not reached within $2500$ iterations. All experiments have a total budget of $10.000$ iterations. 
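The training procedure just described (curriculum levels, old-level sampling, automatic restarts, fixed budget) might be organised as in the schematic sketch below; `train_iteration`, `sample_batch` and the level-solved criterion are placeholders, and the fitness evaluation they rely on is defined next.

```python
BUDGET, RESTART_AFTER = 10_000, 2_500

def run_curriculum(train_iteration, sample_batch, is_level_solved, num_levels=21):
    """Schematic curriculum loop: advance through levels, mix 20% old-level
    samples into each mini-batch, and signal a restart if a level's maximum
    fitness is not reached within the allowed number of iterations."""
    level, since_level_start = 1, 0
    for iteration in range(BUDGET):
        batch = sample_batch(level, old_level_fraction=0.2)   # 20% from earlier levels
        fitness = train_iteration(batch)
        since_level_start += 1
        if is_level_solved(level, fitness):                   # e.g. a streak of max-fitness iterations
            level, since_level_start = level + 1, 0
            if level > num_levels:
                return "solved"
        elif since_level_start >= RESTART_AFTER:
            return "restart"                                  # lineage dies out, run is restarted
    return "budget exhausted"
```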
The fitness function $f$ uses step-wise binary losses computed by comparison with the correct solution over mini-batches of $N$ samples and is defined as $$\begin{aligned} \label{eq:fit} f = \begin{cases}\frac{1}{N} \sum_{n}^{N} f_{e}^{[n]} & \text{ if } \frac{1}{N} \sum_{n}^{N} f_{e}^{[n]} < 100 \\ \frac{1}{N} \sum_{n}^{N} \left( f_{e}^{[n]} + f_b^{[n]} \right) & \text{ otherwise}\end{cases} \ , \ \text{with} \\ f_e^{[n]} = \frac{100}{3 T_e^{[n]}} \sum\nolimits_{t = 1}^{T_e^{[n]}} \left[ I(c_{f,t}^{[n]} = \tilde{c}_{f,t}^{[n]}) + 2\, I(d_{m,t}^{[n]} = \tilde{d}_{m,t}^{[n]}) \right] \nonumber \quad \text{and} \quad f_b^{[n]} = 20\, I(c_{f,T_e^{[n]}+1}^{[n]} = \texttt{nop}) \nonumber \ , \end{aligned}$$ where $T_e$ is the number of steps required for constructing the search tree or until the first mistake occurs, $c_{f,t}$ is the operation chosen to be applied by the ALU from Transform~C~ at step $t$, $d_{m,t}$ is the data word read from the memory, and $\tilde{c}_{f,t}$ and $\tilde{d}_{m,t}$ are the correct choices, respectively. The exploration fitness $f_e^{[n]}$ captures the fraction of correct computation steps until the goal configuration is found, scaled to $0$-$100$%. Note that NES therefore only uses a single scalar value that summarizes the performance of the parameters over $N$ samples and all computational steps. The learning rate $\alpha$ is set to $0.01$, the $\sigma$ of the search distribution to $0.1$, weight decay is applied with $0.9995$, the mini-batch size is $N = 20$ and the population size is $P = 20$. We use a gini-coefficient-based ranking that gives more importance to samples with higher fitness [@schaul2010pybrain]. The maximum fitness is $120$ for all levels and a level is solved when $250$ subsequent iterations have the maximum fitness, i.e., $5000$ samples are solved correctly. The bad memories consist of $200$ samples and $25\%$ of the samples within a mini-batch are sampled uniformly from those. Whenever $10$ subsequent iterations achieve the maximum fitness, the buffer is cleared and *no learning is performed*. ### Learning to Plan (Search + Backtrack) In the second task, the model has to learn, in addition to the breadth-first algorithm that computes a search tree to the goal configuration, to also extract the path from the search tree that encodes the solution to the given planning problem (see Figure \[fig:search\_trees\] and Figure \[fig:task\_example\] in the Appendix). Therefore, the model not only has to learn to encode and perform two different algorithms, but also to switch between them at the correct computation step. The initial input to the model is the start and goal configuration and subsequent inputs are the goal configuration and the output of the model from the previous computation step, as before. When the goal configuration is found by the model, the input is the start configuration and the previous output. To solve the task, the model has to learn to produce the search tree and to recognize that the goal configuration is reached, as before. In addition, after recognizing the goal configuration, the model needs to switch behavior and output the path of the search tree encoding the planning solution. This solution consists of the states from the initial to the goal configuration and `nop` operations in reverse order. Therefore, the maximum number of computation steps increases to $89$ in Level $21$. 
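For concreteness, a per-mini-batch rendering of the fitness in Eq. \[eq:fit\] for the search task is sketched below; the planning task replaces the bonus term as defined next. This is our own rendering under one reading of the definition of $T_e$, the reference sequences $\tilde{c}$, $\tilde{d}$ are assumed to come from a task generator, and predicted sequences are assumed to contain one step beyond the reference.

```python
import numpy as np

def exploration_fitness(c_pred, c_true, d_pred, d_true):
    """f_e of Eq. [eq:fit] for one sample: correct ALU operations and read data
    words up to T_e (end of tree construction or first mistake), scaled to 0-100."""
    T_e, score = 0, 0.0
    for c_p, c_t, d_p, d_t in zip(c_pred, c_true, d_pred, d_true):
        s = float(c_p == c_t) + 2.0 * float(np.array_equal(d_p, d_t))
        score, T_e = score + s, T_e + 1
        if s < 3.0:                         # first mistake ends the evaluation
            break
    return 100.0 * score / (3.0 * T_e)

def batch_fitness(samples, nop_id):
    """Mini-batch fitness: mean f_e, with the f_b bonus (nop chosen right after
    the goal is found) added only once all exploration steps are solved."""
    f_e = np.array([exploration_fitness(s["c_pred"], s["c_true"],
                                        s["d_pred"], s["d_true"]) for s in samples])
    if f_e.mean() < 100.0:
        return f_e.mean()
    f_b = np.array([20.0 * float(s["c_pred"][len(s["c_true"])] == nop_id)
                    for s in samples])
    return (f_e + f_b).mean()
```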
The fitness function is defined as in Equation \[eq:fit\], but with $$\begin{aligned} f_b^{[n]} = \frac{50}{3 T_b^{[n]}} \sum\nolimits_{t = T_e^{[n]}+1}^{T_e^{[n]} + T_b^{[n]}} \left[ I(c_{f,t}^{[n]} = \texttt{nop}) + 2\, I(d_{m,t}^{[n]} = \tilde{d}_{m,t}^{[n]}) \right] \nonumber \ , \end{aligned}$$ where $T_b$ is the number of steps required for backtracking the solution or when the first mistake occurs. The maximum fitness is $150$ and all other settings remain as before. \ **R1** – Generalization to Unseen Task Configurations and Complexities {#sec:r1} ---------------------------------------------------------------------- A main goal in all learning tasks is to achieve generalization – to not only learn to solve seen situations, but to learn a solution that generalizes to unseen situations. One evaluation of this generalization ability is built into our learning process itself. A curriculum level is solved after $250$ subsequent iterations ($5000$ samples) with maximum fitness, and iterations with maximum fitness do not trigger learning. Thus, if a new level with more complex tasks is presented and the fitness stays at maximum without triggering learning, the previously learned solution generalizes to the new setting – it generalizes to more complex tasks (see Figure \[fig:search\_trees\]). This generalization is shown in Figure \[fig:exp\_runs\]. For example, in the *Learning to Plan* setup (Figure \[fig:exp\_runs\](b)), after $3$ levels the algorithmic solution is found and no learning is triggered anymore during the run. Moreover, the last triggered learning was for curriculum Level $3$ – meaning a complexity of $15$ computational steps – and the found solution generalizes up to the highest specified curriculum Level $21$ with $89$ computational steps. Learning the algorithmic solution is done within $3$ levels and $2563$ iterations. Figure \[fig:exp\_runs\](d) shows the evaluation of learning to solve the two tasks over $15$ runs each. In contrast, the original DNC [@graves2016hybrid] model and a stack-augmented recurrent neural network for algorithmic patterns [@joulin2015inferring] are not able to solve Level $1$ when trained in a supervised setup with gradient descent and considerably more training iterations, see Figure \[fig:exp\_runs\](c) and the Appendix \[app:baselines\] for implementation details. **Task complexity.** Additionally, we evaluated the learned algorithmic solution with task complexities far beyond the specified curriculum learning levels, i.e., the complexities experienced during training. Therefore, we used the run shown in Figure \[fig:exp\_runs\](b) and solved tasks requiring $\textbf{330.631}$ computational steps (corresponding to level $82.656$), having been **trained only up to $\textbf{15}$** steps (see Figure \[fig:search\_trees\] for the complexities) and having been tested during training only up to $89$ steps. Recall the model's recurrent output-input structure: given the initial task input, the model performs $330.631$ computational steps, i.e., it builds a search tree with over $330.600$ nodes, autonomously and correctly, and outputs the solution. Moreover, the solution learned in $6\times6$ environments successfully solved all tasks within $8\times8$ environments. Thus, the learned strategy represents an abstract algorithmic solution that generalizes and scales to arbitrary task configurations and complexities, fulfilling R1. The learned algorithmic solution is explained with an example in the Appendix \[app:learned\_solution\]. 
**R2** – Independence of the Data Representation {#sec:r2} ------------------------------------------------ Algorithmic solutions are independent of the data representation, meaning the abstract strategy still works if the encoding is changed, as long as the data-dependent operations are adjusted. Consider again a sorting algorithm. Its algorithmic behavior stays the same independent of whether it has to sort a list of numbers encoded in binary or in hexadecimal, as long as the compare operators are defined. To show that our learned algorithmic solutions have this feature and fulfill R2, we change the representation of the data but reuse the learned algorithmic modules, and the model can still solve all tasks without retraining them. The data-dependent modules are adapted and relearned. The changed representation (e.g., the penguin now represents a wall instead of the agent) and the results over $10.000$ iterations ($200.000$ samples) over all curriculum levels are shown in Figure \[fig:transfer\] (*left*). The fitness is at maximum from the start, showing that all samples in all levels are successfully solved without triggering learning while operating on the new data representation and, hence, R2 is fulfilled. **R3** – Independence of the Task Domain {#sec:r3} ---------------------------------------- Requirement R3 states that an algorithmic solution is independent of the task domain. Consider again the sorting algorithm example: as long as the compare operators are defined, it is able to sort *arbitrary objects*. Therefore, the data-dependent modules are adapted and relearned, but we reuse the learned algorithmic solution on two new task domains. As new domains, $3\times3$ *sliding block puzzles* and a *robotic manipulation* task are used (Figure \[fig:transfer\]). Configurations are represented with binary vectors as described for Sokoban in Section \[sec:exp\]. For the puzzle domain, actions are sliding adjacent tiles onto the free (white) space from four directions. A task configuration is given as a start and goal board configuration. In the robotic manipulation domain, a task is given as start and goal configuration of the objects. The available actions are the four locations on which objects can be stacked, e.g., the action `pos1` encodes moving the gripper to that position and placing the grasped object on top, or picking up the top object if no object is grasped. The maximum stacking height is $3$ boxes, resulting in a discrete representation of the object configuration with a $3\times4$ grid. As with the new data representation, the learned algorithmic solution is able to solve all $200.000$ presented samples from all curriculum levels in the new domains without triggering learning (Figure \[fig:transfer\]), showing the independence of the task domain, fulfilling R3. ![image](figure_08.pdf){width=".99\textwidth"} Conclusion ========== \[sec:conclusion\] We presented a novel architecture for learning algorithmic solutions and showed how it can learn abstract strategies that generalize and scale to arbitrary task configurations and complexities (R1) (Section \[sec:r1\]), and are independent of both the data representation (R2) (Section \[sec:r2\]) and the task domain (R3) (Section \[sec:r3\]). Such algorithmic solutions represent abstract strategies that can be transferred directly to novel problem instantiations, a crucial ability for intelligent behavior. 
To show that our architecture is capable of learning strategies fulfilling the algorithm requirements R1 – R3, we performed experiments with complexities orders of magnitude higher than seen during training ($15$ vs. $330.631$ steps, and Figure \[fig:search\_trees\] & \[fig:exp\_runs\]), and transferred the learned solution to bigger state spaces, a new data representation and two new task domains (Figure \[fig:transfer\]) – showing, to the best of our knowledge, for the first time the learning of such abstract strategies. The modular structure and the information flow of the architecture enable the learning of algorithmic solutions, the transfer of those, and the incorporation of prior knowledge. Using Natural Evolution Strategies for learning removes constraints on the individual modules, allowing for arbitrary module instantiations and combinations – lifting the DNC's restriction to differentiable components. As the complexity and structure of the algorithmic modules need to be specified, it is an interesting road for future work to learn these in addition, building on the ideas from [@greve2016evolving; @merrild2018hyperntm]. By learning transferable abstract algorithmic solutions fulfilling R1 – R3, the pool of learnable problems for learning systems is increased, opening the investigation of such abstract strategies in challenging domains like medical diagnosis, household robotics, or AI assistants to future work. Acknowledgements {#acknowledgements .unnumbered} ================ This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No \#713010 (GOAL-Robots) and No \#640554 (SKILLS4ROBOTS). This research was supported by NVIDIA. We want to thank Kevin O’Regan for inspiring discussions on defining algorithmic solutions. Behavior of the learned algorithmic solution {#app:learned_solution} ============================================ Figure \[fig:task\_example\] highlights the learned algorithmic behavior – one memory location is read with content lookup attention repeatedly until all operations have been applied and the node is fully explored. Then attention shifts towards temporal linkage to read the *next* data to be explored. This pattern continues until the goal configuration is found in step $11$. After that, behavior changes to output the backtracking solution by switching to usage linkage attention and `nop` operations until reaching the initial configuration. ![image](figure_09.pdf){width=".99\textwidth"} Details on the implementations of the comparison methods {#app:baselines} ======================================================== Both models, the original Differentiable Neural Computer (DNC) [@graves2016hybrid] and the stack-augmented recurrent network [@joulin2015inferring], are trained in a supervised setting with cross-entropy losses. They use the same output-input loop as our architecture, i.e., receiving their own output as input in the next computation step in addition to the goal configuration. The loss is computed based on the correct sequences of configurations and the control signal indicating that the goal has been reached, similar to the fitness function of our architecture in Equation \[eq:fit\]. Both use an LSTM network with $256$ hidden units as controller and the memory word size is set to $152$, equal to our model. Like our architecture, the DNC has one read and two write heads. The stack-augmented model uses four stacks with the three actions `PUSH`, `POP`, and `NO_OP`.
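For readers unfamiliar with such stack-augmented memories, a rough sketch of one soft stack update in the spirit of [@joulin2015inferring] is given below; the exact formulation used in the baseline may differ, so this is an assumption for illustration only.

```python
import numpy as np

def soft_stack_update(stack, action_probs, push_value):
    """One differentiable stack update with soft PUSH/POP/NO_OP actions.
    `stack` has shape (depth,), `action_probs` = [p_push, p_pop, p_noop]."""
    p_push, p_pop, p_noop = action_probs
    pushed = np.roll(stack, 1); pushed[0] = push_value        # push: shift everything down
    popped = np.roll(stack, -1); popped[-1] = 0.0             # pop: shift everything up
    return p_push * pushed + p_pop * popped + p_noop * stack

stack = np.zeros(8)
stack = soft_stack_update(stack, [1.0, 0.0, 0.0], push_value=0.7)   # hard PUSH
stack = soft_stack_update(stack, [0.0, 1.0, 0.0], push_value=0.0)   # hard POP
```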
Evaluation of the learning process and model components {#app:evaluation} ======================================================= To evaluate the effect of the individual modifications and extensions, we compared our architecture with and without them on the Learning to Search task. In all setups all runs had a budget of $10.000$ iterations. The bar plots show the mean and standard deviation of the number of learning iterations, and numbers on top of the bars show the number of runs that triggered learning in that level. Plots below the bar plot show the number of runs that successfully solved the corresponding curriculum level, i.e., the level at which they ended after the budget of $10.000$ iterations. All comparisons are done without the restarting mechanism, except in the evaluation for that mechanism. Novelty and restarts {#novelty-and-restarts .unnumbered} -------------------- Here, two mechanisms for dealing with the problem of getting stuck in local optima are evaluated, namely the automatic restart as in the original NES [@wierstra2014natural] and the use of an additional novelty signal as in NSRA-ES [@conti2018improving]. For the novelty calculation, we defined the behavior as the sequence of read memory locations and applied ALU operations. The baseline model does not use either of the two mechanisms. While we did not observe an improvement using novelty, the automatic restarts reduced the number of learning iterations, see Figure \[fig:novelty\_restarts\]. Note that the baseline and novelty models are also able to learn algorithmic solutions, but they require more iterations and, hence, they die out before the final curriculum level due to reaching the budget of $10.000$ iterations. ![image](figure_12.pdf){width=".99\textwidth"} Constrained write head {#constrained-write-head .unnumbered} ---------------------- Here we evaluated the introduced constrained write head, which updates the previously read memory location. We compared against two models without this constrained head, one with one write head and one with two write heads to compensate for the missing constrained head. The constrained head was a necessary modification to enable the efficient learning of algorithmic solutions, see Figure \[fig:constrained\_head\]. ![image](figure_13.pdf){width=".99\textwidth"} Usage-linkage and Hard attention vs. Soft attention {#usage-linkage-and-hard-attention-vs.-soft-attention .unnumbered} --------------------------------------------------- Here the introduced usage-linkage and hard attention mechanism for memory access are evaluated. While using hard attention instead of soft attention was a necessary modification to enable efficient learning of algorithmic solutions, the introduced usage-linkage had a smaller impact on the Learning to Search task, as shown in Figure \[fig:no\_usage\_and\_soft\_search\]. When applied to the Learning to Plan setup, however, the usage-linkage improved the learning of algorithmic solutions significantly, see Figure \[fig:usage\_plan\]. Both results show that the model learns to use the attention mechanisms that are required for the algorithmic solution, i.e., the usage-linkage is especially useful for the backtracking in the Learning to Plan setup compared to the Learning to Search setup where no backtracking is required.
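To make the hard-vs-soft distinction concrete, a minimal content-lookup sketch is given below (our own illustration, using cosine similarity; the full architecture combines this with the temporal- and usage-linkage attentions discussed above):

```python
import numpy as np

def content_lookup(memory, key, beta=5.0, hard=True):
    """Content-based addressing: cosine similarity between `key` and each memory row.
    Soft attention returns a weighted average; hard attention reads exactly one row."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    if hard:
        return memory[np.argmax(sims)]          # unique address: a single location
    weights = np.exp(beta * sims)
    weights /= weights.sum()
    return weights @ memory                     # blend over all locations

memory = np.random.randn(16, 8)                 # 16 locations, word size 8
key = memory[3] + 0.01 * np.random.randn(8)
print(np.allclose(content_lookup(memory, key, hard=True), memory[3]))
```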
![image](figure_14.pdf){width=".99\textwidth"} ![image](figure_15.pdf){width=".99\textwidth"} Bad memories {#bad-memories .unnumbered} ------------ The bad memories approach was developed while learning the data-dependent modules and was a necessary mechanism to learn robust and generalized modules with $100\%$ accuracy, as explained in Section \[sec:lam\]. For learning the algorithmic solutions, the impact of this learning from mistakes strategy was less significant, see Figure \[fig:bad\_memories\]. ![image](figure_16.pdf){width=".99\textwidth"}
--- abstract: 'We use recent nuclear parton distributions, among them the Hirai–Kumano–Nagai (HKN) and Eskola–Paukkunen–Salgado (EPS08) parameterizations, in our pQCD-improved parton model to calculate the nuclear modification factor, $R_{AA'}(p_T)$, at RHIC and at the LHC. At RHIC, the deuteron-gold nuclear modification factor for pions, measured at $p_T \geq 10$ GeV/c in central collisions, appears to deviate more from unity than the model results. The slopes of the calculated $R_{dAu}(p_T)$ are similar to the slopes of the PHENIX pion and photon data. At LHC, without final-state effects we see a small enhancement of $R_{dPb}(p_T)$ in the transverse momentum range $10$ GeV/c $\leq p_T \leq 100$ GeV/c for most parameterizations. The inclusion of final-state energy loss will reduce the $R_{dPb}(p_T)$ values.' author: - 'G.G. Barnaföldi^1,2^' - 'G. Fai^1^' - 'P. Lévai^2^' - 'B. A. Cole^3^' - 'G. Papp^4^' title: '[Cold Nuclear Modifications at RHIC and LHC ]{}' --- Introduction ============ Nuclear modifications in the physics of partonic degrees of freedom arise either due to the environment in the initial nuclei around the incoming parton or from the interaction with the partonic matter created in the final state of a nuclear collision. Coherent quantum effects may connect and mix initial and final state phenomena. Here we investigate '[*cold nuclear effects*]{}', defined as effects that do not originate in connection with the thermalized partonic matter in collisions between two heavy nuclei [@Cole:2007; @GGB:QM]. Cold nuclear effects are usually described as modifications of the parton distribution functions (PDF) into nuclear parton distributions (nPDF). At sufficiently small momentum fraction, this has been referred to as nuclear shadowing, but further effects, like nuclear multiple scattering, the EMC effect, higher-twist effects, etc., need to be included in a more complete description. In this short review we present the high-$p_T$ and high-$x_T$ behavior of some of these modifications, using shadowing/nuclear PDF parameterizations by HIJING [@HIJING], EKS [@EKS], EPS08 [@EPS08], and HKN [@HKN]. At present, cold nuclear modifications can be tested by experimental data at RHIC energies on hadron spectra from $dAu$ collisions, or by direct photon spectra from $dAu$ or $AuAu$ collisions, which are essentially not influenced by non-Abelian jet energy loss. Note that recent theoretical studies point out the possibility of a small energy loss in $pA$ or $dA$ type collisions [@Cole:2007; @VitevCold]. Cold Nuclear Effects at RHIC ============================ We calculated the nuclear modifications at RHIC energies up to $p_T \lesssim 70$ GeV/c using various shadowing parameterizations in our pQCD-improved parton model [@GGB:HQ]. Results are compared to the preliminary PHENIX data up to the highest measured momenta ($p_T \lesssim 18$ GeV/c) [@PHENIX; @3; @4]. On the left side of Fig. \[fig:1\], $R^{\pi}_{dAu}(p_T)$ is plotted for neutral pions at $\sqrt{s}=200$ $A$GeV against a logarithmic $p_T$ scale, using the HIJING, EKS, EPS08, and HKN parameterizations. The HIJING shadowing is understood with its accompanying multiple scattering [@GGB:HQ]. All calculated $R^{\pi}_{dAu}(p_T)$ curves have an approximately linear dependence on $\log(p_T)$ in most of the EMC region ($15$ GeV/c $\lesssim p_T \lesssim 50$ GeV/c). Thus, we used a simple linear function of $\log(p_T)$ to get the slope parameter $\beta$: $$R_{dAu}(\log(p_T))=\alpha + \beta\cdot\log(p_T) \,\,\, . \label{beta}$$
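As a side note, the slope extraction in Eq. \[beta\] amounts to an ordinary least-squares fit of $R_{dAu}$ against $\log(p_T)$ over the EMC window; a sketch is given below, with made-up arrays standing in for the calculated or measured nuclear modification factors and base-10 logarithms assumed.

```python
import numpy as np

def emc_slope(pT, R_dAu, window=(15.0, 50.0)):
    """Least-squares fit of R_dAu = alpha + beta * log10(pT) over the EMC region."""
    mask = (pT >= window[0]) & (pT <= window[1])
    beta, alpha = np.polyfit(np.log10(pT[mask]), R_dAu[mask], deg=1)
    return alpha, beta

# illustrative values only, not data from this work
pT = np.linspace(10.0, 70.0, 25)
R = 1.05 - 0.08 * np.log10(pT) + 0.01 * np.random.randn(pT.size)
alpha, beta = emc_slope(pT, R)
```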
The slope $\beta$ is plotted in the right panel of Fig. \[fig:1\]. We use the best-fitted linear curve in the above $p_T$ interval to compare the calculated slopes to the ones extracted from the data. The slopes of the model curves are quite similar, although they are smaller (in absolute value) than the measured quantities. This may be due to final state effects [@Cole:2007] or isospin effects [@Isospin]. In addition to the pion production slopes in $dAu$ collisions, direct photon production in both $dAu$ and $AuAu$ collisions is plotted. Our results are compared to the preliminary PHENIX data [@PHENIX; @3; @4]. These data have decreasing tendencies with $p_T$, leading to negative $\beta$ values similar to those in $\pi^0$ production. In the case of the $dAu$ photon data, the large error bars manifest themselves in a large uncertainty of the extracted slope. Direct $\gamma$ production in $AuAu$ shows a stronger decrease, since the initial state nuclear effects enter the convolution integral twice in this case. We analyzed the theoretical uncertainties in our model [@Cole:2007; @GGB:QM]. We found that at $p_T > 3$ GeV/c our model is not sensitive to the scale choice. Nuclear PDF parameterizations vary both the position of the Cronin peak and the high-$p_T$ slope of $R_{dAu}(p_T)$. The HKN parameterization allowed us to analyze the errors via the Hessian method [@HKN]. An almost constant $\pm 10\%$ uncertainty was found in the whole transverse momentum range. It can be seen from Fig. \[fig:1\] that the EPS08 and HKN nPDF parameterizations give slopes close to the experimental ones. However, HKN comes closer to the measured points at high $p_T$ (see the left panel). Note that the agreement with the data can be improved by the introduction of a small opacity, as we have done in Ref. [@Cole:2007]. Predictions for the LHC ======================= Based on Refs. [@Cole:2007; @GGB:QM; @E706] we estimated the cold nuclear modifications at LHC energies with larger intrinsic transverse momenta. We apply the same parameterizations used above. In Fig. \[fig:2\] we plot $R^{\pi}_{dA}(x_T)$ in $dPb$ collisions at midrapidity at RHIC and LHC energies as functions of the transverse momentum fraction, $x_T = 2 p_T / \sqrt{s}$. The preliminary PHENIX $dAu$ data on neutral pion production are also shown. The different panels display different shadowing parameterizations as noted in Fig. \[fig:2\]. Though at RHIC energies all parameterizations give similar results, at LHC we see that at $x_T \approx 3 \cdot 10^{-3}$ HIJING yields a strong ($\approx 30\%$) suppression, while HKN gives a $10\%$ enhancement at the same $x_T$ values. The EKS and EPS08 results predict a $5-10\%$ enhancement (suppression) around $x_T \approx 10^{-2}$ ($10^{-3}$). The EPS08 results show somewhat stronger suppression than EKS at low $x_T$, due to the inclusion of high-rapidity RHIC data in the fitting procedure of this recent parameterization. For the high momentum-fraction region both of these models show a similar behavior due to the EMC effect. HIJING and HKN have a reasonable agreement between RHIC and LHC energies (they scale with $x_T$), while EKS and especially EPS08 display less precise scaling at around the anti-shadowing maximum. Finally, we turn to possible cold energy loss at the LHC. The standard estimate is ${ {\textrm d}}N/ { {\textrm d}}y\approx 1500-4000$ in central $PbPb$ collisions at LHC energies [@LastCall]. 
While in central $dPb$ collisions the geometrical cross section is small, using $L/\lambda \sim \langle N_{part}\rangle ^{1/3}$ from [@GGB:HQ] one obtains ${ {\textrm d}}N/ { {\textrm d}}y\approx 500-2000$, still large, in central $dPb$ collisions. This motivates us to examine the effect of a small energy loss in the final state. We apply opacity values $L/\lambda_g \lesssim 3$ for this illustration. The results are indicated as thin dotted ($L/\lambda_g=1$) and dot-dashed ($L/\lambda_g=3$) lines in Fig. \[fig:2\]. The energy loss is stronger at the lower $x_T$ values. Here the relative particle yield is suppressed, and $R_{dA}(x_T)$ is steeper. In the high $p_T$ region the suppression loses strength as it increases with $\sim \log(E)$. We tested this effect at RHIC and LHC energies in $AuAu$ ($PbPb$) collisions [@GGB:QM08]. Summary ======= We analyzed the high transverse-momentum behavior of the nuclear modifications in $dA$ collisions. Using preliminary PHENIX data we checked several common nuclear shadowing parameterizations. We compared the logarithmic slopes of the nuclear modification factor in the $15$ GeV/c $\lesssim p_T \lesssim 50$ GeV/c region. The shadowing parameterizations investigated have an almost linear behavior with a negative slope in $dAu$ collisions at RHIC, which can be attributed to the EMC effect. While we found all studied shadowing models reliable at intermediate $p_T$, at high transverse momenta the EPS08 and the HKN nuclear PDF seem to give the best agreement with the PHENIX data. We tested the $x_T$ scaling of midrapidity $\pi^0$ production in $dA$ collisions at RHIC and LHC energies in the models. At high $x_T$ most models have a similar behavior, but EPS08 deviates from precise scaling around $x_T\approx 6\cdot 10^{-2}$. At lower $x_T$, $R_{dA}(x_T)$ shows a $5-15\%$ enhancement in the HKN, EKS, and EPS08 models. The enhancement is absent in the HIJING with multiscattering model. We tested the effect of a small energy loss, which counteracts any enhancement in $dPb$ collisions and yields a wide-range suppression of the $R_{dA}(x_T)$. In light of these results and open questions, we emphasize the need for $pPb$ (or at least $dPb$) measurements at the LHC. Acknowledgments =============== One of the authors (GGB) would like to thank the organizers for local support. Our work was supported in part by Hungarian OTKA PD73596, T047050, NK62044, and IN71374, by the U.S. Department of Energy under grant U.S. DOE DE-FG02-86ER40251, and jointly by the U.S. and Hungary under MTA-NSF-OTKA OISE-0435701. [50]{} B.A. Cole, G.G. Barnaföldi, P. Lévai, G. Papp, and G. Fai [*arXiv:hep-ph/0702101*]{}, (2007) G.G.  Barnaföldi [*et al.*]{} [*Int. J. Mod. Phys.*]{} [**E16**]{} 1923 (2007); [*J. Phys.*]{} [**G30**]{}, S1124 (2004) X.-N. Wang and M. Gyulassy [*Phys. Rev.*]{} [**D44**]{} 3501 (1991); [*Comput. Phys. Commun.*]{} [**83**]{} 307 (1994); S.J. Li and X.N. Wang, [Phys. Lett.]{} [**B527**]{} 85 (2002) K. Eskola, V.J. Kolhinen, and C.A. Salgado [*Eur. Phys. J.*]{} [**C9**]{} 61 (1999) K.J. Eskola, H. Paukkunen, and C.A. Salgado *arXiv:0802.0139* (2008) M. Hirai [*et al.*]{} [*Phys. Rev.*]{} [**C70**]{} 044905 (2004); [*ibid*]{} [**D75**]{}, 094009 (2007). I. Vitev [*Phys.Rev.*]{} [**C75**]{} 064906 (2007) G.G. Barnaföldi [*et al.*]{} *Eur. Phys. J.* [**C49**]{} 333 (2007) and references therein. S.S. Adler *et al.* \[PHENIX\] *Phys. Rev. Lett* [**98**]{} 172302 (2007) D. Peressounko [*et al.*]{} \[PHENIX\] [*arXiv:nucl-ex/0609037*]{} (2006) T. 
Isobe [*et al.*]{} \[PHENIX\] [*J. Phys.*]{} [**G34**]{} S1015 (2007) F. Arleo [*JHEP*]{} [**0707**]{} 032 (2007) L. Apanasevich [*et al.*]{} \[E706\] [*Phys. Rev.*]{} [**D59**]{} 074007 (1999) N. Arnesto (ed.) [*et al.*]{} [*J. Phys.*]{} [**G35**]{} 054001 (2008) G.G. Barnaföldi [*et al.*]{} *arXiv:0805.0335* [*Submitted to J. Phys.*]{} [**G**]{} (2008)
--- abstract: | We begin by providing observational evidence that the probability of encountering very high and very low annual tropical rainfall has increased significantly in the recent decade (1998-present) as compared to the preceding warming era (1979-1997). These changes over land and ocean are spatially coherent and comprise of a rearrangement of very wet regions and a systematic expansion of dry zones. While the increased likelihood of extremes is consistent with a higher average temperature during the pause (as compared to 1979-1997), it is important to note that the periods considered are also characterized by a transition from a relatively warm to cold phase of the El Niño Southern Oscillation (ENSO). To further probe the relation between contrasting phases of ENSO and extremes in accumulation, a similar comparison is performed between 1960-1978 (another extended cold phase of ENSO) and the aforementioned warming era. Though limited by land-only observations, in this cold-to-warm transition, remarkably, a near-exact reversal of extremes is noted both statistically and geographically. This is despite the average temperature being higher in 1979-1997 as compared to 1960-1978. Taken together, we propose that there is a fundamental mode of natural variability, involving the waxing and waning of extremes in accumulation of global tropical rainfall with different phases of ENSO. 0.2 truecm [**Journal Ref: QJRMS, DOI:10.1002/qj.2633, 2015.**]{} address: 'Centre for Atmospheric and Oceanic Sciences & Divecha Centre for Climate Change, Indian Institute of Science, Bangalore 560012, India.' author: - 'Jai Sukhatme and V. Venugopal' title: Waxing and Waning of Observed Extreme Annual Tropical Rainfall --- Introduction ============ In the context of global warming, the increasing moisture content of the troposphere [@HS-2006; @Trenberth-2011] is expected to result in an amplification of short-duration extreme rainfall events [@Trenberth-1999; @AI-2002; @GS-2009], mostly validated by regional ground-based observations [@East-rev; @Groisman-2005; @Venu]. On longer timescales, a consequence of the increase in column integrated water vapour for locations with very high and low accumulation is the so-called thermodynamic effect of “wet regions getting wetter, and dry regions getting drier” [@HS-2006; @Wentz-2007]. Some observations [@LP-2009; @Allan-etal-2010; @Chou-etal-2013] and long-term global warming simulations [@Chou-etal-2009; @Giorgi-etal-2011; @LWK-2013; @Gorman-2012] are consistent with the expected consequences of this paradigm. In addition, dynamical changes due to warming also affect rainfall [@Seager-2010]. The combination of these two complicate the precipitation response [@Xie-2010; @Chad-2013], especially over land [@Sonia2]. In fact, as suggested by [@LA-2012; @LA-2013], the “wet-wetter, dry-drier" hypothesis may be more appropriate over land if the wet and dry regions are not considered to be fixed geographical locations [see also, @Polson-2013]. Apart from the role of warming, it has been suggested that changes in regional extremes have a natural component. In particular, individual locations with more than a century long data clearly exhibit multiple cycles in heavy rainfall [see for example, @Willems-2013; @Marani-2014]. 
Further, it has been documented that regional extremes in rainfall vary with El Niño and La Niña conditions [see @Gershunov-2003; @Grimm-2009; @Alexander-2013; @Phil for reports on the continental United States, South America, eastern Australia and the Philippines, respectively]. In fact, connections between daily and monthly extremes and the El Niño Southern Oscillation (ENSO) on a more global scale have been explored over the tropical oceans [@Allan-Soden-2008] as well as over land [@Lyon-2005; @Curtis-2007; @Alexander-2009]. In addition to the effect of natural cycles on short-duration extremes, different regions in the tropics experience anomalously wet or dry years during El Niño and La Niña events [@RH1; @RH2; @Dai-GRL]. Thus, short-duration regional rainfall extremes as well as very high and low annual accumulation are plausibly influenced by both warming and natural cycles. In the present work, we focus on the footprint of ENSO on annual accumulation and its extremes in the tropics. In Section 2, we compare extremes in global accumulation during the ongoing pause in global warming and the preceding warming era. Apart from the fact that the average temperature during the recent period since 1998 is higher than the preceding warming era, the warming-vs-pause contrast is also a comparison between long predominantly warm and cold phases of ENSO, respectively. Keeping this in mind, in Section 3, we attempt to delineate possible connections between these changes in very low and high annual rainfall and ENSO phase transitions. We also discuss, in Section 4, the consistency between our global viewpoint using annual rainfall as a measure, and the noted trends in short-duration regional extremes. Finally, the paper concludes with a summary of results and a brief discussion in Section 5. Warming vs Pause ================ Observations suggest that, despite the continual build-up of greenhouse gases in the atmosphere, the rate of surface warming since 1998 has been slower than in the preceding decades [@Fyfe; @Cowtan2014]; a phenomenon referred to as the “pause" or “hiatus" in global warming. While the cause behind the ongoing hiatus has received much attention, with many competing theories in the fray, the answer remains elusive [see, for example, the succinct summary in @Held-2013]. Rather than worry about its cause, here we view the pause — the first of its kind with possibly others to follow [@Meehl] — as a natural laboratory wherein the climate continues to evolve with one its primary variables being held relatively constant. Given this unique state of affairs, the first question we ask concerns the fate of tropical rain during the pause as compared to the immediately preceding warming era [beginning in the late 1970s, @TF-2013]. With regard to rainfall measurements, the Global Precipitation Climatology Product (GPCP) provides data at a spatial and temporal resolution of 2.5 degrees and 1 month, respectively [@adler-etal-2003]. This data is available from 1979 to the present day, thus covering the hiatus and the preceding warming era. Due to its monthly temporal and coarse spatial resolution, short-duration localized intense precipitation events which are usually the focus of studies on extremes lie outside the scope of this data. Rather, the measure which we focus on in this work, which is arguably a better yardstick for assessing the “wet-wetter, dry-drier” paradigm, is annual accumulation at every grid point. 
Thus, in the remainder of this manuscript, extremes refer to very high and low annual accumulation. We begin by examining the differences in annual tropical (35S-35N) rainfall accumulation as recorded in the hiatus (1998 to 2013) and the preceding warming era (1979 to 1997). Figure \[fig:fig1\]a shows the normalised frequency distributions of annual rainfall during these two periods. These distributions are based on the union of the data from every year of the respective era, i.e., a sample size of 19 (16) $\times$ 28 $\times$ 72. While frequency distributions are used for illustrating changes, cumulative distribution functions (CDF) are used for significance testing. Specifically, the Kolmogorov-Smirnov (KS) test [@Papoulis] shows that the CDFs of accumulation have changed significantly between the two eras (a KS distance of 0.02, leading to a $p$-value close to zero, based on the null hypothesis $H_0$: CDF$_{\rm {hiatus}}$ = CDF$_{\rm {warming}}$). The changes in the tails of the distributions are better captured in Figure \[fig:fig1\]b, which shows their difference (hiatus - warming). In particular, we note that, in the global tropics, the probabilities of encountering very low ($<$ 200 mm) and very high ($>$ 3000 mm) accumulation have increased significantly during the hiatus. It is worth reiterating that in this comparison we are considering all the years that make up an era to be a single set, thus the increased probability of extremes in accumulation is true of the hiatus as a whole and individual years can deviate from this expectation. In addition to the pause being warmer (on average) than the preceding warming era, the periods considered are also characterized by a transition from a relatively warm to cold phase of the ENSO [@TF-2013]. This transition is evident when we examine the difference in rainfall climatology between the two eras (shown in Figure \[fig:fig2\]a). For example, we see an east-west anomaly along the equatorial Pacific ocean, indicating a preference of moist convection more to the west (east) during the hiatus (warming). Similarly, the subtropical signature of the two phases can be seen with the southeastern spreading of anomalies into the southern Pacific Ocean [see, for example, @RH1; @RH2; @Wallace-ENSO; @Dai-GRL]. Thus, as both warming and a phase transition in ENSO are in play for the present comparison, it is not possible to attribute the noted increase in extremes of tropical accumulation to any one of these factors [see also the discussion in @Pend for possible models of changes in rainfall distribution due to ENSO and warming]. To focus on the role of phase changes in ENSO on extreme accumulation, we note that the era preceding the late seventies, i.e., 1960 to the mid-1970s, was also a long cold phase of ENSO [see for example the discussion in @ZWB]. In fact, both these transitions are also captured by the Pacific Decadal Oscillation index (see, for example, <http://jisao.washington.edu/pdo>). It should be kept in mind that a period identified as a particular phase of ENSO is interrupted by events of opposite polarity. For example, even though the pause is by and large a cold phase of ENSO, an examination of the PDO index reveals that the hiatus too can be partitioned into pre- and post-2005, where the latter period is dominated by La Niña conditions. 
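The era-to-era comparison described above can be sketched in a few lines of code. The snippet below is only a minimal illustration, not the authors' processing chain: the array `monthly` (years × 12 months × lat × lon) is a synthetic stand-in for gridded monthly rainfall such as GPCP, and the era boundaries and the 200 mm / 3000 mm tail thresholds are the ones quoted in the text.

```python
import numpy as np
from scipy.stats import ks_2samp

def annual_accumulation(monthly):
    """Sum monthly rainfall of shape (years, 12, nlat, nlon) into annual totals (years, nlat, nlon)."""
    return monthly.sum(axis=1)

def era_samples(annual, years, era):
    """Pool every grid point of every year in `era` (a (start, end) tuple) into one 1-D sample."""
    mask = (years >= era[0]) & (years <= era[1])
    return annual[mask].ravel()

def compare_eras(annual, years, era1, era2, lo=200.0, hi=3000.0):
    s1, s2 = era_samples(annual, years, era1), era_samples(annual, years, era2)
    ks_stat, p_val = ks_2samp(s1, s2)          # H0: the two CDFs are identical
    tails = {
        "P(<lo) era1": np.mean(s1 < lo), "P(<lo) era2": np.mean(s2 < lo),
        "P(>hi) era1": np.mean(s1 > hi), "P(>hi) era2": np.mean(s2 > hi),
    }
    return ks_stat, p_val, tails

# Example with synthetic data standing in for GPCP (1979-2013, 2.5 deg tropics: 28 x 72 boxes)
years = np.arange(1979, 2014)
monthly = np.random.gamma(shape=2.0, scale=60.0, size=(years.size, 12, 28, 72))
print(compare_eras(annual_accumulation(monthly), years, (1979, 1997), (1998, 2013)))
```

With real data in place of the synthetic array, the returned KS distance and tail probabilities correspond directly to the quantities discussed above.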
Given that there are no long-term tropical rainfall observations which cover both land and ocean, we utilise Global Precipitation Climatology Centre (GPCC) data [@gpcc], that is based on station observations (only land) and is available from 1950 onwards, at a spatial and temporal resolution of $0.5^\circ$ degree and 1 month, respectively. A difference in climatologies of annual accumulation for the two phase transitions of ENSO, namely, (i) {1979-to-1997} [*vs*]{} {1998-to-2013} (warm-to-cold) and (ii) {1960-to-1978} [*vs*]{} {1979-to-1997} (cold-to-warm) are shown in Figure \[fig:fig2\]b and c, respectively. Not only are changes over land consistent between Figures \[fig:fig2\]a and b, they are also opposite in character to those shown in Figure \[fig:fig2\]c. Having noted the expected changes in climatologies, the specific question we seek to answer is whether a cold-to-warm transition is characterised by a decrease in extremes; i.e., is there a natural modulation of very high and low accumulation associated with ENSO phase transitions? In other words, given that the mean temperature during 1979-1997 was higher than the preceding era (1960s to the mid-1970s), if there is indeed a decrease in global extremes of accumulation, it clearly points to the role of ENSO phase transitions. Extremes and ENSO Transitions ============================= Figure \[fig:fig3\] shows the differences between the normalised histograms of accumulation, based on GPCC data, for two phase transitions of ENSO: (i) {1979-to-1997} $-$ {1960-to-1978} and (ii) {1998-to-2013} $-$ {1979-to-1997}. We note that the changes in very high and low accumulation for the warm-to-cold transition from both GPCP (Figure \[fig:fig1\]b) and GPCC (red fill in Figure \[fig:fig3\]) are qualitatively similar. Further, even though land and ocean rainfall estimates from GPCP have different biases, the consistency with GPCC is reassuring. It is worth noting that the percentage changes in Figure \[fig:fig3\] are smaller than in Figure \[fig:fig1\]b. This could be attributed to the higher spatial resolution of GPCC observations. More strikingly, the changes in extremes from a cold {1960-to-1978} to warm {1979-to-1997} phase of ENSO (blue fill in Figure \[fig:fig3\]) are exactly the opposite of what is observed in a warm-to-cold transition. This shows that relative changes in extremes of tropical accumulation are closely linked to ENSO. To ascertain if there is a spatial character associated with the statistical changes described above, we utilize the crossings at approximately 200 mm and 3000 mm in Figures \[fig:fig1\] and \[fig:fig3\] as thresholds for very low (“dry”) and high (“wet”) accumulation, respectively. Further, the two eras that straddle a particular transition are denoted by E1 and E2 (where E2 follows E1 in time). Using this terminology, we construct “Index maps" that consist of a union of three sets. Specifically, these maps consist of geographical locations that accumulated more (less) than 3000 mm (200 mm) of rain in (i) one or more of the years of E1 and E2 (cyan); (ii) one or more of the years of E2 and none of E1 (blue); (iii) one or more of the years of E1 and none of the years in E2 (red). Thus, locations in blue (red) represent appearance (disappearance) of wet and dry regions in E2 when compared to E1. Consider first, the warm-to-cold transition, i.e., E1={1979-to-1997} and E2={1998-to-2013}, which is captured by both GPCP and GPCC data. 
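Before turning to the maps themselves, the three-set construction just described can be made concrete with a short sketch. This is an illustrative implementation under the assumption that annual accumulation for each era is held in an array of shape (years, nlat, nlon); the integer labels mirror the cyan/blue/red classes defined above.

```python
import numpy as np

def index_map(annual_e1, annual_e2, threshold=3000.0, wet=True):
    """Classify grid points from two eras of annual accumulation, each shaped (years, nlat, nlon).

    Returns an integer map:
    0 = threshold never crossed in either era,
    1 = crossed in at least one year of both eras      ("cyan" in the text),
    2 = crossed in E2 only, i.e. appearance in E2       ("blue"),
    3 = crossed in E1 only, i.e. disappearance in E2    ("red").
    """
    exceed = (lambda a: a > threshold) if wet else (lambda a: a < threshold)
    in_e1 = exceed(annual_e1).any(axis=0)
    in_e2 = exceed(annual_e2).any(axis=0)
    label = np.zeros(in_e1.shape, dtype=int)
    label[in_e1 & in_e2] = 1
    label[~in_e1 & in_e2] = 2
    label[in_e1 & ~in_e2] = 3
    return label

# "Wet" map (>3000 mm) and "dry" map (<200 mm) for the warm-to-cold transition:
# wet_map = index_map(annual[(years >= 1979) & (years <= 1997)], annual[years >= 1998], 3000.0, wet=True)
# dry_map = index_map(annual[(years >= 1979) & (years <= 1997)], annual[years >= 1998], 200.0, wet=False)
```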
As seen from the global GPCP product (Figure \[fig:fig4\]), new high accumulation regions appear near the western maritime continent, over the Indian Ocean and the western part of equatorial South America (blue in Figure \[fig:fig4\]a). This is accompanied by a depletion in the eastern core of the Pacific convergence zone (red). Overall, the “new” locations still lie within climatologically rainy zones, thus indicating a rearrangement of wet regions. The aforementioned changes over land are also seen from GPCC (blue in Figures \[fig:fig5\]a,b). At the other end, the new dry points take the form of spatially coherent fringes and indicate a systematic expansion of the existing dry regions off the western coasts of Australia, and North and South America (blue in Figure \[fig:fig4\]b). As this warm-to-cold transition also involves comparison with on-an-average warmer temperatures (during the pause), it is worth noting that the expansion of dry zones is sometimes linked to global warming [@Marvel][^1]. Furthermore, on land (most clearly seen in Figure \[fig:fig5\]c from GPCC due to its higher spatial resolution), we observe a mixed signal over the Australian continent (red and blue), signs of new dry zones in the western United States and northern Pakistan (blue), along with the disappearance of existing dry regions in southern Africa and a coherent shrinking of the deserts in the Sahel region of northern Africa (red). It is worth noting that these changes over land are in very good agreement with regional reports; specifically, the projected drying of the western United States [2000 onwards; see @Seager-etal-2007], the occurrence of repeated severe droughts over northern Pakistan in the early 2000s[^2], and the reduction of dryness over southern Africa [1998 onwards; see Figure 2 in @Giannini-2008] as well as the Sahel [@sahel]. ![image](fig5){width="75.00000%"} In the cold-to-warm transition (i.e., E1={1960-to-1978} and E2={1979-to-1997}), which is only captured by GPCC data, the geographical changes (over land) seen in Figure \[fig:fig6\] are almost exactly opposite in character to those in Figure \[fig:fig5\]. These changes are consistent with the reversal seen in the two ends of the probability distributions in Figure \[fig:fig3\]. In particular, the maritime continent shows a disappearance of wet points to the west (red), as does the western portion of equatorial South America (red). The same observation holds true for the very dry zones; here, new dry points are seen in southern Africa and the Sahel region (blue), while dryness reduces over Pakistan, western United States and much of continental Australia (red). Thus, the answer to the question posed earlier (at the end of Section 2) is that, there appears to be an intrinsic waxing and waning, of very high and low tropical rainfall associated with ENSO phase transitions. Not only are the changes statistically significant, they also bear a coherent spatial signature. Consistency with trends in regional short-duration extremes =========================================================== Having taken a global point of view, with a focus on annual accumulation, we now assess whether the changes seen above agree with previously reported trends in regional extremes. Beginning with [@IY], there have been numerous reports of increasing trends in short-duration extreme rainfall events across the globe [see, for example, @East-rev; @Groisman-2005; @Venu]. 
In order to show that this increase is in fact consistent with our global accumulation picture (waxing and waning), we use North America (15N-45N; 60W-130W) as an example. The motivation behind this choice stems from the fact that the region considered is large enough, so as to smooth out large local fluctuations, and thus more amenable to study extremes [see, for example, @Groisman-2005 and the references therein]. Following the same procedure as before, we construct histograms and index maps for the two phase transitions over this region. The difference seen in the right tails of the histograms during the cold-to-warm transition (blue fill in Figure \[fig:fig7\]a) indicates an increased likelihood of exceeding annual rainfall of $\sim$1200 mm. Indeed, this increase is marked by an appearance of new wet spots of high accumulation (blue in Figure \[fig:fig7\]b) in {1979-to-1997} as compared to {1960-to-1978} over central US. This matches precisely with the significant short-duration extreme event trends from the mid 1970s to the late 1990s, over the contiguous central US, reported by [@Groisman-2005]. Furthermore, as in the case of global tropics, the statistical and geographic changes associated with a warm-to-cold transition (Figure \[fig:fig7\]c) are opposite in nature to those during a cold-to-warm transition. At first glance, in the cold-to-warm transition, the increase (decrease) in likelihood of encountering very high (low) annual rainfall over North America might appear to be contrary to the global tropical perspective presented earlier (i.e., waning of both wet and dry accumulation extremes; blue fill in Figure \[fig:fig3\]). A closer examination, however, indicates that this is not the case. In fact, the very high accumulation in the North American region forms a subset of the middle portion of the global histogram, and the changes between $\sim$1200 mm and $\sim$2000 mm in Figures \[fig:fig7\]a and \[fig:fig3\] are similar. Discussion ========== An immediate implication of our finding is that a warming signal can be enhanced or subdued depending on the phase of ENSO. However, disentangling their respective contributions to changes in rainfall extremes remains a challenge [e.g., @Shukla]. That said, it is worth asking if our methodology can also shed light on the more gradual contribution of warming. To this end, a comparison of extremes in similar phases of ENSO could prove fruitful. Specifically, given the data at hand, we examined the changes in extreme accumulation in two cases: (i) between two temporally well-separated cold phases of ENSO (1960-1978 vs 1998-2013); and (ii) within the warming era that was mostly a warm phase of ENSO (1979-1987 vs 1988-1997). In both these cases, the average temperature is higher in the respective latter period. The first experiment (whose results can be deduced by summing the two curves in Figure 3) yields a marginal increase on the very wet side, and, in fact, a decrease in dry extremes, neither of which is statistically significant. The second experiment (differences in PDFs not shown) showed almost no change in extremes on either side; in fact, if anything, both very high and low accumulation showed a marginal decrease. Taken together, it is difficult to argue for a clear influence of warming on the extremes of annual accumulation; in fact, internal variability, as governed by the phases of ENSO, appears to play a much more significant role than warming. 
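The regional comparison used above (a latitude-longitude box, differences of normalised histograms, and the accumulation value at which the difference changes sign) can be sketched as follows. The function and variable names are illustrative; the default box is the North America region quoted in the text (15N-45N; 60W-130W), and longitudes are assumed to be in degrees east on [-180, 180].

```python
import numpy as np

def region_mask(lats, lons, lat_range=(15.0, 45.0), lon_range=(-130.0, -60.0)):
    """Boolean (nlat, nlon) mask for a latitude/longitude box."""
    return ((lats >= lat_range[0]) & (lats <= lat_range[1]))[:, None] & \
           ((lons >= lon_range[0]) & (lons <= lon_range[1]))[None, :]

def histogram_difference(sample_e2, sample_e1, bins):
    """Difference of normalised histograms (E2 - E1), as used for the global and regional tails."""
    h2, _ = np.histogram(sample_e2, bins=bins, density=True)
    h1, _ = np.histogram(sample_e1, bins=bins, density=True)
    return h2 - h1

def upper_tail_onset(diff, bins):
    """Smallest accumulation above which the histogram difference stays positive,
    a rough analogue of the ~1200 mm crossing read off the North America comparison."""
    centers = 0.5 * (bins[1:] + bins[:-1])
    nonpos = np.where(diff <= 0)[0]
    if nonpos.size == 0:
        return centers[0]
    return centers[nonpos[-1] + 1] if nonpos[-1] + 1 < centers.size else None

# Usage sketch, assuming annual_e1/annual_e2 have shape (years, nlat, nlon):
# mask = region_mask(lats, lons)
# diff = histogram_difference(annual_e2[:, mask], annual_e1[:, mask], bins=np.arange(0.0, 4000.0, 100.0))
# print(upper_tail_onset(diff, np.arange(0.0, 4000.0, 100.0)))
```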
To summarise, we studied the changes in very low and high tropical rainfall accumulation from a global point of view. The main finding is that there appears to be a fundamental natural mode of variability in tropical rainfall accumulation extremes with the changing phases of ENSO. Specifically, our analysis provides clear observational evidence that a warm-to-cold (cold-to-warm) transition of ENSO is associated with waxing (waning) of extreme accumulation, both statistically and spatially, over the global tropics. The dominant role of ENSO was made clear by comparing accumulation across two transitions; specifically, cold-to-warm (1960-1978 vs 1979-1997) and a warm-to-cold (1979-1997 to 1998-2013), both of which involved a progressive increase in average global temperatures[^3]. Moreover, as illustrated over the continental US, this global modulation is consistent with previously reported trends in short-duration regional extremes. ![image](fig7){width="40.00000%"} 0.5truecm [*Acknowledgements:*]{} JS & VV acknowledge financial support from the Divecha Centre for Climate Change. Discussions with George Huffman and John M. Wallace are greatly appreciated. We thank the Earth System Research Laboratory, NOAA for making available long-term GPCP and GPCC precipitation observations. We thank the two anonymous reviewers for their insightful and constructive comments. [99]{} Adler RF et al. 2003. The Version-2 Global Precipitation Climatology Project (GPCP) monthly precipitation analysis (1979-Present). *Journal of Hydrometeorology* [**4**]{}, 1147-1167. Alexander LV, Uotila P and Nicholls N. 2009. Influence of sea surface temperature variability on global temperature and precipitation extremes. *Journal of Geophysical Research - Atmospheres* [**114**]{}, 10.1029/2009JD012301. Allan RP and Soden BJ. 2008. Atmospheric warming and the amplification of precipitation extremes. *Science* [**321**]{}, DOI:10.1126/science.1160787. Allan RP, Soden BJ, John VO, Ingram W and Good P. 2010. Current changes in tropical precipitation. *Environmental Research Letters* [**5**]{}, DOI:10.1088/1748-9326/5/2/025205. Allen MR and Ingram WJ. 2002. Constraints on future changes in climate and the hydrologic cycle. [*Nature*]{} [**419**]{}, 224-232. Chadwick R, Boutle I and Martin G. 2013. Spatial Patterns of Precipitation Change in CMIP5: Why the Rich Do Not Get Richer in the Tropics. *Journal of Climate* [**26**]{}, 3803-3822. Chou C, Chiang JCH, Lan C-W, Chung C-H, Lia Y-C and Lee C-J. 2013. Increase in the range between wet and dry season precipitation. *Nature Geoscience* [**6**]{}, DOI:10.1038/NGEO1744. Chou C, Neelin JD, Chen C-A and Tu J-Y. 2009. Evaluating the “Rich-Get-Richer” mechanism in tropical precipitation change under global warming. *Journal of Climate* [**22**]{}, 1982-2005. Cowtan K and Way RG. 2014. Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. *Quarterly Journal of the Royal Meteorological Society* [**140**]{}, 1935-1944. Curtis S, Salahuddin A, Adler RF, Huffman GJ, Gu G and Hong Y. 2007. Precipitation extremes estimated by GPCP and TRMM: ENSO relationships. *Journal of Hydrometeorology* [**8**]{}, 678-689. DelSole T, Tippett MK and Shukla J. 2011. A significant component of unforced multidecadal variability in the recent acceleration of global warming. *Journal of Climate* [**24**]{}, 909-926. Dai A and Wigley TML. 2000. Global patterns of ENSO-induced precipitation. *Geophysical Research Letters* [**27**]{}, 1283-1286. 
Easterling DR, Evans JL, Groisman PYa, Karl TR, Kunkel KE and Ambenje P. 2000. Observed variability and trends in extreme climate events: A brief review. *Bulletin of the American Meteorological Society* [**81**]{}, 417-425. Fyfe JC, Gillet N and Zwiers F. 2013. Overestimated global warming over the past 20 years. *Nature Climate Change* [**3**]{}, 767-769. Gershunov A and Cayan DR. 2003. Heavy daily precipitation frequency over the contiguous U.S. Sources of climatic variability and seasonal predictability. *Journal of Climate* [**16**]{}, 2752-2765. Giannini A, Biasutti M, Held IM and Sobel AH. 2008. A global perspective on African climate. *Climatic Change* DOI:10.1007/s10584-008-9396-y. Giorgi F, Im E-S, Coppola E, Diffenbaugh NS, Gao XJ, Mariotti L and Shi Y. 2011. Higher hydroclimatic intensity with global warming. *Journal of Climate* [**24**]{}, 5309-5324. Goswami BN, Venugopal V, Sengupta D, Madhusoodanan M and Xavier PK. 2006. Increasing trend of extreme rain events over India in a warming environment. *Science* [**314**]{}, DOI:10.1126/science.1132027. Greve P, Orlowsky B, Mueller B, Sheffield J, Reichstein M and Seneviratne SI. 2014. Global assessment of trends in wetting and drying over land. *Nature Geoscience* [**7**]{}, 716-721. Grimm AM and Tedeschi RG. 2009. ENSO and extreme rainfall events in South America. *Journal of Climate* [**22**]{}, 1589-1609. Groisman PYa, Knight RW, Easterling DR, Karl TR, Hegerl GC and Razuvaev VN. 2005. Trends in intense precipitation in the climate record. *Journal of Climate* [**18**]{}, 1326-1350. Held IM. 2013. The cause of the pause. *Nature* [**501**]{}, 318-319. Held IM and Soden BJ. 2006. Robust responses of the hydrological cycle to global warming. *Journal of Climate* [**19**]{}, 5686-5699. Iwashima T and Yamamoto R. 1993. A statistical analysis of the extreme events: Long-term trend of heavy daily precipitation. *Journal of the Meteorological Society of Japan* [**71**]{}, 637-640. King AD, Alexander LV and Donat MG. 2013. Asymmetry in the response of eastern Australia extreme rainfall to low-frequency Pacific variability. *Geophysical Research Letters* [**40**]{}, 2271-2277. Lau W.K-M., Wu H-T. and Kim K-M. 2013. A canonical response of precipitation characteristics to global warming from CMIP5 models. *Geophysical Research Letters* [**40**]{}, 3163-3169. Liepert BG and Previdi M. 2009. Do models and observations disagree on the rainfall response to global warming? *Journal of Climate* [**22**]{}, 3156-3166. Liu C and Allan RP. 2012. Multi-satellite observed responses of precipitation and its extremes to interannual climate variability. *Journal of Geophysical Research* [**117**]{}, D03101. Liu C and Allan RP. 2013. Observed and simulated precipitation responses in wet and dry regions 1850-2100. *Environmental Research Letters* [**8**]{}, 034002. Lyon B and Barnston AG. 2005. ENSO and the spatial extent of interannual precipitation extremes in tropical land areas. *Journal of Climate* [**18**]{}, 5095-5109. Mantua  NJ, Hare SR, Zhang Y and Wallace JM. 1997. A Pacific interdecadal climate oscillation with impacts on salmon production. *Bulletin of the American Meteorological Society* [**78**]{}, 1069-1079. Marani M and Zanetti S. 2014. Long-term oscillations in rainfall extremes in a 268 year daily time series. *Water Resources Research* [**18**]{}, DOI:10.1002/2014WR015885. Marvel K and Bonfils C. 2013. Identifying external influences on global precipitation. 
*Proceedings of the National Academy of Sciences* [**110**]{}, 19301-19306. Meehl GA, Arblaster J, Fasullo JT, Hu A and Trenberth KE. 2011. Model-based evidence of deep-ocean heat uptake during surface-temperature hiatus periods. *Nature Climate Change* [**1**]{}, DOI:10.1038/NCLIMATE1229. Monteiro JM, Wallace JM, Sukhatme J and Murtugudde R. 2015. The contribution of ENSO variability to the recent expansion of the tropical belt. *AGU Chapman Conference*, Santa Fe, USA, July 2015. O’Gorman PA. 2012. Sensitivity of tropical precipitation extremes to climate change. *Nature Geoscience* [**5**]{}, 697-700. O’Gorman PA and Schneider T. 2009. The physical basis for increases in precipitation extremes in simulations of 21st-century climate change. *Proceedings of the National Academy of Sciences* [**106**]{}, 14773-14777. Olsson L, Eklundh L and Ardö J. 2005. A recent greening of the Sahel-trends, patterns and potential causes. *Journal of Arid Environments* [**63**]{}, 556-566. Papoulis A and Pillai U. 2002. [*Probability, Random variables and Stochastic Processes*]{}, McGraw Hill, 4th edition. Pendergrass AG and Hartmann DL. 2014. Two Modes of Change of the Distribution of Rain. *Journal of Climate* [**27**]{}, 8357-8371. Polson D, Hegerl GC, Allan RP and Sarojini BB. 2013. Have greenhouse gases intensified the contrast between wet and dry regions? *Geophysical Research Letters* [**40**]{}, DOI:10.1002/grl.50923. Ropelewski CF and Halpert MS. 1987. Global and Regional Scale Precipitation Patterns Associated with the El Niño/Southern Oscillation. *Monthly Weather Review* [**115**]{}, 1606-1626. Ropelewski CF and Halpert MS. 1989. Precipitation patterns associated with the high index phase of the southern oscillation. *Journal of Climate* [**2**]{}, 268-284. Schneider U, Becker A, Finger P, Meyer-Christoffer A, Ziese M and Rudolf B. 2013. GPCC’s new land surface precipitation climatology based on quality-controlled in situ data and its role in quantifying the global water cycle. *Theoretical and Applied Climatology* [**115**]{}, DOI:10.1007/s00704-013-0860-x. Seager R et al. 2007. Model projections of an imminent transition to a more arid climate in Southwestern North America. *Science* [**316**]{}, DOI:10.1126/science.1139601. Seager R, Naik N and Vecchi GA. 2010. Thermodynamic and Dynamic Mechanisms for Large-Scale Changes in the Hydrological Cycle in Response to Global Warming. *Journal of Climate* [**236**]{}, 4651-4668. Trenberth KE. 2009. Conceptual framework for changes of extremes of the hydrological cycle with climate change. *Climate Change* [**42**]{}, 327-339. Trenberth KE. 2011. Changes in precipitation with climate change. *Climate Research* [**47**]{}, 123-138. Trenberth KE and Fasullo JT. 2013. An apparent hiatus in global warming? *Earth’s Future* [**1**]{}, 19-32. Venugopal V and Sukhatme J 2015. Changes in tropical rainfall during the warming hiatus. [*in preparation*]{}. Villafuerte MQ et al. 2014. Long-term trends and variability of rainfall extremes in the Philippines. *Atmospheric Research* [**137**]{}, 1-13. Wallace JM, Rasmussuon EM, Mitchell TP, Kousky VE, Sarachik ES and von Storch H. 1998. On the structure and evolution of ENSO-related climate variability in the tropical Pacific: Lessons from TOGA. *Journal of Geophysical Research* [**103(C7)**]{}, 14241-14259. Wentz FJ, Ricciardulli L, Hilburn K and Mears C. 2007. How much more rain will global warming bring? *Science* [**317**]{}, 233-235. Willems P. 2013. 
Adjustment of extreme rainfall statistics accounting for multidecadal climate oscillations. *Journal of Hydrology* [**490**]{}, 126-133. Xie S-P et al. 2010. Global Warming Pattern Formation: Sea Surface Temperature and Rainfall. *Journal of Climate* [**23**]{}, 966-986. Zhang Y, Wallace JM and Battisti DM. 1997. ENSO-like interdecadal variability: 1900-93. *Journal of Climate* [**10**]{}, 1004-1020. [^1]: This expansion of dry zones is part of a developing story on the widening of tropical Hadley cells, which itself appears to have a strong ENSO component [see, for example, @Joy-new]. [^2]: See for example the documentation of droughts over Pakistan, [http://pakistanweatherportal.com/2011/05/08/history-of-drought-in-pakistan-in-detail](http://pakistanweatherportal.com/2011/05/08/history-of-drought-in-pakistan-in-detail). [^3]: In fact, recent work compares accumulation within the pause (where temperatures are fairly uniform), and once again, an increase in extremes due to a warm-to-cold transition is clearly evident [@VS].
--- abstract: 'Molecular junctions and similar devices described by an energy dependent transmission coefficient can have a high linear response thermoelectric figure of merit. Since such devices are inherently non-linear, the full thermodynamic efficiency valid for any temperature and chemical potential difference across the leads is calculated. The general features in the energy dependence of the transmission function that lead to high efficiency and also high power output are determined. It is shown that the device with the highest efficiency does not necessarily lead to large power output. To illustrate this, we use a model called the t-stub model representing tunneling through an energy level connected to another energy level. Within this model both high efficiency and high power output are achievable. Furthermore, by connecting many nanodevices it is shown to be possible to scale up the power output without compromising efficiency in an (exactly solvable) n-channel model even with tunneling between the devices.' author: - 'Selman Hershfield, K. A. Muttalib and Bradley J. Nartowt' title: 'Non-linear thermoelectric transport: A class of nano-devices for high efficiency and large power output' --- Introduction ============ Thermoelectric materials [@book] can convert unused waste heat to electricity (Seebeck effect) or use electricity for refrigeration (Peltier effect). A good thermoelectric material needs to have a good electrical conductivity $\sigma$, and at the same time a poor thermal conductivity $\kappa$. However, in normal bulk materials the two properties are related and follow the well-known Wiedemann-Franz law given by $\kappa/\sigma T=\pi^2k_B^2/3e^2$, where $T$ is the temperature, $k_B$ is the Boltzmann constant, and $e$ is the electric charge. As a result it has not yet been possible to find bulk thermoelectric materials efficient enough to be cost effective except in specialized applications like space travel. The subject has gained a lot of attention in recent years due to the increasing prospect of enhanced efficiency by nanostructural engineering [@review1; @review2; @majumdar; @datta1; @flensberg; @buttiker; @imry; @sanchez; @whitney; @jordan]. It seems possible to control the electrical and thermal properties independently in such nanosystems [@dresselhaus; @reddy]. The effectiveness of a thermoelectric material is usually estimated by its thermoelectric figure of merit $ZT \equiv S_e^2T\sigma/\kappa$, where $S_e$ is the thermopower or Seebeck coefficient, and $\kappa$ contains contributions from electrons as well as phonons. Currently, the best materials have $ZT \sim 1$, while it is estimated that $ZT > 3$ would be industrially competitive [@majumdar]. Mahan and Sofo [@mahan] considered the optimization of the figure of merit as a mathematical problem and found that for an ideal delta-function distribution of the transmission function $\mathcal{T}(E)$ as a function of energy $E$, the figure of merit diverges in the absence of any phonon contribution to the thermal conductivity [@majumdar; @hochbaum; @boukai]. It was argued later [@murphy] that a molecular junction can also give rise to a diverging figure of merit. Other theoretical models have also predicted large $ZT$ values for nanosystems, e.g. for double quantum dots [@ws]. While there is reason to be optimistic about the prospects for making useful nanostructure thermoelectric devices, there are a number of major issues which need to be addressed. In this paper we address three of them. 1\. 
The figure of merit for bulk systems, $ZT$, is derived in the linear response regime. This is quite reasonable in bulk systems because one is expanding in the gradients of the temperature and the electrical potential. It is possible to have large temperature differences across a sample and yet small gradients in temperature within the sample. For the type of nanostructure considered above and in this paper, the temperature and electrical potential gradients occur on the nanometer scale, leading to enormous gradients. Thus, in nanostructures the interesting regime for extracting energy is the nonlinear response regime. What is the response of high $ZT$ nanostructure devices in the nonlinear response regime? 2\. Also, from bulk systems one would expect that a higher thermodynamic efficiency would lead to a higher power output. We will show in the next section that there are some models for nanostructures which lead to the maximum thermodynamic efficiency in the limit where the power output goes to zero. This is possible because the efficiency, $\eta$, is the ratio of the electrical energy extracted to the heat transfer between reservoirs. This ratio can approach the Carnot limit as both terms in the ratio go to zero. Is it possible to achieve large power output and high efficiency simultaneously in nanostructured devices? 3\. In nanostructures the currents and heat transfers are very small compared to macroscopic electrical currents and heat transfers. To make nanostructure devices useful for extracting energy on the macroscopic scale, one needs to scale up the response. The obvious way to do this is to put many nanoscale devices in parallel. If these devices are very far apart, then clearly the power output scales with the number of devices. However, as the devices are put more closely together to optimize the power output per unit area, there will eventually be tunneling between devices. How does tunneling between nanoscale thermoelectric devices affect their efficiency and power output? While there have been electronic structure and transport calculations through molecular junctions [@datta1; @finch; @liu], in this paper we consider a model called the t-stub [@stadler], which has the important features of more specific calculations such as a rapidly varying transmission coefficient near the Fermi energy [@abbout]. This model has been used to understand tight binding and density functional theory calculations of tunneling through molecules and has even been parametrized for specific molecules. The t-stub model is closely related to models used for interference in wave guides, and interference is the mechanism that produces the rapidly varying transmission coefficient. While more realistic models of granular semiconductors [@tripathi; @glatz] have been studied in the linear response regime, the t-stub model is suited toward answering the general questions above about nonlinear response, optimizing power and efficiency, and tunneling between nanoscale devices. Within this model we will still be able to estimate and compare the power output of nanoscale devices to present commercial devices. The rest of the paper is organized as follows. In Sec. II we present the formalism for doing non-linear response and obtain from thermodynamic arguments the criteria for obtaining both a large power output and a high efficiency in the non-linear response regime. In Sec. III the t-stub model is solved in the nonlinear transport regime. 
It is shown that for the parameters chosen based on the insight developed from Sec. II one can obtain large efficiency and power output simultaneously. In Sec. IV the effect of coupling many t-stub devices in parallel is calculated as a function of the number of devices coupled. In Sec. V we discuss the inclusion of phonon contributions to the thermal conductivity, and in Sec. VI we summarize our findings for the three questions posed in this introduction. Some technical details are presented in the Appendices. Efficiency and Power Output for Non-linear Response =================================================== A thermodynamic heat engine takes heat $Q_L$ from a reservoir kept at temperature $T_L$, does work $W$, and releases heat $Q_R$ to a reservoir kept at a lower temperature $T_R$. The efficiency is defined as the ratio of work done to the heat extracted from the high temperature reservoir: $\eta\equiv W/Q_L=1-Q_R/Q_L$, where the latter follows from the conservation of energy. The power output, on the other hand, is the product of the charge current and the voltage drop across the device. In our notation it is given by $P = (\mu_R-\mu_L)I_N$ where $I_N$ is the number current and $\mu_L$ and $\mu_R$ are the chemical potentials of the left and the right leads, respectively. In the following we fix $\mu_L$, and $\mu_R$ is determined by the load connected to the thermoelectric generator. In terms of an energy dependent transmission function $\mathcal{T}(E)$ the power output can be written as $$P = (\mu_R-\mu_L)\frac{1}{h}\int dE\, \mathcal{T}(E)F(E); \qquad F(E)\equiv f_L(\mu_L,T_L; E)-f_R(\mu_R,T_R; E) \label{eq-power}$$ where $f_j(\mu_j,T_j;E)\equiv 1/(1+e^{(E-\mu_j)/k_BT_j})$ are the Fermi functions in the two leads. Using this notation the efficiency can be written in terms of the transmission function as $$\eta=\frac{(\mu_R-\mu_L)\int dE\, \mathcal{T}(E)F(E)}{\int dE\, (E-\mu_L)\,\mathcal{T}(E)F(E)}. \label{eq-eta}$$ This expression, together with Eq. (\[eq-power\]), allows us to optimize the efficiency as well as the power output by carefully matching $\mathcal{T}(E)$ for a given $F(E)$. Figure \[fig-FofE\] shows an example of $F(E)$ for an arbitrarily chosen set of parameter values (in units of $\mu_L=1$, $h=1$) as shown in the figure caption. ![\[fig-FofE\](Color online) Optimizing transmission function $\mathcal{T}(E)$ in the non-linear regime: (a) Difference of the Fermi functions $F(E)\equiv f_L(E)-f_R(E)$ (solid black curve) with $T_L=0.5$, $\mu_L=1.0$, $\mu_R=1.68$, and $\hat{E}=1.7$ and a square-wave $\mathcal{T}(E)$ (dashed red curve) starting at $E=\hat{E}$ and ending at some arbitrary $E_m$. Note that here $\hat{E} > \mu_R > \mu_L$. (b) The efficiency $\eta$ (solid line) and the power output $P$ (dashed line) as functions of the parameter $E_m$. Note that the power output is zero where the efficiency is maximum.](Fig-1a.eps){width="0.3\textheight"} ![image](Fig-1b.eps){width="0.3\textheight"} Note that $F(E)$ crosses over from negative to positive values at $E=\hat{E}$, obtained from $(\hat{E}-\mu_L)/T_L= (\hat{E}-\mu_R)/T_R$. 
Solving for $T_R$ gives $$T_R=T_L\,\frac{\hat{E}-\mu_R}{\hat{E}-\mu_L}; \qquad F(\hat{E})=0. \label{eq-TR}$$ Thus one sees that for a given $T_L$ and $T_R$ with $0 < T_R < T_L$, both chemical potentials must lie on the same side of the parameter $\hat{E}$, satisfying the inequalities $\hat{E} \ge \mu_R \ge \mu_L$ (case I) or $\mu_L \ge \mu_R \ge \hat{E}$ (case II). For definiteness, and without loss of generality, we will only consider case I in all our examples and discussions. It might seem from Eqs. (\[eq-power\]), (\[eq-eta\]) that one can simply increase the power output $P$, and hence the efficiency $\eta$, by arbitrarily increasing the difference in chemical potentials between the two leads. However, the number current $I_N$ is maximum when the generator is in ‘short circuit’ (without any external load resistance) and $\mu_R=\mu_L$, while $I_N\to 0$ when $\mu_R-\mu_L = \Delta\mu_0$ where $\Delta\mu_0$ depends on the energy dependence of the transmission function. (Beyond $\Delta\mu_0$, the current changes sign and the device takes in energy rather than generating it.) The power output $P$ (and hence the efficiency $\eta$) is therefore zero at both these limits. The choice of $\mu_R$ that maximizes $P$ within these limits is usually quite different from the choice that maximizes $\eta$. As a simple soluble example, let us consider $\mathcal{T}(E)\propto \mathcal{T}_0\delta(E-E')$, $E' > \hat{E}$, leading to $P\propto (\mu_R-\mu_L)\mathcal{T}_0F(E')$ and $\eta= (\mu_R-\mu_L)/(E'-\mu_L)$. Then $P$ is maximized if $E'$ is chosen to coincide with the maximum of $F(E)$, while $\eta$ is maximized if $E'$ is chosen to be equal to its smallest allowed value, $\hat{E}$. Note that if $E'=\hat{E}=\mu_R$ exactly, one gets the ideal efficiency $\eta=1$. This is consistent with the result that an ideal delta function form of the transmission function can lead to a divergent figure of merit (in the absence of phonons) [@mahan]. However for this choice, one has $F(E=\hat{E})=0$, which implies $P=0$. This is an ideally efficient but completely useless generator. This is why considering the efficiency (or the figure of merit) without considering the power output can be misleading. The general form of $F(E)$ shown in Figure \[fig-FofE\] provides significant insights into the features of $\mathcal{T}(E)$ that could be helpful to optimize both efficiency and power. For example, for a given $F(E)$, one can minimize the *cancellations* from positive and negative parts of $F(E)$ contributing to $P$ if $\mathcal{T}(E)$ is chosen to have negligible weight in the entire range where $F(E) <0$. (Note that $\mathcal{T}(E)$ cannot be negative, and we only consider case I. For case II, a similar condition would mean having $\mathcal{T}(E)\to 0$ in the entire range where $F(E) >0$.) This insight, together with the results provided by the delta-function model considered above, immediately suggests certain design criteria for a good thermoelectric material. First, it should have a tunable phenomenon leading to a negligible value for $\mathcal{T}(E)$ in a range of $E$ dictated by $F(E)$. For example, a square $\mathcal{T}(E)$ starting at $E=\hat{E}$ and ending at some arbitrary $E_m$ as shown in Figure \[fig-FofE\] avoids the negative parts of $F(E)$ and at the same time takes advantage of the maximum of $F(E)$. Second, any design has to optimize the power output and the efficiency simultaneously, as opposed to maximizing one or the other. 
In the example of Figure \[fig-FofE\], the power output $P$ is zero where $\eta$ is maximum, and as a function of the width of the square transmission function the power increases while the efficiency decreases [@whitney1]. Third, we note that while the ‘strength parameter’ $\mathcal{T}_0$ in the delta-function model drops out of the efficiency, it directly increases the power output. Thus it should be possible to optimize $\eta$ and $P$ for a single channel device, while the power output can subsequently be made large by ‘scaling up’ the number current by increasing the number of channels. In the following we will first consider a single chain model that allows us to tune the energy dependence of the transmission function in a desirable way, and then discuss possible ways of scaling up the power output. While in this work we will consider the full non-linear regime, the connection of the thermodynamic efficiency $\eta$ with the figure of merit $ZT$ mentioned in the Introduction (and valid only in the linear response regime) is briefly discussed in Appendix I. In particular, the goal of $ZT\ge 3$ can be rewritten as $\eta \ge 0.3 \times \Delta T/T$ where $T$ is the average temperature, and the linear response regime implies $\Delta T/T \ll 1$. In order to be able to make a valid comparison, we will define $\eta = \eta_c \times \bar{\eta}$ where $\eta_c$ is the Carnot efficiency. Thus an industrially competitive $\eta$ would mean $$\eta/\eta_c \ge 0.3; \qquad \eta_c\equiv 1-\frac{T_R}{T_L}. \label{goal}$$ In the non-linear regime $\eta_c$ need not be much smaller than unity. In the context of space travel the temperature differences can be quite large with $T_L \gg T_R$ and $\eta_c \approx 1$. However for harnessing waste energy on the earth, we will need $T_R$ to be at room temperature and $T_L \sim 450$ K, approximately the temperature of a running automobile engine. In other words, the goal is not only to have $\eta/\eta_c \ge 0.3$, but also to have it for $\eta_c\sim 1/3$. We will see below that our model achieves both. A Model System ============== Although a square-wave $\mathcal{T}(E)$, as considered in Figure 1, would be ideal, it is not clear how such a shape can be obtained in practice in a nano-system given the fact that any tunnel-barrier designed to cut off the transmission in a desired energy range would have its own inherent interference effects that would destroy the sharpness of the cutoff. Our goal here is to take advantage of the interference effects in producing a $\mathcal{T}(E)$ as close to the square-wave as possible, keeping in mind that the device has to be geometrically scalable to increase the power output. For these reasons, it is more convenient to start with a simple exactly solvable microscopic model which has the potential to achieve any desired $\mathcal{T}(E)$ and which, at the same time, is geometrically scalable. A single chain model with two channels, one purely electronic and another phonon-assisted, is expected to show a dip in the electron transmission as a function of energy due to destructive interference between the channels. This idea leads to the simplest model that allows us to tune the energy dependence of $\mathcal{T}(E)$. Consider the model shown in Figure \[fig-toy-phonon\] where an isolated chain is attached to an extra site on the side (site 4 in Fig. 2). This is known in the literature as the $t$-stub model, and as noted in the Introduction, has been used to make connections with realistic molecular junctions [@stadler]. 
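Before specifying the model further, the thermodynamic bookkeeping of the previous section can be checked with a few lines of code. The sketch below assumes $k_B=1$ and energies in units of $t$; it solves the crossover condition $(\hat{E}-\mu_L)/T_L=(\hat{E}-\mu_R)/T_R$ for $T_R$, evaluates the Carnot efficiency, and works out the $\eta \ge 0.3\,\eta_c$ target for a 450 K / 300 K engine, reproducing the $T_R/T_L=2/3$ and $\eta_c=1/3$ values used later in the paper.

```python
def T_R_from_crossover(mu_L, mu_R, E_hat, T_L):
    """Cold-lead temperature fixed by F(E_hat) = 0, i.e. (E_hat - mu_L)/T_L = (E_hat - mu_R)/T_R."""
    return T_L * (E_hat - mu_R) / (E_hat - mu_L)

def carnot(T_L, T_R):
    return 1.0 - T_R / T_L

# Thermodynamic parameters used for the single-chain example (units of t, k_B = 1)
mu_L, mu_R, E_hat, T_L = 0.65, 1.0, 1.7, 0.5
T_R = T_R_from_crossover(mu_L, mu_R, E_hat, T_L)
print(T_R / T_L, carnot(T_L, T_R))      # -> 2/3 and 1/3

# "Industrially competitive" target eta >= 0.3 * eta_c for T_L = 450 K, T_R = 300 K
print(0.3 * carnot(450.0, 300.0))       # -> 0.1
```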
The extra site may be regarded as either an energy level within the same atom as site 2 in Fig. 2, or an energy level on a neighboring atom. In the case when the occupancy of site 2 is small, it may also be regarded as corresponding to a single virtual phonon excitation. This latter analogy breaks down when the occupancy of site 2 is not small, which is the case considered here. The Hamiltonian $H$ of the isolated chain is given by $$H=\left( \begin{array}{cccc} V & t_1 & 0 & 0\\ t_1 & V_1 & t_1 & t_3\\ 0 & t_1 & V & 0\\ 0 & t_3 & 0 & V_0 \end{array} \right).$$ The retarded Green function with the leads is $$G^R = \left[E-H-\Sigma\right]^{-1}$$ where $\Sigma_{11}=\Sigma_L$, $\Sigma_{33}=\Sigma_R$ and all other $\Sigma_{ij}=0$. The ‘self energies’ $\Sigma_{L,R}$ are due to coupling to the left and the right leads, respectively [@datta], and are given by $$\Sigma_{L,R}=t^2g_{L,R}=-te^{ika} \label{Sigma-el}$$ where $t$ is the hopping element in the leads. Here $g_{L,R}$ is the surface Green function of the left or right semi-infinite lead, $k$ is the incident wave vector and $a$ is the lattice constant in the two leads; we have assumed symmetric leads. The tight binding model in the leads corresponds to $V=2t$ and $E=2t(1-\cos ka)$, so that the bandwidth is $\mathcal{W}=4t$. In the following, in all our examples, we will always choose our energy parameters in units of $t$. ![ One dimensional model consisting of a chain connected to a central site $2$ and an extra site $4$ with site energies $V_1$ and $V_0$, respectively. The hopping parameter between sites 2 and 4 is $t_3$. Site $2$ is connected to site $1$ in the left lead $L$ and to site $3$ in the right lead $R$ with hopping parameters $t_1$. Leads $L$ and $R$ are characterized by hopping parameters $t$ and site energies $V$. The site $4$ is not directly connected to any of the leads. We will refer to this as the t-stub model.[]{data-label="fig-toy-phonon"}](Fig-2.eps){width="0.3\textheight"} The Green function across the chain $G_{13}$ is $$G_{13} = \frac{t_1^2}{(E-V-\Sigma_L)(E-V_1-\Sigma_2)(E-V-\Sigma_R)-t_1^2\left[(E-V-\Sigma_L)+(E-V-\Sigma_R)\right]}$$ where $\Sigma_2 \equiv t_3^2/(E-V_0)$. For symmetric leads, defining $ a_0\equiv E-V-\Sigma_{L,R}=\frac{|t|^2}{\Sigma_{L,R}}, $ we can rewrite it as $$G_{13}=g_0\equiv\frac{E_0\, t_1^2}{a_0 D_1}, \label{G13}$$ where $D_1\equiv a_0(E_0E_1-t_3^2)-2E_0t_1^2$, $E_0\equiv E-V_0$ and $E_1\equiv E-V_1$. The transmission coefficient $\mathcal{T}$ is then given by $$\mathcal{T} = v_Lv_R|G^R_{13}|^2 = E(4t-E)|g_0|^2, \label{Tdef}$$ where $v_L$, $v_R$ are the left and right channel velocities, respectively, given by $v_L=v_R=2t\sin ka=\sqrt{E(4t-E)}$. Since $g_0\propto E_0$, this exhibits a zero at the resonant energy $E=V_0$, in addition to the zeros at the band edges. Note that the parameter $V_0$ allows us to tune the position of the dip, while $V_1$ and $t_3$ can be used to vary the width. To some extent, these molecular parameters can be tuned by an electric field [@park] or possibly by an external gate voltage. In particular, the parameters can be chosen to generate a $\mathcal{T}(E)$ which has the desired feature of being negligible for a range of $E$ where $F(E)$ is negative. In order to avoid any artificial effects from the band edges, we will always use thermodynamic parameters such that $F(E)$ is negligible at the upper band edge. Since $\mathcal{T}(E)$ for our choice of $t_3$ is negligible at the lower band edge, the product $\mathcal{T}(E)F(E)$, and hence the resulting power and efficiency, will be largely insensitive to the cut-off in the model at either band-edge. 
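A numerical sketch of the single-chain calculation is given below. It is based on the expressions as reconstructed above, in particular Eqs. (\[G13\]) and (\[Tdef\]), together with Eqs. (\[eq-power\]) and (\[eq-eta\]) with $h=k_B=1$; the printed numbers are therefore only expected to match the figures to the extent that this reconstruction matches the original. Parameter values are those used for Figure \[fig-TofE-SingleChain\].

```python
import numpy as np

t, t1, t3, V0, V1 = 1.0, 1.0, 2.5, 0.8, 0.8      # t-stub parameters (units of t)

def transmission(E):
    """T(E) of the single-chain t-stub model over the band 0 < E < 4t."""
    ka = np.arccos(1.0 - E / (2.0 * t))           # from E = 2t(1 - cos ka)
    a0 = E - 2.0 * t + t * np.exp(1j * ka)        # a0 = E - V - Sigma, with Sigma = -t e^{ika}, V = 2t
    E0, E1 = E - V0, E - V1
    D1 = a0 * (E0 * E1 - t3**2) - 2.0 * E0 * t1**2
    g0 = t1**2 * E0 / (a0 * D1)
    return E * (4.0 * t - E) * np.abs(g0) ** 2

def fermi(E, mu, T):
    return 1.0 / (1.0 + np.exp((E - mu) / T))

def power_and_efficiency(mu_L, mu_R, T_L, T_R, nE=4000):
    """P = (mu_R - mu_L) * I_N and eta = P / I_e, with h = k_B = 1."""
    E = np.linspace(1e-6, 4.0 * t - 1e-6, nE)
    F = fermi(E, mu_L, T_L) - fermi(E, mu_R, T_R)
    TF = transmission(E) * F
    I_N = np.trapz(TF, E)
    I_e = np.trapz((E - mu_L) * TF, E)
    P = (mu_R - mu_L) * I_N
    return P, P / I_e

mu_L, mu_R, T_L, T_R = 0.65, 1.0, 0.5, 1.0 / 3.0
P, eta = power_and_efficiency(mu_L, mu_R, T_L, T_R)
print(P, eta / (1.0 - T_R / T_L))   # compare with P > 0.001 (units of t^2/h) and eta/eta_c ~ 0.4 quoted below
```

Replacing `transmission` with any other model (for instance the square-wave of Figure \[fig-FofE\]) leaves the rest of the sketch unchanged, which is how the trade-off between efficiency and power can be explored for different transmission shapes.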
![(Color online) The single chain t-stub model: (a) Transmission function $\mathcal{T}(E)$ with the extra side level (solid black curve), corresponding to $t_3=2.5$ and without the extra side level (dashed red curve), corresponding to $t_3=0$, all in units of $t$ (bandwidth $\mathcal{W}=4$). The parameters used for both cases are $t_1=1$, $V_0=V_1=0.8$. For comparison, the blue dash-dotted curve shows $F(E)$ for $\mu_L=0.65$, $T_L=0.5$, $\mu_R=1.0$ and $\hat{E}=1.7$ such that $\mu_L < \mu_R < \hat{E}$. (b) The efficiency $\eta/\eta_c$ corresponding to the $F(E)$ and the two $\mathcal{T}(E)$ shown in (a), as functions of $\mu_L$: red dotted line corresponds to the case $t_3=0$ and the black solid line is for $t_3=2.5$. Note the order of magnitude increase in efficiency showing the importance of the interference effects from the extra side level. (c) Power output $P$ (in units of $t^2/h$) corresponding to the $\mathcal{T}(E)$ in (a) with $t_3=2.5$.[]{data-label="fig-TofE-SingleChain"}](Fig-3a.eps){width="0.3\textheight"} ![image](Fig-3b.eps){width="0.3\textheight"} ![image](Fig-3c.eps){width="0.3\textheight"} Now we show the importance of the extra site. In the example shown in Figure \[fig-TofE-SingleChain\] we compare two cases, one with $t_3=0$ (no side level) and the other with $t_3=2.5$ (in units of $t=1$), together with a given choice of $F(E)$. We have chosen $V_0$ and $V_1$ to produce a negligible $\mathcal{T}(E)$ for $E<\hat{E}=1.7$ for the choice $t_3=2.5$. 
The resulting efficiencies $\eta/\eta_c$, for fixed values of $T_L=0.5$ and $\mu_R=1.0$ are shown in the middle panels (a) and (b) of Figure \[fig-TofE-SingleChain\] as a function of $\mu_L$, where we keep $\hat{E}=1.7$ fixed in order to take advantage of the feature in $\mathcal{T}(E)$ in the top panel. The range of $\mu_L$ is restricted, in each case, by the requirement that $I_N > 0$. Note that while the maximum $\eta/\eta_c$ without the side level is only $\eta/\eta_c\approx 0.04$ (red dashed line), it can be one order of magnitude larger for the finite value of $t_3$ chosen here (black solid line). Clearly, the matching of $\mathcal{T}(E)$ with $F(E)$, tuned with the help of the interference associated with the side level, can be an effective tool to increase the efficiency of a nano-engineered thermoelectric material. Indeed when compared with Eq. (\[goal\]), the increased efficiency in the above example exceeds the threshold for industrial competitiveness. Moreover the maximum value $\eta/\eta_c\approx 0.4$ occurs for $\mu_L=0.65$, for which $T_R/T_L=2/3$ or $\eta_c=1/3$. As mentioned earlier, this fulfills the requirement for a practical device to harness waste heat energy from the environment. As for the power $P$ it is important to note that although, as warned before, the maximum of $P$ and $\eta$ do not occur for the same value of $\mu_L$, there is a range of $\mu_L$ for which $\eta/\eta_c > 0.3$ and $P > 0.001$ (in units of $t^2/h$) simultaneously. However, $P\sim 10^{-3}$ is unacceptable as a practical device. For example $T_R=300 K$ in the above example of Figure \[fig-TofE-SingleChain\] corresponds to $t = 3k_BT_R$. This implies $P=10^{-3}\times t^2/h \sim 10^{-10}$ Watts. Comparing with currently available bulk commercial devices [@selman] with $P\approx 4000$ Watts/m$^2$ for the power per unit area of the thermoelements, it is clear that it is absolutely essential to be able to geometrically ‘scale up’ the model in order to obtain the necessary power output. We emphasize that the values of the microscopic model parameters chosen for illustration in Fig. \[fig-TofE-SingleChain\] are not the ‘best’ (fine-tuned) values. In fact, while we have not explored the entire parameter space systematically, a simple parameter sampling of $3\times 10^3$ possible sets of the parameters around the chosen values, as explained in Figure \[montecarlo\], shows two important features. ![Efficiency and power for randomly chosen $3\times 10^3$ sets of parameters within the range $t_1=1\pm 1, t_3=2.5 \pm 1$ and $V_0,=V_1=0.8 \pm 2$, keeping the thermodynamic parameters $\mu_R=1.0$ and $\hat{E}=1.7$ fixed and choosing $\mu_L=0.65$. The dashed lines demarcate the parameters for which one can have $\eta/\eta_c > 0.3$ and $P > 0.001$ simultaneously. For comparison, the blue solid line shows the maximum $P$ for a given $\eta/\eta_c$ as obtained for an ideal square-wave between $\hat{E}$ and $E_m$ as shown in Figure 1, for $\hat{E} <E_m<4t$. The red dash-dotted line corresponds to the case where the square-wave is extended into the negative part of $F(E)$. Note that in this case, the square-wave is no longer the limiting envelop. []{data-label="montecarlo"}](Fig-4.eps){width="0.3\textheight"} First, our results are quite generic in the sense that typically a variety of different combinations of the parameters will give similar values of $\eta$ and $P$. 
Second, fine-tuning the parameters could actually increase $\eta$ significantly towards the maximum envelop obtained for an ideal square-wave form of $\mathcal{T}(E)$, shown by the blue solid line. In the square-wave case, starting from the Mahan-Sofo limit of ideal efficiency and zero power for $E_m\to \hat{E}$, $P$ increases (and $\eta/\eta_c$ decreases) as $E_m$ increases, the maximum of $P= 0.0065$ occuring for $E_m=4t$ (the band edge), corresponding to the maximum width for which $F(E)$ is positive. Increasing the width of the square wave any further requires including the negaive part of $F(E)$ which, as discussed in Sec II, decreases $P$ significantly in the region covered by the red dash-dotted line in Figure \[montecarlo\]. Clearly the square-wave no longer corresponds to an optimum envelop in this regime since reducing the ideal transmission of a square-wave in the negative $F(E)$ regime with any other shape would lead to an increase in $P$. While the density of the points in Figure \[montecarlo\] does not have any meaning, it is clear that a wide range of parameters is available where $\eta/\eta_c > 0.3$ and $P > 0.001$ simultaneously. Nevertheless, the maximum of $P$, even for an ideal square-wave, remains unacceptably small, $P \approx 0.007$, unless it can be geometrically scaled up. In the following section we extend the model to $n$ number of coupled chains, which turns out to be still exactly soluble. The scaled up model: $n$ chains =============================== We extend the single chain $t$-stub model to a system with $n$ number of chains, each consisting of a ‘left’ and a ‘right’ site $p_k$ and $q_k$, respectively, with site energies $V=2t$, and a ‘middle’ site $R_k$ with site energy $V_1$. The lead-junction hopping parameters are given by $V_{p_kR_k}=V_{R_kq_k}=t_1$. Each site $R_k$ has a side level with hopping element $t_3$ connected to a site with energy $V_0$, as shown in Figure \[fig-toy-phonon-scaled\]. Clearly, if the chains are independent, the total power would simply scale with the number of chains. However, in a nano-system, two chains nearby are always coupled due to quantum tunneling via nearby atoms, and it is not clear if the interference effects in the single chain would survive in the presence of multiple possible paths generated by interchain couplings. Here we will consider the simplest case where the chains are connected at sites $R_k$ with hopping parameters $V_{R_kR_{k+ 1}}=V_{R_kR_{k-1}}=t_0$. ![The scaled up quasi-1d chain, connecting site $R_j$ with energy $V_1$ of one chain to the corresponding site $R_{j\pm 1}$ in the neighboring chain, with hopping parameter $t_0$, leaving the bond $t_3$ as the only connection to the $V_o$ site. The parameters for each individual chain as well as the left and right leads $L$ and $R$ are the same as in Figure \[fig-toy-phonon\].[]{data-label="fig-toy-phonon-scaled"}](Fig-5.eps){width="0.3\textheight"} We will start with $n-1$ chains and call the Green functions $G^0_{p_jq_k}$ for arbitrary $j, k\le n-1$. We will then add the $n$th chain (with site $R_n$ connected to site $R_{n-1}$ by hopping parameter $t_0$) and evaluate the resulting $G_{p_jq_k}$ recursively. From symmetry, we will only need $G_{p_jq_k}$ for $j\le k\le n$. In order to evaluate $G_{p_jq_k}$ for $j\le k\le n$ we will need the ‘building blocks’ $G_{R_jR_k}$, $G_{p_jR_k}$ and $G_{R_jq_k}$. To start with, we note that $G_{R_nR_n}$ satisfies the recursion relation \[recXn\] X\_n=; X\_n G\_[R\_nR\_n]{}, where we defined \[b\] b()\^2. 
Note that $X_1=1$. The solution to Eq. (\[recXn\]) is given by [@mathematica] \[solnXn\] X\_n=; . Now consider $G_{R_nq_k}$, which satisfies a recursion relation \[recYnk\] Y\_[n,k]{}=X\_nY\_[n-1,k]{}; Y\_[n,j]{}G\_[p\_jR\_n]{}=G\_[R\_nq\_j]{}; j &lt; n. The solution to Eq.(\[recYnk\]) is given by Y\_[n,k]{}=X\_k\_[m=k]{}\^[n-1]{}X\_[m+1]{}; k&lt;n. We are now in a position to evaluate the Green functions across the chain. In terms of the $X$ and $Y$ functions, the recursion relations eventually lead to the following expressions: G\_[p\_kq\_k]{} &=& \[1+bX\_kX\_[k-1]{}\]+ t\_0\_[m=k]{}\^[n-1]{}Y\_[m+1,k]{}Y\_[m,k]{};&& k &lt; n-1 \[GpkqkXY\] and G\_[p\_jq\_k]{} &=& X\_kY\_[k-1,j]{}+ t\_0\_[m=k]{}\^[n-1]{}Y\_[m+1,k]{}Y\_[m,j]{}; && j &lt; k &lt; n-1. \[GpjqkXY\] Note that the transmission involves $\sum |G_{p_jq_k}|^2$, so the above expression as a sum over sites gets very complicated. However it turns out that by using the recursion relation (\[recXn\]), it is possible to rewrite them as products instead of sums. In particular, one gets \[GpkqkX\] G\_[p\_[k]{}q\_[k]{}]{}=g\_0 . The proof is given in Appendix II. Given the above, the expression for $G_{p_jq_k}$ can be simplified as \[GpjqkX\] G\_[p\_jq\_k]{} =Q\_[j,k]{}G\^n\_[p\_kq\_k]{}; Q\_[j,k]{} ()\^[k-j]{}\_[m=j]{}\^[k-1]{} X\_m. Defining $z \equiv (1-\alpha)/(1+\alpha)$, the expressions for the Green functions can be rewritten as G\_[p\_kq\_k]{} &=& ; G\_[p\_jq\_k]{} &=& (1-z\^j)(1-z\^[n-k+1]{}) \[Gz\] so that the transmission function becomes: \[TEalln\] &&(E) = v\^2&=& \_n\[\_[k=1]{}\^[n]{}|1-z\^k|\^2|1-z\^[n-k+1]{}|\^2 &+& 2\_[k=2]{}\^[n]{} |1-z\^[n-k+1]{}|\^2 \_[j=1]{}\^[k-1]{}|z|\^[k-j]{}|1-z\^j|\^2\] where $v$ is the channel velocity and \[Tn\] \_n v\^2. This is the exact result for the transmission function of the $n$-chain model. It is possible to sum the terms analytically, but for a given finite $n$ it is easier to evaluate them directly numerically. Figure \[fig-TofE-n\] shows evaluation of $\mathcal{T}(E)$ from Eq. (\[TEalln\]) for identical single chain parameters as used in Figure \[fig-TofE-SingleChain\] (namely $t_1=1$, $t_3=2.5$ and $V_1=V_0=0.8$), and using the interchain hopping parameter $t_0=1$, with $n=50$. ![Transmission $\mathcal{T}(E)$ for identical set of parameters as in the single chain model shown in Figure \[fig-TofE-SingleChain\], with interchain coupling $t_0=1$ and the number of chains $n=50$. Maximum efficiency remains similar to the single chain model while the maximum power output scales with $n$.[]{data-label="fig-TofE-n"}](Fig-6.eps){width="0.3\textheight"} Note that the maximum of $\mathcal{T}(E)$ for $n=50$ is almost a factor $50$ larger compared to $\mathcal{T}(E)$ for $n=1$ shown in Fig \[fig-TofE-SingleChain\], and it has the same features helpful for obtaining a large efficiency. Thus essentially $\mathcal{T}(E)$ scales with $n$, at least for the chosen values of the parameters. For comparison with the single chain case we choose the same thermodynamic parameters $\mu_R=1.0$, $\hat{E}=1.7$ and $\mu_L=0.65$, which gives $\eta/\eta_c=0.43$ and $P=0.1$. Thus while the efficiency remains undiminished (in fact it is slightly enhanced), the power output scales with $n$ as expected. In other words, it is possible to scale up the single chain power without compromising the efficiency, by simply increasing the number of chains. The estimate that $\mathcal{T}(E)$ and hence $P$ essentially scales with $n$ for large $n$ can be understood from a simple continuum ($n\to \infty$) limit. 
As shown in Appendix III, the transmission in the continuum limit is given by (E)=v\^2||\^2 \[Tinfinity\] where $v$ is the channel velocity defined earlier. This is the same transmission function per channel obtained in Eq. (\[TEalln\]), in the limit $z\to 0$. In this limit all channels become independent (renormalized by the parameter $\alpha$), and the total transmission simply scales with the number of channels $n$. Going back to the single chain estimate for power and using the fact that scaling with $n$ holds for large $n$, we see that by putting chains $10$ nm apart and connecting them by the cross-bond $t_0$, it should be possible to achieve a power output of $P\sim 10^{-10}$ Watts/(10$^{-8}$ m)$^2$ $\sim 10^{6}$ Watts/m$^2$ (keeping the efficiency near $\eta/\eta_c > 0.4$). This is several orders of magnitude larger than the bulk thermoelectric generators currently available commercially when measured as power per unit area of the thermoelements. Commercial devices can have their power output increased further by using essentially non-planar interfaces to increase the effective area between hot and cold regions [@selman]. This could also be done for the devices based on the scaled up model. We emphasize here again that in addition to choosing the values of the microscopic model parameters $t_1, t_3, V_0, V_1$ as those of the single channel case, the connecting bond hopping parameter $t_0=1$ has been chosen as the simplest possibility and not as the ‘best’ fine-tuned value. As in the single channel case, we expect that it should be possible to increase the efficiency further by fine-tuning the parameters. The important point is that there would be a wide range of microscopic as well as thermodynamic parameters that can yield $\eta/\eta_c > 0.3$, and power $P> 0.001 \times n$, simultaneously. While we have chosen $\eta_c=1/3$ corresponding to a high temperature bath with $T_L=450$ K, it is clear that the design of a practical industrially competitive thermoelectric device should be possible even with a lower $T_L$. Phonons in a single chain ========================= In practice, the denominator of Eq. (\[eq-eta\]) should have an added contribution from phonons that would decrease the efficiency. In analogy to the electronic contribution $I_e = \int dE (E-\mu_L) \mathcal{T}(E)F(E)$, the phonon contribution to the energy flux can be written as I\_[ph]{} &=& \_0\^ dEE (E) B(E); B(E)&& \_L(E)-\_R(E) where $\eta(E)$ is the Bose distribution function $ \eta (\omega)= 1/[\exp(\hbar\omega/k_B T)-1], $ and the subscripts $L$ and $R$ refer to the left and right leads, respectively. Here $\xi(E)$ is the phonon transmission function. For phonons in the leads with dispersion relation $ \omega^2=\omega^2_0(1-\cos Ka)$ where $K$ is the phonon wave vector, the transmission function can be expressed as () & & [Tr]{}\[\_R()\_[1,3]{}()\_L()\^\_[1,3]{}\] , where $\mathcal{G}$ is the phonon Green function across the molecule and the spectral function $\Lambda$ is defined as \_[L,R]{}() & & i\[\^[ph]{}\_[L,R]{}-\^[ph]{}\_[L,R]{}\] = \^2\_0 Ka &=& . Here the phonon self energy $\Sigma^{ph}_{L,R}$ due to the leads, in analogy with the electron self energy given in Eq. (\[Sigma-el\]), is given by \^[ph]{}\_[L,R]{}=-e\^[iKa]{}. \[ph-self-energy\] If $\xi(E)=1$ for all $E$, then the equilibrium linear response contribution can be obtained exactly, showing that each massless phonon mode contributes a quantum of $\pi^2/3$ to the energy flux [@rego]. 
As a comparison, using the parameters in Figure \[fig-TofE-SingleChain\], we get the number current $I_N \approx 0.1$ and $I_e\approx 0.14$; adding $\pi^2/3$ to $I_e$ would reduce the efficiency from $\eta/\eta_c\approx 0.4$ to $\eta/\eta_c\approx 0.03$. However, it is also clear that the phonon transmission can be significantly reduced by designing the central ‘molecule’ to have a large mass compared to the atoms in the leads.

![The single chain phonon model, where the ‘molecule’ $2-4$ is assumed to have a mass $M$, connected by two springs, each of spring constant $k_1$, to the surface sites in the two leads, each having mass $m$. Each surface mass $m$ is connected to its nearest neighbor in the lead by a spring with spring constant $k$. The bulk of the leads is also made of masses $m$ connected by springs of spring constant $k$.[]{data-label="fig-toy-phonon-model"}](Fig-7.eps){width="0.3\textheight"}

Indeed, for an order of magnitude estimate, let us reconsider the single-chain model, with the $V$-sites replaced by ‘balls’ of mass $m$ and the $t$-bonds replaced by ‘springs’ of spring constant $k$ in the left and right semi-infinite leads, as shown in Figure \[fig-toy-phonon-model\]. The ‘molecule’ of sites $2-4$ is replaced by a ball of mass $M$, and the bonds $t_1$ are replaced by springs of spring constant $k_1$. The transmission function is then determined by the frequencies $\Omega^2_0\equiv 2k_1/M$ and $\Omega^2_1\equiv 2k_1/m$. It is easy to estimate that, e.g., for $k_1=k$ and $M/m=10$ such that $\hbar\Omega_1=20 k_BT_L$ and $\hbar\Omega_0=2 k_BT_L$, where $k_BT_L=0.5$, the phonon contribution is $I_{ph} \approx 0.0003$, which is negligible compared to the electronic contribution $I_e\approx 0.14$. Note that, just as in the electronic case, we do not expect the efficiency to decrease when scaling up, provided this one-chain phonon contribution simply scales with the number of chains.

Summary and conclusion
======================

As noted by a number of authors, nanostructured thermoelectric devices constructed from molecular junctions show potential because the transmission coefficient near the Fermi energy can be rapidly varying. This leads to a large thermoelectric figure of merit, $ZT$. In this paper we show that there are other necessary conditions for these devices to be useful besides just a high thermoelectric figure of merit. Specifically, we have addressed three questions. First, because the length scales are so small in nanostructures, any non-infinitesimal temperature or electrical potential difference leads to very large gradients. Nanostructures, and molecular junctions in particular, are inherently in the nonlinear response regime. Starting in Sec. II we show that just having a rapidly varying transmission coefficient near the Fermi energy is not sufficient to get a large efficiency or power output in the nonlinear response regime. Rather, the crucial factor in the nonlinear response regime is where the transmission coefficient is weighted relative to the difference in the Fermi distribution functions of the leads. The optimum transmission coefficient depends on the temperature and the chemical potential difference. Second, we are ultimately interested in the power output from a thermoelectric device. While for ordinary thermoelectric devices one would assume that a large power output occurs together with high efficiency, we show that for some tunneling models the efficiency approaches the thermodynamic maximum as the power output goes to zero.
This raises the question of whether it is possible to have both high power output and high efficiency in tunneling through molecules. In Sec. III we consider a model system that has been used by a number of authors, including some who fit it to microscopic calculations. Within this model, called the $t$-stub, we find that it is possible to have both high power output and large efficiency. Random sampling of the parameters in the model shows that this occurs for a range of parameters – not just for some very specific ones. Finally, because nanostructures are so small, the currents and energy transfers are also quite small. For any macroscopic device one would like to scale up the response of individual nanostructure devices by having many act in parallel. However, when molecules are placed close enough together, there will be tunneling between them and the single molecule calculations are no longer valid. Thus, in Sec. IV we calculate the transmission coefficient, efficiency, and power for many coupled $t$-stub devices in parallel. We find that the coupling does not necessarily destroy the effect, and it is still possible to obtain both high efficiency and high power. We estimate the power produced by many $t$-stub devices in parallel and find that they could in principle be commercially viable. Thus, thermoelectric devices constructed by tunneling through a molecule are still promising upon closer inspection. At least in one model which has been mapped onto realistic systems, it is possible to obtain high power output, high efficiency, and also to scale up the response by placing many coupled nanostructure devices in parallel. Nonetheless, we have made several common approximations that need to be addressed in future, more realistic calculations. We have assumed that the hot electrons dissipate their energy in the leads and that this energy is carried away rapidly. This is a common assumption in molecular tunnel junctions. This energy dissipation should ultimately be modeled microscopically with phonons or inelastic electron-electron scattering on the molecule or in the leads. In Sec. V we have included phonons that carry heat between the two leads but do not interact with the electrons. Including scattering with the electrons would address potential heating issues and also any loss-of-coherence effects caused by inelastic scattering. It would also address the effects of the Coulomb interaction beyond the average effects included in static electronic structure calculations. Future work will include some of these inelastic scattering mechanisms microscopically.

Acknowledgments: {#acknowledgments .unnumbered}
================

KAM is grateful to J-L Pichard for introducing him to the current issues in thermoelectricity during a sabbatical stay at Saclay, France, supported in part by RTRA Triangle de la Physique (Project Meso-Therm).

Appendices
==========

Appendix I: Linear response regime and Figure of Merit {#appendix-i-linear-response-regime-and-figure-of-merit .unnumbered}
------------------------------------------------------

In order to make the connection with the figure of merit in the linear response regime, let us start by expanding the function $$F(E) \equiv f_L(\mu_L,T_L; E)-f_R(\mu_R,T_R; E)\,, \qquad f_j(\mu_j, T_j; E) \equiv \frac{1}{1+e^{(E-\mu_j)/k_BT_j}}\,,$$ for a small chemical potential difference $\mu_L-\mu_R$ and a small temperature difference $T_L-T_R \equiv \Delta T$: $$F(E) \simeq \left(-\frac{\partial f}{\partial E}\right)\left[(\mu_L-\mu_R)+(E-\mu_{eq})\,\frac{\Delta T}{T}\right],$$ where $T$ is the average temperature.
The number and energy currents across the junction then becomes I\_N &=& (\_L-\_R)L\_0+L\_1I\_E &=& (\_L-\_R)L\_1+L\_2, where we have defined L\_ndE(E-\_L)\^n (E) . \[eq-Ln\] Here $L_0$ and $L_2$ are positive, but $L_1$ can be positive or negative, satisfying the relation $(L_2/L_0) > (L_1/L_0)^2$. For the optimization of the efficiency, let us first consider the case when $I_N=0$. The ‘open circuit’ chemical potential difference is \_L-\_R = - \_0. Using $\Delta \mu_0$, we can rewrite I\_N &=& L\_0 \[(\_L-\_R)-\_0\]I\_E &=& L\_1 \[(\_L-\_R)-\_0(1+)\] where -1 &gt; 0. The efficiency then becomes = . If $L_1 >0$, then $\Delta \mu_0 <0$ and $\mu_L-\mu_R <0$, and if $L_1 <0$, then $\Delta \mu_0 >0$ and $\mu_L-\mu_R >0$. However, in both cases the ratio $x\equiv (\mu_L-\mu_R)/ \Delta \mu_0$ is positive and in particular $0 < x < 1$. In terms of $x$ the efficiency becomes = =. The figure of merit, defined as $ZT \equiv S_e^2T\sigma/\kappa$, is then identified with ZT = . \[eq-ZT\] Thus in the linear response regime, maximizing $ZT$ corresponds to minimizing $\delta$ and hence maximizing the efficiency. As an estimate, since the maximum efficiency $\eta_{max}$ occurs for $x\approx 1/2$, we have \_[max]{} . For $ZT=3=1/\delta $, we have \_[max]{}0.3 . Appendix II: Proof of Eq. (\[GpkqkX\]) {#appendix-ii-proof-of-eq.-gpkqkx .unnumbered} -------------------------------------- Here we give a proof of the equivalence of the two equations (\[GpkqkXY\]) and (\[GpkqkX\]). Using the definitions of $Y_{m,k}$, we rewrite Eq. (\[GpkqkXY\]) as G\^n\_[p\_kq\_k]{} = g\_0X\_kNow we use Eq. (\[solnXn\]) to write \_[i=k+1]{}\^[k+j]{}X\_[i]{} &=& 2\^j &=& . Then the Green function becomes G\^n\_[p\_kq\_k]{} &=& g\_0X\_k&=& g\_0X\_kwhere we have used ===z. We rewrite && &=& , giving G\^n\_[p\_kq\_k]{} &=& g\_0X\_k&=& g\_0X\_kZ\_k \_[j=0]{}\^[n-k]{}where we defined Z\_k and we have extended the sum from $j=0$ to include the first term equal to 1. Note that in the difference of the two sums, only the $j=0$ contribution of the first term and $j=n-k$ of the second term survive, the rest canceling each other. This gives & & \_[j=0]{}\^[n-k]{} = - &=& . Using X\_k=(1+z) we finally get G\^n\_[p\_kq\_k]{} &=& g\_0(1+z) &&&=& g\_0. This is identical to Eq. (\[GpkqkX\]) Appendix III: Continuum model {#appendix-iii-continuum-model .unnumbered} ----------------------------- The on-site (retarded) Green function $G_{22}$ for a single wire ($t_0=0$) connected to the leads is given by $G^R_W\equiv G_{22}=E_0a_0/D_1$ (compare with Eq. (\[G13\]) for $G_{13}$). When wires are connected by a hopping matrix element $t_0$ at the center to form a half space, the Green function at the central site is t\_0G\^R\_H = i. The physical situation is where the imaginary part of the inverse of the retarded Green function is positive. The full Green function of the entire space of wires is (G\^R\_F)\^[-1]{} = i 2t\_0 . In terms of the parameters $g_0$ and $\alpha$ defined in Eqs. (\[G13\]) and (\[solnXn\]), this can be rewritten as (G\^R\_F)==e\^[-2ika]{}. The transmission function in terms of this central site Green function is given by (E)=(ka) i(G\^R\_F-G\^A\_F) where $G^A$ is the advanced Green function. Using the expression for $G^R_F$ above, and the fact that $G^A$ is the complex conjugate of $G^R$, we finally obtain Eq. (\[Tinfinity\]). [99]{} T.C. Harman and J.M. Honig, *Thermoelectric and Thermomagnetic Effects and Applications*, McGraw-Hill, New York (1967). See e.g. Y. Dubi and M. Di Ventra, Rev. Mod. Phys. 
83, 131 (2011) and references therein. G.J. Snyder and E.S. Toberer, Nature Materials 7, 105 (2008). A. Majumdar, Science 303, 777 (2004). M. Paulsson and S. Datta, Phys. Rev. B 67, 241403(R) (2003). M. Leijnse, M.R. Wegewijs and K. Flensberg, Phys. Rev. B 82, 045412 (2010). R. Sánchez and M. Büttiker, Phys. Rev. B 83, 085428 (2011). J-H Jiang, O. Entin-Wohlman and Y. Imry, Phys. Rev. B 85, 075412 (2012). D. Sánchez and R. López, Phys. Rev. Lett. 110, 026804 (2013). R.S. Whitney, Phys. Rev. B 87, 115404 (2013). A.N. Jordan, B. Sothmann, R. Sánchez and M. Buttiker, Phys. Rev. B 87, 075312 (2013). M. Dresselhaus, G. Chen, M.Y. Tang, R. Yang, H. Lee, D. Wang, Z. Ren, J-P. Fleurial, and P. Gogna, Adv. Mater. 19, 1043 (2007). P. Reddy, S.Y. Jang, R.A. Segalman, and A. Majumdar, Science 315, 1568 (2007). G.D. Mahan and J.O. Sofo, Proc. Natl. Acad. Sci., USA, Vol 93, 7436 (1996). A.I. Hochbaum, R. Chen, R.D. Delgado, W. Liang, E.C. Garnett, M. Najarian, A. Majumdar, and P. Yang, Nature 451, 163 (2008). A.I. Boukai, Y. Bunimovich, J. Tahir-Kheli, J-K Yu, W.A. Goddard III, and J.R. Heath, Nature 451, 168 (2008). P. Murphy, S. Mukerjee, and J. Moore, Phys. Rev. B 78, 161406(R), (2008). M Wierzbicki and R. Swirkowicz, Phys. Rev. B 84, 075410 (2011). C.M. Finch, V.M. Garcia-Suarez, and C.J. Lambert, Phys. Rev. B 79, 033405 (2009). Y. S. Liu and Y. C. Chen, Phys. Rev. B 79, 193101 (2009). R. Stadler and T. Markussen, J. Chem. Phys. 135, 154109 (2011). A. Abbout, H. Ouerdane, and C. Goupil, Phys. Rev. B 87, 155410 (2013). V. Tripathi and Y.L. Loh, Phys. Rev. Lett. 96, 046805 (2006). A. Glatz and I.S. Beloborodov, Phys. Rev. B 80, 245440 (2009); *ibid* EPL 87, 57009 (2009). Recently, we came across an e-print by R. Whitney, cond-mat arXiv:1306.0826, where efficiency is optimized at a given power for a square-wave transmission function. See e.g. S. Datta, *Electronic Transport in Mesoscopic Systems*, Cambridge Univ. Press, 1995. K. Park, Z. Deutsch, J.J. Li, D. Oron and S. Weiss, ACS Nano 6, 10013 (2012). D. M. Rowe and G. Min, Power Sources 73, 193 (1998). S. Wolfram, The Mathematica Book, Wolfram Media/Cambridge University Press, (1996). L.G. C. Rego and G. Kirczenow, Phys. Rev. Lett. 81, 232 (1998); K. Schwab, E.A. Henriksen, J.M. Worlock, and M.L. Roukes, Nature 404, 974 (2000).
{ "pile_set_name": "ArXiv" }
---
author:
- |
    [ Eung Jin Chun$^a$, Dong-Won Jung$^b$, Sin Kyu Kang$^{c,d}$, Jong Dae Park$^b$]{}\
    [*$^a$Korea Institute for Advanced Study*]{}\
    [*P.O.Box 201, Cheongryangri, Seoul 130-650, Korea*]{}\
    [*$^b$Department of Physics, Seoul National University, Seoul 151-747, Korea*]{}\
    [*$^c$Institute for Basic Science, Korea University, Seoul 136-701, Korea*]{}\
    [*$^d$Graduate School of Science, Hiroshima University, Higashi-Hiroshima 739-8526, Japan*]{}
title: '**Collider Signatures of Neutrino Masses and Mixing from R-parity Violation**'
---

Introduction
============

In the supersymmetric standard model, gauge invariance and renormalizability allow lepton and baryon number violation, which may lead to unacceptably fast proton decay. Such a problem is usually avoided by introducing a discrete symmetry. Among various possibilities, the $Z_2$ R-parity and $Z_3$ B-parity have been advocated as they can be remnants of gauge symmetries in string theory [@ibro]. Imposing R-parity has been more popular because of its simplicity and the possibility of having a natural dark matter candidate. The second option, allowing lepton number violation, is also of great interest since it can generate neutrino masses and mixing [@hasu] in an economical way that explains the current neutrino data. There is a long (but still incomplete) list of literature investigating neutrino properties in this framework [@oldies]. R-parity violation may lead to a distinctive collider signature: the lightest supersymmetric particle (LSP), which is typically a neutralino, produces clean lepton (or baryon) number violating signals through its decay [@sig91]. In a model of neutrino masses and mixing with R-parity violation, one can make more specific predictions for the various branching ratios of the LSP decay, as the structure of lepton flavor violating couplings is dictated by the pattern of neutrino mixing determined from neutrino experiments [@mrv; @jaja; @valle2]. This provides a unique opportunity to test the model in future collider experiments. A necessary condition is of course that the LSP has a short enough lifetime to produce a sufficient number of decay signals inside the detector. In the models we will consider, the total LSP decay rate is proportional to the (heaviest) neutrino mass, and thus the measurement of the LSP decay length could also be used to test the model. It is the purpose of this paper to examine the correspondence between neutrino oscillation parameters and collider signatures characterizing specific models of neutrino masses and mixing from R-parity violation. For this, we will consider the bilinear and trilinear models to see whether they can accommodate the atmospheric [@sk-atm] and solar neutrino oscillations [@sol-exp] and the constraint coming from the CHOOZ experiment [@chooz] simultaneously. One of our basic assumptions is the universality of soft supersymmetry breaking terms at a high scale, which is usually imposed to avoid flavor problems in the supersymmetric standard model. This implies that lepton flavor violation occurs only in the superpotential, with bilinear and/or trilinear R-parity violating terms, and that the supersymmetry breaking mechanism is flavor-blind. Then, the tree-level neutrino mass is generated by the renormalization group evolution which breaks the universality between the slepton and Higgs soft terms at the weak scale.
As a specific scheme, we will consider the mechanism of gauge mediated supersymmetry breaking which solves the supersymmetric flavor problem in a natural way [@gmsb]. A comprehensive analysis of neutrino masses and mixing in this context has been performed in Ref. [@CK].[^1] Under such an assumption, the bilinear model can only realize the small mixing angle of solar neutrino oscillations while the trilinear model can accommodate the large mixing angle as well. In both cases, we will investigate whether the LSP decay length is short enough and what are the predictions for LSP decay signals which could test the model in the future collider experiments. Here, another assumption we make is that the LSP is a neutralino. Let us remark that a similar analysis has been made in Ref. [@valle2] considering supergravity models with generic bilinear R-parity violating terms. This paper is organized as follows. In Sec. 2, we calculate the “effective” trilinear R-parity violating couplings, rotating away the mixing mass terms between the ordinary particles and superparticles which arise as a consequence of bilinear R-parity violation. Those couplings are relevant for the LSP decay. In Sec. 3, we examine the neutrino mass matrix which is generated through renormalization group evolution and various (finite) one-loop diagrams. From this, we will make a qualitative analysis to examine the sizes of various R-parity violating couplings which are required to explain the current neutrino oscillation data. In Sec. 4, we will provide a numerical analysis to determine R-parity conserving and violating input parameters with which the atmospheric and solar neutrino masses and mixing are realized, in the context of gauge mediated supersymmetry breaking models. Calculating the corresponding LSP decay rate and branching ratios of various modes, we will find how the model can be tested in the collider experiments. Finally, we will conclude in Sec. 5. Effective R-parity violating vertices from bilinear terms ========================================================= Allowing lepton number violation in the supersymmetric standard model, the superpotential is composed of the R-parity conserving $W_0$ and violating $W_1$ part; $$\begin{aligned} \label{supo} W_0 &= & \mu H_1 H_2 + h^e_i L_i H_1 E^c_i + h^d_i Q_i H_1 D^c_i + h^u_i Q_i H_1 U^c_i \nonumber\\ W_1 &=& \epsilon_i \mu L_i H_2 + {1\over2}\lambda_{ijk} L_i L_j E^c_k + \lambda^\prime_{ijk} L_i Q_j D^c_k \,.\end{aligned}$$ Among soft supersymmetry breaking terms, let us write R-parity violating bilinear terms; $$V_{soft} = B\mu H_1 H_2 + B_i \epsilon_i \mu L_i H_2 + m^2_{L_iH_1} L_i H_1^\dagger + h.c. \,.$$ It is clear that the electroweak symmetry breaking gives rise to nonzero vacuum expectation values of sneutrino fields, $\tilde{\nu}_i$, as follows [@hasu]; $$a_i \equiv {\langle \tilde{\nu}_i \rangle \over \langle H_1 \rangle} = - { \bar{m}^2_{L_iH_1} + B_i \epsilon_i \mu t_\beta \over m^2_{\tilde{\nu}_i} }$$ where $\bar{m}^2_{L_iH_1}= m^2_{L_iH_1}+ \epsilon_i \mu^2$, $t_\beta=\tan\beta = \langle H_2 \rangle/ \langle H_1 \rangle$ and $m^2_{\tilde{\nu}_i}= m^2_{L_i}+ M_Z^2 c_{2\beta}/2$. In general, there are three types of independent R-parity violating bilinear parameters such as $ \epsilon_i$, $a_i$ and $B_i/B$, which give rise to the mixing between the ordinary particles and superparticles. 
That is, neutrinos and neutralinos, charged leptons and charginos, neutral Higgs bosons and sneutrinos, as well as, charged Higgs bosons and charged sleptons have mixing mass terms which are determined by the above R-parity violating parameters. The mixing between neutrinos and neutralinos particularly serves as the origin of the tree-level neutrino masses which will be discussed later. Note that the above quantities have to be very small to account for tiny neutrino masses. While the effect of such small parameters on the particle and sparticle mass spectra (apart from the neutrino sector) are negligible, they induce small but important R-parity violating vertices between the particles and sparticles, which make the LSP destabilized and generate one-loop neutrino masses. The derivation of the induced R-parity violating couplings has been performed in many previous works. The usual approach is to take full diagonalizations of enlarged sparticle–particles mass matrices with R-parity violating parts so that the vertices in terms of the mass eigenstates are obtained directly. In this work, we take an alternative but equivalent approach which is useful when R-parity violating parameters are small. It is to rotate away only the small R-parity violating (off-diagonal) blocks of the particle–sparticle mass matrices, leaving untouched R-parity conserving particle or sparticle masses at the diagonal blocks. In this way, we can draw the induced (or “effective”) R-parity violating vertices in terms of the electroweak/flavor eigenstate basis. A merit of this method is that one can clearly see the vertex structure of the induced R-parity violating couplings along with the usual trilinear vertices in $W_1$ of Eq. (\[supo\]) added to the usual R-parity conserving Lagrangian. This is nothing but the usual see-saw diagonalization, which we summarize as follows. Let us take sparticle–particle mass matrix given by $$\pmatrix{ M & \Delta \cr \Delta^\dagger & M' }$$ with $\Delta \ll M, M'$. Then, the approximate diagonalization (valid up to the second order of R-parity violating parameters $\sim \Delta/M$ or $\sim \Delta/M'$) can be done with the help of the rotation matrix given by $$\pmatrix{ 1-{1\over2} \Theta \Theta^\dagger & -\Theta \cr \Theta^\dagger & 1 -{1\over2} \Theta^\dagger \Theta }$$ where $\Theta$ can be found by solving the relation, $\Delta=M \Theta-\Theta M'$, in the leading order of $\Delta$. The upper and lower diagonal blocks are then shifted as $M \to M + (\Theta \Delta^\dagger+ \Delta \Theta^\dagger)/2$ and $M' \to M' - (\Theta^\dagger \Delta+ \Delta^\dagger \Theta)/2$. Note that the neutrino-neutralino mass matrix has vanishing sub-matrix for the neutrinos, $M\equiv0$, and the above change in $M$ is just the see-saw generation of small neutrino masses. For the other particles/sparticles, such changes can be safely neglected. After performing such a rotation, we get the “effective” R-parity violating vertices in the electroweak/flavor basis. Then, it is quite straightforward to find the corresponding couplings in the mass basis following the usual diagonalization of the familiar (R-parity conserving) particle/sparticle mass matrices. In this paper, we do not repeat to write the mixing mass terms between sparticles and particles. 
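As a sanity check of the block-diagonalization step just described, the toy sketch below (arbitrary illustrative numbers, not a fit to any actual spectrum) takes the neutrino-like case $M\equiv0$, solves $\Delta=M\Theta-\Theta M'$ for $\Theta$ (which then reduces to $\Theta=-\Delta M'^{-1}$), and verifies that the induced light block $-\Delta M'^{-1}\Delta^{T}$ reproduces the small eigenvalues of the full matrix.

```python
import numpy as np

# Toy check of the see-saw block diagonalization described above, for the case
# where the light (neutrino) block vanishes, M = 0.  All numbers are
# illustrative stand-ins, not a fit to any actual spectrum.
rng = np.random.default_rng(1)

Mp = np.array([[ 60.,   0., -10.,   20.],     # stand-in for the heavy 4x4 block M'
               [  0., 120.,  15.,  -30.],
               [-10.,  15.,   0., -150.],
               [ 20., -30., -150.,   0.]])
Delta = 1e-3 * rng.normal(size=(3, 4))        # small off-diagonal (mixing) block

Theta = -Delta @ np.linalg.inv(Mp)            # Delta = M*Theta - Theta*M' with M = 0
m_light = -Delta @ np.linalg.inv(Mp) @ Delta.T   # induced (see-saw) light block

full = np.block([[np.zeros((3, 3)), Delta],
                 [Delta.T,          Mp   ]])
print(np.sort(np.abs(np.linalg.eigvalsh(full)))[:3])   # exact light eigenvalues
print(np.sort(np.abs(np.linalg.eigvalsh(m_light))))    # see-saw result: agrees up to O((Delta/M')^2)
```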
Instead, we will present the rotation matrices $\Theta$ in terms of the following three bilinear R-parity violating variables; $$\epsilon_i\; (\mbox{or } a_i)\;, \quad \xi_i\equiv a_i - \epsilon_i\;, \quad \eta_i \equiv a_i - B_i/B \;.$$ In generic supersymmetry breaking models with non-universality, the above three types of parameters are independent. But, in the restrictive models imposing the universality condition at the mediation scale of supersymmetry breaking, nonzero values of $\xi_i$ and $\eta_i$ arise as a consequence of renormalization group evolution and thus only two types of parameters are independent. In this paper, we usually take $\epsilon_i$ and $\xi_i$ as independent ones. Rotating away the neutrino-neutralino mixing mass terms (by $\theta^N$) can be made by the following redefinition of neutrinos and neutralinos: $$\pmatrix{ \nu_i \cr \chi^0_j } \longrightarrow \pmatrix{ \nu_i- \theta^N_{ik} \chi^0_k \cr \chi^0_j + \theta^N_{lj} \nu_l }$$ where $(\nu_i)$ and $(\chi^0_j)$ represent three neutrinos $(\nu_e, \nu_\mu, \nu_\tau)$ and four neutralinos $(\tilde{B}, \tilde{W}_3, \tilde{H}^0_1, \tilde{H}^0_2)$ in the flavor basis, respectively. The rotation elements $\theta^N_{ij}$ are given by $$\begin{aligned} \theta^N_{ij} &=& \xi_i c^N_j c_\beta - \epsilon_i \delta_{j3} \quad\mbox{and} \\ (c^N_j) &=& {M_Z \over F_N} ({ s_W M_2 \over c_W^2 M_1 + s_W^2 M_2}, -{ c_W M_1 \over c_W^2 M_1 + s_W^2 M_2}, -s_\beta{M_Z\over \mu}, c_\beta{M_Z\over \mu}) \nonumber\end{aligned}$$ where $F_N=M_1 M_2 /( c_W^2 M_1 + s_W^2 M_2) + M_Z^2 s_{2\beta}/\mu$. Here $s_W=\sin\theta_W$ and $c_W=\cos\theta_W$ with the weak mixing angle $\theta_W$. Defining $\theta^L$ and $\theta^R$ as the two rotation matrices corresponding to the left-handed negatively and positively charged fermions, we have $$\pmatrix{ e_i \cr \chi^-_j } \rightarrow \pmatrix{ e_i- \theta^L_{ik} \chi^-_k \cr \chi^-_j + \theta^L_{lj} e_l } \quad;\quad \pmatrix{ e^c_i \cr \chi^+_j } \rightarrow \pmatrix{ e^c_i- \theta^R_{ik} \chi^+_k \cr \chi^+_j + \theta^R_{lj} e^c_l }$$ where $e_i$ and $e^c_i$ denote the left-handed charged leptons and anti-leptons, $(\chi^-_j)=(\tilde{W}^-,\tilde{H}^-_1)$ and $(\chi^+_j)=(\tilde{W}^+,\tilde{H}^+_2)$. The rotation elements $\theta^{L,R}_{ij}$ are given by $$\begin{aligned} && \theta^L_{ij}= \xi_i c^L_j c_\beta-\epsilon_i \delta_{j2}\;, \quad \theta^R_{ij}= {m^e_i\over F_C} \xi_i c^R_j c_\beta \quad\mbox{and} \\ && (c^L_j)= -{M_W \over F_C} (\sqrt{2}, 2s_\beta{M_W\over \mu})\;, \nonumber \\ && (c^R_j)= -{M_W \over F_C} (\sqrt{2}(1-{M_2\over \mu} t_\beta), \frac{M_2^2 c^{-1}_\beta}{\mu M_W }+2{M_W \over \mu} c_\beta) \nonumber\end{aligned}$$ and $F_C= M_2 + M_W^2 s_{2\beta}/\mu$. 
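For later numerical estimates it is convenient to have the neutralino-sector elements above in code form. The following sketch simply transcribes the expressions for $F_N$, $c^N_j$ and $\theta^N_{ij}$ given above; the input values ($M_1$, $M_2$, $\mu$, $\tan\beta$ and the $\xi_i$, $\epsilon_i$ pattern) are illustrative only.

```python
import numpy as np

MZ, sW2 = 91.19, 0.231
sW, cW = np.sqrt(sW2), np.sqrt(1.0 - sW2)

def theta_N(M1, M2, mu, tan_beta, xi, eps):
    """theta^N_{ij} = xi_i c^N_j cos(beta) - eps_i delta_{j3},
       with c^N_j and F_N as defined in the text."""
    beta = np.arctan(tan_beta)
    sb, cb = np.sin(beta), np.cos(beta)
    denom = cW**2 * M1 + sW**2 * M2
    FN = M1 * M2 / denom + MZ**2 * np.sin(2.0 * beta) / mu
    cN = (MZ / FN) * np.array([ sW * M2 / denom,
                               -cW * M1 / denom,
                               -sb * MZ / mu,
                                cb * MZ / mu])
    th = np.outer(np.asarray(xi), cN) * cb
    th[:, 2] -= np.asarray(eps)        # the -eps_i delta_{j3} term (j = 3 is the H1 column)
    return th, FN

# Illustrative inputs (GeV for the mass parameters); the xi, eps pattern follows Set 1 of Sec. 4.
xi  = [1.8e-7, 6.1e-6, 6.6e-6]
eps = [7.5e-6, 2.5e-4, 2.5e-4]
th, FN = theta_N(M1=100.0, M2=200.0, mu=300.0, tan_beta=10.0, xi=xi, eps=eps)
print(FN)       # the combination F_N entering the tree-level neutrino mass
print(th)       # 3x4 array of theta^N_{ij}
```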
Denoting the rotation matrix by $\theta^S$, we get $$\label{thetaS} \pmatrix{ \tilde{\nu}_i \cr H^0_1 \cr H^0_2 } \rightarrow \pmatrix{ \tilde{\nu}_i- \theta^S_{i1} H^0_1 -\theta^S_{i2} H^{0*}_2 - \theta^S_{i3} H^{0*}_1 -\theta^S_{i4} H^{0}_2 \cr H^0_1 + \theta^S_{i1} \tilde{\nu}_i + \theta^S_{i3} \tilde{\nu}^*_i \cr H^0_2 + \theta^S_{i2} \tilde{\nu}^*_i + \theta^S_{i4} \tilde{\nu}_i \cr}$$ where $$\begin{aligned} \theta^S_{i1} &=& -a_i - \eta_i s_\beta^2 m_A^2 [ m^4_{\tilde{\nu}_i} - m^2_{\tilde{\nu}_i}(m_A^2+M_Z^2 s_\beta^2)-m_A^2M_Z^2s_\beta^2c_{2\beta}] /F_S \\ \theta^S_{i2} &=& + \eta_i s_\beta c_\beta m_A^2 [ m^4_{\tilde{\nu}_i} - m^2_{\tilde{\nu}_i}(m_A^2+M_Z^2 c_\beta^2)+m_A^2M_Z^2c_\beta^2c_{2\beta}] /F_S \nonumber\\ \theta^S_{i3} &=& -\eta_i s_\beta^2 c_\beta^2 m_A^2 M_Z^2 [ m^2_{\tilde{\nu}_i} - m_A^2 c_{2\beta}]/F_S \nonumber\\ \theta^S_{i4} &=& +\eta_i s_\beta^3 c_\beta m_A^2 M_Z^2 [ m^2_{\tilde{\nu}_i} + m_A^2 c_{2\beta}]/F_S \nonumber\end{aligned}$$ with $F_S= (m^2_{\tilde{\nu}_i}-m_h^2)(m^2_{\tilde{\nu}_i}-m_H^2) (m^2_{\tilde{\nu}_i}-m_A^2)$ and $m_A, m_h$ and $m_H$ are the masses of pseudo-scalar, light and heavy neutral scalar Higgs bosons, respectively. Note that $m_A^2= -B\mu/c_\beta s_\beta$ in our convention. For our calculation, we assume that all the R-parity violating parameters are real and so are all $\theta$’s. We also note that the presence of the scalar fields as well as their complex conjugates in Eq. (\[thetaS\]) is due to the electroweak symmetry breaking, which is expected to be suppressed by the factor $M_Z^2/m_A^2$. Defining $\theta^C$ as the rotation matrix, we have $$\pmatrix{ \tilde{e}_i \cr \tilde{e}^{c*}_i \cr H^-_1 \cr H^-_2 } \rightarrow \pmatrix{ \tilde{e}_i- \theta^C_{i1} H^-_1 -\theta^C_{i2} H^{-}_2 \cr \tilde{e}^{c*}_i- \theta^C_{i3} H^-_1 -\theta^C_{i4} H^{-}_2 \cr H^-_1 + \theta^C_{i1} \tilde{e}_i + \theta^C_{i3} \tilde{e}^{c*}_i \cr H^-_2 + \theta^C_{i2} \tilde{e}_i + \theta^C_{i4} \tilde{e}^{c*}_i \cr}$$ where $$\begin{aligned} \theta^C_{i1} &=& -a_i - \eta_i { s_\beta^2 m_A^2 (m^2_{Ri}-m^2_{H^-}) \over (m^2_{H^-}-m^2_{\tilde{e}_{i1}}) (m^2_{H^-}- m^2_{\tilde{e}_{i2}}) } -\xi_i {m^e_i \mu m^2_{Di}t_\beta \over (m^2_{H^-}-m^2_{\tilde{e}_{i1}}) (m^2_{H^-}- m^2_{\tilde{e}_{i2}}) } \\ \theta^C_{i2} &=& - \eta_i { s_\beta c_\beta m_A^2 (m^2_{Ri}-m^2_{H^-}) \over (m^2_{H^-}-m^2_{\tilde{e}_{i1}}) (m^2_{H^-}- m^2_{\tilde{e}_{i2}}) } -\xi_i {m^e_i \mu m^2_{Di} \over (m^2_{H^-}-m^2_{\tilde{e}_{i1}}) (m^2_{H^-}- m^2_{\tilde{e}_{i2}}) } \nonumber\\ \theta^C_{i3} &=& + \eta_i { s_\beta^2 m_A^2 m^2_{Di} \over (m^2_{H^-}-m^2_{\tilde{e}_{i1}}) (m^2_{H^-}- m^2_{\tilde{e}_{i2}}) } +\xi_i {m^e_i \mu (m^2_{Li}-m^2_{H^-})t_\beta \over (m^2_{H^-}-m^2_{\tilde{e}_{i1}}) (m^2_{H^-}- m^2_{\tilde{e}_{i2}}) } \nonumber\\ \theta^C_{i4} &=& + \eta_i { s_\beta c_\beta m_A^2 m^2_{Di} \over (m^2_{H^-}-m^2_{\tilde{e}_{i1}}) (m^2_{H^-}- m^2_{\tilde{e}_{i2}}) } +\xi_i {m^e_i \mu (m^2_{Li}-m^2_{H^-}) \over (m^2_{H^-}-m^2_{\tilde{e}_{i1}}) (m^2_{H^-}- m^2_{\tilde{e}_{i2}}) } \,. \nonumber\end{aligned}$$ Here, $m_{H^-}$ stands for the charged-Higgs boson mass, and $m^2_{Li}$, $m^2_{Ri}$ and $m^2_{Di}$ correspond to the LL, RR and LR components of the $i$-th charged-slpeton mass-squared matrix, respectively, and $m^2_{\tilde{e}_{i1,i2}}$ are its eigenvalues. We remark that the appearance of $a_i$ in $\theta^S_{i1}$ and $\theta^C_{i1}$ is due to the rotations which remove the Goldstone modes from the redefined neutral and charged slepton fields. 
With the expressions for the rotation matrices in Eqs. (4)–(11), we can obtain the effective R-parity violating vertices from the usual R-parity conserving interaction vertices, which are relevant to the LSP decays. We list them below by taking only the linear terms in $\theta$’s which are enough for our purpose. : $$\begin{aligned} \label{chinuZ} {\cal L}_{\chi^0 \nu Z} &=& \overline{\chi}^0_i\gamma^\mu P_L L^{\chi^0 \nu Z}_{ij} \nu_j Z_\mu^0 + h.c. \\ \mbox{with} \quad L^{\chi^0 \nu Z}_{ij} &= &{g\over 2 c_W}\, [c^N_1,c^N_2,0,2c^N_4]\;\xi_j c_\beta \,. \nonumber\end{aligned}$$ : $$\begin{aligned} \label{chilW} {\cal L}_{\chi^0 l W} &=& \overline{\chi}^0_i\gamma^\mu \left[ P_L L^{\chi^0 l W}_{ij} + P_R R^{\chi^0 l W}_{ij} \right] e_j W_\mu^+ + h.c. \\ \mbox{with} \quad L^{\chi^0 l W}_{ij} &= &{g\over \sqrt{2}} \, [c^N_1,c^N_2-\sqrt{2}c^L_1, c^N_3-c^L_2,c^N_4] \, \xi_j c_\beta \nonumber \\ \quad R^{\chi^0 l W}_{ij} &= &{g\over \sqrt{2}} \, [0,-\sqrt{2}c^R_1, 0, -c^R_2] \, \xi_j c_\beta \nonumber\end{aligned}$$ : $$\begin{aligned} {\cal L}_{\chi^0 \nu H^0_{1,2}} &=& \overline{\chi}^0_i \left[ P_L L^{\chi^0 \nu H^0_{1,2}}_{ij} + P_R R^{\chi^0 \nu H^0_{1,2}}_{ij} \right] \nu_j H^{0*}_{1,2} + h.c. \\ \mbox{with} \quad L^{\chi^0 \nu H^0_{1}}_{ij} &= &{g\over \sqrt{2}} \, [-t_W(\theta^S_{j1}-\theta^N_{j3}), ( \theta^S_{j1}-\theta^N_{j3}), (t_W \theta^N_{j1}-\theta^N_{j2}), 0 ] \nonumber \\ L^{\chi^0 \nu H^0_{2}}_{ij} &= &{g\over \sqrt{2}} \, [-t_W(\theta^S_{j4}+\theta^N_{j4}), (\theta^S_{j4}+\theta^N_{j4}), 0, (-t_W\theta^N_{j1}+\theta^N_{j2}) ] \nonumber \\ R^{\chi^0 \nu H^0_{1}}_{ij} &= &{g\over \sqrt{2}} \, [-t_W \theta^S_{j3}, \theta^S_{j3}, 0, 0 ] \nonumber\\ R^{\chi^0 \nu H^0_{2}}_{ij} &= &{g\over \sqrt{2}} \, [-t_W \theta^S_{j2}, \theta^S_{j2}, 0, 0 ] \nonumber\end{aligned}$$ : $$\begin{aligned} {\cal L}_{\chi^0 l H^+_{1,2}} &=& \overline{\chi}^0_i \left[ P_L L^{\chi^0 l H^+_{1,2}}_{ij} + P_R R^{\chi^0 l H^+_{1,2}}_{ij} \right] e_j H^{+}_{1,2} + h.c. \\ \mbox{with} \quad L^{\chi^0 l H^+_{1}}_{ij} &= &{-1\over \sqrt{2}} \, [g'(\theta^C_{j1}-\theta^L_{j2}), g( \theta^C_{j1}-\theta^L_{j2}), \sqrt{2}(g\theta^L_{j1}+h^e_j\theta^C_{j3}), 0 ] \nonumber \\ L^{\chi^0 l H^+_{2}}_{ij} &= &{-1\over \sqrt{2}} \, [g' \theta^C_{j2}, g \theta^C_{j2}, h^e_j\theta^C_{j4}, 0 ] \nonumber \\ R^{\chi^0 l H^+_{1}}_{ij} &= & [\sqrt{2} g' \theta^C_{j3} + h^e_j \theta^N_{j1}, h^e_j \theta^N_{j2}, -h^e_j (\theta^C_{j1}-\theta^N_{j3}), h^e_j \theta^N_{j4} ] \nonumber\\ R^{\chi^0 l H^+_{2}}_{ij} &= & [\sqrt{2} g' \theta^C_{j4} - {g'\over\sqrt{2}} \theta^R_{j2}, - {g\over\sqrt{2}} \theta^R_{j2}, - h^e_j \theta^C_{j2}, - g \theta^R_{j1} ] \nonumber\end{aligned}$$ In Eqs. (12)–(15), the four components inside brackets correspond to the indices $i=1,\cdots,4$ indicating the neutralino states ($\tilde{B}$, $\tilde{W}_3$, $\tilde{H}^0_1$, $\tilde{H}^0_2$), respectively, as before. Here, let us remark that all of the above vertices depend only on the variables $\xi_i$ or $\eta_i$ which are generated by renormalization group evolution under the universality condition, even though the individual elements $\theta^N_{i3}$, $\theta^L_{i2}$, $\theta^S_{i1}$ and $\theta^C_{i1}$ depend on either $\epsilon_i$ or $a_i$. This fact will be important when we study the LSP decay processes. 
: In the below, we list the $\lambda$-like or $\lambda'$-like couplings which are, however, neither supersymmetric nor $SU(2)_L$-symmetric: $$\begin{aligned} && {\cal L}_{LQ\bar{d}} = \varepsilon_{ab}\left[ {\Lambda}^{d1}_{aij} \tilde{L}_{ai}\overline{d}_j P_L Q_{bj} + {\Lambda}^{d2}_{aij}\tilde{L}^{c*}_{ai}\overline{d}_j P_L Q_{bj} \right. \\ && \qquad\qquad \left. + \Lambda^{d3}_{aij} \left( \overline{d}_j P_L L_{ai} \tilde{Q}_{bj} +\overline{L}^c_{ai} P_L Q_{bj} \tilde{d}^c_j \right) +\Lambda^{d4}_{ai} \overline{L}^c_{ai} P_L Q_{bj} \tilde{d}^*_j \right] + h.c. \nonumber\\ \mbox{where}&& {\Lambda}^{d1}_{aij} = [\theta^S_{i1},\theta^C_{i1}]\, h^d_j \;, \quad {\Lambda}^{d2}_{aij} = [\theta^S_{i3},\theta^C_{i3}]\, h^d_j \;, \nonumber\\ && \Lambda^{d3}_{aij} = [\theta^N_{i3},\theta^L_{i2}]\, h^d_j\;, \quad \Lambda^{d4}_{ai} = {g\over\sqrt{2}} [-t_W \theta^N_{i1}+\theta^N_{i2}, \sqrt{2}\theta^L_{i1}] \nonumber\end{aligned}$$ $$\begin{aligned} && {\cal L}_{L\bar{Q}u} = \delta_{ab}\left[ {\Lambda}^{u1}_{aij} \tilde{L}_{ai}\overline{Q}_{bj} P_R u_{j} + {\Lambda}^{u2}_{aij}\tilde{L}^{c*}_{ai}\overline{Q}_{bj} P_R u_{j} \right. \\ && \qquad\qquad \left. + {\Lambda}^{u3}_{aij} \left( \tilde{u}^{c*}_{j}\overline{Q}_{bj} P_R L_{ai} +\overline{L}^c_{ai} \tilde{Q}^*_{bj} P_R u_j \right) +{\Lambda}^{u4}_{aij} \tilde{u}_j \overline{Q}_{aj} P_R L_{ai} \right] + h.c. \nonumber\\ \mbox{where} && {\Lambda}^{u1}_{aij} = [-\theta^S_{i2},\theta^C_{i2}]\, h^u_j \;, \quad {\Lambda}^{u2}_{aij} = [-\theta^S_{i4},\theta^C_{i4}]\, h^u_j \;, \nonumber\\ && \Lambda^{u3}_{aij} = [-\theta^N_{i4},\theta^R_{i2}]\, h^u_j \;, \quad {\Lambda}^{u4}_{ai} = {g\over\sqrt{2}} [-{1\over3}t_W \theta^N_{i1}-\theta^N_{i2}, -\sqrt{2}\theta^R_{i1}] \nonumber\end{aligned}$$ $$\begin{aligned} && {\cal L}_{LL\bar{e}} = \varepsilon_{ab}\left[ {\Lambda}^{l1}_{aij} \tilde{L}_{ai}\overline{e}_j P_L L_{bj} + {\Lambda}^{l2}_{aij}\tilde{L}^{c*}_{ai}\overline{e}_j P_L L_{bj} \right. \\ && \qquad\qquad \left. + \Lambda^{l3}_{aij} \left( \overline{e}_j P_L L_{ai} \tilde{L}_{bj} -\overline{L}^c_{ai} P_L L_{bj} \tilde{e}^c_j \right) +\Lambda^{l4}_{ai} \overline{L}^c_{ai} P_L L_{bj} \tilde{e}^*_j \right] \nonumber\\ && \qquad\qquad +\Lambda^{l5}_{ai} \tilde{\nu}_j \overline{L}^c_{aj} P_R L_{ai} + h.c. \nonumber\\ \mbox{where} && {\Lambda}^{l1}_{aij} = [\theta^S_{i1},\theta^C_{i1}]\, h^e_j \;,\quad {\Lambda}^{l2}_{aij} = [\theta^S_{i3},\theta^C_{i3}]\, h^e_j \;,\quad \Lambda^{l3}_{aij} = [\theta^N_{i3},\theta^L_{i2}]\, h^e_j \nonumber \\ && \Lambda^{l4}_{ai} = {g\over\sqrt{2}} [t_W \theta^N_{i1}+\theta^N_{i2}, -\sqrt{2}\theta^L_{i1}] \,,\quad \Lambda^{l5}_{ai} = {g\over\sqrt{2}} [t_W \theta^N_{i1}-\theta^N_{i2}, -\sqrt{2}\theta^R_{i1}] \nonumber\end{aligned}$$ In Eqs. (16)–(18), the two components in the brackets correspond to the two states of the $SU(2)_L$ doublets with indices $a,b=1,2$, and $L^c \equiv (\nu, e^c)$ is defined as an lepton $SU(2)_L$ doublet while $\tilde{L}^c=(\tilde{\nu},\tilde{e}^c)$ is its scalar counterpart. Finally, we have $${\cal L}_{\nu f \tilde{f}} = \Lambda^{\nu}_i\, \overline{\nu}_i P_R \left[ {2\over3} u_j \tilde{u}^c_j -{1\over3} d_j \tilde{d}^c_j - e_j \tilde{e}^c_j \right] + h.c.$$ where $ \Lambda^{\nu}_i = \sqrt{2} g' \theta^N_{i1} $. As one can see, the above vertices are non-supersymmetric and $SU(2)_L$ breaking. But, among various terms in Eqs. 
(16) and (18), one can separate out the supersymmetric couplings, $\epsilon_i h^d_j$ and $\epsilon_i h^e_j$, leaving all the vertices depending only on $\xi_i$ or $\eta_i$ similarly to the vertices in Eqs. (12)–(15). Then, combining those with the couplings in the superpotential (\[supo\]), we can define the effective supersymmetric couplings as $\tilde{\lambda}'_{ijk} = \epsilon_i h^d_j \delta_{jk} +\lambda'_{ijk}$ and $\tilde{\lambda}_{ijk} = \epsilon_i h^e_j \delta_{jk} +\lambda_{ijk}$. We will see that these couplings determine the quantity $\xi_i$ or $\eta_i$ through the renormalization group evolution of the bilinear (soft) terms. Radiative neutrino mass matrix from R-parity violation ====================================================== After performing the rotations described in the previous section, the three neutrinos in the “weak-basis” get important mass corrections arising from the see-saw mechanism associated with the heavy four neutralinos. As is well-known, this gives the “tree-level” neutrino matrix of the form; $$\label{Mtree} M^{tree}_{ij}= - {M_Z^2 \over F_N} \xi_i \xi_j c_\beta^2 \,,$$ which makes massive only one neutrino, $\nu_3$, in the direction of $\vec{\xi}$. The other two get masses from finite one-loop corrections and thus $\nu_3$ is usually the heaviest component. We fix the value of $m_{\nu_3}$ from the atmospheric neutrino data [@sk-atm] and thus the overall size of $\xi\equiv |\vec{\xi}|$ as $$\label{xicb} \xi c_\beta = 0.74\times10^{-6} \left(F_N \over M_Z\right)^{1/2} \left(m_{\nu_3} \over 0.05 \mbox{ eV} \right)^{1/2}\,.$$ where $F_N$ is defined in Eq. (5) and its typical value is given by $M_2$. Furthermore, among three neutrino mixing angles defined by the mixing matrix $$U=\pmatrix{ 1 & 0 & 0 \cr 0 & c_{23} & s_{23} \cr 0 & -s_{23} & c_{23} \cr} \pmatrix{ c_{13} & 0 & s_{13} \cr 0 & 1 & 0 \cr -s_{13} & 0 & c_{13} \cr} \pmatrix{ c_{12} & s_{12} & 0 \cr -s_{12} & c_{12} & 0 \cr 0 & 0 & 1 \cr}$$ with $c_{ij}=\cos\theta_{ij}$ and $s_{ij}=\sin\theta_{ij}$, etc., two angles are almost determined by the tree-level mass matrix (\[Mtree\]) as follows; $$\begin{aligned} \label{twoangles} \sin^22\theta_{atm} &\approx& \sin^22\theta_{23} \approx 4 {\xi_2^2 \over \xi^2} {\xi_3^2\over \xi^2} \nonumber\\ \sin^22\theta_{chooz} &\approx& \sin^22\theta_{13} \approx 4 {\xi_1^2 \over \xi^2} \left(1-{\xi_1^2\over \xi^2}\right) \,.\end{aligned}$$ The atmospheric neutrino and CHOOZ experiments [@sk-atm; @chooz] require $\sin^22\theta_{atm} \approx 1$ and $\sin^22\theta_{chooz} < 0.2$. The angle $\theta_{12}$ can be determined only after including one-loop corrections and is responsible for the solar neutrino mixing, $\theta_{sol} \approx \theta_{12}$, if the neutrino mass matrix is to explain the atmospheric and solar neutrino oscillations as will be discussed in the next section. An important property of the tree-level neutrino mass matrix (\[Mtree\]) is that it depends on the bilinear R-parity violating quantities $\xi_i=a_i - \epsilon_i$ which can be re-expressed as $$\label{xiis} \xi_i = \epsilon_i {\Delta m^2_i+ \Delta B_i \mu t_\beta \over m_{\tilde{\nu}_i}^2 } - {m^2_{L_i H_1} \over m_{\tilde{\nu}_i}^2 }$$ where $\Delta m^2_i \equiv m^2_{H_1}-m^2_{L_i}$, $\Delta B_i \equiv B-B_i$ and $ m_{\tilde{\nu}_i}^2 = m^2_{L_i}+ M_Z^2 c_{2\beta}/2$. In other words, the nonzero values of $\xi_i$ arise from the mismatch of soft mass parameters for the Higgs field $H_1$ and slepton field $L_i$ having the same gauge quantum numbers. 
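Before turning to the origin of the $\xi_i$ under universality, it is worth noting that Eqs. (\[xicb\]) and (\[twoangles\]) can be checked directly against the numerical results presented in Sec. 4. The sketch below uses the one-loop-improved $\xi_i$ of the first parameter set (Set 1) given there; any overall rescaling of $\vec\xi$ drops out of the angles.

```python
import numpy as np

# Tree-level mixing angles of Eq. (twoangles) for the Set 1 direction of xi.
xi = np.array([1.81e-7, 6.06e-6, 6.63e-6])       # one-loop improved xi_i (Set 1, Sec. 4)
r = xi**2 / np.sum(xi**2)

print(4.0 * r[1] * r[2])             # sin^2 2theta_atm   ~ 0.99
print(4.0 * r[0] * (1.0 - r[0]))     # sin^2 2theta_chooz, cf. ~0.0017 in Table 1

# Overall scale via Eq. (xicb): m_nu3 * (F_N/M_Z) = 0.05 eV * (|xi| c_beta / 0.74e-6)^2
cb = np.cos(np.arctan(10.0))                     # tan(beta) = 10 for Set 1
print(0.05 * (np.linalg.norm(xi) * cb / 0.74e-6)**2)
# ~ 0.073 eV: consistent with m_nu3 ~ sqrt(2.5e-3 eV^2) ~ 0.05 eV for F_N ~ 1.5 M_Z
```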
If one assumes the universality condition, one has $\Delta m^2_i= \Delta B_i = m^2_{L_i H_1}=0$ at the mediation scale of supersymmetry breaking, and their nonzero values are generated by Yukawa coupling effects through the renormalization group evolution down to the weak scale. Under the assumption that the R-parity violating couplings follow the usual hierarchies as the quark and lepton Yukawa couplings, that is, $\lambda'_i\equiv\lambda'_{i33}$ and $\lambda_i\equiv\lambda_{i33}$ give the dominant contributions, the renormalization group equations (RGE) of the bilinear terms are given by $$\begin{aligned} \label{rges} 16 \pi^2 {d\over dt} \Delta m^2_i &=& 6 h_b^2 X_b + 2 (1-\delta_{i3}) h_\tau^2 X_\tau \nonumber\\ 16 \pi^2 \epsilon_i {d\over dt} \Delta B_i &=& \epsilon_i (6 h_b^2 A_b + 2(1-\delta_{i3}) h_\tau^2 A_\tau ) + (6 \lambda'_i h_b A'_i + 2\lambda_i h_\tau A_i) \nonumber\\ && - \Delta B_i (3 \lambda'_i h_b + \lambda_i h_\tau) \\ 16 \pi^2 {d\over dt} m^2_{L_i H_1} &=& -(6 \lambda'_i h_b X_b + 2\lambda_i h_\tau X_\tau) + m^2_{L_i H_1} (3 h_b^2 +(1+\delta_{i3}) h_\tau^2) \nonumber\\ && +(6 \lambda'_i h_b + 2\lambda_i h_\tau )\Delta m^2_i -(6 \lambda'_i h_b A_b \Delta A'_i + 2\lambda_i h_\tau A_\tau \Delta A_i) \nonumber\end{aligned}$$ where $t=\ln Q$ with the renormalization scale $Q$, $X_b=m^2_{Q_3}+m^2_{D^c_3} + m^2_{H_1} + A_b^2$, $X_\tau=m^2_{L_3}+m^2_{E^c_3} + m^2_{H_1} + A_\tau^2$. Here, $A$’s are the trilinear soft parameters corresponding to the $h_b,h_\tau$, $\lambda'_i$ and $\lambda_i$ couplings, and finally $\Delta A'_i\equiv A'_i-A_b$, $\Delta A_i\equiv A_i-A_\tau$. Under the one-step approximation, the above RGE can be solved as $$\begin{aligned} \label{rgesol} \epsilon_i \Delta m^2_i - m^2_{L_i H_1} &=& {1 \over 8\pi^2} \left(3 \tilde{\lambda}'_i h_b X_b + \tilde{\lambda}_i h_\tau X_\tau \right) \ln{M_m \over m_{\tilde{t}} } \nonumber\\ \epsilon_i \Delta B_i &=& {1 \over 8\pi^2} \left(3 \tilde{\lambda}'_i h_b \tilde{A}'_i + \tilde{\lambda}_i h_\tau \tilde{A}_i \right) \ln{M_m \over m_{\tilde{t}} }\end{aligned}$$ where $M_m$ is the mediation scale of supersymmetry breaking, and $m_{\tilde{t}}$ is a typical stop mass scale where we calculate the sneutrino vacuum expectation values. Here, we have defined $\tilde{\lambda}'_i = \epsilon_i h_b + \lambda'_i$, $\tilde{\lambda}_i = \epsilon_i(1-\delta_{i3}) h_\tau + \lambda_i$, $\tilde{\lambda}'_i \tilde{A}'_i = \epsilon_i h_b A_b + \lambda'_i A'_i$ and $\tilde{\lambda}_i \tilde{A}_i = \epsilon_i(1-\delta_{i3}) h_\tau A_\tau + \lambda_i A_i$. Note that $\lambda_3$ as well as $\tilde{\lambda}_3$ vanish. In gauge mediated supersymmetry breaking models where the mediation scale $M_m$ is low, the above approximate solution is quite reliable. We are ready to discuss the typical sizes of the supersymmetric bilinear, and trilinear parameters, $\epsilon_i$ and $\lambda'_i, \lambda_i$, (or $\tilde{\lambda}'_i$, $\tilde{\lambda}_i$) which will be relevant for the study of the LSP decay. Assuming that there is no fine cancellation among various terms in Eq. (\[xiis\]) and the term with $X_b$ gives the largest contribution in (\[rgesol\]), we obtain $$ {\xi_i c_\beta \over \tilde{\lambda}'_i} \sim {3\over 8 \pi^2} {m_b \over v}{ X_b \over m^2_{\tilde{\nu}_i} } \ln{ M_m \over m_{\tilde{t}} } \,.$$ In gauge mediated supersymmetry breaking models [@gmsb], the sfermion soft masses are determined by gauge-boson/gaugino loop corrections which implies $X_b/m^2_{\tilde{\nu}_i} \approx 2\alpha_3^2/\alpha_2^2$. 
Further assuming the supersymmetry breaking scale $\Lambda_S$ close to $M_m$, we take $M_m/m_{\tilde{t}} \approx 4\pi/\alpha_3$. This gives $$\label{lampv} \epsilon_i h_b\;\mbox{or}\; \lambda'_i \sim 20\, \xi_i c_\beta \,.$$ When the tree mass matrix gives the atmospheric neutrino mass scale as discussed, we get $\xi_i c_\beta \sim 10^{-6}$ and thus $$\epsilon_i \sim 20 \xi_i c_\beta/h_b \sim 2\times10^{-3}/t_\beta$$ for $F_N=M_Z$. This shows that the parameters $\epsilon_i$ and $a_i$ can be very large while maintaining $\xi_i=a_i-\epsilon_i$ very small for low $\tan\beta$. Let us consider another possibility that $\tilde{\lambda}_i$ gives dominant contribution in Eq. (\[rgesol\]). Following the similar steps as above, we obtain $$\label{lambv} \epsilon_i h_\tau\;\mbox{or}\; \lambda_i \sim 560\, \xi_i c_\beta$$ where we took $X_\tau = 3 m^2_{\tilde{\nu}_i}$. This implies that the contribution of $\lambda_i$ to $\xi_i$ is comparable to that of $\lambda'_i$ if $\lambda_i \sim 30 \lambda'_i$. Later, we will see that the large mixing angle explaining the the solar neutrino data can be obtained for $\lambda_{1,2} \sim 5 \lambda'_{2,3}$. So far, we neglected the radiative corrections in the determination of vacuum expectation values of the sneutrino as well as Higgs fields. To obtain reliable minimization conditions for the electroweak symmetry breaking, one has to consider the effective scalar potential $$V_{eff}= V_0 + V_1$$ where $V_0$ is the tree-level potential and $V_1= {1\over 64\pi^2} \mbox{Str} {\cal M}^4 \left( \ln{{\cal M}^2 \over Q^2} -{3\over2}\right)$ includes one-loop corrections. With R-parity violation, $V_1$ is a function of not only the Higgs fields but also the sneutrinos [@CK; @valle97; @valle1]. In deriving Eq. (\[xiis\]), we neglected $V_1$ and used the tree-level minimization conditions for the neutral Higgs and sneutrino fields. Since the nonzero $\xi_i$’s are also generated by the renormalization effect, the inclusion of $V_1$ in the determination of sneutrino vacuum expectation values is crucial, in particular, in the case of a low-scale supersymmetry breaking mediation. In gauge mediation models, such one-loop corrections can give rise to an order of magnitude change in neutrino mass-squared values which are well-measured in the atmospheric and solar neutrino experiments [@CK]. After including such effects, Eq. (\[xiis\]) is modified to $$\begin{aligned} \label{newxi} \xi_i &=& + \epsilon_i {\Delta m^2_i+ \Delta B_i \mu t_\beta \over m_{\tilde{\nu}_i}^2 +\Sigma^{(2)}_{L_i}} - {m^2_{L_i H_1} \over m_{\tilde{\nu}_i}^2+\Sigma^{(2)}_{L_i} } \nonumber\\ & & + \epsilon_i {\Sigma_{H_1} - \Sigma^{(2)}_{L_i} - \epsilon_i^{-1}\Sigma^{(1)}_{L_i} \over m_{\tilde{\nu}_i}^2+\Sigma^{(2)}_{L_i} }\end{aligned}$$ where $\Sigma_{H_1} = \partial V_1/H_1^* \partial H_1$, $\Sigma^{(1)}_{L_i} = \partial V_1/H_1^* \partial L_i$ and $\Sigma^{(2)}_{L_i} = \partial V_1/L_i^* \partial L_i$ [@CK]. In our numerical calculation in the following section, we include such improvements. In order to get the full neutrino mass matrix, one-loop radiative corrections to neutrino mass matrix should be included: $$M^\nu_{ij} = M^{tree}_{ij} + M^{loop}_{ij} \,.$$ In the below, we will discuss whether the above mass matrix $M^\nu$ can explain the atmospheric and solar neutrino data, simultaneously. The numerical calculation in this direction has been performed first in Ref. [@hemp] without including the effect of the one-loop effective potential $V_1$ in the context of minimal supergravity models. 
Inclusion of full one-loop corrections has been made in Refs. [@CK] and [@valle1], in gauge mediation and supergravity models, respectively. As mentioned, the one-loop mass matrix $M^{loop}$ lifts twofold degeneracy of the tree-level mass matrix and thus all the three neutrinos get masses which are generically hierarchical. It is instructive to compare the tree and loop mass components, in order to get an idea about the mass of the neutrino, $\nu_2$, which determine the solar neutrino mass scale. The largest contribution to $M^{loop}$ usually comes from the one-loop diagrams with $\lambda'_i$ and $\lambda_i$ (more generally with the induced ones, $\tilde{\lambda}'_i$ and $\tilde{\lambda}_i$) which takes the from $$\begin{aligned} \label{Mloop} M^{loop}_{ij} &=& 3 {\tilde{\lambda}'_i \tilde{\lambda}'_j \over 8\pi^2} {m_b^2(A_b+\mu t_\beta) \over m^2_{\tilde{b}_1} - m^2_{\tilde{b}_2} } \ln{m^2_{\tilde{b}_1} \over m^2_{\tilde{b}_2}} \nonumber\\ && + { \tilde{\lambda}_i \tilde{\lambda}_j \over 8\pi^2} {m_\tau^2(A_\tau+\mu t_\beta) \over m^2_{\tilde{\tau}_1} - m^2_{\tilde{\tau}_2} } \ln{m^2_{\tilde{\tau}_1} \over m^2_{\tilde{\tau}_2}}\end{aligned}$$ where $m_{\tilde{b}_i}$ and $m_{\tilde{\tau}_i}$ are the sbottom and stau mass eigenvalues, respectively. When $\tilde{\lambda}'_i > \tilde{\lambda}_i$ so that $\tilde{\lambda}'_i$ give dominant contributions to both $\xi_i$ and $M^{loop}$, one has $$\label{bicase} {M^{loop}_{ij} \over M^{tree}_{ij} } \sim {3\over 8 \pi^2} {\tilde{\lambda}'_i \tilde{\lambda}'_j \over \xi_i \xi_j c_\beta^2} {m_b^2 \mu t_\beta \over M_Z m^2_{\tilde{b}}} \sim 10^{-3} t_\beta$$ assuming the relation (\[lampv\]) and $2.5\mu=m_{\tilde{b}}=500$ GeV. In this case, the second neutrino mass eigenvalue is determined by the sub-leading contribution of $\tilde{\lambda}_i$ to either $\xi_i$ or $M^{loop}$. Thus, we expect $ m_{\nu_2}/m_{\nu_3} < 10^{-3} t_\beta$, or equivalently, $$\label{smallratio} {\Delta m^2_{21} \over \Delta m^2_{32} } < 10^{-6} t_\beta^2 \,.$$ Note that the solar neutrino experiments require $\Delta m^2_{21} = 10^{-5}-10^{-10}$ eV$^2$ depending on the type of solar neutrino oscillation solutions. As the atmospheric neutrino oscillation requires $\Delta m^2_{32}\approx 3\times10^{-3}$ eV$^2$, it would be much easier to get the so-called vacuum oscillation or the low $\Delta m^2$ MSW solution. To realize the large mixing MSW (LMA) solution which is now strongly favored by the recent SNO data [@sol-exp], a large $\tan\beta$ is needed. Such a tendency has also been observed by numerical calculations in the context of minimal supergravity models [@hemp; @valle1]. However, under the assumption that $\tilde{\lambda}'_i > \tilde{\lambda}_i$, it is impossible to get a large mixing angle for the solar neutrino oscillation due to the CHOOZ constraint. This will become clear when we discuss the bilinear model in the next section. Let us now consider the opposite case that $\tilde{\lambda}_i \gg \tilde{\lambda}'_i$ so that $\tilde{\lambda}_i$ give dominant contributions to $\xi_i$ and $M^{loop}$ (which is the case when $\tilde{\lambda}_i > 30 \tilde{\lambda}'_i$ as in Eq. (\[lambv\])), one finds $$\label{tricase} {M^{loop}_{ij} \over M^{tree}_{ij} } \sim {1\over 8 \pi^2} {\tilde{\lambda}_i \tilde{\lambda}_j \over \xi_i \xi_j c_\beta^2} {m_\tau^2 \mu t_\beta \over M_Z m^2_{\tilde{\tau}}} \sim t_\beta$$ for $\mu=1.5 m_{\tilde{\tau}}=200$ GeV. 
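The order-of-magnitude relations in Eqs. (\[lampv\]), (\[lambv\]), (\[bicase\]) and (\[tricase\]) follow from elementary arithmetic, and the short check below reproduces them. The inputs ($\alpha_3\simeq0.12$, $\alpha_2\simeq0.034$, a running $m_b\simeq2.9$ GeV, $m_\tau=1.78$ GeV, $v=174$ GeV, plus the sparticle masses quoted above) are representative choices; the resulting factors depend on the scales at which the couplings are evaluated, so only their orders of magnitude are meaningful.

```python
import numpy as np

# Rough numerical check of the order-of-magnitude estimates quoted in the text.
a3, a2 = 0.118, 0.034            # alpha_3, alpha_2 (representative values)
mb, mtau, v = 2.9, 1.78, 174.0   # running m_b, m_tau, vev (GeV)
MZ = 91.2
log_fac = np.log(4 * np.pi / a3)            # ln(M_m / m_stop) ~ ln(4 pi / alpha_3)

# Eq. (lampv): lambda'_i / (xi_i c_beta)
inv_p = (3 / (8 * np.pi**2)) * (mb / v) * (2 * a3**2 / a2**2) * log_fac
# Eq. (lambv): lambda_i / (xi_i c_beta), taking X_tau = 3 m_snu^2
inv_l = (1 / (8 * np.pi**2)) * (mtau / v) * 3.0 * log_fac
print(1 / inv_p, 1 / inv_l)      # ~ 14 and ~ 550: of order the factors 20 and 560 quoted above

# Eq. (bicase): M_loop/M_tree per tan(beta), lambda'-dominated case (2.5 mu = m_sbottom = 500 GeV)
mu, msb = 200.0, 500.0
r_bi = (3 / (8 * np.pi**2)) * 20.0**2 * mb**2 * mu / (MZ * msb**2)
# Eq. (tricase): same ratio, lambda-dominated case (mu = 1.5 m_stau = 200 GeV)
mu2, mstau = 200.0, 200.0 / 1.5
r_tri = (1 / (8 * np.pi**2)) * 560.0**2 * mtau**2 * mu2 / (MZ * mstau**2)
print(r_bi, r_tri)               # ~ 1e-3 and ~ O(1), i.e. ~1e-3 * t_beta and ~ t_beta
```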
Note that $m_{\nu_2}/m_{\nu_3}$ can be even larger than one and the resultant neutrino mass components satisfy $M^{\nu}_{11,12,22} > M^{\nu}_{i3}$ as $\tilde{\lambda}_3\equiv0$. Such a case is not favorable as it cannot give a large mixing angle for the the atmospheric neutrino oscillation. From the above discussion, we can infer that the atmospheric and LMA solar neutrino oscillation can be realized if the couplings $\tilde{\lambda}_i$ and $\tilde{\lambda}'_i$ satisfy a relation in-between (33) and (35), which will be the case that $\lambda_i$ is moderately larger than $\lambda'_i$.[^2] We will analyze the neutrino masses and mixing in such a scheme and its collider signature in the following section. Atmospheric and solar neutrino oscillations and LSP decays in GMSB models ========================================================================= Let us make a numerical analysis to find how the neutrino mass matrix from R-parity violation explain both the atmospheric and solar neutrino oscillations and what are the corresponding collider signatures coming from the LSP decay. Our discussions are specialized in the models with gauge mediated supersymmetry breaking (GMSB) in which the universality condition is automatic and thus supersymmetric flavor problems are naturally avoided. For our discussion, we will take the minimal number of the messenger multiplets ($5+\bar{5}$) and the messenger supersymmetry breaking scale $\Lambda_S$ not too far from the mediation scale $M_m$ [@gmsb]. We will also concentrate on the cases where the LSP (being a neutralino) is lighter than the $W$ boson so that only three-body decays are allowed. Due to the (effective) R-parity violating couplings introduced in the previous sections, the LSP, denoted by $\tilde{\chi}^0_1$, decays through the mediation of on/off-shell $W,Z$ gauge bosons, Higgses, sleptons and squarks, producing the following three-fermion final states, $$\nu \nu \nu\,, \quad \nu l_i^\pm l_j^\mp\,,\quad \nu q \bar{q}'\,,\quad \l_i^\pm q q' \,.$$ Here we do not distinguish the neutrino flavors, and the final quark states will be identified with jets. The modes, $\nu l^\pm_i l^\mp_j$ and $\l^\pm_i jj$, are of a particular interest since the flavor dependence of R-parity violating couplings, which are relevant to the neutrino mixing angles, will be encoded in their branching ratios. Before performing the numerical analysis, let us make some qualitative discussions on the LSP decay. When the LSP is heavier than the $W$ boson, the decay modes $\tilde{\chi}^0_1 \to l_i^\pm W^\mp$ will have sizable branching fractions [@mrv; @jaja] and measuring them will give a direct information on the ratios $ \xi_1^2 : \xi_2^2 : \xi_3^2$ from which we can probe the neutrino mixing angles $\theta_{23}$ and $\theta_{13}$ through Eq. (\[twoangles\]) [@jaja]. The decay rate of the mode $\tilde{\chi}^0_1 \to l_i W$ is given by [@jaja] $$\begin{aligned} \label{lW} \Gamma(l_i W) &=& {G_F m^3_{\tilde{\chi}^0_1} \over 4\sqrt{2} \pi} [|C^L_1|^2 + |C^R_1{m^e_i\over F_C}|^2] |\xi_i|^2 c_\beta^2\, I_2(M_W^2/m^2_{\tilde{\chi}^0_1}) \\ \mbox{with} \quad C^L_1 &=& {1\over\sqrt{2}} [N_{11} c^N_1 + N_{12}(c^N_2-\sqrt{2}c^L_1) +N_{13}(c^N_3-c^L_2)+N_{14}c^N_4] \nonumber\\ C^R_1 &=& N_{12}c^R_1 + {1\over \sqrt{2}} N_{14}c^R_2. \nonumber\end{aligned}$$ Here, $N_{1j}$ are the components of the neutralino diagonalization matrix for the LSP and $I_2(x)=(1-x)^2(1+2x)$. 
Taking $\xi_ic_\beta=10^{-6}$, $C_1^L=1$ and $m_{\tilde{\chi}^0_1}=M_Z$, we get $\Gamma(lW) \approx 10^{-14}$ GeV corresponding to the decay length $\tau \approx 2\,cm$. Thus, the measurement of BR($e W$) : BR($\mu W$) : BR($\tau W$), or $\xi_1^2$ : $\xi_2^2$ : $\xi_3^2$, will be certainly feasible in the future colliders. Note that the contribution $C_1^R m_\tau/F_C \sim m_\tau t_\beta/\mu$ can be neglected unless $\tan\beta$ is very large. If the LSP is lighter than the $W$ boson, only three-body decay modes are allowed and thus the desired decay modes may be too suppressed to be observed. As a comparison with the above two body decay, let us consider the process $\tilde{\chi}^0_1 \to l_i W^* \to l_i f f'$ whose decay rate is $$\label{lW*} \Gamma(l_i W^*) = {3 G_F^2 m_{\tilde{\chi}^0_1}^5 \over 64\pi^3} [|C_1^L|^2+|C_1^R {m^e_i \over F_C}|^2] |\xi_i|^2 c^2_\beta \, I_3(m_{\tilde{\chi}^0_1}^2/M_W^2)$$ where $I_3(x)=[12x-6x^2-2x^3+12(1-x)\ln(1-x)]/x^4$. This gives $\Gamma(l_i W^*) \approx 8\times10^{-17}$ GeV for $C_1^L=1$, $\xi_i c_\beta= 10^{-6}$ and $m_{\tilde{\chi}^0_1}=50$ GeV. If this is the dominant decay channel, the total decay length will be $\tau \sim 2.5 m$ making it hard to observe sufficient LSP decay signals. However, it will turn out that the dominant LSP decay diagrams involve the effective $\lambda'_i$ or $\lambda_i$ couplings which make the LSP decay well inside the detector. This can be understood from the previous discussions showing that $\tilde{\lambda}'_i, \tilde{\lambda}_i \gg \xi_i c_\beta$. Furthermore, the corresponding decay modes $\tilde{\chi}^0_1 \to \nu jj$ or $\nu l_i^\pm l_j^\mp$ are dominated by the diagrams with the exchange of the sneutrino or charged slepton (in particular, the right-handed stau) which are relatively light. To get an order of magnitude estimation, let us consider the decay rate for $\tilde{\chi}_1^0 \to \nu l_i^\pm l_j^\mp$; $$\Gamma(\nu_k l_i^\pm l_j^\mp) = {\alpha' \tilde{\lambda}_{kij}^2 \over 768 \pi^2} { m_{\tilde{\chi}_1^0}^5 \over m^4_{\tilde{e}^c_j} } |N_{11}|^2 J(m^2_{\tilde{\chi}_1^0}, m^2_{\tilde{\nu}_i}, m^2_{\tilde{e}_j}, m^2_{\tilde{e}^c_k})$$ where $\alpha'=g'^2/4\pi$ and $J$ is a order-one function of the sparticle masses which is normalized to be one in the limit of $m^2_{\tilde{\chi}_1^0}= m^2_{\tilde{e}^c_k}=0$. Taking $\lambda_{i33}=2\times10^{-5}$, $m_{\tilde{\chi}_1^0}=50$ GeV $m_{\tilde{e}^c_3}=70$ GeV and $J=1$, we get $\Gamma(\nu_i \tau\bar{\tau}) \approx \Gamma(\nu_3 l_i^\pm \tau^\mp) \approx10^{-14}$ GeV which corresponds to $\tau \sim 2\, cm$. As we will see, this is a typical order of magnitude for the total decay rate of the LSP when R-parity violation accounts for the atmospheric and solar neutrino masses and mixing. Let us now present our numerical results for the two possible schemes of R-parity violation: (i) the bilinear model which has only three input parameters $\epsilon_i$; (ii) the trilinear model where we introduce five input parameters $\lambda'_i$ and $\lambda_i$. 
  Set1                     $\tan\beta=10$                   $\Lambda_S=40$ TeV    $M_m=150$ TeV
  ------------------------ -------------------------------- --------------------- --------------------------
  $\epsilon^0_i$           $7.53\times10^{-6}$              $2.51\times10^{-4}$   $2.51\times10^{-4}$
  $\tilde{\lambda}'_i$     $1.40\times10^{-6}$              $4.68\times10^{-5}$   $4.68\times10^{-5}$
  $\tilde{\lambda}_i$      $7.60\times10^{-7}$              $2.53\times10^{-5}$   0
  $\xi^0_i$                $2.40\times10^{-7}$              $8.01\times10^{-6}$   $7.86\times10^{-6}$
  $\xi_i$                  $1.81\times10^{-7}$              $6.06\times10^{-6}$   $6.63\times10^{-6}$
  BR                       $e$                              $\mu$                 $\tau$
  $\nu jj$                 $3.58\times10^{-1}$                                    
  $l^\pm_i jj$             $1.10\times10^{-6}$              $1.22\times10^{-3}$   $1.18\times10^{-3}$
  $\nu l_i^\pm \tau^\mp$   $4.56\times10^{-4}$              $3.12\times10^{-1}$   $3.27\times10^{-1}$
                           $m_{\chi^0_1}$=49 GeV            $\Gamma=$             $7.18\times10^{-14}$ GeV
  ------------------------ -------------------------------- --------------------- --------------------------

  : A bilinear model with the input parameters, $\tan\beta$, $\Lambda_S$, $M_m$ and $\epsilon^0_i$, allowing for the SMA solution. The values of $\epsilon_i^0$ are set at the mediation scale $M_m$. The effective trilinear/bilinear R-parity violating parameters, $\tilde{\lambda}'_i$, $\tilde{\lambda}_i$/$\xi^{(0)}_i$ defined in the text, are shown in the upper part. Here, $\xi^0_i$ and $\xi_i$ are the tree-level and one-loop improved values, respectively. In the lower part are shown the important branching ratios of the LSP with mass $m_{\tilde{\chi}^0_1}$ and its total decay rate $\Gamma$. The three columns correspond to the lepton flavors, $i=e,\mu$ and $\tau$, respectively. For the mode $\nu jj$, we do not distinguish the neutrino flavors. The resulting neutrino oscillation parameters are presented in the last two lines. $$\begin{array}{l} (\Delta m^2_{31},~ \Delta m^2_{21})= (2.5\times10^{-3},~6.1\times10^{-6})~ \mbox{eV}^2 \cr (\sin^22\theta_{atm},~ \sin^22\theta_{sol},~ \sin^22\theta_{chooz}) =(0.99,~0.0018,~0.0017) \end{array}$$

$\bullet$ The bilinear model with the universality condition is known to accommodate only the small mixing angle solution (SMA) of solar neutrino oscillations [@CK; @valle1], which is now strongly disfavored by the recent SNO data [@sol-exp]. This model is an attractive option as it is the minimal R-parity violating model and provides fairly neat correlations between the neutrino oscillation parameters and the collider signatures. In this scheme, the effective trilinear couplings are given by $\tilde{\lambda}'_i=\epsilon_i h_b$ and $\tilde{\lambda}_i=\epsilon_i h_\tau$ in Eq. (\[rgesol\]) and thus both the tree and the loop mass matrix take the form $M^\nu_{ij} \propto \epsilon_i \epsilon_j$. The other flavor dependence comes from the $h_b,h_\tau$ Yukawa coupling effects, which is weak unless $\tan\beta$ is very large. Since the relation Eq. (\[bicase\]) applies here, the determination of the overall size of $\xi$ in Eq. (\[xicb\]) and of the two mixing angles $\theta_{23}$ and $\theta_{13}$ in Eq. (\[twoangles\]) holds almost precisely. Thus, the atmospheric and CHOOZ neutrino experiments require $|\xi_1| \ll |\xi_2| \approx |\xi_3|$, which can be directly translated to the condition $|\epsilon_1| \ll |\epsilon_2| \approx |\epsilon_3|$. This leads to the neutrino mass matrix structure $M^{\nu}_{11} < M^\nu_{12} <M^\nu_{22,23,33}$. As a consequence, only a small mixing angle for the solar neutrino oscillation can be accounted for in the bilinear model. Indeed, the solar mixing angle $\theta_{12}$ is almost fixed by the relation $\tan\theta_{12} \approx \tan\theta_{13}$, and thus we get the relation $\sin^22\theta_{sol} \approx \sin^22\theta_{chooz}$. 
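As a quick cross-check of Table 1, the quoted angles follow from the listed one-loop $\xi_i$ if one assumes the standard tree-level relations behind Eq. (\[twoangles\]), $\tan\theta_{23}\simeq\xi_2/\xi_3$ and $\sin\theta_{13}\simeq\xi_1/|\xi|$ (these explicit relations are our assumption here, since the equation itself is not reproduced above):

```python
# One-loop improved xi_i of Table 1 (Set1); any common overall factor drops out of the ratios
xi1, xi2, xi3 = 1.81e-7, 6.06e-6, 6.63e-6
norm2 = xi1**2 + xi2**2 + xi3**2

# Assumed relations: U_e3 ~ xi_1/|xi|, U_mu3 ~ xi_2/|xi|, U_tau3 ~ xi_3/|xi|
sin2_2th_atm = 4 * xi2**2 * xi3**2 / norm2**2
sin2_2th_chooz = 4 * xi1**2 * (norm2 - xi1**2) / norm2**2

print(f"sin^2(2 theta_atm)   ~ {sin2_2th_atm:.2f}")    # ~0.99, as quoted in Table 1
print(f"sin^2(2 theta_chooz) ~ {sin2_2th_chooz:.4f}")  # ~0.0016, close to the quoted 0.0017
# In the bilinear model sin^2(2 theta_sol) ~ sin^2(2 theta_chooz), cf. 0.0018 vs 0.0017 in Table 1.
```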
Given $\Delta m^2_{32} \approx 3\times10^{-3}$ eV$^2$ for the atmospheric neutrino oscillation, Eq. (\[smallratio\]) tells us that $\Delta m^2_{21}< 3\times10^{-9} t_\beta^2$ eV$^2$. This estimation is by no means exact but can show some qualitative features. For instance, it implies that the right value of $\Delta m^2_{21} \sim 5\times10^{-6}$ eV$^2$ is hardly achieved with small $\tan\beta$ in the GMSB models under consideration. In our numerical calculation, we looked for the SMA solutions varying the parameters $\tan\beta$, $\Lambda_S$ and $M_m$ as well as two R-parity violating parameters $\epsilon^0_1$ and $\epsilon^0_2$ defined at the scale $M_m$ while keeping $\epsilon^0_2=\epsilon^0_3$. We could find a reasonable parameter space only for $\tan\beta\approx 10-25$, limiting ourselves to $\tan\beta < 25$ because the (right-handed) stau becomes the LSP for larger $\tan\beta$.

  Set2                     $\tan\beta=25$                      $\Lambda_S=45$ TeV    $M_m=90$ TeV
  ------------------------ ----------------------------------- --------------------- --------------------------
  $\epsilon^0_i$           $8.94\times10^{-7}$                 $2.98\times10^{-5}$   $2.98\times10^{-5}$
  $\tilde{\lambda}'_i$     $4.25\times10^{-7}$                 $1.42\times10^{-5}$   $1.42\times10^{-5}$
  $\tilde{\lambda}_i$      $2.30\times10^{-7}$                 $7.67\times10^{-6}$   0
  $\xi^0_i$                $4.36\times10^{-7}$                 $1.45\times10^{-5}$   $1.45\times10^{-5}$
  $\xi_i$                  $4.04\times10^{-7}$                 $1.35\times10^{-5}$   $1.28\times10^{-5}$
  BR                       $e$                                 $\mu$                 $\tau$
  $\nu jj$                 $1.11\times10^{-1}$                                       
  $l^\pm_i jj$             $1.55\times10^{-6}$                 $1.72\times10^{-3}$   $1.72\times10^{-3}$
  $\nu l_i^\pm \tau^\mp$   $6.22\times10^{-4}$                 $4.07\times10^{-1}$   $4.07\times10^{-1}$
                           $m_{\tilde{\chi}^0_1}$=58 GeV       $\Gamma=$             $4.30\times10^{-14}$ GeV
  ------------------------ ----------------------------------- --------------------- --------------------------

  : Same as in Table 1 but with $\tan\beta=25$. $$\begin{array}{l} (\Delta m^2_{31},~ \Delta m^2_{21})= (3.0\times10^{-3},~4.1\times10^{-6})~ \mbox{eV}^2 \cr (\sin^22\theta_{atm},~ \sin^22\theta_{sol},~ \sin^22\theta_{chooz}) =(0.99,~0.0019,~0.0017) \end{array}$$

In Tables 1 and 2, we present two typical sets of parameters accommodating the SMA solution and the other neutrino data. In the tables, $\epsilon^0_i$ denote input values set at the scale $M_m$ where supersymmetry breaking is mediated. As can be seen, the effective couplings $\tilde{\lambda}'_i$ and $\tilde{\lambda}_i$ are much larger than $\xi_i c_\beta$ and the dominant decay modes are $\nu jj$ and $\nu ll$, where the diagrams with the exchanges of sneutrinos and charged sleptons give the main contributions. One of its consequences is that the total decay rate is larger than $10^{-14}$ GeV, making the decay length smaller than a few $cm$. We have checked that the total decay rate is in the region of $\Gamma\sim 5\times10^{-14}$ GeV for the LSP mass $m_{\tilde{\chi}^0_1}=25-80$ GeV. That is, the decay length is around $\tau \sim 0.4\, cm$. This has to be contrasted with the supergravity case [@valle2] where one typically gets $\tau > 1\,cm$ for a light LSP. It is also worthwhile to note that the SMA solution requires $\xi_1/\xi_2 \approx 0.03 \approx \epsilon_1/\epsilon_2$ and thus the bilinear model typically predicts the following relation: $$10^3 \mbox{BR}(\nu e^\pm \tau^\mp) \sim \mbox{BR}(\nu \mu^\pm \tau^\mp) \approx \mbox{BR}(\nu \tau^\pm \tau^\mp) \,.$$ As pointed out in Ref. [@valle2], the modes $l_i jj$ are of great interest. 
Their decay rates are dominated by the $W$ exchange diagrams as the contribution of the largest coupling $\tilde{\lambda}'_{i33}$ giving $\tilde{\chi}^0_1 \to l_i t \bar{b}$ is kinematically forbidden and the coupling $\tilde{\lambda}'_{i22} = \epsilon_i h_s$ gives a sub-leading effect compared to $\xi_i c_\beta$ which enters the $\chi$-$l$-$W$ vertices. Therefore, the ratio $$\mbox{BR}(ejj)\;:\; \mbox{BR}(\mu jj)\;:\; \mbox{BR}(\tau jj)$$ is almost the same as the ratio $\xi_1^2$ : $\xi_2^2$ : $\xi_3^2$ that determines $\theta_{atm}$ and $\theta_{chooz}$ through Eq. (23), as in the case of $m_{\tilde{\chi}^0_1} > M_W$. Here, we remark that the branching fraction $\mbox{BR}(ejj)$ is too small to be observed in future linear colliders. Assuming an integrated luminosity of 1000 fb$^{-1}$ per year, branching ratios below $10^{-5}$ would not be measurable [@valle2]. However, the measurement of BR($ejj$) $\ll$ $\mbox{BR}(\mu jj) \approx \mbox{BR}(\tau jj)$ will provide a robust test for the bilinear model.

$\bullet$ Let us now consider a more general situation in which both bilinear and trilinear R-parity violating terms are present. In this case, it is convenient to rotate away the supersymmetric bilinear terms $\epsilon_i$ to the trilinear couplings as we defined the effective ones in the previous sections. In this way, we are allowed to introduce only five couplings, $\tilde{\lambda}'_i$ and $\tilde{\lambda}_i$, which are related to the third generation quarks and leptons. This would be the simplest trilinear model. The trilinear model provides a possibility to realize the LMA solution which is most favored at present. As discussed before, in order to get the LMA solution, sizable contributions to $M^\nu_{11,12,22}$ are needed to enlarge the solar neutrino mixing while keeping the hierarchy of $M_{ij}< M_{i3,33}$ to realize the large atmospheric neutrino mixing. From the numerical calculation scanning the five trilinear parameters, we find that the LMA solution is realized if $\tilde{\lambda}_{1,2} \sim 5 \tilde{\lambda}'_{2,3}$. The conditions, $\tilde{\lambda}_{1}\sim \tilde{\lambda}_{2}$ and $\tilde{\lambda}'_{2}\sim \tilde{\lambda}'_{3}$, are needed to get two large mixing angles, while the small CHOOZ angle requires $\tilde{\lambda}'_{1} < 0.2 \tilde{\lambda}'_{2,3}$. Under such conditions, we could not find any restrictions on the GMSB input parameters $\tan\beta$, $\Lambda_S$ and $M_m$.

  Set3                     $\tan\beta=5$                       $\Lambda_S=40$ TeV    $M_m=80$ TeV
  ------------------------ ----------------------------------- --------------------- --------------------------
  $\tilde{\lambda}'_i$     $1.07\times10^{-7}$                 $1.07\times10^{-5}$   $0.96\times10^{-5}$
  $\tilde{\lambda}_i$      $4.07\times10^{-5}$                 $4.07\times10^{-5}$   0
  $\xi^0_i$                $2.66\times10^{-7}$                 $2.99\times10^{-6}$   $2.48\times10^{-6}$
  $\xi_i$                  $2.57\times10^{-7}$                 $2.67\times10^{-6}$   $2.66\times10^{-6}$
  BR                       $e$                                 $\mu$                 $\tau$
  $\nu jj$                 $7.92\times10^{-3}$                                       
  $l^\pm_i jj$             $5.17\times10^{-7}$                 $6.55\times10^{-5}$   $4.49\times10^{-5}$
  $\nu l^\pm_i \tau^\mp$   $2.35\times10^{-1}$                 $2.35\times10^{-1}$   $5.22\times10^{-1}$
                           $m_{\tilde{\chi}^0_1}$=46 GeV       $\Gamma=$             $1.20\times10^{-13}$ GeV
  ------------------------ ----------------------------------- --------------------- --------------------------

  : A trilinear model realizing the LMA solution. Here the couplings $\tilde{\lambda}'_i$ and $\tilde{\lambda}_i$ can be considered as input parameters defined at the weak scale. The rest is the same as in the previous tables. 
$$\begin{array}{l} (\Delta m^2_{31},~ \Delta m^2_{21})= (3.1\times10^{-3},~5.0\times10^{-5})~ \mbox{eV}^2 \cr (\sin^22\theta_{atm},~ \sin^22\theta_{sol},~ \sin^22\theta_{chooz}) =(0.99,~0.80,~0.0001) \end{array}$$ In Tables 3 and 4, we present two examples allowing the LMA solution for $\tan\beta = 5$ and 25, respectively. We fixed $M_m = 2 \Lambda_S$. The total decay rate is found to be in the vicinity of $\Gamma \sim 5\times10^{-13}$ GeV for all LSP masses below 80 GeV. Therefore, the decay length is of the order of $0.4\; mm$. This enhancement compared to the bilinear case is due to the largeness of the $\tilde{\lambda}_i$ couplings which also make the modes $\nu ll$ dominant. As a consequence, we infer the following distinct feature of the LMA solution: $$\mbox{BR}(\nu e \bar{\tau}) \sim \mbox{BR}(\nu \mu \bar{\tau}) \sim \mbox{BR}(\nu \tau \bar{\tau})$$ with each individual branching ratio larger than 10%. The relation for the atmospheric neutrino mixing angle in Eq. (\[twoangles\]) is not as exact as in the bilinear case, but it still holds to a good approximation as can be seen from the tables. On the other hand, the expression for the CHOOZ angle determined from the $\xi_i$’s is not applicable any more and we cannot draw any conclusive prediction for its value. Still, the relation, BR($ejj$) $\ll$ BR($\mu jj$) $\sim$ BR($\tau jj$), holds to a certain degree, but these branching fractions become as small as $10^{-5}$, making them difficult to measure at the planned colliders.

  Set4                  $\tan\beta=25$                      $\Lambda_S=40$ TeV    $M_m=80$ TeV
  --------------------- ----------------------------------- --------------------- --------------------------
  $\lambda'_i$          $7.45\times10^{-8}$                 $4.48\times10^{-6}$   $7.43\times10^{-6}$
  $\lambda_i$           $1.61\times10^{-5}$                 $2.82\times10^{-5}$   0
  $\xi^0_i$             $4.32\times10^{-7}$                 $7.09\times10^{-6}$   $1.09\times10^{-5}$
  $\xi_i$               $1.62\times10^{-6}$                 $1.02\times10^{-5}$   $1.28\times10^{-5}$
  BR                    $e$                                 $\mu$                 $\tau$
  $\nu jj$              $1.54\times10^{-3}$                                       
  $l^\pm_i jj$          $7.08\times10^{-8}$                 $1.91\times10^{-5}$   $4.53\times10^{-5}$
  $\nu l_i \bar{l}_3$   $1.15\times10^{-1}$                 $3.53\times10^{-1}$   $5.31\times10^{-1}$
                        $m_{\tilde{\chi}^0_1}$=50 GeV       $\Gamma=$             $5.62\times10^{-13}$ GeV
  --------------------- ----------------------------------- --------------------- --------------------------

  : Same as in Table 3 with $\tan\beta=25$. $$\begin{array}{l} (\Delta m^2_{31},~ \Delta m^2_{21})= (3.0\times10^{-3},~5.0\times10^{-5})~ \mbox{eV}^2 \cr (\sin^22\theta_{atm},~ \sin^22\theta_{sol},~ \sin^22\theta_{chooz}) =(0.91,~0.84,~0.16) \end{array}$$

Conclusion
==========

The supersymmetric standard model without R-parity is an attractive framework for the neutrino masses and mixing, as certain neutrino oscillation parameters can be probed by measuring the decay length and various branching fractions of the neutralino LSP in future collider experiments. Taking two simple models of R-parity violation, the bilinear model with three input parameters and the trilinear model with five parameters, we analyzed the neutrino mass matrix which explains both the atmospheric and solar neutrino data and its consequences for collider searches. One of our basic assumptions is the universality of soft terms, for which we considered gauge mediation models of supersymmetry breaking. A notable consequence of such an assumption is that the LSP (lighter than the $W$ boson) decays mainly through the (effective) trilinear couplings $\lambda'_i$ and $\lambda_i$ and its decay length is found to be in the ballpark of $ \tau \sim 0.1\,cm$. 
The observation of the decay modes $\nu l^\pm_i l^\mp_j$ and $l_i^\pm jj$ will be important as they reflect the lepton number violating structure of a given model. The bilinear model, which can accommodate only the SMA solution of the solar neutrino oscillation, predicts the relation $10^3$ BR($\nu e^\pm \tau^\mp$) $\sim$ BR($\nu \mu^\pm \tau^\mp$) $\approx$ BR($\nu \tau^\pm \tau^\mp$). The dominant decay modes are found to be $\tilde{\chi}^0_1 \to$ $\nu \mu^\pm \tau^\mp$, $\nu \tau^\pm \tau^\mp$ and $\nu jj$, which are all of the order of 10%. The trilinear model can realize the strongly-favored LMA solution, which can be tested by observing the dominant decay modes, $\tilde{\chi}^0_1 \to$ $\nu e^\pm \tau^\mp$, $\nu \mu^\pm \tau^\mp$ and $\nu \tau^\pm \tau^\mp$, satisfying the relation BR($\nu e^\pm \tau^\mp$) $\sim$ BR($\nu \mu^\pm \tau^\mp$) $\sim$ BR($\nu \tau^\pm \tau^\mp$). In both cases, the relation BR($ejj$) $\ll$ BR($\mu jj$) $\sim$ BR($\tau jj$) should hold to be consistent with the atmospheric and CHOOZ neutrino data.

[**Acknowledgment**]{}: EJC and JDP are supported by the KRF grant No. 2001-003-D00037.

[99]{}

L.E. Ibanez and G.G. Ross, ; .

L. Hall and Suzuki, Nucl. Phys. [**B231**]{}, 419 (1984).

A. S. Joshipura and M. Nowakowski, ; M. Nowakowski and A. Pilaftsis, ; F. M. Borzumati, Y. Grossman, E. Nardi and Y. Nir, Phys. Lett. [**B384**]{} 123 (1996); B. de Carlos and P.L. White, Phys. Rev. [**D54**]{} 3427 (1996); A. Yu. Smirnov and F. Vissani, Nucl. Phys. [**B460**]{} 37 (1996); R. Hempfling, Nucl. Phys. [**B478**]{} 3 (1996); H.P. Nilles and N. Polonsky, Nucl. Phys. [**B484**]{} 33 (1997); E. Nardi, Phys. Rev. [**D55**]{} 5772 (1997); M. Drees, [*et al.*]{}, ; E.J. Chun, [*et al.*]{}, ; A.S. Joshipura and S.K. Vempati, ; ; K. Choi, [*et al.*]{}, ; O. Kong, ; S. Rakshit, [*et al.*]{}, ; R. Adhikari, G. Omanovic, ; Y. Grossman and H.E. Haber, ; D.E. Kaplan and A. Nelson, JHEP 0001:033 (2000); J.L. Chkareuli, [*et al.*]{}, ; O. Haug, [*et al.*]{}, ; S. Davidson and M. Losada, JHEP 0005:021 (2000); F. Takayama and M. Yamaguchi, .

H. Dreiner and G.G. Ross, Nucl. Phys. [**B365**]{}, 597 (1991); D.P. Roy, Phys. Lett. [**B128**]{}, 270 (1992); R.M. Godbole, P. Roy and X. Tata, Nucl. Phys. [**B401**]{}, 67 (1993).

B. Mukhopadhyaya, S. Roy and F. Vissani, ; A. Datta, B. Mukhopadhyaya and F. Vissani, .

E.J. Chun and J.S. Lee, ; S.Y. Choi, E.J. Chun, S.K. Kang and J.S. Lee, .

W. Porod, M. Hirsch, J.C. Romao and J.W.F. Valle, .

Super-Kamiokande Collaboration, Y. Fukuda [*et al.*]{}, ; T. Toshito, hep-ex/0105023.

B.T. Cleveland [*et al.*]{}, Astrophys. J. [**496**]{}, 505 (1998); Kamiokande Collaboration, K. S. Hirata [*et al.*]{}, ; GALLEX Collaboration, W. Hampel [*et al.*]{}, ; SAGE Collaboration, J. N. Abdurashitov [*et al.*]{}, astro-ph/9907113; Super-Kamiokande Collaboration, Y. Fukuda, [*et al.*]{}, ; ; SNO Collaboration, Q.R. Ahmad, [*et al.*]{}, nucl-ex/0204008; nucl-ex/0204009.

CHOOZ Collaboration, M. Apollonio [*et al.*]{}, .

M. Dine and A. Nelson, ; M.
Dine, A. Nelson and Y. Shirman, ; M. Dine, A. Nelson, Y. Nir and Y. Shirman, ; For a review, see G.F. Giudice and R. Rattazzi, hep-ph/9801271; and also, S. Dimopoulos, S. Thomas and J.D. Wells, .

E.J. Chun and S.K. Kang, .

S.K. Kang and O.C.W. Kong, hep-ph/0206009.

M.A. Diaz, R.C. Romao and J.W.F. Valle, .

M. Hirsch, M.A. Diaz, W. Porod, J.C. Romao and J.W.F. Valle, .

R. Hempfling in Ref. [@oldies].

A. Abada, S. Davidson and M. Losada, ; A.S. Joshipura, R.D. Vaidya and S.K. Vempati, hep-ph/0203182.

[^1]: For a recent detailed analysis, see Ref. [@KK].

[^2]: The LMA solution may also be obtained in the bilinear model if one relaxes the universality condition [@valle1; @anjan].

---
abstract: 'It is shown here that a Kagomé magnet with Heisenberg and Dzyaloshinskii-Moriya interactions gives rise to non-trivial topological and chiral magnetic properties. Chirality, that is, left or right handedness, is a very important concept in a broad range of scientific areas, and particularly in condensed matter physics. Inversion symmetry breaking relates chirality with skyrmions, which are protected field configurations with particle-like and topological properties. Here, the reported numerical simulations and theoretical considerations reveal that the magnetic excitations of the Kagomé magnet can be of regular bulk magnon character as well as of a non-trivial topological nature. We also find that, under special circumstances, skyrmions emerge as excitations that are stable even at room temperature. Chiral magnonic edge states of a Kagomé magnet offer, in addition, a promising way to create, control and manipulate skyrmions. This has potential for applications in spintronics, magnonics and skyrmionics, i.e., for information storage or as logic devices based on the transportation and control of these particles. Collisions between these particle-like excitations are found to be elastic in the skyrmion-skyrmion channel, albeit without mass conservation for an individual skyrmion. Skyrmion-antiskyrmion collisions are found to be more complex, where annihilation and creation of these objects have a distinct non-local nature.'
author:
- Manuel Pereiro
- Dmitry Yudin
- Jonathan Chico
- Corina Etz
- Olle Eriksson
- Anders Bergman
title: Topological excitations in a Kagomé magnet
---

Systematic studies of strongly interacting electronic systems have long been based on the formalism of the Landau Fermi liquid. Landau pointed out that a strongly correlated problem can be replaced by a set of quasiparticles and effective amplitudes describing the interaction among them. Each quasiparticle is characterized by its own effective mass and can be adiabatically connected to a weakly interacting Fermi gas. A proper many-body description of solid state systems cannot be addressed within linear theory; however, small anharmonic perturbations do not change the quasiparticle picture qualitatively and can be treated as scattering processes between quasiparticles, leading to multiple harmonic generation, quasiparticle dressing, etc.[@Abrikosov:1975uw]. The inability to explain a variety of phenomena within perturbative expansions has motivated the development of novel mathematical concepts. Since the second half of the 20$^{th}$ century, topological methods have played an increasingly important role in different branches of physics owing to their utility in the analysis of highly complex field equations that do not allow a simple general solution. The quantum Hall effect[@1981PhRvB..23.5632L], the Aharonov-Bohm effect[@1959PhRv..115..485A], and the Josephson junction[@1974Sci...184..527J] are some examples in which topological arguments help to elucidate the physical origin of the observed phenomena. A certain class of non-linear equations allows particle-like solutions, solitary waves or solitons. Thus, a soliton, a freely moving self-trapped state of a non-linear system, can be thought of as a clot of energy concentrated in a small area which is able to move while preserving its shape. 
Contrary to ‘true’ solitons, which preserve their shape during and after collision, there exist so-called topological solitons (solitons in a broader sense), which are characterized by a non-trivial topological entity, the topological charge. These particular solitons are known as topological excitations since their stability relies purely on topological arguments and they cannot be adiabatically connected to their ground states. The topological excitations are basically determined by the dimension of the space and the order parameter. Thus, for example, in a two-dimensional space, a Heisenberg ferromagnet allows an excitation called the Belavin-Polyakov (BP) monopole[@1975JETPL..22..245B], shown in Fig. \[fig1\]a. For this particular example, the BP excitation is equivalent to a skyrmionic particle-like solution[@1961RSPSA.262..237S] because of the conformal invariance of the Heisenberg Hamiltonian in two dimensions. In what follows we focus our attention on a skyrmion, a topological soliton corresponding to a non-linear $\sigma$-model.

![image](Figure-1pereiro){width="17.8cm"}

The concept of the skyrmion originates from the seminal work of T. Skyrme[@1961RSPSA.262..237S], who argued that a topologically protected field configuration in a continuous field has particle-like solutions (i.e., skyrmions). A pioneering use of the Skyrme model was to describe stable elongated particles (baryons) within the framework of a non-linear theory of meson fields. The model is characterized by a conserved baryon number, a topological charge, which is independent of the equations of motion and allows one to develop a theory of nucleons, obeying Fermi statistics, out of mesonic (Bose) fields[@manton]. Within the theory in question, the baryon is nothing but a chiral soliton resulting from collective excitations of pion fields. Its appearance can be attributed to spontaneous breaking of chiral symmetry, while the model itself reveals the nature of Fermi-Bose transmutation. Thanks to that, the skyrmion became known as a particle-like solution in purely bosonic theories obeying Fermi statistics. Developing this idea introduced a new class of particles with fractional statistics, anyons, which are now widely used. Skyrme’s proposal has been extended beyond the scope of high energy physics and, in materials science, has been an important concept in new and exotic physics, e.g. in nematic liquid crystals[@2011NatCo...2E.246F], ferromagnetic Bose-Einstein condensates[@AlKhawaja:2001iq], high-$T_c$ superconductivity[@2011arXiv1108.3562B], and quantum Hall magnets[@1995PhRvL..75.2562B]. Studying low-dimensional magnetic structures remains one of the most challenging and fascinating fields of modern condensed matter physics. Depending on the distance between neighboring spins, the crystalline symmetry, and the hybridization with the substrate, a wide range of magnetic configurations can be observed, ranging from collinear ferromagnetic, antiferromagnetic and non-collinear helimagnetic order to more complicated textures. If, in addition, the inversion symmetry of the system is broken, the spin configuration gains a certain chirality (handedness) due to the spin-orbit driven antisymmetric exchange interaction, i.e. the Dzyaloshinskii-Moriya (DM) interaction. Magnetic skyrmions are chiral spin structures with a whirling configuration, so that the plane on which the spins are specified is topologically equivalent to a sphere via, for example, a stereographic mapping. 
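To make this sphere-equivalence concrete, a minimal numerical sketch (Python/NumPy; purely illustrative and not part of the simulations reported here) builds a Belavin-Polyakov-like texture from the inverse stereographic map $w=(x+iy)/\lambda$ and evaluates its topological charge by discretizing $\frac{1}{4\pi}\int \mathbf{s}\cdot(\partial_x\mathbf{s}\times\partial_y\mathbf{s})\,\mathrm{d}x\,\mathrm{d}y$; the box size, grid and $\lambda$ are arbitrary choices:

```python
import numpy as np

L, n, lam = 40.0, 400, 4.0              # half-width of the box, grid points, texture size
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
w = (X + 1j * Y) / lam                  # inverse stereographic coordinate of the plane

# Belavin-Polyakov-like texture: the plane (plus its point at infinity) wrapped once around the unit sphere
den = 1.0 + np.abs(w)**2
s = np.stack([2 * w.real / den, 2 * w.imag / den, (1.0 - np.abs(w)**2) / den])

# Discretized topological charge (1/4pi) int s . (ds/dx x ds/dy) dx dy
ds_dx = np.gradient(s, x, axis=1)
ds_dy = np.gradient(s, x, axis=2)
density = np.einsum("kij,kij->ij", s, np.cross(ds_dx, ds_dy, axis=0))
charge = np.sum(density) * (x[1] - x[0])**2 / (4 * np.pi)
print(charge)                           # |charge| close to 1 (finite box and grid give roughly 0.98)
```

On a sufficiently large box the sum approaches $\pm1$, which is the discrete fingerprint of the wrapping described above.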
Because of that, a certain topological invariant, namely the degree of mapping, can be ascribed to the structure. It can be thought of as a skyrmion number as well, providing information analogous to the charge in a particle-like description. One can also evaluate the accumulated Berry phase that influences, [*e.g.*]{}, traveling electrons when they pass through a skyrmion[@2010Natur.465..880P]. Recently, it has been shown that the Berry phase gives rise to the DM interaction in the case of smooth magnetic textures when both inversion symmetry is broken and spin-orbit (SO) interaction is present[@2013arXiv1307.8085F]. A Kagomé ferromagnet, characterized by a structure that lacks inversion symmetry, therefore hosts a DM interaction. In the presence of significantly large DM interactions, the magnon dispersion curves of a Kagomé magnet present similarities with the energy band spectra of topological insulators. In fact, in such a system the magnon dispersion relation is gapped in the bulk but allows traveling gapless edge states which are topologically protected against any variations of the material parameters unless the band gap in the bulk collapses. Even though the sample studied here is only one atomic layer thick, we hereafter denote by bulk states those states that exist well inside the sample and, by the same abuse of language, bulk skyrmions refers to skyrmions that exist in the interior of the 2D sample. We argue in this communication that this intrinsic property makes a Kagomé ferromagnet ideal for creating and controlling the movement of skyrmions. From a technological perspective, skyrmions are promising objects due to their stability. This comes about from the topological nature of the skyrmion, which prevents a continuous deformation into another magnetic configuration with a different topological invariant. Indeed, the existence of topological invariance originates in the duality between $\hat{\bf{k}}$- and $\hat{\bf{r}}$-space and, as a consequence, when dealing with the group velocity, its standard quasiclassical expression gains an anomalous term, which is proportional to the so-called Berry curvature. Mathematically, the integral of the Berry curvature over a two-dimensional manifold defines an integer invariant, a first Chern class[@1999geometry]. For a system which is time-reversal invariant and spatial-inversion invariant, the Berry curvature vanishes. Thus, the Chern number is zero unless the symmetry is broken. In two-dimensional systems, the bands with non-zero Chern numbers must be separated from the other bands by a band gap, such that the boundary between topological phases with different Chern numbers (or other topological invariants) must support edge modes (similar to the electronic structure at the surface of a topological insulator). Because of the aforementioned properties, skyrmions are appealing for applications in information storage or logic devices. However, they need to meet several requirements in order to be useful for magnetic applications, namely, they need to have high mobility, small size, and allow for full control of the direction of movement. In this article, we report on the conditions under which skyrmions are created in a Kagomé lattice, and we show that this system has all the aforementioned technologically relevant properties. The vanadate pyrochlores with generic formula A$_2$V$_2$O$_7$ (A=Lu, Yb, Tm, Y) form a class of ferromagnetic insulators, where the V atoms carry a magnetic moment of about 1.0 $\mu_B$/atom. 
A very important member of this family is Lu$_2$V$_2$O$_7$, in which the magnon Hall effect has very recently been observed[@2010Sci...329..297O] and which, in addition, was predicted to be a topological magnon insulator[@Zhang:2013ws]. In this compound only the V atoms carry a magnetic moment, since Lu is trivalent with a filled 4f shell. The structure of this vanadate (the unit cell is displayed in Fig. \[fig1\]b) can be represented as a stacking of alternating Kagomé and triangular lattices along the \[111\] direction. In Fig. \[fig1\]c, we show the stacking of these layers, in such a way that inversion symmetry is broken. As a consequence, the DM interaction is non-zero. According to Moriya’s rules[@1960PhRv..120...91M] on a single tetrahedron, the DM vector on each bond is parallel to the surface of the surrounding cube and perpendicular to the bond, as indicated in Fig. \[fig1\]d with blue arrows. We concentrate our efforts here on a model system given by a ferromagnetic Kagomé lattice, as shown in Fig. \[fig1\]e for the vanadium atoms of Lu$_2$V$_2$O$_7$. The relevant factor in our analysis of topological excitations in the Kagomé magnet is the ratio between the antisymmetric ($\mathcal{D}$) and isotropic ($\mathcal{J}$) exchange interactions, namely $\mathcal{D}/\mathcal{J}$. This ratio captures the basic features of the interactions present in the Lu$_2$V$_2$O$_7$ pyrochlore. In experiments[@2010Sci...329..297O], $\mathcal{D}/\mathcal{J}$ has been found to be close to 0.32. Our simulations, described in the methods section, show that the creation and manipulation of skyrmions in the Kagomé lattice do not depend strongly on small variations of this ratio. Hence, although our results, shown below, are for $\mathcal{D}/\mathcal{J}$=0.4, our theoretical predictions are expected to be relevant for the V atoms forming a Kagomé lattice in the pyrochlore Lu$_2$V$_2$O$_7$ as well as for systems with a similar $\mathcal{D}/\mathcal{J}$ ratio.

![image](Figure-2pereiro){width="17.8cm"}

Results and Discussion
======================

[**Topological bulk and edge states**]{} We consider here magnetic excitations of a two-dimensional Kagomé lattice, e.g. as given by a \[111\]-surface of the pyrochlore lattice (Fig. \[fig1\]e). We start by analyzing bulk and edge states of this system, and to this end we show in Fig. \[fig2\]a the magnon dispersion, as revealed by the dynamical structure factor. Note that the results in Fig. \[fig2\]a are obtained using open boundary conditions for a large sample composed of 50$\times$400 unit cells. The intense coloured curves represent bulk magnons, which are seen to be grouped in three branches, with noticeable gaps in-between. Most noteworthy is that within the gaps one can find four twisted edge states which form a continuous state. These states cannot be perturbed in such a way that a gap opens, because they represent edge modes[@Zhang:2013ws] that are topologically protected. In fact, the presence of a Dzyaloshinskii-Moriya interaction with a fixed handedness acts as an effective magnetic field in the system. This makes the magnon spectrum of the Kagomé lattice look like the energy band structure of topological insulators. In order to analyze the edge and bulk modes in real space, we show the real-space Berry curvature, averaged over a time-interval of 100 ps, in Figs. \[fig2\]b-c (see Supplementary Section S2 for further details about the calculation of the Berry curvature and topological invariants in the framework of our theoretical method). 
The results of Figs. \[fig2\]b-c were obtained after exciting the first two rows of atoms with an external magnetic field on the left side of the sample, and then the time-evolution of the system was monitored. The excitation was carried out first with an energy of 261 meV and second with an energy of 814 meV. The first excitation represents magnon energies which are allowed in the bulk, whereas the second excitation represents an energy for which bulk states are forbidden, since this energy lies in the magnon-gap. Figure \[fig2\]c (261 meV case) displays spin-waves that propagate over the whole sample, corresponding to the expected bulk modes of the 261 meV excitation, whereas Fig. \[fig2\]b (814 meV case) shows magnon excitations which only propagate along the edges, i.e. representing edge modes. The data in Fig. \[fig2\]a-c demonstrate that the magnetic excitations of a Kagomé lattice exhibit physics just as rich, in terms of non-trivial topology, as the electronic structure of topological insulators. The advantage of investigating magnetic excitations is that it is possible to keep track of the real-space information of the magnetic excitation as a function of time, e.g. as given by the information in Figs. \[fig2\]b and \[fig2\]c.

[**Local topological excitations**]{} Next, we consider local excitations, e.g. as they would emerge from the spin-transfer torque of a spin-polarized STM tip or as described in the experiments of Ref. [@2013Sci...339.1295M]. This excitation was generated by applying a local torque that caused all the atomic spins subjected to it to have their moments reversed with respect to the majority of the spins in the simulation cell (Fig. \[fig3\] shows the configuration at t=0 ps). We then instantaneously removed this torque and observed that the resulting excited state could be thought of as skyrmion-like, or as a skyrmion/anti-skyrmion (SA) pair, which in most simulations was stable for long times ($>$ 100 ps). This procedure was repeated several times and it was found that skyrmionic excitations were always stabilized. Figure \[fig3\] shows the generation and time evolution of two such skyrmionic excitations (a movie of these excitations is found in Supplementary videos 1 and 2). It may be seen that they are both stable over long times but that the initial direction of their movement is different (see Figs. \[fig3\]a-b).

![image](Figure-3pereiro){width="17.8cm"}

The size and magnitude of the linear momentum of the skyrmionic excitation are in general found to be stochastic and depend on minute details of the conditions of the spin-system just after removing the local torque. As discussed above, this is obvious when comparing Fig. \[fig3\]a and Fig. \[fig3\]b. In both figures, the simulation parameters were exactly the same, but the highly non-linear process of the creation of the SA pair, combined with small thermal fluctuations, edge effects on the spin-reversed region at the beginning of the simulation, etc., causes the direction of the linear momentum of the skyrmionic excitation to be very different. Furthermore, in Fig. \[fig3\]b, the excitation is found to experience stronger damping, in a process which emits spin-waves. As a result of the new direction, the SA pair collides with the edge 8 ps after it is created, which is too short a time for the SA pair to decrease its linear momentum, and consequently the collision destroys the SA pair and produces spin waves. The results in Fig. 
\[fig3\] demonstrate that SA pairs are easily generated but that there is an uncertainty in the determination of their linear momentum and their life-time. The competition between exchange and DM interaction favours canted magnetization and, as a result, the effects associated with the finite size geometry are of crucial importance. A non-trivial magnetic texture is in general subjected to the Magnus force, which prevents it from moving to the boundary. However, it can easily be shown that as long as the strength of a local torque is sufficiently small, a repulsion from the edge is circumvented. In Fig. \[fig3\]a, a coupled SA pair has time to become stabilized ($\sim$ 80 ps) and is found to travel with a speed of $\sim$ 1290 m/s to the right edge of the sample. The linear momentum is large enough for the SA pair to reach the edge, where it splits into a separate meron and antimeron. Both the meron and antimeron are stable during a significant period of time, i.e. at least longer than 10 ps. The results in Fig. \[fig3\] motivate an analysis in terms of a meron[@1976PhLB...65..163D] (in our case, a half-skyrmion), which was originally introduced to resolve the problem of confinement in particle physics, namely when quarks are bound by strong forces at large distances [@Rajaraman:1982tv]. The latter fact prevents the direct application of a semiclassical approximation. In fact, within the semiclassical approach one may look for a perturbative expansion around the minima of the Euclidean action (this effectively leads to a series obtained by re-summing an infinite number of graphs). Thus, using this formalism restricts one to finite-action solutions, also known as instantons. However, elevated temperatures result in higher-energy configurations being involved, possibly with infinite action. An elegant way to take such terms into account [@1978PhRvD..17.2717C] is to add so-called meron solutions. In fact, originally proposed in the framework of Yang-Mills theory [@1976PhLB...65..163D], the meron is characterized by a non-integer topological charge, namely $1/2$. In this spirit, a finite-action solution with topological charge 1 can be made of two merons with finite separation (a bound meron pair) which breaks up into its constituents when large distance effects become prominent. Hereafter, the terms half-skyrmion or meron, and half-antiskyrmion or antimeron, will be used synonymously. In general, a skyrmion that moves towards an edge becomes annihilated. However, for the Kagomé magnet the annihilation of the skyrmion does not occur once it reaches the edge, since the existence of chiral magnonic edge states gives rise to profound changes in the excitations of the Kagomé lattice, as we have seen in Fig. \[fig2\]. Thus, once the SA pair reaches the edge, the chirality forces it to separate into two distinct entities. As shown in Fig. \[fig3\]a, the meron and antimeron emanating from the collision of the SA pair at the edge of the Kagomé lattice have drastically different speeds, since the distance traveled by the antimeron is shorter than that of the meron during the same period of time. A magnification of the SA pair discussed in Fig. \[fig3\]a is shown in Fig. \[fig4\]. Notice that, because of the chosen magnetic orientation of the spins studied here, the meron has counter-clockwise chirality contrary to the antimeron, which has clockwise chirality. In Fig. \[fig4\]a we show the coupled SA pair, just before it reaches the edge. 
Note that the dashed line shows the magnetic texture that has a half-antiskyrmion. Hence this illustrates how the SA pair is coupled before it reaches the edge. Figures \[fig4\]b-f show the time-evolution of this SA pair over a time interval of 90 ps, just before (Fig. \[fig4\]b) and just after (Figs. \[fig4\]c-f) it reaches the edge. Note that once the coupled SA pair reaches the edge, it becomes unstable, due to the non-trivial topology, and breaks up into a separated meron and antimeron (Figs. \[fig4\]e and f), which then travel along the edge in a decoupled fashion and in opposite directions.

![\[fig4\] [**Snapshots of a coupled skyrmion-antiskyrmion pair colliding with the edge of the Kagomé stripe**]{}. [**a**]{} Illustration of a skyrmion-antiskyrmion pair before the collision with the edge of the Kagomé stripe. Dashed lines are a guide to the eye for recognizing the antiskyrmion magnetic texture, while the other half of the magnetic excitation resembles the magnetic texture of the Belavin-Polyakov monopole. [**b-f**]{} Several frames showing the coupled SA pair colliding with the edge of the stripe and the resulting SA decoupling due to the chiral edge states. The snapshots were taken from the same simulation displayed in Fig. \[fig3\]a. ](Figure-4pereiro){width="8.5cm"}

The results of Figs. \[fig3\] and \[fig4\] demonstrate that it is possible to create SA pairs in a Kagomé lattice and that they are stable over substantial times. It is also clear that the SA pairs can travel with supersonic velocities of about 1300 m/s. However, the linear momentum of such SA pairs is unfortunately difficult to control or design by the initial conditions of their generation. A way to overcome this problem is to make use of the topological properties of the edge states of the Kagomé magnet and to place the local excitation at the edge of the sample. In Fig. \[fig5\] and in Supplementary videos 3 and 4, we illustrate as an example the generation and time evolution of a skyrmion and antiskyrmion pair, created in exactly the same way as described in Fig. \[fig3\]. In Fig. \[fig5\]a it is shown that the meron moves to the right side whereas the antimeron travels in the opposite direction. The difference in their direction is produced by their different chirality. Local excitations placed at the edge do not always generate SA pairs; it is quite possible that single merons are generated. As an example we show in Figure \[fig5\]b a single meron (the antimeron is indeed created also here, but is damped out very quickly). The important message from the results of Fig. \[fig5\] is that the direction of the meron and antimeron is fully controlled at the edge of the Kagomé lattice. Indeed, the topological nature of these excitations prevents the meron or antimeron from leaving the edge, or from collapsing. The stability of these chiral excitations is illustrated further by the fact that they can survive traveling around a 90 degree corner, as is shown in Fig. \[fig5\]a and Supplementary video 3, for an antimeron. The strong stability and long life-times of these excitations are promising when considering applications, because magnetic information can be transported over long distances, along complicated paths and for long times. Our results suggest that it is quite possible to create a meron (write) at one point of space and time and to detect it (read) at another distant point along the edge. 
This property paves the way for novel applications of skyrmions like, for example, performing logical operations (skyrmionics).

![image](Figure-5pereiro){width="17.8cm"}

Recent experimental results on chiral bulk magnets like MnSi [@2009Sci...323..915M] and Fe$_{0.5}$Co$_{0.5}$Si [@2010Natur.465..901Y] have indeed identified a skyrmion lattice phase. These works came to the conclusion that the so-called skyrmionic “A phase” is stabilized in a very limited region of a phase diagram defined by the applied magnetic field and the temperature of the sample. The onset of the skyrmionic “A phase” is observed only in a narrow window of this phase diagram, basically at B=20 mT and T=25-30 K. Spin polarized scanning tunnelling experiments of Fe monolayers grown on Ir(111) also show that skyrmions are observed only at very low temperatures (T=11 K) [@2011NatPh...7..713H]. As far as we know, the highest temperature for the skyrmionic phase has been reported in Ref. [@yu], where the skyrmions have been manipulated with electrical currents at temperatures up to 275 K in FeGe. All in all, this makes these excitations less suitable for room temperature applications. However, our theoretical considerations outlined above show that SA pairs generated in a Kagomé lattice do not have this limitation. As an example, we show in Fig. \[fig6\] and Supplementary videos 5 and 6, the generation and movement of bulk and edge skyrmions at a temperature of 300 K. Figures \[fig6\]a and \[fig6\]b show the creation and temporal evolution of bulk skyrmions, whereas Figs. \[fig6\]c and \[fig6\]d show edge skyrmions. Note that the strength of the exchange and DM interaction in Figs. \[fig6\]b and \[fig6\]d is twice as large as that used in Figs. \[fig6\]a and \[fig6\]c and that skyrmionic excitations seem to be insensitive to the actual strength of the exchange and DM interaction, as long as their ratio is close to 0.4. We note here that we also performed simulations for slightly different ratios between the strength of the exchange and DM interaction, with very similar results as shown in Figs. \[fig3\]-\[fig6\].

![image](Figure-6pereiro){width="17.8cm"}

[**Collision of topological excitations**]{} Our results so far point out that it is possible to create skyrmionic excitations that can move with significant speed along edges, and it is natural to ask if collisions between such particle-like excitations can happen, and if so, how do they happen? To unravel this question, we have studied meron-meron and meron-antimeron collisions (at T = 1 mK). This was done by creating, simultaneously, two excitations at the edge of the Kagomé lattice. The excitations were created at a sufficient distance from each other, in order to avoid any initial interaction or correlation between them due to the emission of short-lifetime spin waves generated during the excitation of the system. Once the two edge skyrmions were excited they started to travel counter-clockwise along the edge of the lattice, the first (meron [**a**]{}) being ahead of the second (meron [**b**]{}). For this particular pair of merons, the second one had a higher speed, hence allowing us to investigate a meron-meron collision. In Fig. \[fig7\], three snapshots show the moment before ($t_0$), during ($t_1$) and after ($t_2$) this collision. As the figure and Supplementary video 7 show, the two merons seem to experience an elastic collision, so that before the collision ($t_0$) meron [**b**]{} has a linear momentum that is larger than that of meron [**a**]{}. 
At the collision ($t_1$) meron [**b**]{} provides linear momentum to meron [**a**]{}, in such a way that 10 ps after the collision ($t_2$), meron [**a**]{} has a higher momentum than meron [**b**]{}. The results of Fig. \[fig7\] pose questions regarding the details of a seemingly elastic collision. We analyze this by considering that the dynamics of the meron traveling along the x-direction can be described by the Lagrangian $\mathcal{L}=\frac{m}{2}(\partial_t x)^2$, where x is the coordinate of the meron center of mass and m is the “mass” of the meron. The “mass” is considered to be equal to the number of spins constituting the meron and the momentum is then $P_x=\frac{\partial\mathcal{L}}{\partial(\partial_t x)}=m\partial_t x$. It should be noted that it is not possible to exactly specify the boundary of a meron due to the discrete nature of the spin-lattice. Hence the calculation of the “mass” of a meron is associated with a small numerical error. Nevertheless, the calculated masses for the merons in Fig. \[fig7\], before the collision, are m$_a\simeq 176$ spins and m$_b\simeq 210$ spins, while after the collision their masses change slightly, becoming m$_a\simeq 220$ spins and m$_b\simeq 182$ spins. We have also evaluated the velocities of the merons just before and after the collision and have hence been able to analyze the linear momentum of the collision in Fig. \[fig7\]. The total linear momentum before the collision is about $P_x\simeq 4652$ (Å spins)/ps and after the collision it is $P_x\simeq 4620$ (Å spins)/ps. Moreover, we have also computed the kinetic energy before and after the collision and found that it is 11.40 and 11.30 (Å/ps)$^2$spin, respectively. Hence, within numerical errors, both the linear momentum and the kinetic energy are conserved, demonstrating the elastic nature of the collision in Fig. \[fig7\]. In contrast to elastic collisions of purely classical systems, the mass of an individual meron is not conserved during the collision in Fig. \[fig7\].

![\[fig7\] [**Snapshots of the meron-meron collision**]{}. At $t_0$, meron [**b**]{} has higher linear momentum (${\bf p}_b$) than meron [**a**]{} (${\bf p}_a$), but 10 ps after the collision ($t_2$), meron [**a**]{} moves away from meron [**b**]{} with higher velocity. (Supplementary video 7) ](Figure-7pereiro){width="8.5cm"}

It is also possible to investigate meron-antimeron collisions, and we show in Fig. \[fig8\] several snapshots of the generation of a meron and an antimeron and their time evolution. Figure \[fig8\] and Supplementary video 8 show that, once created, they move towards each other. Figure \[fig8\] also shows the time-evolution of the skyrmion number of the whole simulation cell. The skyrmion number was calculated, as described in the Methods section, at every simulation time step. Similar to the generation of the separated pair of skyrmionic excitations (Fig. \[fig7\]), the skyrmion-antiskyrmion pair was generated at the edge, by first reversing the moments in a region with a local torque and then (after 10 ps) removing this torque rapidly. Looking at the details of Fig. \[fig8\] we note that initially a meron-antimeron pair is generated at the right hand side of the sample and only a single meron is generated on the left side. We hence have three dynamical excitations which we will refer to as the right meron (from the right side of the sample), the left meron (from the left side of the sample) and the antimeron. Figure \[fig8\] shows that the right meron gets damped after a while and leaves the system. 
The left meron and the antimeron are more long-lived and are found to move towards each other and collide after 39 ps (Fig. \[fig8\]c). At this collision, they fuse and destroy each other. As a result, the energy is carried away by outgoing spin waves, which after 54 ps create a meron in a totally different part of the lattice (Fig. \[fig8\]d) that subsequently moves to the right along the edge before it gets damped and leaves the system. This hence represents a highly non-local process of destruction and creation of skyrmionic excitations. Figure \[fig8\] also shows that the skyrmion number ($\mathfrak{N}$) is not an integer and that it is not very smooth as a function of time. This behaviour is due to the fact that the skyrmion number was calculated for the whole sample and we used a very low damping. In consequence, there are contributions to $\mathfrak{N}$ arising also from regions far away from the place where the skyrmionic excitations are located. We can nevertheless get some insight from this number if we concentrate on the difference of this parameter at certain time steps. The horizontal red line indicates the value of the skyrmion number for the initial step, so we take this value as the background skyrmion number. During the first 10 ps (yellow region), the applied local torques are present. In region b, just after the creation of the topological excitations, the skyrmion number is reduced by roughly 0.5. This is because we have created an antimeron with skyrmion number -0.5. In the transition b$\rightarrow$c, the meron-antimeron collision takes place and the two annihilate each other, while the remaining meron shown on the right side dies after colliding with the vertical edge. In region c, the skyrmion number tends to come back towards the ground value with a jump of about 0.5 because the antimeron was annihilated. In this region we can still see the remaining meron just before it dies. In region d, the increase of the skyrmion number by about 0.5 is due to the creation of another meron. This excitation is created by spin-waves that were emitted during the meron-antimeron annihilation. This object is however damped away very quickly, with strong spin-wave activity during the first part of region e. After some time, the system equilibrates and the skyrmion number goes back to the initial value, as observed in Fig. \[fig8\]e.

![image](Figure-8pereiro){width="17.8cm"}

[**Choice of materials and concluding remarks**]{} As regards suitable materials in which the phenomena predicted here could be observed at room temperature, we note that larger values of $\mathcal{J}$ and $\mathcal{D}$ have a better chance of stabilizing skyrmions. However, the skyrmionic excitations discussed here are stable for a wide range of $\mathcal{J}$ and $\mathcal{D}$. The exchange interaction considered here ranged from 1 up to 10 mRy in steps of 1 mRy, while $\mathcal{D}$ was varied from 0.4 to 4 mRy in steps of 0.4 mRy. Hence, we choose to keep $\mathcal{D}/\mathcal{J}=0.4$, since this ratio is known from previous experimental investigations. Skyrmionic phases show up when $\mathcal{J}\geq5$ mRy and $\mathcal{D}\geq2$ mRy (e.g. as shown in Fig. \[fig6\]). For cases in which $\mathcal{J}<5$ mRy, skyrmions are still stable but their shape is not well defined, and skyrmionic excitations with lower values of $\mathcal{J}$ and $\mathcal{D}$ have a higher probability of being damped away just after their creation. As shown in Figs. 
\[fig6\]b and d, larger values of $\mathcal{J}$ and $\mathcal{D}$ result in skyrmions that are more stable and less susceptible to thermal fluctuations. We have outlined conditions for novel exotic magnetic excitations of the Kagomé lattice. The possibility to observe bulk and edge states is outlined here, where, in particular, the edge states have non-trivial topological properties that provide unique magnon modes. We have also shown that the Kagomé lattice has the potential to exhibit SA pairs, meron and antimeron excitations even at room temperature. This is found for a wide range of parameters describing the magnetic excitations, in particular the Dzyaloshinskii-Moriya interaction and the Heisenberg exchange interaction, as long as the ratio between them is in the range 0.3-0.4. Such a value of the ratio between these two magnetic interactions is close to the value found in experiments[@2010Sci...329..297O]. Due to the non-trivial topology of the edge-states, we find that it is possible to control the movement of skyrmionic excitations and that such excitations are long-lived. We have even found that these particle-like excitations can move along well-defined straight lines and even be made to turn around corners. This offers the potential to use such excitations in different emerging technologies, for example in data storage or the manipulation of magnetic bits. We have also investigated meron-meron collisions as well as meron-antimeron collisions. The former are found to undergo basically elastic collisions, whereas the latter are more complex: the annihilation in a meron-antimeron collision is followed by the birth of a new meron, a process which occurs as a highly non-local phenomenon. Pyrochlores are possible candidates for observing the phenomena predicted in this work, and particularly we discussed here the \[111\]-cut of the vanadate pyrochlores. The Kagomé lattice can however be found in many other compounds, e.g. SrCr$_8$Ga$_4$O$_{19}$[@obradors], Ba$_2$Sn$_2$Ga$_3$ZnCr$_7$O$_{22}$[@cava] and the jarosites like KM$_3$(OH)$_6$(SO$_4$)$_2$[@keren; @lee; @inami] with M = V, Cr, or Fe. These materials are also potential candidates for investigations of non-trivial topological magnon edge-states and skyrmionic excitations.

Methods
=======

Our theoretical methodology[@2008JPCM...20E5203S] consists of mapping an itinerant electron system onto an effective spin-Hamiltonian[@2003PhRvB..68j4436U; @1987JMMM...67...65L], $$\label{hamiltonian} \mathcal{H}=\sum_{<ij>}\left[-\mathcal{J}_{ij} {\bf s}_i \cdot{\bf s}_j+ \boldsymbol{\mathcal{D}}_{ij}\cdot\left({\bf s}_i \times {\bf s}_j\right)\right]-g\mu_B {\bf B}\cdot \sum_i {\bf s}_i$$ where $<$$ij$$>$ denotes first-neighbour pairs of atomic indices, $s_i$ is the atomic moment, $\mathcal{J}_{ij}$ is the strength of the exchange interaction and the spin-orbit contribution is included via the strength of the DM vector $\boldsymbol{\mathcal{D}}_{ij}$. The last term comes from the Zeeman effect under an external magnetic field $\bf{B}$, where g is the g-factor and $\mu_B$ represents the Bohr magneton. We consider the exchange interaction as well as the modulus of the DM vector to be the same for every pair of atoms, which is consistent with all atoms of the Kagomé lattice being of the same type. Hence, we use $\mathcal{J}_{ij}=\mathcal{J}$ and $|\boldsymbol{\mathcal{D}}_{ij}|=\mathcal{D}$. In order to capture the dynamical properties of spin systems at finite temperatures we used an atomistic spin dynamics (ASD) approach[@2008JPCM...20E5203S]. 
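As an illustration of how Eq. (\[hamiltonian\]) enters the spin-dynamics approach, the minimal sketch below (Python/NumPy) evaluates the effective field ${\bf B}_i=-\partial\mathcal{H}/\partial{\bf s}_i$ for one site of a nearest-neighbour Heisenberg + DM + Zeeman model. It is not the UppASD implementation: the neighbour list, the DM-vector dictionary and the numerical values are hypothetical placeholders chosen only to mimic the scale of the parameters used here.

```python
import numpy as np

MU_B = 5.7884e-2   # Bohr magneton in meV/T, so the field below comes out in energy units (meV)
G_FACTOR = 2.0

def effective_field(i, spins, neigh, J, D_vec, B_ext):
    """B_i = -dH/ds_i for H = sum_<ij> [ -J s_i.s_j + D_ij.(s_i x s_j) ] - g mu_B B.sum_i s_i.

    spins: (N, 3) array of unit spin directions
    neigh: list of nearest-neighbour index lists (hypothetical neighbour list)
    D_vec: dict (i, j) -> DM vector D_ij, in the same energy units as J
    """
    field = G_FACTOR * MU_B * np.asarray(B_ext, dtype=float)   # Zeeman contribution
    for j in neigh[i]:
        field += J * spins[j]                                  # Heisenberg contribution
        field += np.cross(D_vec[(i, j)], spins[j])             # DM contribution: -d/ds_i [D_ij.(s_i x s_j)] = D_ij x s_j
    return field

# toy usage with two spins, J = 1 meV, |D| = 0.4 meV along z, B = 0.2 T along z
spins = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 0.995]])
neigh = [[1], [0]]
D_vec = {(0, 1): np.array([0.0, 0.0, 0.4]), (1, 0): np.array([0.0, 0.0, -0.4])}
print(effective_field(0, spins, neigh, J=1.0, D_vec=D_vec, B_ext=[0.0, 0.0, 0.2]))
```

In the actual simulations this deterministic field, together with the stochastic field ${\bf b}_i(t)$ introduced below, drives the equation of motion.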
The equation of motion of the classical atomistic spins at finite temperature is governed by Langevin dynamics via a stochastic differential equation, normally referred to as the atomistic Landau-Lifshitz-Gilbert (LLG) equations, which can be written in the form $$\begin{aligned} \frac{\partial s_i}{\partial t}&=&-\frac{\gamma}{1+\alpha_i^2} {\bf s}_i \times [{\bf B}_i+{\bf b}_i(t)]\nonumber\\&-&\frac{\gamma \alpha_i}{s (1+\alpha_i^2)} {\bf s}_i \times {\bf s}_i \times [{\bf B}_i+{\bf b}_i(t)] \end{aligned}$$ where $\gamma$ is the gyromagnetic ratio and $\alpha_i$ denotes a dimensionless site-resolved damping parameter which accounts for the energy dissipation that eventually brings the system into thermal equilibrium. The effective field in this equation is calculated as ${\bf B}_i=-\partial \mathcal{H}/\partial {\bf s}_i$ and the temperature fluctuations (T) are considered through a Gaussian-shaped random field ${\bf b}_i(t)$ with the following stochastic properties: $$\begin{aligned} \langle b_{i,\mu}(t)\rangle&=&0, \nonumber \\ \langle b_{i,\mu}(t) b_{j,\nu}(t')\rangle&=&\frac{2\alpha_i k_B T \delta_{ij}\delta_{\mu\nu} \delta(t-t')}{s(1+\alpha_i)^2\gamma } \end{aligned}$$ where $i$ and $j$ are atomic sites, while $\mu$ and $\nu$ represent cartesian coordinates of the stochastic field. After solving the LLG equations, we have direct access to the dynamics of the atomic magnetic moments $s_i(t)$ and with this information we can calculate all relevant dynamical properties, like the connected space- and time-displaced correlation function defined as $$C^k({\bf r}-{\bf r}^\prime,t, t^\prime)=\langle s_r^k(t)s_{r^\prime}^k(t^\prime)\rangle-\langle s_r^k(t)\rangle \langle s_{r^\prime}^k(t^\prime)\rangle$$ where $\langle\cdots\rangle$ denotes an ensemble average and k denotes the cartesian component. The magnon dispersion relation can be evaluated via the dynamic structure factor $S({\bf q},\omega)$, which is simply the space and time Fourier transform of the connected correlation function, $$S^k({\bf q},\omega)=\frac{1}{N}\sum_{\bf{r},\bf{r}^\prime} e^{-i{\bf q}\cdot({\bf r}-{\bf r}^\prime)}\int_{-\infty}^{\infty}e^{i\omega t} C^k({\bf r}-{\bf r}^\prime,t)\;\mathrm{d}t$$ where $N$ is the number of terms in the summation while ${\bf q}$ and $\omega$ are the momentum and energy transfer (see Supplementary Section S1 for further details about the simulations). The skyrmion number $\mathfrak{N}$ represents a topological index of the field configuration and is defined and evaluated here by $$\label{sknumber} \mathfrak{N}=\frac{1}{4\pi}\int_\mathbb{S} {\bf s}\cdot \left(\frac{\partial {\bf s}}{\partial x}\times\frac{\partial {\bf s}}{\partial y}\right)\; \mathrm{d}x \mathrm{d}y$$ where $\mathbb{S}$ represents a two-dimensional compact orientable manifold. The topological excitation with $\mathfrak{N} > 0$ is called a skyrmion while in the case of $\mathfrak{N} < 0$ it is termed an antiskyrmion. The same formalism can be extended straightforwardly to describe multiskyrmionic states and in particular skyrmion-antiskyrmion pairs[@Komineas:2007ur].

Acknowledgements
================

The authors thank the European Research Council (ERC Project No. 247062-ASD), the Swedish Research Council (VR), and the Knut and Alice Wallenberg Foundation for financial support. A.B. and O.E. acknowledge support from eSSENCE. 
The computer simulations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre (NSC), Chalmers Centre for Computational Science and Engineering (C3SE) and High Performance Computing Center North (HPC2N).

Supporting Information:\
Topological excitations in a Kagomé magnet
==========================================

[**S1: Atomistic spin model**]{}

The atomistic modelling used in this work has recently been developed in our group. It basically follows standard techniques in this research area. We include a brief description of the model here for completeness but more details can be found in Ref. [[@2008JPCM...20E5203S]]{}. The method relies on solving the Landau-Lifshitz-Gilbert stochastic differential equation of motion $$\label{LLG} \frac{\partial {\bf s}_i}{\partial t}=-\frac{\gamma}{1+\alpha_i^2} {\bf s}_i \times [{\bf B}_i+{\bf b}_i(t)]-\frac{\gamma \alpha_i}{s (1+\alpha_i^2)} {\bf s}_i \times {\bf s}_i \times [{\bf B}_i+{\bf b}_i(t)]$$ where $\gamma$ is the gyromagnetic ratio and $\alpha_i$ is the dimensionless site-resolved damping parameter which accounts for the energy dissipation and eventually brings the system into a thermal equilibrium. The effective field ${\bf B}_i$ acting on every spin ${\bf s}_i$ is calculated as ${\bf B}_i=-\partial \mathcal{H}/\partial {\bf s}_i$, where $\mathcal{H}$ represents a parametrized Hamiltonian which can account for several terms including both interatomic isotropic (Heisenberg) and anisotropic (Dzyaloshinskii-Moriya) exchange interactions, magnetocrystalline anisotropy, dipolar interaction, the Zeeman term and many others. The coupling of the spin system with a thermal reservoir is through Langevin dynamics, so that the spin excitations are generated by performing classical rotations in such a way that the spin energy satisfies Boltzmann statistics. Thus, the thermal fluctuations are represented here by ${\bf b}_i(t)$, which is a Gaussian-distributed random field with the following stochastic properties $$\begin{aligned} \langle b_{i,\mu}(t)\rangle&=&0, \nonumber \\ \langle b_{i,\mu}(t) b_{j,\nu}(t')\rangle&=&\frac{2\alpha_i k_B T \delta_{ij}\delta_{\mu\nu} \delta(t-t')}{s(1+\alpha_i)^2\gamma } \end{aligned}$$ where $i$ and $j$ are atomic sites while $\mu$ and $\nu$ represent cartesian coordinates. The equation of motion is integrated with the semi-implicit midpoint solver using a time step of 0.1 fs to ensure numerical stability[@2010JPCM...22q6001M]. The flowchart of the UppASD code [@2008JPCM...20E5203S] follows basically three steps. In the initialization step, all the parameters necessary to describe the system (geometry, dimensions, boundary conditions, ...) are set up. During the optional second step, also called the initial phase, the system is brought into thermal equilibrium, while in the last step (measurement phase) the system is evolved in time with a complete data sampling being made. For bringing the system into thermal equilibrium (thermalization), we have performed a Monte Carlo simulation with Metropolis dynamics[@2009gmcs.book.....L]. We used five simulated annealing steps in order to bring the system to the target temperature. After that, spin dynamics simulations solving Eq. (\[LLG\]) have been performed to properly describe the spin dynamics of the system. The material parameters along with the simulation parameters we use in the model are given in Tables \[table1\] and \[table2\] for T=0.001 K and T=300 K, respectively.
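To illustrate how Eq. (\[LLG\]) is propagated in practice, the sketch below implements one stochastic integration step for unit spins using a simple Heun predictor-corrector scheme. This is only a schematic stand-in, under our own assumptions, for the semi-implicit midpoint solver actually used in UppASD, and the noise amplitude follows the fluctuation-dissipation relation quoted above up to the convention chosen for the moment magnitude $\mu_s$.

```python
import numpy as np

def llg_rhs(s, B_eff, gamma, alpha):
    # ds/dt = -gamma/(1+alpha^2) [ s x B + alpha s x (s x B) ]  for unit spins
    sxB = np.cross(s, B_eff)
    return -gamma / (1.0 + alpha**2) * (sxB + alpha * np.cross(s, sxB))

def heun_llg_step(s, effective_field, gamma, alpha, dt, kB_T, mu_s, rng):
    """One stochastic Heun step; effective_field(s) returns the deterministic B_i."""
    # Thermal field drawn once per step, with amplitude set by fluctuation-dissipation
    sigma = np.sqrt(2.0 * alpha * kB_T / (gamma * mu_s * dt))
    b = sigma * rng.standard_normal(s.shape)
    f0 = llg_rhs(s, effective_field(s) + b, gamma, alpha)
    s_pred = s + dt * f0
    s_pred /= np.linalg.norm(s_pred, axis=-1, keepdims=True)   # keep |s_i| = 1
    f1 = llg_rhs(s_pred, effective_field(s_pred) + b, gamma, alpha)
    s_new = s + 0.5 * dt * (f0 + f1)
    return s_new / np.linalg.norm(s_new, axis=-1, keepdims=True)

# Usage sketch: spins precessing and relaxing in a uniform field along z (T = 0 here).
rng = np.random.default_rng(0)
s = np.tile([1.0, 0.0, 0.0], (4, 1))
field = lambda spins: np.tile([0.0, 0.0, 1.0], (len(spins), 1))
for _ in range(100):
    s = heun_llg_step(s, field, gamma=1.0, alpha=0.1, dt=0.01, kB_T=0.0, mu_s=1.0, rng=rng)
print(s[0])
```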
We used a large sample (50$\times$400 unit cells) to ensure that the statistical calculations as well as the numerical stability of the Fourier transform in this system are reliable. According to the size of the sample, the total number of spins used in the simulations is 127200. In Fig. \[fig1\] we show the unit cell we have used for setting up the Kagomé lattice. Thus, we ensure that our Kagomé stripe has four 90 degree corners. This point is important because it allows us to assess the stability of the skyrmions when they collide with a 90 degree corner. In most of the cases, they were able to overcome this difficulty and consequently, it proves that the topology protects them against structural deformations. Moreover, we also set up the system with open boundary conditions to ensure the onset of the chiral edge states in the Kagomé magnet.

  ---------------------------------------------- ---------------------- --------- -----------------
  1$^{\rm st}$ neighbours exchange interaction   $\mathcal{J}_{ij}$     1.0       (mRy)
  Dzyaloshinskii-Moriya interaction              $|\mathcal{D}_{ij}|$   0.4       (mRy)
  Atomic magnetic moment                         s                      2.0       ($\mu_{\rm B}$)
  Damping                                        $\alpha_i$             0.001
  Gyromagnetic ratio                             $\gamma$               1.0       ($\gamma_e$)
  Global external field                          $B_z$                  0.2       (T)
  Simulation time                                $t_s$                  100       (ps)
  STM tip application time                       $t_{STM}$              10        (ps)
  Local STM tip B field                          $B_z$                  $-10^5$   (T)
  ---------------------------------------------- ---------------------- --------- -----------------

  : \[table1\]Atomistic material parameters and simulation parameters for T=0.001 K.

  ---------------------------------------------- ---------------------- ---------------------- -----------------
  1$^{\rm st}$ neighbours exchange interaction   $\mathcal{J}_{ij}$     1.0$\rightarrow$10.0   (mRy)
  Dzyaloshinskii-Moriya interaction              $|\mathcal{D}_{ij}|$   0.4$\rightarrow$ 4.0   (mRy)
  Atomic magnetic moment                         s                      2.0                    ($\mu_{\rm B}$)
  Damping                                        $\alpha_i$             0.001
  Gyromagnetic ratio                             $\gamma$               1.0                    ($\gamma_e$)
  Global external field                          $B_z$                  0.9                    (T)
  Simulation time                                $t_s$                  100                    (ps)
  STM tip application time                       $t_{STM}$              10                     (ps)
  Local STM tip B field                          $B_z$                  $-10^7$                (T)
  ---------------------------------------------- ---------------------- ---------------------- -----------------

  : \[table2\]Atomistic material parameters and simulation parameters for T=300 K. The exchange interaction has been varied in steps of 1 mRy while the DM interaction in steps of 0.4 mRy.

![\[fig1\] [**Plot of a portion of the Kagomé stripe**]{}. The unit cell is plotted in green. The total size of the calculated two-dimensional Kagomé lattice is $53\times400$ unit cells. ](fig1_supp.pdf){width="8.6"}

In Tables \[table1\] and \[table2\] we can observe that the intensity of the local magnetic field generated by the STM tip is rather large. The reason for choosing those fields is just a technical detail related to the time we need for rendering the videos. In a ferromagnet, it is very well known that the switching time (the time required for reversing the spin direction) is very much dependent on the damping parameter [@1956JAP....27.1352K]. Thus, the switching time decreases as we increase the damping up to a critical value of the damping for which the switching time reaches a minimum value, and then it continuously grows but in a more moderate way. In our simulations, the damping is very low ($\alpha$=0.001).
This means that the switching time has to be quite large and in order to reduce this time to see the videos in a short period of time, we need to increase the magnetic field to an unrealistic value, but this technical detail does not affect at all the results of this work. In order to shed more light on this particular issue, we have performed several simulations with STM fields of 2000, 1000 and 100 T for four dampings ($\alpha=$0.005, 0.01, 0.05 and 0.08). In all cases for damping values less than 0.01 we have observed the skyrmion-antiskyrmion phase and above this damping the skyrmionic excitations are generated but they do not move and die after some time because of the dissipation process driven by the damping, as shown in Fig. \[fig2\] for a STM field of 1000 T. It is very clear how the skyrmion lifetime grows with the inverse of the damping parameter. Above 0.01 the skyrmions undergo a breathing behaviour characterized by a cyclic shrinking and enlargement of their size until eventually they collapse into a spin-wave. Below 0.01 the skyrmions are absolutely stable and they move just after their creation with the same properties as the ones we have generated with lower damping and larger local STM fields. The first message is that it is possible to reduce the values of the STM fields to more realistic values as 1 T or even less but waiting longer time to generate them, i.e., at least more than 0.1 ps. The second message is that for dampings in the interval from 0 to 0.01, the skyrmions can be succesfully stabilized so that the closer the damping is to 0.01, the lower the switching time becomes. A precise tuning of these parameters could give rise to an “ultrafast” generation of stable skyrmions. ![\[fig2\] [**Plot of damping parameter versus skyrmion lifetime $t_s$**]{}. The figure is for an STM tip with a magnetic field of $10^3$ T. Some snapshots of skyrmions have been included in the figure in order to show how they reduce their size and eventually die. The topological excitations only survive in the skyrmion-antiskyrmion phase. ](fig2_supp.pdf){width="8.6"} [**S2: Calculation of the Berry curvature and topological invariants for a damped magnet in the tight-binding limit**]{} We start deriving the Berry curvature and the associated Euler characteristic assuming that we can grid the space into N different volume cells V$_j$. Every cell is represented by a monodomain with a pinned magnetic moment which is only allowed to rotate in space (tight-binding limit). The size of the volume cell not only can be at atomic level but also bigger or smaller. The magnetic moment or spin per unit cell is defined as $${\bf{s}}_j:=\left<\psi({\mathfrak{s}})\right|{\bf S}_j\left|\psi({\mathfrak{s}})\right>$$ where ${\bf S}_j=\int_{V_j}{\bf S}({\bf r}) d^3\bf{r}$ is the spin operator per unit cell and ${\bf{S}}({\bf{r}})$ represents the spin density operator. We denote $\mathcal{E}(\mathfrak{s}$) as the ground state energy for the spin configuration $\mathfrak{s}$ and $\psi(\mathfrak{s})$ is the corresponding wavefunction which satisfies the time-dependent Schrödinger equation. In order to simplify the equations, we have intentionally removed the temporal dependency of the wavefunctions and quantum mechanical operators. Hereafter, we used units in which $g \mu_\mathrm{B}=1$, i.e. $\gamma=\frac{1}{\hbar}$ where g is the g-factor, $\mu_\mathrm{B}$ is the Bohr magneton and ${\gamma}$ is the gyromagnetic ratio, respectively. 
Extremizing the action $\mathcal{S}=\int \mathcal{L} dt$ with respect to $\mathfrak{s}$ using the time-dependent variational principle we end up with the Euler-Lagrange equations or the equations of the spin dynamics. The Lagrangian is given by: $$\mathcal{L}:=\left<\psi(\mathfrak{s})\right| i\hbar \frac{\partial}{\partial t}\left|\psi(\mathfrak{s})\right>-\left<\psi(\mathfrak{s})\right| \mathcal{H}\left|\psi(\mathfrak{s})\right>=\hbar\sum_{j=1}^\mathrm{N} \dot{{\bf s}}_j \mathcal{A}({\bf s}_j)-\mathcal{E} (\mathfrak{s})$$ where $\mathcal{A}({\bf s}_j):=i\left<\psi(\mathfrak{s})\right|\frac{\partial}{\partial {\bf s}_j}\left|\psi(\mathfrak{s})\right>$ is the Berry connection. Energy dissipation can be implemented in the magnetic system adding a damped magnetization field to the equations of motion of the spin system $$\frac{\partial\mathcal{R}(\dot{{\bf s}}_j)}{\partial \dot{{\bf s}}_j}=\eta_j \dot{{\bf s}}_j$$ where $\mathcal{R}(\dot{{\bf s}}_j)$ is the local Rayleigh dissipation functional. This local functional allows us to define a site-resolved damping parameter $\eta_j$ in every single cell V$_j$. Taking into account that $\frac{\partial\mathcal{L}}{\partial \dot{{\bf s}}_j}=\hbar \mathcal{A}({\bf s}_j)$ and $\frac{\partial\mathcal{L}}{\partial {\bf s}_j}=\hbar\sum_{j=1}^\mathrm{N}\dot{{\bf s}}_j \frac{\partial \mathcal{A}({\bf s}_j)}{\partial {\bf s}_j}-\frac{\partial \mathcal{E}(\mathfrak{s})}{\partial {\bf s}_j}$ is easy to show that the equations of motion are: $$\hbar \sum_{j'=1}^\mathrm{N}\dot{{\bf s}}_{j'}\Theta({\bf s}_{j'},{\bf s}_j)-\eta_j \dot{{\bf s}}_j=-\frac{\partial \mathcal{E}(\mathfrak{s})}{\partial {\bf s}_j} \label{equationofmotion}$$ where the Berry curvature $\Theta({\bf s}_{j'},{\bf s}_j)$ is a 3x3 matrix. The ($\alpha$,$\beta$) element is given by: $$\Theta_{j'j}^{\alpha\beta}:=\Theta(s_{j'}^\alpha,s_j^\beta)=\frac{\partial \mathcal{A}(s_{j'}^\alpha)}{\partial s_j^\beta}-\frac{\partial \mathcal{A}(s_j^\beta)}{\partial s_{j'}^\alpha}$$ [**S2.1: Calculation of the Berry curvature in the rigid spin limit**]{} In the tight-binding limit, we rigidly rotate the spins within each unit cell, so that the constrained ground state $\psi(\mathfrak{s})$ is calculated as: $$\left|\psi(\mathfrak{s})\right>=\prod_{j=1}^N \mathrm{e}^{i {\bm \theta}_j {\bf S}_j} \left|\psi(\mathfrak{s}_0)\right>$$ where $ {\bm \theta}_j\propto {\bf s}_j^0\times {\bf s}_j$ and $\psi(\mathfrak{s}_0)$ is the true ground state. Assuming for simplicity that ${\hat{z}}$ is the axis of symmetry, so that ${\bf s}_j^0= \mathrm{sgn}(s_j^0) s_j^0 \hat{z}$ and the norm of the spin is the same in every volume cell ($||{\bf s}_{j'}||=||{\bf s}_j||$), then after some cumbersome tensor algebra, the Berry curvature takes the form: $$\Theta_{j'j}^{\alpha\beta}=-\frac{i}{{\bf s}_j^2}\left<\psi(\mathfrak{s}_0)\right|\left[ S_j^\alpha, S_{j'}^\beta\right]\left|\psi(\mathfrak{s}_0)\right>$$ Since that the spin operator ${\bf S}_j$ satisfies the commutation relations for the angular momentum operators, i. e. 
$[S_j^\alpha, S_{j'}^\beta]=i \delta_{jj'}\varepsilon^{\alpha\beta}_{\phantom{\alpha\beta}\gamma} S_j^\gamma$, where $\varepsilon_{\alpha\beta\gamma}$ is the antisymmetric Levi-Civita symbol, the Berry curvature can be represented finally in terms of the spin components as[@1999PhRvL..83..207N]: $$\Theta_{j'j}^{\alpha\beta}=\frac{1}{{\bf s}_j^2}\delta_{jj'} \varepsilon_{\alpha\beta\gamma}s_j^\gamma=\frac{\delta_{jj'}}{{\bf s}_j^2}\begin{pmatrix} 0 & s_j^z & -s_j^y \\ -s_j^z & 0 & s_j^x\\ s_j^y & -s_j^x & 0\end{pmatrix} \label{berry_curvature}$$ [**S2.2: Calculation of the Euler charateristic**]{} Let $\mathbb{S}$ be a manifold describing the parameter space of the spin fields. After the calculation of the Berry curvature, It is straightforward to calculate the Euler characteristic ($\chi(\mathbb{S})$) for a compact orientable manifold $\mathbb{S}$ using the generalized Gauss-Bonet theorem[@1999PhRvL..83..207N], which states that: $$(2\pi)^n\chi(\mathbb{S})=\int_\mathbb{S} e(\mathbb{S})=\int_\mathbb{S} \mathrm{Pf}(\Theta) \label{eq_gauss}$$ where e($\mathbb{S}$) is the Euler class and Pf($\Theta$) is the Pfaffian of the Berry curvature and $n$ is related to the dimension of the curvature. In a general case, the Berry curvature is calculated for an odd-dimensional manifold, i.e. the Berry curvature is a 3x3 matrix, so that the Pf($\Theta$)=0 and consequently, Eq. (\[eq\_gauss\]) gives $\chi(\mathbb{S})=0$. In the particular case of a Kagomé lattice with the Hamiltonian given in the main text (Eq. (1)) at very low damping ($\alpha$=0.001), the system has an axis of quantization along the $\hat{z}$ direction. The average in time, $<\cdots>_t$, gives $<s_j^z>_t=s_j^z$ and $<s_j^x>_t=<s_j^y>_t=0$. In consequence, only the x and y components of the Berry curvature survive. Then, for this system, $\Theta$ can be mapped into a $2\times2$ matrix: $$<\Theta_{j'j}^{\alpha\beta}>_t=\frac{\delta_{jj'}}{{\bf s}_j^2}\begin{pmatrix} 0 & s_j^z\\-s_j^z&0 \end{pmatrix}$$ with $\alpha,\beta\in\{x,y\}$. Taking into account that Pf($\Theta$)=$s_j^z/{\bf s}_j^2$, $n$=1 and also that in a Kagomé lattice there are 3 atoms per unit cell, the Euler characteristic can be computed from the generalized Gauss-Bonet theorem: $$\chi({\mathbb{S}})=\frac{3}{2\pi}\frac{\mathrm{sgn}(s_j^z)}{{\bf s}_j^2}\int_0^{s_j}\int_0^{s_j}\sqrt{{\bf s}_j^2-(s_j^x)^2-(s_j^y)^2} ds_j^xds_j^y$$ The integral is easy to compute changing to spherical coordinates, i. e. $s_j^x=r_j \cos\theta_j$, $s_j^y=r_j \sin\theta_j$. Keeping in mind that the Jacobian determinant of this coordinate transformation is $\mathrm{det}(\mathcal{J})=\left|\frac{\partial (s_j^x,s_j^y)}{\partial(r_j,\theta_j)}\right|=r_j$, then $ds_j^xds_j^y=r_j dr_jd\theta_j$. Thus, the topological invariant takes the final form: $$\chi({\mathbb{S}})=\frac{3}{2\pi}\frac{\mathrm{sgn}(s_j^z)}{{\bf s}_j^2}\int_0^{s_j}\int_0^{2\pi}\sqrt{{\bf s}_j^2-r_j^2} r_j dr_j d\theta_j=\mathrm{sgn}(s_j^z) s_j$$ where $s_j$ is an integer number because of the bosonic nature of the magnonic excitations. For simplicity, we consider that $s_j=1$, then the Euler characteristic takes the values +1, 0 and -1. It means that in this system we have 3 different homotopical classes, so that they cannot be continuously deformed one into another. Likewise, every homotopical class is connected to a constrained ground state for the Kagomé system. The physical interpretation of this fact is as follows: Let’s firstly reduce the problem to the unit cell of the Kagomé lattice, which has 3 atoms. 
If $\chi({\mathbb{S}})=+1$, then the three spins point to the +$\hat{z}$ direction while for $\chi({\mathbb{S}})=-1$ the spins are in the opposite direction. The third option is the case in which $\chi({\mathbb{S}})=0$. To meet the requirement that $s_j^z$=0, the spins should lie in the Kagomé plane. In consequence, the system has a 3-fold degenerated state at T=0. It is interesting to note that a transition from a region, in spin space, with a positive Euler characteristic to a negative one is only possible through out a region with zero Euler characteristic because of the continuity in the spin rotation. It definitively opens the door to new topological magnetic solutions in the Kagomé lattice like skyrmions[@1961RSPSA.262..237S] or Belavin-Polyakov[@1975JETPL..22..245B] monopoles. [**S2.3: Spin dynamics equations**]{} In the tight-binding limit, the equations of motion can be reduced to the Landau$-$Lifshitz$-$Gilbert equations. To do so, the Berry curvature computed in Eq. (\[berry\_curvature\]) is inserted in Eq. (\[equationofmotion\]): $$\hbar \sum_{j'}^N \dot{s}_{j'}^\alpha \delta_{jj'} \varepsilon_{\alpha\beta\gamma}\frac{s_j^\gamma}{{\bf s}_j^2}+\frac{\partial \mathcal{E}(\mathfrak{s})}{\partial s_j^\alpha}-\eta_j \dot{s}_j^\alpha=0 \label{equation_landau}$$ Defining the effective field acting on the $j$ moment as $H_j^\alpha=\frac{\partial \mathcal{E}(\mathfrak{s})}{\partial s_j^\alpha}$, we can rewrite Eq. (\[equation\_landau\]) in vectorial notation as: $$-\hbar(\dot{{\bf s}}_j\times {\bf s}_j)+{\bf H}_j-\eta_j {\dot {\bf s}}_j=0$$ with $\dot{{\bf s}}_j$ and ${\bf s}_j$ being unit vectors. After some algebra, we finally end up with the Landau$-$Lifshitz$-$Gilbert equations of motion for the spins: $$\dot{{\bf s}}_j=\gamma({\bf s}_j\times {\bf H}_j)-\eta'_j({\bf s}_j\times \dot{{\bf s}}_j)$$ where $\eta'_j=\gamma \eta_j$ and $\gamma=\frac{1}{\hbar}$. [10]{} A. A. Abrikosov, L. P. Gorkov, and I. E. Dzyaloshinski. . Courier Dover Publications, 1975. R. B. Laughlin. . , 23:5632, 1981. Y. Aharonov and D. Bohm. . , 115:485, 1959. B. D. Josephson. . , 184:527, 1974. A. A. Belavin and A. M. Polyakov. . , 22:245, 1975. T. H. R. Skyrme. . In [*Proc. R. Soc . Lond. A*]{}, page 237, 1961. N. S. Manton and P. M. Sutcliffe. . Cambridge University Press, 2004. J.-I. J. Fukuda and S. Zumer. . , 2:246, 2010. U. Al Khawaja and H. Stoof. . , 411:918, 2001. G. Baskaran. . . L. Brey, H. A. Fertig, R. C[ô]{}t[é]{}, and A. H. MacDonald. , 75:2562, 1995. C. C. Pfleiderer and A. A. Rosch. , 465:880, 2010. F. Freimuth, R. Bamler, Y. Mokrousov, and A. Rosch. . . M. Nakahara. . Graduate student series in physics. Institute of Physics Publishing, New York, 1990. Y. Onose, T. Ideue, H. Katsura, Y. Shiomi, N. Nagaosa, and Y. Tokura. , 329:297, 2010. L. Zhang, J. Ren, J. S. Wang, and B. Li. . , 87:14, 2013. T. Moriya. . , 120:91, 1960. S. M. Mohseni, S. R. Sani, J. Persson, T. N. Anh Nguyen, S. Chung, Y. Pogoryelov, P. K. Muduli, E. Iacocca, A. Eklund, R. K. Dumas, S. Bonetti, A. Deac, M. A. Hoefer, and J. [Å]{}kerman. . , 339:1295, 2013. V. De Alfaro, S. Fubini, and G. Furlan. . , 65:163, 1976. R. Rajaraman. . North Holland, 1982. C. G. Callan, Jr, R. Dashen, and D. J. Gross. . , 17:2717, 1978. S. M[ü]{}hlbauer, B. Binz, F. Jonietz, C. Pfleiderer, A. Rosch, A. Neubauer, R. Georgii, and P. B[ö]{}ni. , 323:915, 2009. X. Z. Yu, Y. Onose, N. Kanazawa, J. H. Park, J. H. Han, Y. Matsui, N. Nagaosa, and Y. Tokura. , 465:901, 2010. S. Heinze, K. Von Bergmann, M. Menzel, J. Brede, A. Kubetzka, R. Wiesendanger, G. 
Bihlmayer, and S. Bl[ü]{}gel. . , 7:713, 2011. X. Z. Yu, N. Kanazawa, W. Z. Zhang, T. Nagai, T. Hara, K. Kimoto, Y. Matsui, Y. Onose, and Y. Tokura. , 3:988, 2012. X. Obradors, A. Labarta, A. Isalgu[é]{}, J. Tejada, J. Rodriguez, and M. Pernet. , 65:189, 1988. I. S. Hagemann, Q. Huang, X. P. A. Giao, A. P. Ramirez, and R. J. Cava. , 86:894, 2001. A. Keren, K. Kojima, L. P. Le, G. M. Luke, W. D. Wu, Y. J. Uemura, M. Takano, H. Dabkowska, and M. J. P. Gingras. , 53:6451, 1996. S.-H. Lee, C. Broholm, M. F. Collins, L. Heller, A. P. Ramirez, C. Kloc, E. Bucher, R. W. Erwin, and N. Lacevic. , 56:8091, 1997. T. Inami, M. Nishiyama, S. Maegawa, and Y. Oka. , 61:12181, 2000. B. Skubic, J. Hellsvik, L. Nordstr[ö]{}m, and O. Eriksson. . , 20:5203, 2008. L. Udvardi, L. Szunyogh, K. Palot[á]{}s, and P. Weinberger. . , 68:104436, 2003. A. I. Liechtenstein, M. I. Katsnelson, V. P. Antropov, and V. A. Gubanov. . , 67:65, 1987. S. Komineas. , 99:117202, 2007. B. Skubic, J. Hellsvik, L. Nordstr[ö]{}m, and O. Eriksson. . , 20:5203, 2008. J. H. Mentink, M. V. Tretyakov, A. Fasolino, M. I. Katsnelson, and Th. Rasing. . , 22:176001, 2010. David P Landau and Kurt Binder. . Cambridge University Press, Cambridge, 3rd edition, September 2009. R. Kikuchi. . , 27:1352, 1956. Q. Niu, X. Wang, L. Kleinman, W.-M. Liu, D. M. C. Nicholson, and G. M. Stocks. . , 83:207, 1999. T. H. R. Skyrme. . In [*Proceedings of the Royal Society of London. Series A*]{}, pages 237–245, July 1961. A. A. Belavin and A. M. Polyakov. . , 22:245, 1975.
{ "pile_set_name": "ArXiv" }
[**On Long Virtual Biquandles**]{} D. A. Fedoseev *Moscow State University, Main Building,\ Chair of Differential Geometry and Applications,\ Moscow, 119991, Leninskie Gory, 1, Russia*

**Abstract** Virtual quandles with two operations are discussed in the article. A certain knot invariant is constructed and used to distinguish two long virtual knots.

**Keywords:** quandle, biquandle, long virtual knot, virtual trefoil, knot invariant.

Introduction
============

The object named “quandle” is well-known in modern knot theory. It provides good knot invariants. We recall how this object can be constructed (as described in \[1\]). Let $\Gamma$ be a finite set of “colours” with an operation “circle”: $\circ \colon \Gamma \times \Gamma \to \Gamma.$ A *correct colouring* of an oriented knot (link) diagram $D$ is an assignment of elements of $\Gamma$ to the arcs of the diagram $D$ such that for each crossing the following is verified: $c = a \circ b$ if the arcs are marked with colours $a, b$ and $c$ as shown on the diag.1: ![image](quandles_1){width="30mm"} We do not look at the orientation of the arcs denoted $a$ and $c$. Now we will enforce several conditions on the operation “circle” which ensure that the number of correct colourings of an oriented diagram is invariant under Reidemeister moves. Direct computation shows that the conditions are as follows: 1. $\forall a \in \Gamma$ $a \circ a = a;$ 2. $\forall a,b \in \Gamma$ the equation $x \circ a = b$ has exactly one solution $x \in \Gamma.$ Further it will be denoted as $b/a;$ 3. $\forall a,b,c \in \Gamma$ $(a \circ b) \circ c = (a \circ c) \circ (b \circ c).$ Any set with an operation “circle” satisfying the above conditions is called a *quandle*. From the very definition it follows that the number of correct colourings of an oriented diagram with elements of a quandle is a knot (link) invariant. There also exists a more general approach to the construction of a quandle, the one using generators and relations. Let $A$ be an alphabet – a set of letters. A *word* in the alphabet $A$ is by definition any finite sequence of elements of $A$ and symbols $\circ$ and $/$. Now we will define a set $D(A)$ of *allowed words*. $D(A)$ is defined inductively, according to the following rules: 1. Any letter of the alphabet $A$ is an allowed word; 2. If $W_1, W_2 \in D(A)$, then $(W_1) \circ (W_2)$ and $(W_1) / (W_2)$ are allowed words; 3. There are no other allowed words. Further throughout the text we will omit brackets in cases when the meaning of the structure is clear. Consider a set of relations $R=\{r_\alpha = s_\alpha|r_\alpha, s_\alpha \in D(A)\}.$ We introduce an equivalence relation on $D(A)$ such that for any $W_1, W_2 \in D(A)$ $W_1 \equiv W_2$ if and only if there exists a sequence of transformations beginning with $W_1$ and ending with $W_2$ constructed according to the following rules (trivial equivalences): 1. $x \circ x \Leftrightarrow x;$ 2. $(x \circ y)/y \Leftrightarrow x;$ 3. $(x/y) \circ y \Leftrightarrow x;$ 4. $(x \circ y) \circ z \Leftrightarrow (x \circ z) \circ (y \circ z);$ 5. $r_i \Leftrightarrow s_i.$ A set of allowed words factorized according to this equivalence is, clearly, a quandle with the operation $\circ.$ Now for a given knot we construct a quandle invariant according to the following scheme. First of all we assign a letter to each arc of the knot diagram and take this set of letters as an alphabet. Then we produce a set of relations $R:$ for every crossing of the diagram we state $a \circ b = c$ (as shown on the diag.1).
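The colouring-count invariant just described is easy to experiment with on small examples. The following sketch is our own illustration (not taken from \[1\]): it counts correct colourings of a diagram by the dihedral quandle $R_n$ with $a \circ b = 2b - a \pmod n$, for a trefoil diagram encoded by hand as a list of crossing triples; the names and the encoding of the diagram are assumptions made for this example.

```python
from itertools import product

def dihedral_quandle(n):
    """The dihedral quandle R_n on {0,...,n-1}: a o b = 2b - a (mod n)."""
    return lambda a, b: (2 * b - a) % n

def count_colourings(num_arcs, crossings, colours, op):
    """Count maps arcs -> colours with c = a o b at every crossing.

    crossings is a list of triples (a, b, c) of arc indices, read as in diag. 1.
    """
    count = 0
    for col in product(colours, repeat=num_arcs):
        if all(op(col[a], col[b]) == col[c] for a, b, c in crossings):
            count += 1
    return count

# Example: a standard trefoil diagram with arcs 0, 1, 2 and three crossings.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(count_colourings(3, trefoil, range(3), dihedral_quandle(3)))  # expected: 9
```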
After that we construct a quandle as described above. Such a “*knot quandle*” is an almost complete knot invariant in the sense that if two knots are equal, the corresponding quandles are isomorphic. It is not a very convenient one, though, because it is usually difficult to verify if two quandles are isomorphic or not. So some modifications of the structure are considered and used.

Basic constructions
===================

Let’s consider a virtual knot. An object not unlike a quandle can be constructed – a *virtual quandle*. For now we will consider “long arcs” of a virtual knot diagram – the connected components of the set obtained from the diagram by deleting all virtual crossings. Again we label all the long arcs with letters (generators) $x_i$ and note the same relations $a \circ b = c$ for each classical crossing of the diagram (again we consider an oriented knot or link). A quandle (according to Kauffman) is a formal quandle of a knot, obtained by ignoring all the virtual crossings of the diagram. The object defined above provides some knot invariants but it is comparatively weak. For example, the virtual trefoils (right and left) cannot be distinguished with it. A better generalization of a quandle is called a *virtual quandle*. A virtual quandle is a quandle $(M, \circ)$ with an operation $f$ such that there exists an inverse operation $f^{-1}$ and for any $a,b \in M$ $f(a) \circ f(b) = f(a \circ b).$ Now we construct an invariant $Q(L)$ for a given oriented diagram $L$ of a virtual knot $K$. The structure presented will be a strong virtual knot invariant. First of all we label all the arcs of the diagram with letters $a_i.$ Let $X(L)$ be a set of words obtained inductively using the letters $a_i$ and symbols $\circ,$ $/$, $f$, $f^{-1}.$ We will factorize this set using the following equivalence: the transitive and reflexive closure of the following set of trivial equivalences: 1. $f^{-1}(f(a)) \Leftrightarrow f(f^{-1}(a)) \Leftrightarrow a;$ 2. $x \circ x \Leftrightarrow x;$ 3. $(x \circ y)/y \Leftrightarrow x;$ 4. $(x/y) \circ y \Leftrightarrow x;$ 5. $(x \circ y) \circ z \Leftrightarrow (x \circ z) \circ (y \circ z);$ 6. $f(a \circ b) \Leftrightarrow f(a) \circ f(b);$ furthermore for every classical crossing we state $a_{i_1} \circ a_{i_2} \Leftrightarrow a_{i_3},$ as shown on the diag.2: ![image](quandles_3){width="30mm"} and for every virtual crossing we state $x' \Leftrightarrow f(x)$ and $y' \Leftrightarrow f(y)$ as shown on the diag.3: ![image](quandles_2){width="30mm"} The virtual quandle $Q(L)$ constructed as shown above is an invariant of virtual knots (links). A rigorous proof is given in \[2\]. Until now all the classical crossings were “equal” in the sense that we applied the same equivalence for arcs incident to any classical crossing. If we can somehow divide all the classical crossings into two categories, we can construct a *biquandle* $(M, \circ, \star)$ which gives a stronger invariant than the one described above. A good example of such an object can be presented using long virtual oriented knots.
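As a quick sanity check of this definition, the following sketch (again our own illustration) verifies the three quandle axioms together with the compatibility condition $f(a) \circ f(b) = f(a \circ b)$ for a small finite candidate; here the dihedral quandle $R_n$ is used with a cyclic shift in the role of $f$, which happens to be an automorphism of $R_n$.

```python
from itertools import product

def is_virtual_quandle(elements, op, f):
    """Check the quandle axioms together with the compatibility of f."""
    E = list(elements)
    # (1) idempotence
    if any(op(a, a) != a for a in E):
        return False
    # (2) for all a, b the equation x o a = b has exactly one solution
    for a, b in product(E, repeat=2):
        if sum(1 for x in E if op(x, a) == b) != 1:
            return False
    # (3) right self-distributivity
    for a, b, c in product(E, repeat=3):
        if op(op(a, b), c) != op(op(a, c), op(b, c)):
            return False
    # f must be a bijection respecting o : f(a) o f(b) = f(a o b)
    if len(set(f(a) for a in E)) != len(E):
        return False
    return all(op(f(a), f(b)) == f(op(a, b)) for a, b in product(E, repeat=2))

n = 5
op = lambda a, b: (2 * b - a) % n     # dihedral quandle R_n
f = lambda a: (a + 1) % n             # a shift, which is an automorphism of R_n
print(is_virtual_quandle(range(n), op, f))   # expected: True
```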
A long virtual biquandle is a set $M$ with operations $\circ,$ $\star,$ $\bar{\circ},$ $\bar{\star}$ and $f$ such that $(M, \circ, f)$ is a virtual quandle, $(M, \star, f)$ is a virtual quandle and the following is verified:

$\cdot$ $\forall a,b\in M$ $(a\circ b)/_{\circ}b=(a/_{\circ}b)\circ b=(a\star b)/_{\star}b=(a/_{\star}b)\star b=a$,

$\cdot$ $\forall a,b,c\in M$ $ (a\diamond b)\bullet c=(a\bullet c)\diamond(b\bullet c)$, where $\diamond$ and $\bullet$ are some operations from the following list: $\circ$, $\star$, $/_{\circ}$, $/_{\star}$,

$\cdot$ $\forall a,b\in M$ $ f(a\diamond b)=f(a)\diamond f(b),$ where $\diamond$ is an operation from the list $\circ$, $\star$,

$\cdot$ $\forall a\in M$ $ f(f^{-1}(a))=f^{-1}(f(a)) = a.$

And the “strange relations” are verified:

$\cdot$ $\forall x,a,b\in M$ $ x\diamond(a\circ b)=x\diamond(a\star b)$,

$\cdot$ $\forall x,a,b\in M$ $ x\diamond(a/_{\circ}b)=x\diamond(a/_{\star}b)$,

where again $\diamond$ are some operations from the list $\circ$, $\star$, $/_{\circ}$, $/_{\star}$. Now for a given diagram we produce a free long biquandle (a quandle, formally generated by the operations $\circ,$ $\star,$ $\bar{\circ},$ $\bar{\star}$ and factorized according to relations $1$ – $5$) and then we factorize it according to the structure of the diagram: we state a relation $c = a \circ b$ for every classical crossing which is an *early overcrossing* according to the knot’s orientation; $c = a \star b$ for every early undercrossing; and we treat virtual crossings as above (as shown on the diag.3). This object gives us a knot invariant.

Construction of a long virtual biquandle
========================================

To construct an example of a long virtual biquandle invariant we will use the following fact. Let $G$ be a group such that $\exists a, b \in G:$ $[a,b] \notin Z(G)$ but there exists $n \in \mathbb{N}$ such that for any $a,b \in G$ $[a,b^n] \in Z(G)$ but $\exists a, b \in G:$ $[a, b^n] \neq e.$ Here square brackets denote a commutator in the group (i.e. $[a,b]=aba^{-1}b^{-1}$), $Z(G)$ is the group’s center and $e$ denotes the neutral element of the group. Given such a group, we can use $G$ as an alphabet and define $a \circ b := bab^{-1}$ and $a \star b := b^{n+1}ab^{-n-1}.$ The operation $f$ can be chosen freely (though it must “respect” both binary operations $\circ$ and $\star$). Let us give an example of such a group using a Cayley graph. The graph we will be using is a square divided into $64$ smaller squares; the horizontal sides of those are marked with the letter $a$, the vertical ones – with the letter $b.$ Finally all the horizontals and verticals of the big square are oriented: the lowest horizontal is oriented right, the next one left and so on; the leftmost vertical is oriented up, the next one – down and so on. Here $a$ and $b$ are generators and the relations are given by the graph, assuming that the square is glued into a torus and all the horizontals and verticals are oriented as described. The group $G$ consists of all “paths” in the graph. Two elements are considered equal if the corresponding paths connect the same vertices.
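The operations $\circ$ and $\star$ above are conjugation-type operations, and one can check on any small non-abelian group that operations of this shape define quandles. The sketch below is our own illustration and uses the symmetric group $S_3$ purely as a stand-in for $G$ (it is not claimed to satisfy the hypotheses on $[a,b^n]$ above); it verifies the three quandle axioms for $a \circ b = bab^{-1}$ and $a \star b = b^{n+1}ab^{-n-1}$.

```python
from itertools import product, permutations

# Permutations of {0,1,2} as tuples; composition and inversion give the group S_3.
S3 = list(permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

def conj_ops(n):
    """The two operations of the text: a o b = b a b^-1, a * b = b^(n+1) a b^-(n+1)."""
    def power(p, k):
        r = (0, 1, 2)
        for _ in range(k):
            r = mul(r, p)
        return r
    circ = lambda a, b: mul(mul(b, a), inv(b))
    star = lambda a, b: mul(mul(power(b, n + 1), a), inv(power(b, n + 1)))
    return circ, star

def is_quandle(E, op):
    return (all(op(a, a) == a for a in E)
            and all(sum(1 for x in E if op(x, a) == b) == 1 for a, b in product(E, repeat=2))
            and all(op(op(a, b), c) == op(op(a, c), op(b, c)) for a, b, c in product(E, repeat=3)))

circ, star = conj_ops(n=2)
print(is_quandle(S3, circ), is_quandle(S3, star))   # expected: True True
```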
For the group $G$ constructed above, the following properties hold:

a\) $\exists x, y \in G: [x,y] \notin Z(G);$

b\) $\forall x, y \in G [x,y^2] \in Z(G)$ and $\exists x, y \in G: xy^2x^{-1}y^{-2} \neq e.$

$\square$ a) Let $x=a,$ $y=b.$ Then $a(aba^{-1}b^{-1}) = a^3b^2$ and $(aba^{-1}b^{-1})a = a^3b^{-2} = a^3b^6 \neq a^3b^2.$ Therefore $[a,b] \notin Z(G).$

b\) Obviously, every $x$ in $G$ is equal to an element of the form $a^kb^l.$ So we are to prove that for any $i, j, k, l$ $$A=(a^{k}b^{l})(a^{i}b^{j})^{2}(a^{k}b^{l})^{-1}(a^{i}b^{j})^{-2}=a^{k\pm i\pm i\pm(-k)\pm(-i)\pm(-i)}b^{l\pm j\pm j\pm(-l)\pm(-j)\pm(-j)} = a^\alpha b^\beta\in Z(G)$$ and $\exists i, j, k, l:$ $A \neq e.$ $A$ depends solely on the parity of the numbers $i, j, k, l.$ Direct computation shows the following correspondence between the parity of $i, j, k, l$ and the numbers $\alpha$ and $\beta:$

  $i$        $1$   $1$     $0$     $0$   $1$   $1$   $0$     $0$   $1$   $1$     $0$   $0$   $1$   $1$   $0$   $0$
  ---------- ----- ------- ------- ----- ----- ----- ------- ----- ----- ------- ----- ----- ----- ----- ----- -----
  $j$        $1$   $0$     $1$     $0$   $1$   $0$   $1$     $0$   $1$   $0$     $1$   $0$   $1$   $0$   $1$   $0$
  $k$        $1$   $1$     $1$     $1$   $1$   $1$   $1$     $1$   $0$   $0$     $0$   $0$   $0$   $0$   $0$   $0$
  $l$        $1$   $1$     $1$     $1$   $0$   $0$   $0$     $0$   $1$   $1$     $1$   $1$   $0$   $0$   $0$   $0$
  $\alpha$   $0$   $-4i$   $0$     $0$   $0$   $0$   $0$     $0$   $0$   $-4i$   $0$   $0$   $0$   $0$   $0$   $0$
  $\beta$    $0$   $0$     $-4j$   $0$   $0$   $0$   $-4j$   $0$   $0$   $0$     $0$   $0$   $0$   $0$   $0$   $0$

Therefore not every possible value of $A$ is equal to $e,$ but certainly all the values of $A$ are in $Z(G).$ So the lemma is proved. $\blacksquare$

Finally, we construct the biquandle $(G, \circ, \star, f):$ $$x \circ y = yxy^{-1};$$ $$x \star y = y^3xy^{-3};$$ $$f(a) = ab; f(b) = b;$$ $$\forall \alpha \in G \quad f(\alpha^{-1})=f(\alpha)^{-1};$$ $$\forall \alpha, \beta \in G \quad f(\alpha \beta) = f(\alpha)f(\beta).$$

Use of the quandle
==================

As an example, we will use the biquandle constructed above to distinguish the right and left long virtual trefoils. We will use the “colourings invariant” with elements of the biquandle. It is important to notice that the number of correct colourings with the colour of the first (according to orientation) long arc of a long knot fixed is invariant. Moreover, in that case the set of possible colourings of the second long arc is invariant as well. $\includegraphics[width=70mm]{Knots_3}$ $\includegraphics[width=70mm]{Knots_2}$ Let the arcs of the first knot be labeled $a_i$ (according to orientation) and the arcs of the second knot be labeled $b_i.$ Let $a_1 = a.$ In that case we have: $a_2 = ab^{-1}, a_3 = a^2b^{-1}a^{-1}, a_4 = (ab)^2a^{-1}, a_5 = ab^2.$ To show the inequality of the knots it is enough to prove that there is no correct colouring of the second knot with $b_1 = a, b_5 = ab^2.$ Assume that is not the case: $b_1 = a, b_5 = ab^2.$ Then $b_4 = ab, b_2 \star b_4 = a, b_4 \star b_5 = b_3, b_2 = f(b_3).$ Therefore $\alpha = (ab)^{-3}a(ab)^3=(ab^3)^3ab(ab^3)^{-3} = \beta.$ But direct computations show that $\alpha = a^7b^6$ and $\beta = a^7b.$ So $\alpha \neq \beta$ and our assumption is incorrect. So we have proved the inequality of the knots under consideration.

This work is supported by RFFI grant (project  10-01-00748-a), grant of President of RF: aid for Scientific Schools (project 3224.2010.1), program Development of Scientific Potential of Higher School (project 2.1.1.3704), programs Scientific and Scientifically-Teaching personnel of innovative Russia (contracts 02.740.11.5213 and 14.740.11.0794).

Joyce D. (1982) A classifying invariant of knots, the knot quandle, *Journal of Pure and Applied Algebra,* **23** (1), pp. 37-65

V.O.
Manturov, Knot Theory, Chapman & Hall, London, CRC Press. Kauffman, L. H. and Manturov, V. O. Virtual Biquandles, *Fundamenta Mathematica. (Proceedings of “Knots in Poland-2003” conference)* Afanasiev D. (2009) On Generalization of Alexander Polynomial for Long Virtual Knots, *arXiv: math.GT/0906.4245v1.* Fenn R., Kamada N., Kamada S. New Invariants of Long Virtual Knots.
{ "pile_set_name": "ArXiv" }
---
abstract: 'In the present work, we determine explicitly the genus of any separable cubic extension of any global function field given the minimal polynomial of the extension. We give algorithms computing the ramification data and the genus of any separable cubic extension of any global rational function field.'
author:
- Sophie Marques and Jacob Ward
bibliography:
- 'references.bib'
title: Explicit genus formula for any separable cubic global function field
---

Introduction {#introduction .unnumbered}
============

In [@MWcubic3], we proved that any separable cubic extension of an arbitrary field admits a generator $y$, explicitly determined in terms of an arbitrary initial generating equation, such that

1. $y^3 =a$, with $a \in F$, or

2. (a) $y^3 -3y=a$, with $a \in F$, when $p\neq 3$, or (b) $y^3 +ay+a^2 = 0$, with $a \in F$, when $p=3$.

As we will show in this paper, this classification allows one to deduce ramification at any place of $K$ to obtain a very explicit formula for the different, and therefore deduce explicit Riemann-Hurwitz formulae computable entirely using the only parameter of the minimal polynomial for the extension. These data may also be used to calculate integral bases [@MadMad Theorems 3 and 9], classify low genus fields, and also have applications to differentials and computation of Weierstrass points. In this paper, we describe the ramification for any place in a separable cubic extension of a global field. This study of ramification permits us to obtain in §3.3 a Riemann-Hurwitz formula for any separable cubic extension of a global function field (Theorems \[RHPC\], \[RH\], \[char3RH\]). In the Appendix, we offer algorithms which, given any irreducible polynomial of degree $3$, return the ramification data, the genus, and an explicit integral basis for the given extension over a global rational function field.

Notation
========

Throughout the paper we denote the characteristic of the field by $p$ (including the possibility $p=0$). We let $K$ denote a function field with field of constants $\mathbb{F}_q$, where $q = p^n$ and $p > 0$ is a prime integer. Let $\overline{K}$ be the algebraic closure of $K$. For an extension $L/K$, we let $\mathcal{O}_{L,x}$ denote the integral closure of $\mathbb{F}_q[x]$ in $L$. We denote by $\mathfrak{p}$ a place of $K$ (Section 3). The *degree* $d_K(\mathfrak{p})$ of $\mathfrak{p}$ is defined as the degree of its residue field, which we denote by $k(\mathfrak{p})$, over the constant field $\mathbb{F}_q$. The cardinality of the residue field $k(\mathfrak{p})$ may be written as $|k (\mathfrak{p})| = q^{d_K(\mathfrak{p})}$. For a place $\mathfrak{P}$ of $L$ over $\mathfrak{p}$, we let $f(\mathfrak{P}|\mathfrak{p}) = [k(\mathfrak{P}):k (\mathfrak{p})]$ denote the inertia degree of $\mathfrak{P}|\mathfrak{p}$. We let $e(\mathfrak{P}|\mathfrak{p})$ be the ramification index of $\mathfrak{P}|\mathfrak{p}$, i.e., the unique positive integer such that $v_\mathfrak{P}(z) = e(\mathfrak{P}|\mathfrak{p})v_\mathfrak{p}(z)$, for all $z \in K$. If $v_\mathfrak{p}(a) \geq 0$, then we let $$\overline{a} := a \mod \mathfrak{p}$$ denote the image of $a$ in $k(\mathfrak{p})$. Henceforth, we let $F$ denote a field and $p = \text{char}(F)$ the characteristic of this field, where we admit the possibility $p = 0$ unless stated otherwise. We let $\overline{F}$ denote the algebraic closure of $F$.
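Since everything below is phrased in terms of valuations $v_\mathfrak{p}$ and degrees of places, the following small sketch (our own illustration, restricted for simplicity to a rational function field over a prime field $\mathbb{F}_p$) computes these two quantities for a rational function at a finite place given by a monic irreducible polynomial.

```python
from sympy import symbols, Poly

x = symbols('x')

def multiplicity(f, place, p):
    """Multiplicity of the monic irreducible `place` in the polynomial f over F_p."""
    f = Poly(f, x, modulus=p)
    pi = Poly(place, x, modulus=p)
    if f.is_zero:
        raise ValueError("the zero element has valuation +infinity")
    m = 0
    q, r = f.div(pi)
    while r.is_zero:
        m, f = m + 1, q
        q, r = f.div(pi)
    return m

def valuation(num, den, place, p):
    """v_p(a) for a = num/den in F_p(x) at the finite place given by `place`."""
    return multiplicity(num, place, p) - multiplicity(den, place, p)

def place_degree(place, p):
    """Degree d_K(p) of the place, i.e. [k(p) : F_p]."""
    return Poly(place, x, modulus=p).degree()

# Example over F_3(x): a = (x**2 + 1)**2 / (x + 2) at the place x**2 + 1 (irreducible mod 3).
print(valuation((x**2 + 1)**2, x + 2, x**2 + 1, 3))  # expected: 2
print(place_degree(x**2 + 1, 3))                     # expected: 2
```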
\[pureclosuredef\] - If $p \neq 3$, a generator $y$ of a cubic extension $L/F$ with minimal polynomial of the form $X^3 -a$ $(a \in F)$ is called a [purely]{} cubic generator, and $L/F$ is called a [purely]{} cubic extension. If $p = 3$, such an extension is simply called [purely inseparable]{}. - If $p\neq 3$ and a cubic extension $L/F$ does not possess a generator with minimal polynomial of this form, then $L/F$ is called [impurely]{} cubic. - For any cubic extension $L/F$, we define the [purely cubic closure]{} of $L/F$ to be the unique smallest extension $F'$ of $F$ such that $LF'/F'$ is purely cubic. We proved in [@MWcubic3 Theorem 2.1] that the purely cubic closure exists and is unique. We would like to point out that by [@MWcubic3 Corollary 1.2], if $p \neq 3$, then every impure cubic extension $L/K$ has a primitive element $y$ with minimal polynomial of the form $$f(X) = X^3 - 3X - a.$$ We mention this here, as we will use it whenever this case occurs in §3. When this is used, the element $y$ will denote any such choice of primitive element. Function fields =============== Constant extensions ------------------- In this subsection, we wish to determine when a cubic function field over $K$ is a constant extension of $K$. We do this before a study of ramification, as constant extensions are unramified and their splitting behaviour is well understood [@Vil Chapter 6]. In the subsequent subsections, we will thus assume that our cubic extension $L/K$ is not constant, which, as 3 is prime, is equivalent to assuming that the extension is geometric. ### $X^3-a$, $a\in K$, when $p\neq 3$ \[constantextension\] Let $p\neq 3$, and let $L/K$ be purely cubic, i.e. there exists a primitive element $y\in L$ such that $y^3 = a$, $a \in K$. Then $L/K$ is constant if, and only if, $a=ub^3$, where $b\in K$ and $u \in \mathbb{F}_q^*$ is a non-cube. In other words, there is a purely cubic generator $z$ of $L/K$ such that $z^3 = u$, where $u \in \mathbb{F}_q^*$. Suppose that $a=ub^3$, where $b\in K$ and $u$ is a non-cube in $\mathbb{F}_q^*$. Then $z=\frac{y}{b}\in L$ is a generator of $L/K$ such that $z^3 = u$. The polynomial $X^3 -u$ has coefficients in $\mathbb{F}_q$, and as a consequence, $L/K$ is constant. Suppose then that $L/K$ is constant. We denote by $l$ the algebraic closure of $\mathbb{F}_q$ in $L$, so that $L=Kl$. Let $l= \mathbb{F}_q (\lambda )$, where $\lambda$ satisfies a cubic polynomial $X^3 + e X^2 + f X + g $ with $e, f , g \in \mathbb{F}_q$. Hence, $L= K(\lambda )$. We denote $$\alpha = -2-\frac{ ( 27 g^2 -9efg +2f^3)^2}{27(3ge-f^2)^3} \in \mathbb{F}_q.$$ As $L/F$ is purely cubic, it then follows by [@MWcubic3 Corollary 1.2 and Theorem 2.1] that either $3eg = f^2$ or the quadratic polynomial $X^2+ \alpha X +1$ has a root in $K$. In both cases, there is a generator $\lambda ' \in L$ such that $$\lambda'^3 = \beta \in \mathbb{F}_q.$$ Hence $\lambda' \in l$. The elements $\lambda'$ and $y$ are two purely cubic generators of $L/K$, whence by [@MWcubic3 Theorem 3.1], it follows that $y = c \lambda'^j$ where $j=1$, or $2 $ and $c\in K$. Thus, $a = c^3 \beta^j$, where $\beta\in \mathbb{F}_q$. The result follows. ### $X^3-3X-a$, $a\in K$, when $p\neq 3$ Via [@MWcubic3 Corollary 1.2 and Theorem 3.3], a proof similar to that of Lemma \[constantextension\] yields the following result. \[constantextension2\] Let $p\neq 3$ and $L/K$ be an impurely cubic extension, so that there is a primitive element $y\in L$ such that $y^3-3y = a$ (see [@MWcubic3 Corollary 1.2]). 
Then $L/K$ is constant if, and only if, $$u= -3a\alpha^2\beta+a\beta^3+6\alpha+\alpha^3a^2-8\alpha^3 \in \mathbb{F}_q^*,$$ for some $\alpha, \beta \in K$ such that $\alpha^2 + a_2 \alpha \beta + \beta^2 =1$. In other words, there is a generator $z$ of $L/K$ such that $z^3 -3z=u$, where $u \in \mathbb{F}_q^*$.

### $X^3+aX+a^2$, $a\in K$, when $p= 3$

In this case, one may prove the following result, similarly to the proof of Lemma \[constantextension\], via [@MWcubic3 Corollary 1.2 and Theorem 3.6]. \[constantextension2\] Let $p= 3$ and $L/K$ be a separable cubic extension, so that there is a primitive element $y\in L$ such that $y^3+a y+ a^2=0$ (see [@MWcubic3 Corollary 1.2]). Then $L/K$ is constant if, and only if, $$u= \frac{(ja^2 + (w^3 + a w) )^2}{a^3} \in \mathbb{F}_q^*,$$ for some $w\in K$ and $j=1,2$. In other words, there is a generator $z$ of $L/K$ such that $z^3 +uz+u^2=0$, where $u \in \mathbb{F}_q^*$.

Ramification
------------

In this section, we describe the ramification of any place of $K$ in a cubic extension $L/K$. As usual, we divide the analysis into the three fundamental cubic forms derived in [@MWcubic3 Corollary 1.2].

### $X^3 -a$, $a\in K$, when $p\neq 3$

If the extension $L/K$ is purely cubic, one may find a purely cubic generator of a form which is well-suited to a determination of ramification, as in the following lemma. \[purelocalstandardform\] Let $L/K$ be a purely cubic extension. Given a place $\mathfrak{p}$ of $K$, one may select a primitive element $y$ with minimal polynomial of the form $X^3 -a$ such that either 1. $v_\mathfrak{p}(a)=1,2$, or 2. $v_\mathfrak{p} (a)=0$. Such a generator $y$ is said to be in [local standard form]{} at $\mathfrak{p}$. Let $y$ be a generator of $L$ such that $y^3=a \in K$. Given a place $\mathfrak{p}$ of $K$, we write $v_\mathfrak{p} (a)= 3 j + r$ with $r=0,1,2$. Via weak approximation, one may find an element $c \in K$ such that $v_\mathfrak{p} (c)= j$. Then $\frac{y}{c}$ is a generator of $L$ such that $$\left(\frac{y}{c}\right)^3 =\frac{y^3}{c^3}= \frac{a}{c^3}$$ and $v_\mathfrak{p} \left( \frac{a}{c^3}\right) = r$. Hence the result. When a purely cubic extension $L/K$ is separable, one may also easily determine the fully ramified places in $L/K$. \[RPC\] Let $p\neq 3$, and let $L/K$ be a purely cubic extension. Given a purely cubic generator $y$ with minimal polynomial $X^3 -a$, a place $\mathfrak{p}$ of $K$ is ramified if, and only if, it is fully ramified, which occurs if, and only if, $(v_\mathfrak{p} (a), 3)=1$. Let $\mathfrak{p}$ be a place of $K$ and $\mathfrak{P}$ be a place of $L$ above $\mathfrak{p}$. Suppose that $(v_\mathfrak{p} (a), 3)=1$. Then $$3v_\mathfrak{P} (y) = v_\mathfrak{P} (y^3)= v_\mathfrak{P} (a)= e(\mathfrak{P}|\mathfrak{p}) v_\mathfrak{p} (a).$$ Since $ (v_\mathfrak{p} (a), 3)=1$, we obtain $ 3 | e(\mathfrak{P}|\mathfrak{p}) $, and as $e(\mathfrak{P}|\mathfrak{p}) \leq 3$, it follows that $e(\mathfrak{P}|\mathfrak{p}) =3$, so that $\mathfrak{p}$ is fully ramified in $L$. Conversely, suppose that $(v_\mathfrak{p} (a), 3)\neq 1$. By Lemma \[purelocalstandardform\], we know that there exists a generator $z$ of $L$ such that $z^3 -c=0$ and $v_\mathfrak{p} (c)=0$. It is not hard to see that the polynomial $X^3 -c$ is either 1. irreducible modulo $\mathfrak{p}$, 2. $X^3 -c = (X-\alpha ) Q(X) \mod \mathfrak{p}$, where $\alpha \in k(\mathfrak{p})$ and $Q(X)$ is an irreducible quadratic polynomial modulo $\mathfrak{p}$, or 3.
$X^3 - c = (X-\alpha ) (X-\beta) (X- \gamma)$ modulo $\mathfrak{p}$ with $\alpha , \beta, \gamma \in k(\mathfrak{p})$ all distinct. In any of these cases, by Kummer’s theorem [@Sti Theorem 3.3.7], $\mathfrak{p}$ is either inert or there exist $2$ or $3$ places above it in $L$. Thus, $\mathfrak{p}$ cannot be fully ramified in any case. Moreover, there are no partially ramified places. Indeed, if $L/K$ is Galois then this is clear, and if $L/K$ is not Galois, then the Galois closure of $L/K$ is $L(\xi)$, where $K(\xi)/K$ is constant and therefore unramified; since the ramification index is multiplicative in towers, the only possible ramification index in $L(\xi)/K$ is $3$, and so the only possible ramification index in $L/K$ is also $3$.

### $X^3-3X-a$, $a\in K$, $p\neq 3$

In order to determine the fully ramified places in extensions of this type, we begin with an elementary but useful lemma. These criteria and notation will be employed throughout what follows. \[purelycubicclosure\] Consider the polynomial $X^2 +a X +1$, where $a \in K$, and suppose that this polynomial is irreducible over $K$. Let $c_-,c_+$ denote the roots of this polynomial in the algebraic closure $\overline{K}$ of $K$, and denote by $K(c)$ the quadratic extension $K(c_{\pm})$ of $K$. Then $$c_+ \cdot c_- =1,\ c_+ + c_-= -a, \text{ and } \ \sigma( c_\pm ) = c_\mp,\ \text{ where } \text{\emph{Gal}}(K(c) /K)= \{ Id , \sigma \}.$$ Let $\mathfrak{p}$ be a place of $K$ and $\mathfrak{p}_{c}$ be a place of $K(c )$ above $\mathfrak{p}$. Furthermore, we have:

1. For any place $\mathfrak{p}_{c}$ of $K({c})$, $$v_{\mathfrak{p}_{c}}(c_\pm )= -v_{\mathfrak{p}_{c}}(c_\mp ).$$

2. For any place $\mathfrak{p}_{c}$ of $K({c})$ above a place $\mathfrak{p}$ of $K$ such that $v_{\mathfrak{p}}(a) <0$, $$v_{\mathfrak{p}}(a)=-|v_{\mathfrak{p}_{c}}(c_\pm )|,$$ and otherwise, $v_{\mathfrak{p}_{c}}(c_\pm )=0$.

<!-- -->

1. At any place $\mathfrak{p}_{c}$ of $K({c})$, we have $$v_{\mathfrak{p}_{c}}(c_+\cdot c_-) = v_{\mathfrak{p}_{c}} (c_+)+ v_{\mathfrak{p}_{c}}( c_-)=v_{\mathfrak{p}_{c}}( 1)=0,$$ whence $$v_{\mathfrak{p}_{c}}(c_+)=-v_{\mathfrak{p}_{c}}( c_-).$$

2. As $c_\pm^2 +a c_\pm +1 =0,$ the elements $c_\pm' =\frac{ c_\pm }{a}$ satisfy $$c_\pm'^2 + c_\pm' + \frac{1}{a^2} =0.$$ Thus, for any place $\mathfrak{p}_c$ of $K({c})$ above a place $\mathfrak{p}$ of $K$ such that $v_{\mathfrak{p}}(a) <0$, we obtain $$v_{\mathfrak{p}_{c}}(c_\pm'^2 + c_\pm' ) = - 2 v_{\mathfrak{p}_{c}}(a) > 0.$$ By the non-Archimedean triangle inequality, this is possible if, and only if, $v_{\mathfrak{p}_{c}}(c_\pm' ) >0$ or $v_{\mathfrak{p}_{c}}(c_\pm' ) =0$. If $v_{\mathfrak{p}_{c}}(c_\pm' ) >0$, then $$v_{\mathfrak{p}_{c}}(c_\pm' )= - 2 v_{\mathfrak{p}_{c}}(a) \quad \text{ and } \quad v_{\mathfrak{p}_{c}}(c_\pm )= - v_{\mathfrak{p}_{c}}(a).$$ If on the other hand $v_{\mathfrak{p}_{c}}(c_\pm' ) =0$, we obtain $$v_{\mathfrak{p}_{c}}(c_\pm)= v_{\mathfrak{p}_{c}}(a).$$ Thus, the latter together with part $(1)$ of this lemma implies that either $$v_{\mathfrak{p}_{c}}(c_+) = v_{\mathfrak{p}_{c}}(a) \quad \text{and} \quad v_{\mathfrak{p}_{c}}(c_-)=-v_{\mathfrak{p}_{c}}(a)$$ or vice versa (with the roles of $c_-$ and $c_+$ interchanged). Moreover, note that $\mathfrak{p}_{c}$ is unramified in $K(c)/ K$, so that $ v_{\mathfrak{p}_{c}}(a)= v_{\mathfrak{p}}(a)$.
For if, when $p\neq 2$, then $K(c)/K$ has a generator $w$ such that $w^2 = -27 (a^2-4)$ and $ 2| v_{\mathfrak{p}}( -27 (a^2-4))$, thus by Kummer theory, $\mathfrak{p}$ is unramified and when $p =2$, $K(c)/K$ has a generator $w$ such that $w^2 -w= \frac{1}{a^2}$ and $ v_{\mathfrak{p}}( \frac{1}{a^2})\geq 0$, thus by Artin-Schreier theory, we have that $\mathfrak{p}$ is unramified in $K(c)$, thus the first part of $(2)$. For any place $\mathfrak{p}_{c}$ of $K(b)$ above a place $\mathfrak{p}$ of $K$ such that $v_{\mathfrak{p}}(a) > 0$. As $$v_{\mathfrak{p}_{c}}(c_\pm'^2 + c_\pm' ) = - 2 v_{\mathfrak{p}_{c}}(a) < 0,$$ again using the non-Archimedean triangle inequality, we can only have $v_{\mathfrak{p}_{c}}(c_\pm'^2) <0$, whence $v_{\mathfrak{p}_{c}}(c_\pm'^2 ) = - 2 v_{\mathfrak{p}_{c}}(a).$ This implies that $v_{\mathfrak{p}_{c}}(c_\pm ) =0$. Finally, via the triangle inequality once more, for any $\mathfrak{p}_{c}$ such that $v_{\mathfrak{p}_{c}}(a) = 0$, we must have $v_{\mathfrak{p}_{c}}(c_\pm ) =0.$ \[ramification\] Let $p \neq 3$, and let $L/K$ be an impurely cubic extension and $y$ primitive element with minimal polynomial $f(X) = X^3-3X-a$. Then 1. the fully ramified places of $K$ in $L$ are precisely those $\mathfrak{p}$ such that $(v_{\mathfrak{p}}(a), 3)=1$ and 2. the partially ramified places $\mathfrak{p}$ that is the one with index of ramification $2$ are precisely such that $a \equiv \pm 2 \mod \mathfrak{p}$ and 1. $(v_{\mathfrak{p}} ( a^2 - 4) , 2)=1 $, when $p \neq 2$; 2. there exist $w \in K$ such that $v_{\mathfrak{p}} ( 1/a +1 + w^2 -w ) <0$ and $(v_{\mathfrak{p}} (1/a+1 + w^2 -w) , 2)=1$, when $p=2$. <!-- --> 1. As usual, we let $\xi$ be a primitive $3^{rd}$ root of unity. We also let $r$ be a root of the quadratic resolvent $R(X)= X^2 +3aX + ( -27 + 9 a^2)$ of the cubic polynomial $X^3-3X-a$ in $\overline{K}$. As in [@Con Theorem 2.3], we know that $L(r)/K(r)$ is Galois, and by [@MWcubic3 Corollary 1.6], we have that $L(\xi, r) / K(\xi , r)$ is purely cubic. We denote by $\mathfrak{p}$ a place in $K$, $\mathfrak{P}_{\xi,r}$ a place of $L(\xi,r)$ above $\mathfrak{p}$, $\mathfrak{P}=\mathfrak{P}_{\xi,r}\cap L$, and $\mathfrak{p}_{\xi,r}=\mathfrak{P}_{\xi,r}\cap K(\xi , r)$. By [@MWcubic3 Theorem 1.5], we know that that $L(\xi, r)/K(\xi, r)$ is Kummer; more precisely, there exists $v \in K(\xi , r)$ such that $v^3 = c$ where $c$ is a root of the polynomial $X^2+aX+1$. We thus obtain a tower $L(\xi, r) / K(\xi, r) /K(\xi) / K$ with $L(\xi, r )/K(\xi,r )$ Kummer of degree $3$, and where $K(\xi ,r)/ K(\xi)$ and $K(\xi)/K$ are both Kummer extensions of degree $2$. As the index of ramification is multiplicative in towers and the degree of $L(\xi , r) / K(\xi ,r)$ and $K(\xi ,r)/K$ are coprime, the places of $K$ that fully ramify in $L$ are those places of $K$ which lie below those of $K(\xi , r )$ which fully ramify in $L(\xi, r)/K(\xi, r)$. As $L(\xi,r)/K(\xi,r)$ is Kummer, the places of $K(\xi ,r)$ that ramify in $L(\xi,r)$ are described precisely by Kummer theory (see for example [@Vil Example 5.8.9]) as those $\mathfrak{p}_{\xi,r}$ in $K(\xi ,r)$ such that $$(v_{\mathfrak{p}_{\xi,r}} (c_\pm ) , 3)=1.$$ Lemma \[purelycubicclosure\] states that if $v_{\mathfrak{p}}(a) <0$, then $v_{\mathfrak{p}_{\xi, r}}(c_\pm )= \pm v_{\mathfrak{p}_{\xi,r}}(a)$ and that otherwise, $v_{\mathfrak{p}_{\xi,r}}(c_\pm )=0$. Thus, the ramified places of $L/F$ are those places $\mathfrak{p}$ below a place $\mathfrak{p}_{\xi , r}$ of $K(\xi ,r)$ such that $(v_{\mathfrak{p}_{\xi,r}}(a), 3)=1$. 
Also, $$v_{\mathfrak{p}_{\xi, r}}(a) = e(\mathfrak{p}_{\xi, r} | \mathfrak{p}) v_{\mathfrak{p}}(a),$$ where $e(\mathfrak{p}_{\xi, r} | \mathfrak{p})$ is the ramification index of a place $\mathfrak{p}$ of $K$ in $K(\xi, r)$, equal to $1$, $2$, or $4$, and in any case, coprime with $3$. Thus, $(v_{\mathfrak{p}_{\xi,r}}(a),3)=1$ if, and only if, $(v_{\mathfrak{p}}(a),3)=1$. As a consequence of the above argument, it therefore follows that a place $\mathfrak{p}$ of $K$ is fully ramified in $L$ if, and only if, $v_{\mathfrak{p}}(a)<0$. If $L/K$ is Galois, then all the ramified places are fully ramified. 2. Now, if $L/K$ is not Galois and a ramified place $\mathfrak{p}$ is not fully ramified in $L/K$, its ramification index is $2$. The Galois closure of $L/K$ is $L(r)/K$. Since $L(r)/K(r)$ is Galois, all the ramified places in $L(r)/ K(r)$ are fully ramified, and the only possible way that the ramification index of a place is $2$ in $L/K$ is that this place is ramified in $K(r)/K$, since the ramification index is multiplicative in towers. By Kummer and Artin-Schreier theory this implies that $v_{\mathfrak{p}}(a)>0$. Indeed, $K(r)/K$ is defined by a minimal equation $X^2 = -27 (a^2-4)$ when $p \neq 2$ and $X^2 -X = 1 + 1/a$ when $p=2$. When $v_{\mathfrak{p}} (a)\geq 0$, via Kummer’s theorem, for $\mathfrak{p}$ to be partially ramified in $L/K$, the only possible decomposition of $X^3 -3 X -a$ mod $\mathfrak{p}$ is $$X^3 -3 X -a = (X-\alpha) ( X-\beta)^2 \mod \mathfrak{p}.$$ The equality $f(X) = (X - \alpha) (X - \beta)^2$ gives us $$X^3 - 3X - a = (X - \alpha)(X - \beta)^2 = X^3 - (2\beta + \alpha) X^2 + (\beta^2 + 2\alpha \beta) X - \alpha \beta^2.$$ Thus $\alpha = - 2\beta$. We therefore have $- 3 = \beta^2 - 4\beta^2 = - 3 \beta^2$ and $a = - 2\beta^3$. The first of these implies that $$3(\beta^2 - 1) = 3 \beta^2 - 3 = 0.$$ Thus $\beta = \pm 1$ and $a = \mp 2$. Conversely, when $a = \mp 2$, then $$X^3-3 X \mp 2= ( X \pm 2)(X\mp 1 )^2.$$ Therefore, in order for $\mathfrak{p}$ to be partially ramified, we need that $\mathfrak{p}$ be ramified in $K(r)$, that is, 1. $(v_{\mathfrak{p}} ( a^2 - 4) , 2)=1 $, when $p \neq 2$; 2. there exists $w \in K$ such that $v_{\mathfrak{p}} ( 1/a +1 + w^2 -w ) <0$ and $(v_{\mathfrak{p}} (1/a+1 + w^2 -w) , 2)=1$, when $p=2$, and $a \equiv \mp 2 \mod \ \mathfrak{p}$. Conversely, suppose that $\mathfrak{p}$ is a place such that $a \equiv \mp 2 \mod \ \mathfrak{p}$ and $\mathfrak{p}$ is ramified in $K(r)$. Since $\mathfrak{p}$ cannot be ramified in $K(r)$ without $v_{\mathfrak{p}}(a) \geq 0$ and $L(r)/K$ is Galois, when $a \equiv \mp 2 \mod \ \mathfrak{p}$ and $\mathfrak{p}$ is ramified in $K(r)$, the place above $\mathfrak{p}$ in $K(r)$ is unramified in $L(r)/K(r)$ (see the proof of (1)), therefore completely split, and we must have $$\mathfrak{p} \mathcal{O}_{L(r),x} = (\mathfrak{P}_{1,r}\mathfrak{P}_{2,r} \mathfrak{P}_{3,r})^2.$$ Since $a \equiv \mp 2 \mod \ \mathfrak{p}$, we have $X^3 - 3 X -a = (X-\alpha ) (X-\beta)^2 \mod \mathfrak{p} $ with $\alpha , \beta \in k (\mathfrak{p})$ and $\alpha \neq \beta$. By Kummer’s theorem, we know that there are at least two places above $\mathfrak{p}$ in $L$, thus either 1. $\mathfrak{p}\mathcal{O}_{L,x} = \mathfrak{P}_1 \mathfrak{P}_2$, where the $\mathfrak{P}_i$, $i=1,2$, are places of $L$ above $\mathfrak{p}$, or 2. $\mathfrak{p}\mathcal{O}_{L,x} = \mathfrak{P}_1^2 \mathfrak{P}_2$, where the $\mathfrak{P}_i$, $i=1,2$, are places of $L$ above $\mathfrak{p}$, or 3.
$\mathfrak{p}\mathcal{O}_{L,x} = \mathfrak{P}_1 \mathfrak{P}_2 \mathfrak{P}_3$ where $\mathfrak{P}_i$, $i=1,2,3$ place of $L$ above $\mathfrak{p}$. By [@Neu p. 55], we know that $\mathfrak{p}$ is completely split in $L$ (case (c)) if, and only if, (1) $\mathfrak{p}$ is completely split in $K(r)$ and (2) $\mathfrak{p}_r$ completely split in $L(r)$. Thus, either $\mathfrak{p}\mathcal{O}_{L,x} = \mathfrak{P}_1 \mathfrak{P}_2$ or $\mathfrak{p}\mathcal{O}_{L,x} = \mathfrak{P}_1^2 \mathfrak{P}_2$ where each $\mathfrak{P}_i$ ($i=1,2$) is a place of $L$ above $\mathfrak{p}$. Note that $2\;|\; e ( \mathfrak{P}_{r}| \mathfrak{p})$ for any places $\mathfrak{P}_r$ in $L(r)$ above $\mathfrak{p}$. If $\mathfrak{p}\mathcal{O}_{L,x} = \mathfrak{P}_1 \mathfrak{P}_2$, then as $e( \mathfrak{P}_i| \mathfrak{p} )=1$, we have that $2\;|\;e( \mathfrak{P}_{i,r}| \mathfrak{P}_i)$ and $\mathfrak{p}\mathcal{O}_{L(r),x} = \mathfrak{P}_{1,r}^2 \mathfrak{P}_{2,r}^2$, where $\mathfrak{P}_{i,r}$, $i=1,2$ are places above $\mathfrak{p}$ in $L(r)$, which is impossible, as $\mathfrak{p} \mathcal{O}_{L(r),x} = (\mathfrak{P}_{1,r}\mathfrak{P}_{2,r} \mathfrak{P}_{3,r})^2$. Thus, in this case, we must have $\mathfrak{p}\mathcal{O}_{L,x} = \mathfrak{P}_1^2 \mathfrak{P}_2$ and $\mathfrak{P}_1$ is split in $K(r)$ and $\mathfrak{P}_2$ ramifies in $K(r)$. This theorem yields the following corollaries, the first being immediate. \[ramificationgalois\] Suppose that $q \equiv - 1 \mod 3$. Let $L/K$ be a Galois cubic extension, so that there exists a primitive element $y$ of $L$ with minimal polynomial $f(X) = X^3-3X-a$. Then the (fully) ramified places of $K$ in $L$ are precisely those places $\mathfrak{p}$ of $K$ such that $v_{\mathfrak{p}}(a)<0$ and $(v_{\mathfrak{p}}(a), 3)=1$. \[odddegree\] Suppose that $q \equiv - 1 \mod 3$. Let $L/K$ be a Galois cubic extension, so that there exists a primitive element $y$ of $L$ with minimal polynomial $f(X) = X^3-3X-a$. Then, only those places of $K$ of even degree can (fully) ramify in $L$. More precisely, any place $\mathfrak{p}$ of $K$ such that $v_\mathfrak{p}(a) <0$ is of even degree. In Lemma \[purelycubicclosure\], it was noted that $ \sigma( c_\pm ) = c_\mp$ where $\text{\emph{Gal}}(K(c_\pm ) /K)= \{ Id , \sigma \}$, when $c_\pm \notin K$. Let $\xi$ again be a primitive $3^{rd}$ root of unity. We denote by $\mathfrak{p}$ a place of $K$ and $\mathfrak{p}_\xi$ a place of $K(\xi)$ above $\mathfrak{p}$. We find that $$v_{\mathfrak{p}_\xi}(c_\pm) = v_{\sigma(\mathfrak{p}_\xi)}(\sigma ( c_\pm ))=v_{\sigma(\mathfrak{p}_\xi)}( c_\mp ).$$ Note that if $\sigma ( \mathfrak{p}_\xi )= \mathfrak{p}_\xi$, it follows that $v_{\mathfrak{p}_\xi}(c_\pm) =v_{\mathfrak{p}_\xi}( c_\mp )$. However, by Lemma \[purelycubicclosure\], we have that, for any place $\mathfrak{p}_\xi$ of $K(\xi)$ above a place $\mathfrak{p}$ of $K$ such that $v_{\mathfrak{p}}(a) <0$, it holds that $v_{\mathfrak{p}_\xi}(c_\pm )= \pm v_{\mathfrak{p}_\xi}(a)$, and that $v_{\mathfrak{p}_\xi }(c_\pm )= -v_{\mathfrak{p}_\xi }(c_\mp )$. Thus, for any place $\mathfrak{p}_\xi$ of $K(\xi )$ above a place $\mathfrak{p}$ of $K$ such that $v_{\mathfrak{p}}(a) <0$, we find that $v_{\mathfrak{p}_\xi}(c_\pm )\neq v_{\mathfrak{p}_\xi}(c_\mp )$ and thus $\sigma ( \mathfrak{p}_\xi )\neq \mathfrak{p}_\xi$. Therefore, by [@Vil Theorem 6.2.1], we obtain that $ \mathfrak{p} $ is of even degree, for any place $\mathfrak{p}$ of $K$ such that $v_{\mathfrak{p}}(a)<0$. \[infty\] Suppose that $q \equiv - 1 \mod 3$. 
Let $L/K$ be a Galois cubic extension, so that there exists a primitive element $y$ of $L$ with minimal polynomial $f(X)=X^3-3 X -a$. Then one can choose a single place $\mathfrak{P}_\infty$ at infinity in $K$ such that $v_{\mathfrak{P}_\infty}(a) \geq 0$. One can choose $x \in K \backslash \mathbb{F}_q$ such that the place $\mathfrak{p}_\infty$ at infinity for $x$ has the property that all of the places in $K$ above it are of odd degree. In order to accomplish this, we appeal to a method similar to the proof of [@Vil Proposition 7.2.6]; because there exists a divisor of degree 1 [@Vil Theorem 6.3.8], there exists a prime divisor $\mathfrak{P}_\infty$ of $K$ of odd degree; for if all prime divisors of $K$ were of even degree, then the image of the degree function of $K$ would lie in $2\mathbb{Z}$, which contradicts [@Vil Theorem 6.3.8]. Let $d$ be this degree. Let $m \in \mathbb{N}$ be such that $m > 2 g_K - 1$. Then, by the Riemann-Roch theorem [@Vil Corollary 3.5.8], it follows that there exists $x \in K$ such that the pole divisor of $x$ in $K$ is equal to $\mathfrak{P}_\infty^m$. By definition, the pole divisor of $x$ in $k(x)$ is equal to $\mathfrak{p}_\infty$. It follows that $$(\mathfrak{p}_\infty)_K = \mathfrak{P}_\infty^m,$$ from which it follows that $\mathfrak{P}_\infty$ is the unique place of $K$ above $\mathfrak{p}_\infty$, and by supposition that $\mathfrak{P}_\infty$ is of odd degree. From this argument, we obtain that, with this choice of infinity, all places above infinity in $k(x)$ are of odd degree. (We also note that we may very well choose $m$ relatively prime to $p$, whence $K/k(x)$ is also separable; in general, $K/k(x)$ as chosen will not be Galois.) As $q \equiv -1 \mod 3$, $L/K$ is a Galois extension, and $y$ is a primitive element with minimal polynomial of the form $X^3 -3 X -a$ where $a \in K$, we know that all of the places $\mathfrak{p}$ of $K$ such that $v_\mathfrak{p} (a)<0$, and in particular, all the ramified places, are of even degree (see Corollary \[odddegree\]). It follows that the process described in this proof gives the desired construction, and the result follows. We note that when $K$ is a rational function field, one may use Corollary \[infty\] to show that the parameter $a$ has nonnegative valuation at $\mathfrak{p}_\infty$ for a choice of $x$ such that $K = \mathbb{F}_q(x)$, and thus such $\mathfrak{p}_\infty$ is unramified. ### $X^3+aX+a^2$, $a\in K$, $p = 3$ As for purely cubic extensions, there exist a local standard form which is useful for a study of splitting and ramification. \[char3localstandardform1\] Let $p=3$, and let $L/K$ be a cubic separable extension. Let $\mathfrak{p}$ be a place of $K$. Then there is a generator $y$ such that $y^3 +a y +a ^2 =0$ such that $v_\mathfrak{p}(a) \geq 0$, or $v_\mathfrak{p} (a) <0 $ and $( v_\mathfrak{p} (a), 3)=1$. Such a $y$ is said to be [in local standard form]{} at $\mathfrak{p}$. Let $\mathfrak{p}$ be a place of $K$. Let $y_1$ be a generator of $L/K$ such that $y_1^3 +a_1 y_1 + a_1^2=0$ (this was shown to exist in [@MWcubic]). By [@MWcubic3 Theorem 3.6], any other generator $y_2$ with a minimal equation of the same form $y_2^3 +a_2 y_2 +a_2^2=0$ is such that $y_2 =-\beta (\frac{j}{a_1}y_1- \frac{1}{a_1} w )$, and we have $$a_2 = \frac{(ja_1^2 + (w^3 + a_1 w) )^2}{a_1^3}.$$ Suppose that $v_\mathfrak{p}(a_1) < 0$, and that $3 \;|\; v_\mathfrak{p}(a_1)$. 
Using the weak approximation theorem, we choose $\alpha \in K$ such that $v_\mathfrak{p}(\alpha) = 2v_\mathfrak{p}(a_1)/3$, which exists as $3 \;|\; v_\mathfrak{p}(a_1)$. Then $$v_\mathfrak{p}(\alpha^{-3} ja_1^2) = 0.$$ Let $w_0 \in K$ be chosen so that $w_0 \neq - \alpha^{-3} ja_1^2$ and $$v_\mathfrak{p}(\alpha^{-3} ja_1^2 + w_0) > 0.$$ This may be done via the following simple argument: As $v_\mathfrak{p}(\alpha^{-3} ja_1^2) = 0$, then $\overline{\alpha^{-3} ja_1^2} \neq 0$ in $k(\mathfrak{p})$. We then choose some $w_0 \neq - \alpha^{-3} ja_1^2 \in K$ such that $\overline{ w_0} = -\overline{\alpha^{-3} ja_1^2}$ in $k(\mathfrak{p})$. Note that $v_{\mathfrak{p}}(w_0)=0$. Thus, $\overline{\alpha^{-3} ja_1^2 + w_0}=0$ in $k(\mathfrak{p})$ and $v_\mathfrak{p}(\alpha^{-3} ja_1^2 + w_0) > 0$. As $p = 3$, it follows that the map $X \rightarrow X^3$ is an isomorphism of $k(\mathfrak{p})$, so we may find an element $w_1 \in K$ such that $w_1^3 = w_0 \mod \mathfrak{p}$. Hence $$v_\mathfrak{p}(\alpha^{-3} ja_1^2 + w_1^3) > 0.$$ We then let $w_2 = \alpha w_1$, so that $$v_\mathfrak{p}(j a_1^2 + w_2^3) = v_\mathfrak{p}(j a_1^2 + \alpha^3 w_1^3) > v_\mathfrak{p}(j a_1^2) .$$ Thus, as $v_\mathfrak{p}(a_1) < 0$, we obtain $$\begin{aligned} v_\mathfrak{p}(ja_1^2 + (w_2^3 + a_1 w_2)) &\geq \min\{v_\mathfrak{p}(ja_1^2 + w_2^3 ) ,v_\mathfrak{p}(a_1 w_2) \} \\&> \min\{v_\mathfrak{p}(j a_1^2),v_\mathfrak{p}(a_1 w_2)\}\\& = \min\{v_\mathfrak{p}(j a_1^2),v_\mathfrak{p}(a_1) + 2v_\mathfrak{p}(a_1)/3\} \\& = \min\{2v_\mathfrak{p}(a_1),5v_\mathfrak{p}(a_1)/3\} \\& = 2v_\mathfrak{p}(a_1). \end{aligned}$$ Hence $$v_\mathfrak{p}(a_2) = v_\mathfrak{p}\left(\frac{(ja_1^2 + (w^3 + a_1 w) )^2}{a_1^3}\right) > 4v_\mathfrak{p}(a_1) - v_\mathfrak{p}(a_1^3) = v_\mathfrak{p}(a_1).$$ We can thus ensure (after possibly repeating this process if needed) that we terminate at an element $a_2 \in K$ for which $v_\mathfrak{p}(a_2) \geq 0$ or for which $v_\mathfrak{p}(a_2) < 0$ and $(v_\mathfrak{p}(a_2),3) = 1$. Note that we can do what we have done in the previous Lemma simultaneously at any finite place (see [@MWcubic4 Lemma 1.2]). \[char3localstandardform\] Suppose that $p = 3$. Let $L/K$ be a separable cubic extension and $y$ a primitive element with minimal polynomial $X^3 + aX + a^2$. Let $\mathfrak{p}$ be a place of $K$ and $\mathfrak{P}$ a place of $L$ above $\mathfrak{p}$. Then 1. $\mathfrak{p}$ is fully ramified if, and only if, there is $w \in K$, $v_\mathfrak{p}(\alpha ) < 0$ and $(v_\mathfrak{p}(\alpha ),3) = 1$ with $$\alpha = \frac{(ja^2 + (w^3 + a w) )^2}{a^3}.$$ Equivalently, there is a generator $z$ of $L$ whose minimal polynomial is of the form $X^3 + \alpha X + \alpha^2$, where $v_\mathfrak{p}(\alpha ) < 0$ and $(v_\mathfrak{p}(\alpha ),3) = 1$, and 2. $\mathfrak{p}$ is partially ramified if and only if $(v_\mathfrak{p}( a) , 2)=1$ and there is $w \in K$ such that $v_\mathfrak{p}(\alpha ) \geq 0$ with $$\alpha = \frac{(ja^2 + (w^3 + a w) )^2}{a^3}.$$ The later is equivalent to the existence of a generator $z$ of $L$ whose minimal polynomial is of the form $X^3 + \alpha X + \alpha^2$, where $v_\mathfrak{p}(\alpha ) \geq 0$. <!-- --> 1. Let $\mathfrak{p}$ be a place of $K$, and denote by $\mathfrak{P}$ a place of $L$ above $\mathfrak{p}$. When $L/F$ is Galois, this theorem is simply the usual Artin-Schreier theory (see [@Sti Proposition 3.7.8]). 
Otherwise, since the discriminant of the polynomial $X^3+a X+a^2$ is equal to $\Delta=-4a^3=-a^3$, by [@Con Theorem 2.3], we know that the Galois closure of $L/F$ is equal to $L(\Delta )= L(b)$, where $b^2 = -a$. Let $\mathfrak{p}_b$ a place of $K(b)$ above $\mathfrak{p}$. The extension $L(b)/K(b)$ is an Artin-Schreier extension with Artin-Schreier generator $y/b$ possessing minimal polynomial $X^3-X+b$. As $L(b)/K(b)$ is Galois, if $\mathfrak{p}_b$ is ramified in $L(b)$, then it must be fully ramified. Furthermore, as the degree $K(b)/K$ is equal to $2$, which is coprime with $3$, and the index of ramification is multiplicative in towers, it follows that the place $\mathfrak{p}$ is fully ramified in $L$ if, and only if, $\mathfrak{p}_b$ is fully ramified in $L(b)$. By [@Sti Proposition 3.7.8], 1. $\mathfrak{p}_b$ is fully ramified in $L(b)$ if, and only if, there is an Artin-Schreier generator $z$ such that $z^3-z -c$ with $v_{\mathfrak{p}_b}(c) < 0$ and $(v_{\mathfrak{p}_b}(c),3)=1$, and 2. $\mathfrak{p}_b$ is unramified in $L(b)$ if, and only if, there is an Artin-Schreier generator $z$ such that $z^3-z -c$ with $v_{\mathfrak{p}_b}(c) \geq 0$. Suppose that there is a generator $w$ such that $w^3 +a_1 w +a_1 ^2 =0$, $v_\mathfrak{p} (a_1) <0 $ and $( v_\mathfrak{p} (a_1), 3)=1$. Then over $K(b_1)$, where $b_1^2 =-a_1$, we have an Artin-Schreier generator $z$ of $L(b_1)$ such that $z^3-z +b_1$. Moreover, $$v_{\mathfrak{p}_{b_1}}(b_1)= \frac{ v_{\mathfrak{p}_{b_1}}(a_1)}{2}= \frac{e(\mathfrak{p}_{b_1} |\mathfrak{p}) v_{\mathfrak{p}}(a_1)}{2},$$ where $e(\mathfrak{p}_{b_1} |\mathfrak{p})$ is the index of ramification of $\mathfrak{p}_{b_1}$ over $K(b_1)$, whence $e(\mathfrak{p}_{b_1} |\mathfrak{p})=1$ or $2$. As a consequence, $$(v_{\mathfrak{p}_{b_1}}(b_1), 3)=( v_\mathfrak{p} (a_1), 3)=1,$$ and $\mathfrak{p}_{b_1}$ is fully ramified in $L(b_1)$, so that $\mathfrak{p}$ too must be fully ramified in $L$. Suppose that there exists a generator $w$ such that $w^3 +a_1 w +a_1 ^2 =0$, $v_\mathfrak{p} (a_1) \geq 0$. Then over $K(b_1)$, where $b_1^2 =-a_1$, we have a generator $z$ of $L(b_1)$ such that $z^3-z +b_1$ and $$v_{\mathfrak{p}_{b_1}}(b_1)= \frac{e(\mathfrak{p}_{b_1} |\mathfrak{p}) v_{\mathfrak{p}}(a_1)}{2}\geq 0.$$ Thus $\mathfrak{p}_b$ is unramified in $L(b)$, so that $\mathfrak{p}$ cannot be fully ramified in $L$, since the ramification index is multiplicative in towers. The theorem then follows by Lemma \[char3localstandardform1\]. If $L/K$ is Galois, then the ramified places are all fully ramified. 2. If $L/K$ is not Galois and a ramified place $\mathfrak{p}$ is not fully ramified in $L/K$ its index of ramification is $2$. Moreover, when $\mathfrak{p}$ is not fully ramified we know by $(1)$ and Lemma \[char3localstandardform1\] that there is $w \in K$ such that $v_\mathfrak{p}(\alpha ) \geq 0$ with $$\alpha = \frac{(ja^2 + (w^3 + a w) )^2}{a^3}.$$ The Galois closure of $L/K$ is $L(b)/K$ where $b^2 = -\alpha$ since $L(b)/K(b)$ is Galois then all the ramified places in $L(b)/ K(b)$ are fully ramified and the only possible way that the index of ramification of a place is $2$ in $L/K$ is that this place is ramified in $K(b)/K$ since the index of ramification is multiplicative in tower. That is $$(v_{\mathfrak{p}}(a),2)= (v_{\mathfrak{p}}(\alpha) , 2)=1.$$ Since the Galois closure has also as generator $c$ such that $c^2 = -a$. Therefore, $v_{\mathfrak{p}} (\alpha )>0$ and $(v_{\mathfrak{p}}(a),2)=1$. 
Conversely, suppose there is $w \in K$ such that $v_\mathfrak{p}(\alpha ) > 0$ with $$\alpha = \frac{(ja^2 + (w^3 + a w) )^2}{a^3}$$ and $(v_{\mathfrak{p}}(a),2)=1$. If $v_\mathfrak{p}(\alpha) > 0$, then $L(b)/K(b)$ is an Artin-Schreier extension by [@Con Theorem 2.3], and there is an Artin-Schreier generator $w=\frac{z}{b}$ such that $w^3 - w + b=0$ and $v_{\mathfrak{p}_b} ( b)>0$, where $\mathfrak{p}_b$ is a place of $K(d)$ above $\mathfrak{p}$. Thus $b \equiv 0 \mod \mathfrak{p}_b$, and the polynomial $$X^3 -X +b \equiv X^3 - X \mod \mathfrak{p}_b$$ factors as $X(X-1)(X+1)$ modulo $\mathfrak{p}_b$. By Kummer’s theorem ([@Sti Theorem 3.3.7]), we then have that $\mathfrak{p}_b$ is completely split in $L(b)$. As $\mathfrak{p}_b$ is completely split in $L(b )$, we have that $\mathfrak{p}$ cannot be inert in $L$. Indeed, if $\mathfrak{p}$ were inert in $L$, then there are at most two places above $\mathfrak{p}$ in $L(b)$, in contradiction with the proven fact that $\mathfrak{p}_b$ is completely split in $L(b)$. By [@Neu p.55], $\mathfrak{p}$ splits completely in $L$ if, and only if, $\mathfrak{p}$ is completely split in $K(b)$ and $\mathfrak{p}_b$ is completely split in $L(b)$. Also, since by the previous argument $\mathfrak{p}$ cannot be inert in $L$, we have that either $$\mathfrak{p}\mathcal{O}_{L,x} = \mathfrak{P}_1 \mathfrak{P}_2\qquad\text{or}\qquad\mathfrak{p}\mathcal{O}_{L,x} = \mathfrak{P}_1 \mathfrak{P}_2^2,$$ where $\mathfrak{P}_i$, $i=1,2$ are places of $L$ above $\mathfrak{p}$. Let $\mathfrak{P}_b$ be a place of $L(b)$ above $\mathfrak{p}$. When $\mathfrak{p}$ is ramified in $K(b)$, then the index of ramification at any place above $\mathfrak{p}$ in $L(b)$ is divisible by $2$, since $L(b)/K$ is Galois by [@Con Theorem 2.3], whence $\mathfrak{p}\mathcal{O}_{L,x} = \mathfrak{P}_1 \mathfrak{P}_2^2$. Riemann-Hurwitz formulae ------------------------ Using the extension data, it is possible to give the Riemann-Hurwitz theorem for each of our forms in [@MWcubic3 Corollary 1.2]. These depend only on information from a single parameter. $X^3 -a$, $a\in K$, $p \neq 3$ ------------------------------ \[diffexpqneq1mod3\] Let $p \neq 3$. Let $L/K$ be a purely cubic extension and $y$ a primitive element of $L$ with minimal polynomial $f(X) = X^3-a$. Let $\mathfrak{p}$ be a place of $K$ and $\mathfrak{P}$ a place of $L$ over $\mathfrak{p}$. Then the following are true: 1. $d(\mathfrak{P}|\mathfrak{p}) =0$ if, and only if, $e(\mathfrak{P}|\mathfrak{p})=1$. 2. $d(\mathfrak{P}|\mathfrak{p}) =2$, otherwise. That is, $e(\mathfrak{P}|\mathfrak{p})=3$, which by Theorem \[RPC\] is equivalent to $(v_{\mathfrak{p}}(a), 3)=1$. By Theorem \[RPC\], either $e(\mathfrak{P}|\mathfrak{p})=1$ or $e(\mathfrak{P}|\mathfrak{p})=3$. 1. As the constant field $\mathbb{F}_q$ of $K$ is perfect, all residue field extensions in $L/K$ are automatically separable. The result then follows from [@Vil Theorem 5.6.3]. 2. If $e(\mathfrak{P}|\mathfrak{p})=3$, then as $p \nmid 3$, it follows again from \[Theorem 5.6.3, Ibid.\] that $d(\mathfrak{P}|\mathfrak{p}) = e(\mathfrak{P}|\mathfrak{p}) - 1 = 2$. We thus find the Riemann-Hurwitz formula as follows for purely cubic extensions when the characteristic is not equal to 3, which resembles that of Kummer extensions, but no assumption is made that the extension is Galois. \[RHPC\] Let $p \neq 3$. Let $L/K$ be a purely cubic geometric extension, and $y$ a primitive element of $L$ with minimal polynomial $f(X) = X^3-a$. 
Then the genus $g_L$ of $L$ is given according to the formula $$g_L = 3g_K - 2 + \sum_{\substack{ ( v_\mathfrak{p}(a),3) = 1}} d_{K}(\mathfrak{p}).$$ This follows from Lemma \[diffexpqneq1mod3\], [@Vil Theorem 9.4.2], and the fundamental identity $\sum e_i f_i = [L:K] = 3$. $X^3 -3X -a$, $a\in K$, $p\neq 3$ --------------------------------- \[diffexpqneq1mod3\] Let $p \neq 3$. Let $L/K$ be an impurely cubic extension and $y$ a primitive element of $L$ with minimal polynomial $f(X) = X^3-3X-a$. Let $\mathfrak{p}$ be a place of $K$ and $\mathfrak{P}$ a place of $L$ over $\mathfrak{p}$. Let $\Delta=-27(a^2-4)$ be the discriminant of $f(X)$ and $r \in \overline{K}$ a root of the quadratic resolvent $R(X)= X^2 +3aX + ( -27 + 9 a^2)$ of $f(X)$. Then the following are true: 1. $d(\mathfrak{P}|\mathfrak{p}) =0$ if, and only if, $e(\mathfrak{P}|\mathfrak{p})=1$. 2. If $e(\mathfrak{P}|\mathfrak{p})=3$, which by Theorem \[ramification\] is equivalent to $v_\mathfrak{p}(a) < 0$ and $(v_{\mathfrak{p}}(a), 3)=1$, then $d(\mathfrak{P}|\mathfrak{p}) =2$. 3. If $e(\mathfrak{P}|\mathfrak{p})=2$, 1. If $p \neq 2$, by Theorem \[ramification\], this occurs precisely when $\Delta$ is not a square in $K$, $a\equiv \pm 2 \mod \mathfrak{p}$, $( v_{\mathfrak{p}}(\Delta ), 2)=1$. In this case, $2 \;| \;v_{\mathfrak{P}}(\Delta )$ and $d(\mathfrak{P}|\mathfrak{p})=1$. 2. If $p = 2$, by Theorem \[ramification\], this occurs when $r\notin K$, $a\equiv 0 \mod \mathfrak{p}$, there is $w_\mathfrak{p} \in K$ such that $v_{\mathfrak{p}}(\left( \frac{1}{a^2}+1 -w_\mathfrak{p}^2+w_\mathfrak{p} \right) , 2)=1\quad\text{and}\quad v_{\mathfrak{p}}\left( \frac{1}{a^2}+1 -w_\mathfrak{p}^2+w_\mathfrak{p} \right)<0$. Also, in this case, there exists $\eta_\mathfrak{P} \in L$ such that $v_{\mathfrak{P}}\left( \frac{1}{a^2}+1 -\eta_\mathfrak{P} ^2+\eta_\mathfrak{P} \right)\geq 0,$ and we have for this $\mathfrak{P}$ that $$d(\mathfrak{P}|\mathfrak{p})=-v_{\mathfrak{p}}\left( \frac{1}{a^2} +1 -w_\mathfrak{p}^2+w_\mathfrak{p} \right)+1 .$$ Let $\mathfrak{p}$ be a place of $K$, $\mathfrak{P}_r$ a place of $L(r)$ above $\mathfrak{p}$, $\mathfrak{P}=\mathfrak{P}_r\cap L$, and $\mathfrak{p}_r=\mathfrak{P}_r\cap K(r)$. 1. As the constant field $\mathbb{F}_q$ of $K$ is perfect, all residue field extensions in $L/F$ are automatically separable. The result then follows from [@Vil Theorem 5.6.3]. 2. If $e(\mathfrak{P}|\mathfrak{p})=3$, then as $p \nmid 3$, it follows again from \[Theorem 5.6.3, Ibid.\] that $d(\mathfrak{P}|\mathfrak{p}) = e(\mathfrak{P}|\mathfrak{p}) - 1 = 2$. 3. When $e(\mathfrak{P}|\mathfrak{p})=2$, 1. if $p\neq 2$, then by \[Theorem 5.6.3, Ibid.\], $d(\mathfrak{P}|\mathfrak{p}) = e(\mathfrak{P}|\mathfrak{p}) - 1 = 1$. 2. if $p= 2$, then we work on the tower $L(r)/K(r)/ K$. If $e(\mathfrak{P}|\mathfrak{p})=2$, then $e(\mathfrak{p}_r|\mathfrak{p})=2$, $e(\mathfrak{P}_r|\mathfrak{p}_r)=1$ and $e(\mathfrak{P}_r|\mathfrak{P})=1$. As $p=2$, the extension $K(r)/K$ is Artin-Schreier and is generated by an element $\alpha$ such that $\alpha^2 - \alpha = \frac{1}{a^2}+1$. 
By Artin-Schreier theory (see [@Sti Theorem 3.7.8]), as $ e(\mathfrak{p}_r|\mathfrak{p})=2$, there exists an element $w_\mathfrak{p} \in K$ such that $$(v_{\mathfrak{p}}\left( \frac{1}{a^2}+1 -w_\mathfrak{p}^2+w_\mathfrak{p} \right) , 2)=1\qquad\text{and}\qquad v_{\mathfrak{p}}\left( \frac{1}{a^2}+1 -w_\mathfrak{p}^2+w_\mathfrak{p} \right)<0.$$ In addition, since $e(\mathfrak{P}_r|\mathfrak{P})=1$, there exists $\eta_\mathfrak{P} \in L$ such that $$v_{\mathfrak{P}}\left( \frac{1}{a^2}+1 -\eta_\mathfrak{P} ^2+\eta_\mathfrak{P} \right)\geq 0.$$ By Artin-Schreier theory (see [@Sti Theorem 3.7.8]), we obtain $$d(\mathfrak{p}_r | \mathfrak{p})= -v_{\mathfrak{p}}\left( \frac{1}{a^2}+1 -w_\mathfrak{p}^2+w_\mathfrak{p} \right)+1.$$ By [@Vil Theorem 5.7.15], we then find by equating differential exponents in the towers $L(r)/K(r)/K$ and $L(r)/L/K$ that $$d( \mathfrak{P}_r| \mathfrak{p})= d( \mathfrak{P}_r| \mathfrak{P})+ e( \mathfrak{P}_r| \mathfrak{P}) d( \mathfrak{P}| \mathfrak{p})= d( \mathfrak{P}_r| \mathfrak{p}_r)+ e( \mathfrak{P}_r| \mathfrak{p}_r) d( \mathfrak{p}_r| \mathfrak{p}).$$ This implies that $$d( \mathfrak{P}| \mathfrak{p})= d( \mathfrak{p}_r| \mathfrak{p})= -v_{\mathfrak{p}}\left( \frac{1}{a^2}+1 -w_\mathfrak{p}^2+w_\mathfrak{p} \right)+1,$$ as $e(\mathfrak{P}_r|\mathfrak{P})=e(\mathfrak{P}_r|\mathfrak{p}_r)=1$ implies $d( \mathfrak{P}_r| \mathfrak{P})=d(\mathfrak{P}_r|\mathfrak{p}_r)=0$. We are now able to state and prove the Riemann-Hurwitz formula for this cubic form. \[RH\] Let $p \neq 3$. Let $L/K$ be a cubic geometric extension and $y$ a primitive element of $L$ with minimal polynomial $f(X) = X^3-3X-a$. Let $\Delta=-27(a^2-4)$ be the discriminant of $f(X)$ and $r$ a root of the quadratic resolvent $R(X)= X^2 +3aX + ( -27 + 9 a^2)$ of the cubic polynomial $X^3-3X-a$ in $\overline{K}$.Then the genus $g_L$ of $L$ is given according to the formula 1. If $p \neq 2$, then $$g_L = 3g_K - 2 +\frac{1}{2}\sum_{\mathfrak{p}\in \mathcal{S}} d_{K}(\mathfrak{p})+ \sum_{\substack{ v_\mathfrak{p}(a)<0\\ ( v_\mathfrak{p}(a),3) = 1}} d_{K}(\mathfrak{p}).$$ where $\mathcal{S}$ is the set of places of $K$ such that both $a\equiv \pm 2 \mod \mathfrak{p}$ and $ v_{\mathfrak{p}}(\Delta, 2)=1$. Moreover, $\Delta$ is a square in $K$ up to a unit if, and only if, the set $\mathcal{S}$ is empty. 2. If $p = 2$, then $$g_L = 3g_K - 2 + \frac{1}{2} \sum_{\mathfrak{p}\in \mathcal{S}} [ -v_{\mathfrak{p}}\left( \frac{1}{a}+1 -w_\mathfrak{p}^2+w_\mathfrak{p} \right)+1] d_{K}(\mathfrak{p})+ \sum_{\substack{ v_\mathfrak{p}(a)<0\\ ( v_\mathfrak{p}(a),3) = 1}} d_{K}(\mathfrak{p}),$$ where $\mathcal{S}$ is the set of places of $K$ such that both $a \equiv 0 \mod \mathfrak{p}$ and there exists $w_\mathfrak{p} \in K$ such that $v_{\mathfrak{p}}\left( \frac{1}{a}+1 -w_\mathfrak{p}^2+w_\mathfrak{p} \right)<0$ and $(v_{\mathfrak{p}}\left( \frac{1}{a}+1 -w_\mathfrak{p}^2+w_\mathfrak{p} \right) , 2)=1$. Moreover, if $r \in K$ (hence the extension $L/K$ is Galois), then the set $\mathcal{S}$ is empty. <!-- --> 1. By [@Vil Theorem 9.4.2], the term associated with a place $\mathfrak{P}$ of $L$ in the different $\mathfrak{D}_{L/F}$ contributes $\frac{1}{2} d_L(\mathfrak{P})^{d(\mathfrak{P}|\mathfrak{p})}$ to the genus of $L$, where $\mathfrak{p}$ is the place of $K$ below $\mathfrak{P}$, $d_L(\mathfrak{P})$ is the degree of the place $\mathfrak{P}$, and $d(\mathfrak{P}|\mathfrak{p})$ is the differential exponent of $\mathfrak{P}|\mathfrak{p}$. 
By the fundamental identity $\sum_i e_i f_i = [L:K] = 3$ for ramification indices $e_i$ and inertia degrees $f_i$ of all places of $L$ above $\mathfrak{p}$, we always have that $f_i=1$ whenever $\mathfrak{p}$ ramifies in $L$ (fully or partially). Thus from Lemma \[diffexpqneq1mod3\], it follows that $d(\mathfrak{P}|\mathfrak{p}) = 2$ if $\mathfrak{p}$ is fully ramified, whereas $d(\mathfrak{P}|\mathfrak{p}) = 1$ if $\mathfrak{p}$ is partially ramified. The result then follows by reading off \[Theorem 9.4.2, Ibid.\] and using the conditions of Lemma \[diffexpqneq1mod3\]. 2. This follows in a manner similar to part (1) of this theorem, via Lemma \[diffexpqneq1mod3\] for $p=2$. We obtain directly the following corollary when the extension $L/K$ is Galois. \[RHGalois\] Let $p \neq 3$. Let $L/K$ be a Galois cubic geometric extension and $y$ a primitive element of $L$ with minimal polynomial $f(X) = X^3-3X-a$. Then the genus $g_L$ of $L$ is given according to the formula $$g_L = 3g_K - 2 + \sum_{\substack{v_\mathfrak{p}(a)<0\\ ( v_\mathfrak{p}(a),3) = 1}} d_{K}(\mathfrak{p}).$$ $X^3 +aX +a^2$, $a\in K$, $p=3$ ------------------------------- \[char3diffexp\] Suppose that $p = 3$. Let $L/K$ be a separable cubic extension and $y$ a primitive element with minimal polynomial $X^3 + aX + a^2$. Let $\mathfrak{p}$ be a place of $K$ and $\mathfrak{P}$ a place of $L$ above $\mathfrak{p}$. 1. $d(\mathfrak{P}|\mathfrak{p}) =0$ if, and only if, $e(\mathfrak{P}|\mathfrak{p})=1$. 2. when $e(\mathfrak{P}|\mathfrak{p})=3$, by Theorem \[char3localstandardform\], there is $w_\mathfrak{p} \in K$ such that $v_\mathfrak{p}(\alpha_\mathfrak{p} ) < 0$ and $(v_\mathfrak{p}(\alpha_\mathfrak{p} ),3)=1$ with $$\alpha_\mathfrak{p} = \frac{(ja^2 + (w_\mathfrak{p} ^3 + a w_\mathfrak{p} ) )^2}{a^3}.$$ Then $d ( \mathfrak{P}| \mathfrak{p}) = -v_{\mathfrak{p}}(\alpha_\mathfrak{p} )+2$. 3. $d(\mathfrak{P}|\mathfrak{p}) = 1$ whenever $e(\mathfrak{P}|\mathfrak{p})=2$. Moreover, by Lemma \[char3localstandardform\], when $e(\mathfrak{P}|\mathfrak{p})=2$, there is generator $z_\mathfrak{p} $ such that $z_\mathfrak{p} ^3 +c_\mathfrak{p} z_\mathfrak{p} +c _\mathfrak{p} ^2 =0$ and $v_\mathfrak{p}(c_\mathfrak{p} ) \geq 0$ and $(v_\mathfrak{p} (c_\mathfrak{p}),2)=1$. Let $b \in \overline{K}$ such that $b^2 = -a$, $\mathfrak{p}$ be a place of $K$, $\mathfrak{P}_b$ be a place of $L(b)$ above $\mathfrak{p}$, $\mathfrak{p}_b=\mathfrak{P}_b\cap K(b)$, $\mathfrak{P}= \mathfrak{P}_b\cap L$. 1. This is an immediate consequence of [@Vil Theorem 5.6.3]. 2. Suppose that $\mathfrak{p}$ is ramified in $L$, whence $\mathfrak{p}_b$ is ramified in $L(b)$. Moreover, by Theorem \[char3localstandardform\], there exists $w_\mathfrak{p} \in K$ such that $v_\mathfrak{p} (\alpha_\mathfrak{p}) <0$ and $(v_\mathfrak{p} (\alpha_\mathfrak{p}),3)=1$, where $$\alpha_\mathfrak{p} = \frac{(ja^2 + (w_\mathfrak{p}^3 + a w_\mathfrak{p}) )^2}{a^3},$$ and furthermore, there exists a generator $z_\mathfrak{p}$ of $L$ such that $z_\mathfrak{p}^3 + \alpha_\mathfrak{p} z_\mathfrak{p} + \alpha_\mathfrak{p}^2=0$. Again by \[Theorem 5.6.3, Ibid.\], the differential exponent $d(\mathfrak{p}_b | \mathfrak{p})=d(\mathfrak{P}_b | \mathfrak{P})$ of $\mathfrak{p}$ over $K(b)$ (resp. $\mathfrak{P}$ over $L(b)$) is equal to 1. $1$ if $\mathfrak{p}$ is ramified in $K(b)$, whence $e(\mathfrak{p}_b | \mathfrak{p})=e(\mathfrak{P}_b | \mathfrak{P})=2$, and 2. $0$ if $\mathfrak{p}$ is unramified in $K(b)$, whence $e(\mathfrak{p}_b | \mathfrak{p})=e(\mathfrak{P}_b | \mathfrak{P})=1$. 
By [@Con Theorem 2.3], $L(b)/K(b)$ is Galois and $-\alpha_\mathfrak{p}$ is a square in $K(b)$. We write $-\alpha_\mathfrak{p} = \beta_\mathfrak{p}^2$. Moreover, $w_\mathfrak{p}= \frac{z_\mathfrak{p}}{\beta_\mathfrak{p}}$ and $w_\mathfrak{p}^3 - w_\mathfrak{p}- \beta_\mathfrak{p}=0$. Moreover, $$v_{\mathfrak{p}_b}(\beta_\mathfrak{p})= \frac{ v_{\mathfrak{p}_b}(\alpha_\mathfrak{p})}{2}=\frac{ e(\mathfrak{p}_b | \mathfrak{p})v_{\mathfrak{p}}(\alpha_\mathfrak{p})}{2}$$ with $e(\mathfrak{p}_b | \mathfrak{p})=2$ or $1$, depending on whether $\mathfrak{p}$ is ramified or not in $K(b)$. Also, $v_{\mathfrak{p}_b}(\beta_\mathfrak{p})=v_{\mathfrak{p}}(\alpha_\mathfrak{p})$ when $\mathfrak{p}$ is ramified in $K(b)$, whereas $v_{\mathfrak{p}_b}(\beta_\mathfrak{p})= \frac{ v_{\mathfrak{p}}(\alpha_\mathfrak{p})}{2} $ when $\mathfrak{p}$ is unramified in $K(b)$ (note that in this case $2|v_{\mathfrak{p}}(\alpha_\mathfrak{p})$). Thus $v_{\mathfrak{p}_b}(\beta_\mathfrak{p})<0$ and $(v_{\mathfrak{p}_b}(\beta_\mathfrak{p}),3)=1$ and by [@Sti Theorem 3.7.8], we also have that the differential exponent $d(\mathfrak{P}_b | \mathfrak{p}_b)$ of $\mathfrak{p}_b$ in $L(b)$ satisfies $$d(\mathfrak{P}_b | \mathfrak{p}_b)=2 (-v_{\mathfrak{p}}(\beta_\mathfrak{p})+1).$$ By [@Vil Theorem 5.7.15], the differential exponent of $\mathfrak{p}$ in $L(b)$ satisfies $$d(\mathfrak{P}_b | \mathfrak{p})= d(\mathfrak{P}_b | \mathfrak{p}_b) + e(\mathfrak{P}_b | \mathfrak{p}_b) d(\mathfrak{p}_b | \mathfrak{p})= d(\mathfrak{P}_b | \mathfrak{P}) + e(\mathfrak{P}_b | \mathfrak{P}) d(\mathfrak{P} | \mathfrak{p}).$$ Thus, 1. if $\mathfrak{p}$ is ramified in $K(b)=K(\beta_\mathfrak{p})$, that is, $(v_\mathfrak{p}(\alpha_\mathfrak{p}),2)=1$ by [@Sti Proposition 3.7.3], then $ 2 (-v_{\mathfrak{p}}(\alpha_\mathfrak{p})+1) + 3= 1 + 2d(\mathfrak{P} | \mathfrak{p})$ and $$d(\mathfrak{P} | \mathfrak{p})= -v_{\mathfrak{p}}(\alpha_\mathfrak{p})+2,$$ whereas 2. if $\mathfrak{p}$ is unramified in $K(b)$, that is, $2|v_\mathfrak{p}(\alpha_\mathfrak{p})$ again by [@Sti Proposition 3.7.3], then also $$d(\mathfrak{P} | \mathfrak{p})= 2 \left( -\frac{v_{\mathfrak{p}}(\alpha_\mathfrak{p})}{2}+1\right)= -v_{\mathfrak{p}}(\alpha_\mathfrak{p})+2.$$ 3. This is immediate from Theorem \[char3localstandardform\] and [@Vil Theorem 5.6.3], via application of the same method as in Lemma \[diffexpqneq1mod3\] $(3)$. Finally, we use this to conclude the Riemann-Hurwitz formula for cubic extensions in characteristic 3. \[char3RH\] Suppose that $p = 3$. Let $L/K$ be a separable cubic extension and $y$ a primitive element with minimal polynomial $X^3 + aX + a^2$. Then the genus $g_L$ of $L$ is given according to the formula $$g_L = 3g_K - 2 + \frac{1}{2} \sum_{\mathfrak{p} \in S}\left(-v_{\mathfrak{p}}(\alpha_\mathfrak{p} )+2 \right) d_{K}(\mathfrak{p}) + \frac{1}{2} \sum_{\mathfrak{p} \in T} d_{K}(\mathfrak{p}),$$ where 1. $S$ is the set of places of $K$ for which there exists $w_\mathfrak{p} \in K$ such that $v_\mathfrak{p}(\alpha_\mathfrak{p} ) < 0$, $(v_\mathfrak{p}(\alpha_\mathfrak{p} ),3)=1$ with $$\alpha_\mathfrak{p} = \frac{(ja^2 + (w_\mathfrak{p} ^3 + a w_\mathfrak{p} ) )^2}{a^3},$$ and 2. $T$ is the set of places of $K$ for which there is generator $z_\mathfrak{p} $ such that $z_\mathfrak{p} ^3 +c_\mathfrak{p} z_\mathfrak{p} +c_\mathfrak{p} ^2 =0$, $v_\mathfrak{p}(c_\mathfrak{p} ) \geq 0$ and $(v_\mathfrak{p} (c_\mathfrak{p}), 2)=1$. This follows from Lemma \[char3diffexp\], [@Vil Theorem 9.4.2], and the fundamental identity $\sum e_i f_i = [L:K] = 3$. 
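Before stating the general algorithms of the appendix, the purely cubic case of these formulae can be made concrete with a small computational sketch. The snippet below is ours and not part of the original algorithms; it is restricted to the rational function field $K=\mathbb{F}_p(x)$ with $p$ prime, $p\neq 3$ (since that is what SymPy factors over directly), and it evaluates the genus formula of Corollary \[RHPC\]: the ramified finite places are the irreducible factors of $a$ whose multiplicity is coprime to $3$, and the place at infinity ramifies exactly when $3\nmid \deg a$. This is the computation carried out in general by Algorithm 2 below.

```python
from sympy import symbols, factor_list, degree

def genus_purely_cubic(num, den, x, p):
    # Genus of L = F_p(x)(y) with y^3 = a, a = num/den in F_p(x), p prime, p != 3,
    # assuming y^3 - a is irreducible.  By Corollary [RHPC] (with g_K = 0), the
    # ramified finite places are the irreducible factors of num and den whose
    # multiplicity is coprime to 3; the place at infinity (of degree 1) ramifies
    # exactly when 3 does not divide deg(num) - deg(den).
    ram = 0
    for poly in (num, den):
        _, factors = factor_list(poly, x, modulus=p)
        for irr, mult in factors:
            if mult % 3 != 0:
                ram += degree(irr, x)
    if (degree(num, x) - degree(den, x)) % 3 != 0:
        ram += 1
    return -2 + ram

x = symbols('x')
# a = x / (x^2 + 1) over F_5: the finite places x, x + 2, x + 3 and the place at
# infinity all ramify, so the genus is -2 + 4 = 2.
print(genus_purely_cubic(x, x**2 + 1, x, 5))
```

The same bookkeeping, with the places coming from the discriminant added, underlies the formulas of Theorem \[RH\] and Corollary \[char3RH\].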
Appendix: Algorithm for computing the genus of a cubic equation over $\mathbb{F}_q(x)$ {#appendix-algorithm-for-computing-the-genus-of-a-cubic-equation-over-mathbbf_qx .unnumbered} ====================================================================================== Throughout this section, $K = \mathbb{F}_q(x)$ and $L/K$ denotes a cubic extension. ### Transforming any general cubic polynomial into our forms In this section, we keep all the previous notations. We are given a cubic extension with a generator $y$, whose minimal polynomial is of the form $X^3 + e X^2 + f X + g$. We will first transform this generator into one of our forms.\ \ [**Algorithm 1: Takes $x^3 + e x^2 + f x + g$ and $p$**]{} 1. [**Case $p\neq 3$,** ]{} 1. if $3eg= f^2$,\ [**RETURN**]{} $x^3 -a$ with $a = \frac{27g^3}{-27g^2+f^3}$ and $z= \frac{3g y}{fy + 3g}$. 2. otherwise,\ [**RETURN**]{} $x^3 -3x -a$ with $a=-2-\frac{ ( 27 g^2 -9efg +2f^3)^2}{(3ge-f^2)^3}$ and $z= \frac{-(6efg-f^3-27g^2)y+3g(3eg-f^2)}{(3eg-f^2)(fy+3g)}$. [*Note* that if $27 g^2 -9efg +2f^3=0$, the cubic polynomial is reducible, which violates the assumption that it must be irreducible. For the same reason, for instance, $g$ cannot be $0$. Note that this case is also not necessarily disjoint from (a); see also [@MWcubic3 Theorem 2.1].]{} 2. [**Case $p=3$**]{} 1. if $e=f=0$,\ [**RETURN**]{} $x^3 -a$, with $a =g$ ; 2. otherwise,\ [**RETURN**]{} $x^3 + a x + a^2$ with 1. if $e=0$ and $f \neq 0$,\ $a = \frac{g^2}{f^3}$ and $z= \frac{g}{f^2} y$; 2. if $e\neq 0$ and $f=0$,\ $a = \frac{g}{e^3}$ and $z= \frac{g}{e^2x}$; 3. otherwise,\ $a = \frac{-f^2e^2+ge^3+f^3}{e^6}$ and $z= \frac{-f^2e^2+ge^3+f^3}{e^4(ey -f)}$.\ [*Note* that if $-f^2e^2+ge^3+f^3=0$, the cubic polynomial is again reducible, contrary to the assumption that it must be an irreducible polynomial.]{} ### $p\neq 3$ [**[Case 1: The Algorithm 1 returned a form $X^3 -a$. ]{}** ]{} The next algorithm returns the list of ramified places, indices of ramification, differential exponents, and the value of the genus for purely cubic extensions.\ [**Algorithm 2: takes $x^3-a $ and $p\neq 3$**]{} Use the [factorization Algorithm]{} to factor $a$ into $$\frac{ \prod_{i=1}^s p_i(x)^{e_i} }{ \prod_{i=1}^t q_i(x)^{f_i} },$$ with $p_i(x), q_j(x)$ distinct irreducible polynomials and $e_i$ and $f_j$ natural numbers, $i \in \{1, \cdots, s\}$ and $j \in \{1, \cdots , t\}$. 
[**RETURN**]{} - [**List of triples (ramified place, index of ramification, different exponent):**]{} - if $ 3\mid \left( \sum_{i=1}^s e_i \deg (p_i (x)) - \sum_{i=1}^i f_i \deg (q_i (x)) \right)$ $$\{ (p_i(x), 3, 2), (q_j(x), 3, 2), i \in \{ 1, \cdots , s\} \ with \ 3 \nmid e_i , j \in \{ 1, \cdots , t\} \ with \ 3 \nmid f_i \}$$ - otherwise, $$\{ (p_i(x), 3, 2), (q_j(x), 3, 2), (\infty, 3, 2), i \in \{ 1, \cdots , s\} \ with \ 3 \nmid e_i , j \in \{ 1, \cdots , t\} \ with \ 3 \nmid f_i \}$$ - [**Genus of the extension**]{} - if $ 3\mid \left( \sum_{i=1}^s e_i \deg (p_i (x)) - \sum_{i=1}^i f_i \deg (q_i (x)) \right)$, [(i.e., the place at infinity is unramified,)]{} then $$g = -2 + \sum_{ \text{ for \ $i$ \ such \ that \ $3 \nmid e_i$}} \deg( p_i(x) ) + \sum_{ \text{ for \ $i$ \ such \ that \ $3 \nmid f_i$}} \deg( q_i(x))$$ - otherwise, $$g = -1 + \sum_{ \text{ for \ $i$ \ such \ that \ $3 \nmid e_i$}} \deg( p_i(x) ) + \sum_{ \text{ for \ $i$ \ such \ that \ $3 \nmid f_i$}} \deg( q_i(x)) .$$\ [ one could also easily check such an extension is Galois indeed it suffices to check if $q \equiv \ \pm 1 \ mod \ 3$ as it is Galois if and only if $\mathbb{F}_q (x)$ contains a third root of unity. The later being equivalent to $q \equiv \ 1 \ mod \ 3$ (see [@MWcubic3 Theorem 4.2]).\ ]{} [Application: finding integral basis for a purely cubic extension]{} The statement used for this Algorithm is done in [@MadMad Theorem 3], and finds explicitly an integral basis for any purely cubic extension.\ [**Algorithm finding integral basis takes $y$ (generator of $L/K$ with minimal polynomial), $X^3-a $ and $p\neq 3$**]{} Use the [factorization Algorithm]{} to factor $a$ into $$\frac{ \prod_{i=1}^s p_i(x)^{e_i} }{ \prod_{i=1}^t q_i(x)^{f_i} },$$ with $p_i(x), q_j(x)$ distinct irreducible polynomials and $e_i$ and $f_j$ natural numbers, $i \in \{1, \cdots, s\}$ and $j \in \{1, \cdots , t\}$. Use [Euclidean Algorithm]{} to find $\lambda_i$, $\lambda_i'$ integers and $r_i, \ r_i' \in \{ 0 , 1, 2\}$ such that $$e_i = 3 \lambda_i + r_i$$ and $$f_i = -3 \lambda_i' - r_i'$$ [**RETURN**]{} [**Integral basis for $L/K$:**]{} $$\{ \theta_0, \theta_1 , \theta_2\}$$ where $$\theta_j = \frac{y^j}{\prod_{i=1}^s p_i(x)^{s_{ij}}\prod_{i=1}^t q_i(x)^{s_{ij}'}}$$ with $j = 0, 1, 2$, $s_{ij}= \big[ \frac{j r_i}{3}\big] + j \lambda_i$ is the greatest integer not exceeding $ \frac{-j r_i}{3}$ and $s_{ij}'= \big[ \frac{j r_i'}{3}\big] + j \lambda_i'$ is the greatest integer not exceeding $ \frac{-j r_i'}{3}$ .\ [**[Case 2: The Algorithm 1 returned a form $x^3 -3x-a$. ]{}**]{} Note that a cubic extension defined by an irreducible polynomial of the form $x^3-3x-a$ is not necessarily impurely cubic, see [@MWcubic3 Theorem 2.1]. If the reader wishes to first decide if the extension is impurely cubic or not, and if it is not, transform it to a purely cubic extension and go back to Case 1, then s/he could use the following algorithm. Otherwise, the reader can also go directly to Algorithm 3.\ [**Optional algorithm: takes $x^3 -3x-a$ and $p\neq 3$**]{} 1. if $p \neq 2$ and 1. if $\Delta =a^2-4$ is not a square (one can use for instance the [factorization algorithm]{} to test this),\ [**RETURN**]{} $L/K$ is impurely cubic. 2. 
$\Delta= a^2-4$ is a square (one can use for instance the [factorization algorithm]{} to test this) and determine $\sqrt{\Delta}$ such that $\sqrt{\Delta}^2 = \Delta$,\ [**RETURN**]{} $L/K$ is purely cubic, taking $c= \frac{ -b + \sqrt{\Delta}}{2a}$ or $c=\frac{ -b - \sqrt{\Delta}}{2a}$, $U= \frac{cY-1}{Y-c}$, $U^3 - c$ irreducible polynomial for $L/K$. 2. if $p=2$ and 1. if $X^2 +aX +1$ has no root in $\mathbb{F}_q(x)$ (testing this requires an algorithm permitting one to check for roots for quadratic polynomials over $\mathbb{F}_q(x)$ in characteristic $2$.)\ [**RETURN**]{} $L/K$ is not purely cubic. 2. if $X^2 + aX+1$ has a root in $\mathbb{F}_q(x)$, compute a root $c$ for this polynomial (this requires an algorithm finding root for quadratic polynomials over $\mathbb{F}_q(x)$ in characteristic $2$.)\ [**RETURN**]{} $L/K$ is purely cubic, $c$ a root of $X^2 + X+1/a$, $U= \frac{cY-1}{Y-c}$, and $U^3 =c$ irreducible polynomial for $L/K$.\ that one can also to check if an extension with minimal polynomial $x^3 -3x -a$ is a Galois extension, 1. [**when $p \neq 2$,**]{} one will only need to check if $-27(a^2-4)$ is a square or not in $\mathbb{F}_q(x)$, which is achievable with the [factorization algorithm]{}, for instance. Once one knows it is Galois, that is, when $-27(a^2-4)$ is a square, one writes $\delta = \sqrt{-27 (a^2 -4)}$, taking $b = -\frac{1}{2} + \frac{9}{\delta} + \frac{9a}{2\delta}$ or $b=-\frac{1}{2} - \frac{9}{\delta} + \frac{9a}{2\delta}$ and $$a = \frac{2 b^2 + 2 b -1}{ b^2 +b +1}$$ 2. [**when $p=2$,**]{} one will need to check if $R(x)= x^2 +a x + (1+a^2)$ (quadratic resolvent) has a root or not in $\mathbb{F}_q(x)$ (this would require an algorithm finding roots for quadratic polynomial in $\mathbb{F}_q(x)$ in characteristic $2$). Once one knows it is Galois, that is when this polynomial has a root in $\mathbb{F}_q(x)$, one computes a root for the quadratic resolvent. Call such a root $r$; taking $b= \frac{r}{a}$, one writes $$a = \frac{1}{ b^2 +b +1}$$\ The next algorithm returns the list of ramified places, indices of ramification, differential exponents, and the value of the genus for cubic extensions with minimal polynomial of the form $x^3 -3x -a$.\ [**Algorithm 3: takes $x^3-3x-a $ and $p\neq 3$**]{} Use the [factorization Algorithm]{} to factor $a$ into $$\frac{f(x) }{ \prod_{i=1}^t q_i(x)^{f_i} },$$ with $q_i(x)$ distinct irreducible polynomials, $f(x)$ polynomial, $( f(x), \prod_{i=1}^t q_i(x)^{f_i} ) =1$ and $f_i$ natural number, for $i \in \{ 1, \cdots , t\}$. 1. [**Case $p\neq 2$**]{}, Use the [factorization Algorithm]{} to factor $a^2 -4$ into $$a^2 -4= \frac{\prod_{i=1}^r r_i(x)^{g_i}}{ \prod_{i=1}^k s_i(x)^{h_i} },$$ with $r_i(x), s_j (x)$ distinct irreducible polynomials, $g_i$, $h_i$ natural numbers for $i \in \{ 1, \cdots, r\}$, $j \in \{ 1, \cdots , k\}$. [**RETURN**]{} - When $3 | deg (a^2 -4)$, 1. [**List of triples (ramified places, indices of ramification, differential exponents):** ]{} $$\begin{array}{lll} \{ (q_i(x), 3, 2), (r_j(x), 2, 1), (s_u(x) , 2, 1), i \in \{ 1, \cdots , t \} \ with \ 3 \nmid e_i , \\ j \in \{ 1, \cdots , r\} \ with \ 2 \nmid g_j, \ u \in \{ 1, \cdots , k\} \ with \ 2 \nmid h_u \} \end{array}$$ 2. 
[**Genus of the extension:**]{} $$\begin{aligned} g &= -2 + \frac{1}{2} \sum_{ \text{ for \ $i$ \ such \ that \ $2 \nmid g_i$}} \deg( r_i(x)) + \frac{1}{2} \sum_{ \text{ for \ $i$ \ such \ that \ $2 \nmid h_i$}} \deg( s_i(x)) \\ &\qquad + \sum_{ \text{ for \ $i$ \ such \ that \ $3 \nmid f_i$}} \deg( q_i(x)) \end{aligned}$$ - Otherwise, 1. [**List of triples (ramified places, indices of ramification, differential exponents):** ]{} $$\begin{array}{lll} \{ (q_i(x), 3, 2), (r_j(x), 2, 1), (s_u(x) , 2, 1), (\infty, 2, 1), i \in \{ 1, \cdots , t \} \ with \ 3 \nmid e_i , \\ j \in \{ 1, \cdots , r\} \ with \ 2 \nmid g_j, \ u \in \{ 1, \cdots , k\} \ with \ 2 \nmid h_u \}\end{array}$$ 2. [**Genus of the extension:**]{} $$\begin{aligned} g &= -3/2 + \frac{1}{2} \sum_{ \text{ for \ $i$ \ such \ that \ $2 \nmid g_i$}} \deg( r_i(x)) + \frac{1}{2} \sum_{ \text{ for \ $i$ \ such \ that \ $2 \nmid h_i$}} \deg( s_i(x)) \\ &\qquad + \sum_{ \text{ for \ $i$ \ such \ that \ $3 \nmid f_i$}} \deg( q_i(x)) \end{aligned}$$ 2. [**Case $p=2$**]{}, Do [**Algorithm Artin-Schreier**]{} below giving it the polynomial $x^2 - x -\frac{1}{a}-1$, it will return, in particular, $$b = \frac{ g(x)}{ \prod_{j=1}^s p_{i_j} (x)^{\alpha_{i_j}^t}}$$ with $g(x)$ polynomial, $(g(x) , \prod_{j=1}^s p_{i_j} (x)^{\alpha_{i_j}^t})=1$ and $2 \nmid \alpha_{i_j}^t$. [**RETURN**]{} - If $3 | \deg (b)$, - [**List of triples (ramified places, indices of ramification, differential exponents):** ]{} $$\begin{array}{lll} \{ (q_i(x), 3, 2), (p_{i_j}(x), 2, \alpha^t_{i_j}+1), i \in \{ 1, \cdots , t \} \ \text{with} \ 3 \nmid f_i , j \in \{ 1, \cdots , s \} \} \end{array}$$ - [**Genus of the extension:**]{} $$\begin{aligned} g = -2 + \frac{1}{2} \sum_{j=1}^s ( \alpha^t_{i_j}+1) \deg(p_{i_j}(x)) + \sum_{ \text{ for \ $i$ \ such \ that \ $3 \nmid e_i$}} \deg( q_i(x)) \end{aligned}$$\ - if $3 \nmid \deg(b)$, - [**List of triples (ramified places, indices of ramification, differential exponents):** ]{} $$\begin{aligned} \{ &(q_i(x), 3, 2), (p_{i_j}(x), 2, \alpha^t_{i_j}+1), (\infty, 2, deg(b)+1), \\& i \in \{ 1, \cdots , t \} \ \text{with} \ 3 \nmid e_i , j \in \{ 1, \cdots , s \} \} \end{aligned}$$ - [**Genus of the extension;**]{} $$\begin{aligned} g = -\frac{3}{2} - \frac{1}{2} \deg(b) + \frac{1}{2} \sum_{i=1}^s ( \alpha^t_{i_j}+1) \deg(p_{i_j}(x)) + \sum_{ \text{ for \ $i$ \ such \ that \ $3 \nmid e_i$}} \deg( q_i(x)) \end{aligned}$$\ The following algorithm is an intermediate algorithm used in the previous algorithm that computes ramified places for a Artin-Schreier extension. More precisely, given an Artin-Schreier extension of prime degree $r$, that is, a extension with a generator $y$ such that its minimal polynomial is of the form $x^r -x -a$ called Artin-Schreier, this algorithm will find an Artin-Schreier generator $z$ such that its minimal polynomial if of the form $x^r -x -b$ where $b=a + \eta^r - \eta$ for some $\eta \in \mathbb{F}_q(x)$ and $$b = \frac{g(x) }{ \prod_{i=1}^k s_i(x)^{h_i} }$$ with $g(x)$ polynomial, $s_i(x)$ distinct irreducible polynomials, $h_i$ natural numbers with $r \nmid h_i$, $i \in \{1, \cdots, k\}$. This algorithm is a consequence of [@Vil Example 5.8.8]; we add it for completeness.\ [**Algorithm Artin-Schreier: takes $y$ (a generator for the Artin-Schreier extension such that its minimal polynomial is), $x^r-x-a $ and $r$ prime number** ]{} 1. 
Use the [factorization Algorithm]{} to factor $a$ into $$\frac{ f(x)}{ \prod_{i=1}^t p_i(x)^{\alpha_i} },$$ with $p_i(x)$ distinct irreducible polynomials, $f(x)$ polynomial and $( f(x), \prod_{i=1}^t p_i(x)^{\alpha_i})=1$, $\alpha_i$ natural numbers, $i \in \{ 1, \cdots t\}$.\ For each $i \in \{ 1, \cdots , t \}$, we denote $\mathfrak{p}_i$ to be the finite place associated to $p_i(x)$.\ Do , for each $i$ such that $ r | \alpha_i$. 2. Given $i$ as above, 3. Write $\alpha_i = r \lambda_i$, for some $\lambda_i$ natural number. Do the [algorithm computing the partial fraction decomposition]{} of $a$ as $$a = s(x)+ \sum_{i=1}^t \sum_{k=0}^{\alpha_i -1} \frac{ t_k^{(i)} (x) }{ p_i (x)^{\alpha_i -k}}$$ such that $$\deg ( t_k^{(i)} (x)) < \deg ( p_i (x)), \ for \ k = 0 , 1, \cdots , \alpha_i -1$$ Write $$a = \frac{ t_0^{(i)} (x) }{ p_i (x)^{ 2 \lambda_i }} + t_1 (x)$$ 4. Find $m(x) \in k[x]$ such that $$m(x)^r \equiv t^{(i)}_0 (x) \mod p_i(x)$$ [ this is possible since $k$ is a perfect field and $[ k[x]/ (p_i (x)) : k ]< \infty$, so that $M= k[x]/ (p_i (x)) $ is perfect, that is, $M^2= M$. The previous step can be achieved by finding a root $\delta$ for $p_i(x)$ in $\mathbb{F}_{q^{\deg(p_i(x))}}$ then computing $ t^{(i)}_0 (\delta )$ in $\mathbb{F}_{q^{\deg(p_i(x))}}$ and finding $\beta \in \mathbb{F}_{q^{\deg(p_i(x))}}$ such that $\beta^r = t^{(i)}_0 (\delta)$ in $\mathbb{F}_{q^{\deg(p_i(x))}}$. Then $m(x) = \beta + p_i(x)$ will satisfy the congruence above. ]{} 5. We set - $$z = y - \left(\frac{m(x)}{p_i(x)^{\lambda_i}} \right)$$ - $$b =a- \left(\frac{m(x)}{p_i(x)^{\lambda_i}} \right)^r -\left( \frac{m(x)}{p_i(x)^{\lambda_i}} \right).$$ We write $b = p_i(x)^{\alpha'_i} \frac{ g(x)}{ q(x)}$ with $q(x)$ polynomial and $p_i(x)$ irreducible polynomial and $(p_i(x) , q(x))=1$, $(g(x),q(x))=1$, $(p_i(x), g(x))=1$ and $\alpha'_i $ integer. 1. If $\alpha'_i \geq 0$; Change $i$ in [**STEP 2**]{} with $a=b$ and $y=z$, if there are no more $i$ to work with, taking $a=b$ and $y=z$ exit to [**STEP 3**]{}. 2. If $\alpha'_i < 0$ with $r \nmid \alpha'_i $; Change $i$ in [**STEP 2**]{} with $a=b$ and $y=z$, if there are no more $i$ to work with, taking $a=b$ and $y=z$ exit to [**STEP 3**]{}. 3. Otherwise, repeat [**Step 2**]{} with taking $a=b$ and $y=z$. that $$v_{\mathfrak{p}_i} ( a + w^r -w) \geq min \{ v_{\mathfrak{p}_i} ( a + w^r ), v_{\mathfrak{p}_i} ( w)\}$$ but $v_{\mathfrak{p}_i} ( a + w^r ) > - r\lambda_i$ and $ v_{\mathfrak{p}_i} (w) = -\lambda_i > -r \lambda_i$. Therefore, $$v_{\mathfrak{p}_i} (a + w^r -w) > v_{\mathfrak{p}_i} ( a ).$$ [*Note*]{} that the other prime valuations are affected in a way that does not cause any nonnegative valuations to become negative. Hence, the process will end in finite time. 6. We set $$s = \deg(a)$$ 1. If $s \leq 0$, move to [**STEP 4**]{}. 2. If $s>0$, $r \nmid s$, move to [**STEP 4**]{}. 3. Otherwise 1. Use the [euclidean algorithm]{} to write $s = rd$ with $d$ a natural number, 2. Write $$a = \frac{g(x)}{ h(x)}$$ where $g(x)$, $h(x)$ polynomials with $(f(x), g(x))=1$. 3. Apply the [Euclidean algorithm]{} to find $q(x)$ and $r(x)$ polynomial with $ \deg( r(x)) < h(x)$ or $r(x) =0$, and $$g(x) = h(x) q(x) + r(x)$$ 4. Write $q(x) = \alpha x^{rt} + t(x)$ [*[Note $\deg(t(x))< rt$.]{}*]{} 5. Find $\beta$ in $\mathbb{F}_q$ such that $\beta ^r= \alpha $. 6. Take - $$z = y- \beta x^t$$ - $$b = a- (\beta x^t)^r + \beta x^t$$ 7. - If $r | \deg(a)$, then redo [**STEP 3**]{} with $y=z$ and $a =b$. - Otherwise exit to [**STEP 4**]{}. [. 
Note that the other prime valuations are affected in a way that does not cause any nonnegative valuations to become negative. Hence, the entire process will end in finite time.]{} 7. [**RETURN**]{} - $z$ in terms of initial $y$ given into the algorithm. - $$b = \frac{ g(x)}{ \prod_{j=1}^s p_{i_j} (x)^{\alpha_{i_j}^t}}$$ with $g(x)$ polynomial, $\alpha_{i_j}^t$ natural number with $3 \nmid \alpha_{i_j}^t$, for $j \in \{ 1, \cdots , s\}$.\ [Application: finding an integral basis for a cubic extension defined by an irreducible polynomial of the form $X^3 -3X -a$.]{} The statement used for this Algorithm is done in [@MWcubic4 Theorem 2.1] and finds explicitly an integral basis for cubic extensions defined by a irreducible polynomial of the form $X^3 -3X -a$.\ [**Algorithm integral basis 2: takes $y$ (generator of $L/K$ with minimal polynomial), $X^3-3X-a $ and $p\neq 3$**]{} Use the [factorization algorithm]{} to factor $a$ into $$a=\frac{ \alpha}{\gamma^3 \beta},$$ where $(\alpha,\beta\gamma) = 1$, $\beta$ polynomial is cube-free, and $\beta = \beta_1 \beta_2^2$, where $\beta_1$ and $\beta_2$ are polynomials square-free. - [**Case $p\neq 2$**]{}, 1. Use the [factorization algorithm]{} to factor $4\gamma^6 \beta^2 - \alpha^2$ into $$(4\gamma^6 \beta^2 - \alpha^2)=\eta_1 \eta_2^2,$$ where $\eta_1$ is polynomial square-free. 2. Use [Chinese remainder theorem]{} to find $T$ and $S$ polynomials such that $$T \equiv -\frac{ \alpha }{2 \gamma^2 \beta_2} \mod \eta_2^2\quad\text{ and } \quad T\equiv 0 \mod \beta_1^2;$$ 3. Set $I= \beta_1^2 \eta_1^2$. - [**Case $p= 2$**]{},\ Use [Factorization Algorithm]{} to factor $\alpha$ into $$\alpha = \prod_{i=1}^s \alpha_i^{s_i },$$ where $\alpha_i$ distinct irreducible polynomials.\ - If the polynomial $R(X)= X^2 +a X +(1+a^2)$ (quadratic resolvent) has a root in $\mathbb{F}_q(x)$:\ Then, 1. Find a root $r$ of $R(X)$. 2. Set $I = \beta_1 \alpha$. 3. Use [Chinese remainder theorem]{} to find a polynomial $T$ such that $$T \equiv 0 \mod \beta_1 \quad\text{ and }\quad T \equiv \frac{r}{ \gamma^2 \beta_1^2 \beta_2^2} \mod \alpha;$$ - If the polynomial $R(X)= X^2 +a X +(1+a^2)$ (quadratic resolvent) has no root in $\mathbb{F}_q(x)$. 1. Use [Artin-Schreier Algorithm]{} above with $x^2 -x- \frac{ \gamma^6 \beta_1^2 \beta_2^4 }{ \alpha ^2} $, it will return, in particular $$b = \frac{ g(x)}{ \prod_{j=1}^s \alpha_{i_j}^{t_{i_j}^t}}$$ with $g(x)$ polynomial, $t_{i_j}^t$ natural number such that $3\nmid t_{i_j}^t$. 2. Set $$I =\beta_1 \alpha \prod_{j=1}^s \alpha_{i_j}^{-\frac{1}{2} (t_{i_j}^t+1) }$$ 3. Use [Chinese remainder theorem]{} to find a polynomial $T$ such that $$T \equiv 0 \mod \beta_1 \quad \text{ and }\quad T \equiv \frac{\alpha b}{\gamma^2 \beta_2}\mod \alpha \prod_{j=1}^s \alpha_{i_j}^{-\frac{1}{2} (t_{i_j}^t+1) }$$ [**RETURN**]{} Integral basis of $L/K$ $$\mathfrak{B} = \left\{ 1, \omega+ S, \frac{1}{I} (\omega^2+ T\omega +V ) \right\}$$ with $S, V \in \mathbb{F}_q[x]$ and $ V \equiv T^2 -3(\gamma \beta_1 \beta_2)^2 \mod I$. ### $p=3$ [The Algorithm 1 returned a form $X^3 + a X + a^2$. ]{}\ The next algorithm returns the list of ramified places, indices of ramification, differential exponents, and the value of the genus for cubic extensions with minimal polynomial of the form $x^3 +a x +a^2$.\ [**Algorithm 4: takes $X^3 + a X + a^2$ and $p=3$**]{}\ 1. 
Use the [factorization algorithm]{}, to factorize $a$ as $$\frac{ f(x)}{\prod_{i=1}^m p_i(x)^{\alpha_i}}$$ with $p_i(x)$ distinct irreducible polynomials, $f(x)$ polynomial and $(f(x) , \prod_{i=1}^m p_i(x)^{\alpha_i}) =1$ and $$f(x) = \prod_{i=1}^u r_i(x)^{\beta_i}$$ where $r_i(x)$ distinct irreducible polynomials and $\beta_i$ natural integer, for $i \in \{ 1, \cdots , s\}$. 2. Use the [Generalized Artin-Schreier Algorithm]{} below for $X^3 + a X + a^2$, it will return in particular, $$b = \frac{ g(x)}{ \prod_{j=1}^s p_{i_j} (x)^{\alpha_{i_j}^t}}$$ with $g(x)$ polynomial, $\alpha_{i_j}^t$ natural number with $3 \nmid \alpha_{i_j}^t$, for $j \in \{ 1, \cdots , s\}$. [**RETURN**]{} - if $2 \nmid \deg (a)$ (initial $a$ given into algorithm) and $\deg(b) \leq 0$, - [**List of triples (ramified places, indices of ramification, differential exponents):** ]{} $$\begin{array}{lll} \{ (p_{i_j}(x), 3, \alpha^t_{i_j}+1), (r_i(x), 2 , 1), (p_k(x), 2,1) , (\infty, 2, 1), j \in \{ 1, \cdots , s \},\\ i \in \{ 1, \cdots , u \} \text{ such that } 2 \nmid \beta_i , k \in \{1, \cdots , m \}\backslash \{ i_1, \cdots , i_s\} \} \end{array}$$ - [**Genus of the extension:**]{} $$\begin{aligned} g =& -\frac{3}{2} + \frac{1}{2} \sum_{i=1}^s ( \alpha^t_{i_j}+1) \deg(p_{i_j}(x)) +\frac{1}{2} \sum_{ \text{ for \ $i$ \ such \ that \ $2 \nmid \beta_i$}} \deg( r_i(x))\\ & +\frac{1}{2} \sum_{ k \in \{1, \cdot , m \}\backslash \{ i_1, \cdots , i_s\}} \deg( p_k(x)) \end{aligned}$$\ - Otherwise, - [**List of triples (ramified places, indices of ramification, differential exponents):** ]{} $$\begin{array}{lll} \{ (p_{i_j}(x), 3, \alpha^t_{i_j}+1), (r_i(x), 2 , 1), (p_k(x), 2,1) , (\infty, 2, 1), j \in \{ 1, \cdots , s \},\\ i \in \{ 1, \cdots , u \} \text{ such that } 2 \nmid \beta_i , k \in \{1, \cdots , m \}\backslash \{ i_1, \cdots , i_s\} \} \end{array}$$ - [**Genus of the extension:**]{} $$\begin{aligned} g =& -2 + \frac{1}{2} \sum_{i=1}^s ( \alpha^t_{i_j}+1) \deg(p_{i_j}(x)) +\frac{1}{2} \sum_{ \text{ for \ $i$ \ such \ that \ $2 \nmid \beta_i$}} \deg( r_i(x))\\ & +\frac{1}{2} \sum_{ k \in \{1, \cdots , m \}\backslash \{ i_1, \cdots , i_s\}} \deg( p_k(x)) \end{aligned}$$\ The following algorithm is an intermediate algorithm used in the previous algorithm that computes ramified places for a extension with minimal polynomial $X^3+aX+a^2$. This algorithm will find a generator $z$ such that its minimal polynomial is of the form $x^3+ b x +b^2$ where $b= \frac{( a^2 + \eta ^3 + a\eta )^2}{ a^3}$ for some $\eta \in \mathbb{F}_q(x)$ and $$b = \frac{g(x) }{ \prod_{i=1}^k s_i(x)^{h_i} }$$ with $g(x)$ polynomial, $ s_i(x)$ distinct irreducible polynomials, $h_i$ natural numbers with $r \nmid h_i$, $i \in \{1, \cdots, k\}$. This algorithm uses the same arguments as the ones used in Theorem \[char3localstandardform\] and [@MWcubic3 Lemma 1.2], one difference is that we also address the place at infinity, which was not done in [@MWcubic3 Lemma 1.2].\ [**Generalized Artin-Schreier Algorithm: takes $y$ (generator for the extension $L/K$ with minimal polynomial), $X^3 + a X + a^2$ and $p=3$**]{}\ 1. Use the [factorization algorithm]{}, to factorize $a$ as $$\frac{ f(x)}{\prod_{i=1}^t p_i(x)^{\alpha_i}}$$ with $p_i(x)$ distinct irreducible polynomials, $f(x)$ polynomial and $(f(x) , \prod_{i=1}^t p_i(x)^{\alpha_i}) =1$, $\alpha_i$ natural numbers, $i \in \{ 1, \cdots, t\}$. For each $i \in \{ 1, \cdots , t\}$, we denote $\mathfrak{p}_i$ to be the finite place associated to $p_i(x)$. 
For each $i$ such that $ 3 | \alpha_i$, do [**STEP 2**]{}: 2. For some $i$ as above 3. Write $\alpha_i = 3 \lambda_i$, for some $\lambda_i$ natural number. Use the [partial fraction decomposition algorithm]{} and write $$a = s(x)+ \sum_{i=1}^t \sum_{k=0}^{\alpha_i -1} \frac{ t_k^{(i)} (x) }{ p_i (x)^{\alpha_i -k}}$$ such that $$\deg ( t_k^{(i)} (x)) < \deg ( p_i (x)), \ for \ k = 0 , 1, \cdots , \alpha_i -1$$ Then, write $$a = \frac{ t_0^{(i)} (x) }{ p_i (x)^{ 3 \lambda_i }} + t_1 (x)$$ 4. Find $m(x) \in k[x]$ such that $$m(x)^3 \equiv t^{(i)}_0 (x)^2 \mod p_i(x)$$ [This is possible since $k$ is a perfect field and $[ k[x]/ (p_i (x)) : k ]<\infty$, then $M= k[x]/ (p_i (x)) $ is perfect, that is $M^3= M$. The previous step can be achieved by finding a root $\delta$ for $p_i(x)$ in $\mathbb{F}_{q^{\deg(p_i(x))}}$, then computing $ t^{(i)}_0 (\delta )^2$ in $\mathbb{F}_{q^{\deg(p_i(x))}}$, and finding $\beta \in \mathbb{F}_{q^{\deg(p_i(x))}}$ such that $\beta^3 = t^{(i)}_0 (\delta)^2$ in $\mathbb{F}_{q^{\deg(p_i(x))}}$. Then $m(x) = \beta + p_i(x)$ will satisfy the congruence above. ]{} 5. We set - $$z = \frac{1}{a} \left( y - \big( \frac{ m(x)}{ p_i (x)^{2 \lambda_i}}\big) \right)$$ - $$b = \frac{( a^2 + \big( \frac{ m(x)}{ p_i (x)^{2 \lambda_i}}\big) ^3 + a\big( \frac{ m(x)}{ p_i (x)^{2 \lambda_i}}\big) \big) ^2}{ a^3}$$ Using the [Factorization algorithm]{}, we write $$b = \frac{ g(x)}{ p_i(x)^{\alpha'_i} q(x)}$$ with $q(x)$ polynomial and $g(x)$ polynomial, $(p_i(x) , q(x))=1$ and $(g(x), p_i(x))=1$, $(g(x), q(x))=1$ and $\alpha_i'$ an integer. 1. If $\alpha'_i \geq 0$; Change $i$ in [**STEP 2**]{} with $a=b$ and $y=z$, if there are no more $i$ to work with, taking $a=b$ and $y=z$ exit to [**STEP 3**]{}. 2. If $\alpha'_i < 0$ with $r \nmid \alpha'_i $; Change $i$ in [**STEP 2**]{} with $a=b$ and $y=z$, if there are no more $i$ to work with, taking $a=b$ and $y=z$ exit to [**STEP 3**]{}. 3. Otherwise, repeat [**Step 2**]{} with taking $a=b$ and $y=z$. that one can write $$a^2 = \frac{ t_0^{(i)} (x)^2 }{ p_i (x)^{6 \lambda_i }} + t_2(x)$$ where $$v_{\mathfrak{p}_i} ( t_2 (x) ) > -6\lambda_i$$ We get $$v_{p_i} ( a^2 + w^3 + a w) \geq \min \{ v_{\mathfrak{p}_i} \left( \frac{ t_0^{(i)} (x)^2 }{ p_i (x)^{6 \lambda_i }} + \frac{m^3 (x)}{ p_i(x)^{ 6 \lambda_i}} \right) + v_{\mathfrak{p}_i }( t_2 (x)) + v_{\mathfrak{p}_i } (a w)\}$$ But, $$v_{\mathfrak{p}_i} \left( \frac{ t_0^{(i)} (x)^2 }{ p_i (x)^{6 \lambda_i }} + \frac{m^3 (x)}{ p_i(x)^{ 6 \lambda_i}} \right)\geq 1 - 6 \lambda_i > - 6 \lambda_i,$$ $$v_{\mathfrak{p}_i }( t_2 (x)) > - 6 \lambda_i,$$ and $$v_{\mathfrak{p}_i } (a w)= v_{\mathfrak{p}_i } (a )+ v_{\mathfrak{p}_i } ( w) = -3\lambda_i - 2 \lambda_i > -6 \lambda_i$$ Therefore, $$v_{p_i} ( a^2 + w^3 + a w)> 2 v_{\mathfrak{p}_i} (a)$$ and $$v_{p_i} (b)= v_{p_i} \left( \frac{( a^2 + w^3 + a w )^2}{ a^3} \right) > v_{p_i} ( a)$$ [*Note*]{} that the other prime valuations are affected in a way that does not cause any nonnegative valuation to become negative (see [@MWcubic3 Lemma 1.2] to find the argument one can use to prove this). So the process will finish in finite time. 6. We set $$s = \deg(a)$$ 1. If $s \leq 0$, move to [**STEP 4**]{}. 2. If $s>0$, $3 \nmid s$, move to [**STEP 4**]{}. 3. Otherwise 1. Use the [euclidean algorithm]{} to write $s = 3d$ with $d$ a natural number, 2. Write $$a = \frac{g(x)}{ h(x)}$$ where $g(x)$, $h(x)$ polynomials with $(f(x), g(x))=1$. 3. 
Apply the [Euclidean algorithm]{} to find $q(x)$ and $r(x)$ polynomials with $ \deg( r(x)) < \deg(h(x))$ or $r(x) =0$, and $$g(x) =h(x) q(x) + r(x)$$ 4. Write $q(x) = \alpha x^{3t} + t(x)$ [*[Note $\deg(t(x))< 3t$.]{}*]{} 5. Find $\beta$ in $\mathbb{F}_q$ such that $\beta ^3= \alpha^2 $. 6. Take - $$z = \frac{1}{a} \big( y+ \beta x^{2t} \big)$$ - $$b = \frac{( a^2 - (\beta x^{2t})^3 - a (\beta x^{2t}) )^2}{ a^3}$$ 7. - If $3 \mid \deg(b)$, then redo [**STEP 3**]{} with $y=z$ and $a =b$. - Otherwise exit to [**STEP 4**]{}. [*Note* also that the other prime valuations are affected in a way that does not cause any nonnegative valuations to become negative. Hence, the entire process will end in finite time.]{} 7. [**RETURN**]{} - $z$ in terms of the initial $y$ given to the algorithm, - $$b = \frac{ g(x)}{ \prod_{j=1}^s p_{i_j} (x)^{\alpha_{i_j}^t}}$$ with $g(x)$ polynomial, $\alpha_{i_j}^t$ natural numbers with $3 \nmid \alpha_{i_j}^t$, for $j \in \{ 1, \cdots , s\}$. [*Note* that it is also easy to check whether a cubic extension of $\mathbb{F}_q(x)$ with a generator $y$ whose minimal polynomial is $x^3 +ax + a^2$ is Galois, as one only needs to check if $-a$ is a square in $\mathbb{F}_q(x)$. When it is a square, the extension is Galois, and one finds $b$ such that $-a = b^2$ using for instance the [factorization algorithm]{}, and then $\frac{y}{b}$ is an Artin-Schreier generator with minimal polynomial $x^3 -x + b$.]{}\ [Application: finding an integral basis for an extension with minimal polynomial of the form $x^3 +ax + a^2$.]{} The statement used for this Algorithm is given in [@MadMad Theorem 3], and finds explicitly an integral basis for cubic extensions defined by an irreducible polynomial of the form $X^3+aX+a^2$.\ [**Algorithm integral basis 3: takes $y$ (generator of $L/K$ with minimal polynomial), $X^3+aX+a^2 $ and $p= 3$**]{} 1. Use the [factorization algorithm]{} to factorize $a$ as $$\frac{ f(x)}{\prod_{i=1}^m p_i(x)^{\alpha_i}}$$ with $p_i(x)$ distinct irreducible polynomials, $f(x)$ a polynomial with $(f(x) , \prod_{i=1}^m p_i(x)^{\alpha_i}) =1$, $\alpha_i$ natural numbers, and $$f(x) = \prod_{i=1}^u r_i(x)^{\beta_i}$$ where the $r_i(x)$ are distinct irreducible polynomials and the $\beta_i$ are natural numbers, for $i \in \{ 1, \cdots , u\}$. 2. Use the [Generalized Artin-Schreier Algorithm]{} above; it will return - $z$ in terms of $y$, - $$b = \frac{ g(x)}{ \prod_{j=1}^s p_{i_j} (x)^{\alpha_{i_j}^t}}$$ with $g(x)$ polynomial, $\alpha_{i_j}^t$ natural numbers with $3 \nmid \alpha_{i_j}^t$, for $j \in \{ 1, \cdots , s\}$. 3. Write $b$ as $$b= \frac{ \xi_1 \xi_2^2 }{ \prod_{j=1}^s p_{i_j} (x)^{\alpha_{i_j}^t}},$$ with $\xi_1, \xi_2 \in \mathbb{F}_q[x]$, $\xi_1$ square-free, and $(\xi_1 \xi_2, \beta ) = 1$. [**RETURN:**]{} [**Integral basis for $L/K$:**]{} $$\mathfrak{B}=\left\{ \frac{P_2}{\xi_1\xi_2^2} z^2 , \frac{P_1}{\xi_2} z, 1 \right\}$$ where $$P_k = \prod_{j=1}^s p_{i_j}(x)^{1+ \left\lfloor \frac{k 2\alpha_{i_j}^t}{3}\right\rfloor},$$ for $k=1,2$, where $\left\lfloor \frac{k 2\alpha_{i_j}^t}{3}\right\rfloor$ is the integral part of $\frac{k 2 \alpha_{i_j}^t}{3}$.
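To close the appendix, here is a small symbolic transcription (ours) of the case split of Algorithm 1 above. It treats $e,f,g$ as symbolic rational expressions, so the equality tests and the integer constants $27$, $9$, $2$ are handled over the rationals; for honest computations over $\mathbb{F}_q(x)$ one should reduce modulo $p$ first, and no irreducibility check is performed.

```python
from sympy import symbols, simplify

def standard_form(e, f, g, p):
    # Transcribes Algorithm 1: from the coefficients of X^3 + e X^2 + f X + g,
    # return the standard form of this appendix together with the parameter a.
    if p != 3:
        if simplify(3*e*g - f**2) == 0:
            return 'X^3 - a', simplify(27*g**3 / (f**3 - 27*g**2))
        return 'X^3 - 3X - a', simplify(
            -2 - (27*g**2 - 9*e*f*g + 2*f**3)**2 / (3*g*e - f**2)**3)
    if e == 0 and f == 0:
        return 'X^3 - a', g
    if e == 0:
        return 'X^3 + aX + a^2', simplify(g**2 / f**3)
    if f == 0:
        return 'X^3 + aX + a^2', simplify(g / e**3)
    return 'X^3 + aX + a^2', simplify((-f**2*e**2 + g*e**3 + f**3) / e**6)

x = symbols('x')
print(standard_form(0, x, 1, 5))   # X^3 + x X + 1 over F_5(x): case (1b), form X^3 - 3X - a
print(standard_form(1, 0, x, 3))   # X^3 + X^2 + x over F_3(x): case (2b ii), a = x
```

From the returned parameter one can then run Algorithm 2, 3 or 4 as appropriate.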
--- author: - 'Guillaume Cébron[^1], Antoine Dahlqvist[^2], Camille Male[^3]' bibliography: - 'Biblio.bib' title: Universal constructions for spaces of traffics --- [abstract:]{} We investigate questions related to the notion of *traffics* introduced in [@Male2011] as a noncommutative probability space with numerous additional operations and equipped with the notion of *traffic independence*. We prove that any sequence of unitarily invariant random matrices that converges in noncommutative distribution converges in distribution of traffics whenever it fulfills some factorization property. We provide an explicit description of the limit which allows to recover and extend some applications (on the freeness from the transposed ensembles [@MingoPopa2014] and the freeness of infinite transitive graphs [@Accardi2007]). We also improve the theory of traffic spaces by considering a positivity axiom related to the notion of *state* in noncommutative probability. We construct the free product of spaces of traffics and prove that it preserves the positivity condition. This analysis leads to our main result stating that every noncommutative probability space endowed with a tracial state can be enlarged and equipped with a structure of space of traffics. Introduction ============ Motivations for traffics ------------------------ Thanks to the fundamental work of Voiculescu [@Voiculescu1991], it is now understood that noncommutative probability is a good framework for the study of large random matrices. Here are two important considerations which sum up the role of noncommutative probability in the description of the macroscopic behavior of large random matrices: 1. A large class of families of random matrices $\mbf A_N\in \mrm{M}_N(\mbb C)$ *converge in noncommutative distribution* as $N$ tends to $\infty$ (in the sense that the normalized trace of any polynomial in the matrices converges). 2. If two independent families of random matrices $\mbf A_N$ and $\mbf B_N$ converge separately in noncommutative distribution and are invariant in law when conjugating by a *unitary* matrix, then the joint noncommutative distribution of the family $\mbf A_N\cup\mbf B_N$ converges as well. Moreover, the joint limit can be described from the separate limits thanks to the relation of *free independence* introduced by Voiculescu. In [@Male2011; @Male122; @MP14], it was pointed out that there are cases where other important macroscopic convergences occur in the study of large random matrices and graphs. The notion of noncommutative probability is too restrictive and should be generalized to get more information about the limit in large dimension. This is precisely the motivation to introduce the concept of *space of traffics*, which comes together with the notion of distribution of traffics and the notion of traffic independence: it is a non-commutative probability space where one can consider not only the usual operations of algebras, but also more general $n$-ary operations called *graph operations*. We will introduce those concept in details, but let us first describe the role of traffics enlightened in [@Male2011] for the description of large $N$ asymptotics of random matrices: 1. A large class of families of random matrices $\mbf A_N\in \mrm{M}_N(\mbb C)$ *converge in distribution of traffics* as $N$ tends to $\infty$ (in the sense that the normalized trace of any graph operation in the matrices converges). 2. 
If two independent families of random matrices $\mbf A_N$ and $\mbf B_N$ converge separately in distribution of traffics, satisfy a factorization property and are invariant in law when conjugating by a *permutation* matrix, then the joint distribution of traffics of the family $\mbf A_N\cup\mbf B_N$ converges as well. Moreover, the joint limit can be described from the separate limits thanks to the relation of *traffic independence* introduced in [@Male2011]. In general, asymptotic traffic independence is different than Voiculescu’s notion. Nevertheless, they coincide if one family has the same limit in distribution of traffics as a family of random matrices invariant in law by conjugation by any unitary matrix. We now present our main results in the three next subsections. Distribution of traffics of random matrices ------------------------------------------- Let us first describe how we encode new operations on space of matrices. For all $K{\geqslant}0$, a *$K$-graph operation* is a connected graph $g$ with $K$ oriented and ordered edges, and two distinguished vertices (one input and one output, not necessarily distinct). The set $\mathcal G$ of graph operations is the set of all $K$-graph operations for all $n{\geqslant}0$. A $K$-graph operation $g$ has to be thought as an operation that accepts $K$ objects and produces a new one. For example, it acts on the space $\mrm{M}_N(\mbb C)$ of $N$ by $N$ complex matrices as follows. For each $K$-graph operation $g\in \mathcal G$, we define a linear map $Z_g: \mrm{M}_N(\mbb C) \otimes \cdots \otimes \mrm{M}_N(\mbb C)\to \mrm{M}_N(\mbb C)$ (or equivalently a $K$-linear map on $\mrm{M}_N(\mbb C)^{ K}$) in the following way. Denoting by $V$ the vertices of $g$, by $(v_1,w_1),\ldots, (v_K,w_K)$ the ordered edges of $g$, and by $E_{k,l}$ the matrix unit $(\delta_{ik}\delta_{jl})_{i,j=1}^N\in \mrm{M}_N(\mbb C)$, we set, for all $A^{(1)},\ldots,A^{(K)}\in \mrm{M}_N(\mbb C)$, $$Z_g(A^{(1)} \otimes \dots \otimes A^{(K)})=\sum_{k:V\to \{1,\ldots,N\}}\left(A^{(1)}_{k(w_1),k(v_1)}\cdots A^{(K)}_{k(w_K),k(v_K)}\right)\cdot E_{k(out),k(in)}.$$ Following [@Mingo2012], we can think of the linear map $\mbb C^N\to \mbb C^N$ associated to $Z_g(A^{(1)}\otimes \dots \otimes A^{(K)})$ as an algorithm, where we are feeding a vector into the input vertex and then operate it through the graph, each edge doing some calculation thanks to the corresponding matrix $A^{(i)}$, and each vertex acting like a logic gate, doing some compatibility checks. Those operations encode naturally the product of matrices, but also other natural operations, like the Hadamard (entry-wise) product $(A,B)\mapsto A\circ B$, the real transpose $A\mapsto A^t$ or the degree matrix $deg(A) = diag(\sum_{j=1}^NA_{i,j})_{i=1\etc N}$. Starting from a family $\mbf A=(A_j)_{j\in J}$ of random matrices of size $N\times N$, the smallest algebra closed by the adjointness and by the action of the $K$-graph operations is the space of traffics generated by $\mbf A_N$. The *distribution of traffics* of $\mbf A_N$ is the data of the noncommutative distribution of the matrices which are in the space of traffics generated by $\mbf A_N$. More concretely, it is the collection of the quantities $$\frac{1}{N}\esp\Big[\Tr \big(Z_g(A_{j_1}^{\epsilon_1}\otimes \dots \otimes A_{j_K}^{\epsilon_K})\big)\Big]$$ for all $K$-graph operations $g\in\mathcal{G}$, indices $j_1,\ldots, j_K\in J$ and labels $\epsilon_1,\cdots, \epsilon_K\in \{1,\ast\}$. 
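To make the action of graph operations on matrices concrete, here is a minimal numpy sketch (ours, not taken from the paper; the helper name `Z_g` and the vertex/edge encoding are illustrative choices). It evaluates a $K$-graph operation by brute force over all vertex labellings, and checks that suitable small graphs recover the matrix product, the Hadamard product and the transpose mentioned above, as well as the projection on the diagonal used later in the paper.

```python
import numpy as np

def Z_g(n_vertices, edges, inp, out, mats):
    """Evaluate a K-graph operation on N x N matrices by brute force.

    edges[k] = (v_k, w_k) is the k-th oriented edge and carries mats[k];
    a labelling phi of the vertices contributes
        prod_k mats[k][phi(w_k), phi(v_k)]
    to the (phi(out), phi(inp)) entry of the result.  The sum runs over all
    N**n_vertices labellings, so this is only meant for tiny sanity checks.
    """
    N = mats[0].shape[0]
    res = np.zeros((N, N), dtype=complex)
    for phi in np.ndindex(*([N] * n_vertices)):
        val = 1.0 + 0j
        for (v, w), A in zip(edges, mats):
            val *= A[phi[w], phi[v]]
        res[phi[out], phi[inp]] += val
    return res

rng = np.random.default_rng(0)
N = 4
A, B = rng.normal(size=(N, N)), rng.normal(size=(N, N))

# matrix product: three vertices, out <- mid <- in
assert np.allclose(Z_g(3, [(1, 0), (2, 1)], inp=2, out=0, mats=[A, B]), A @ B)
# Hadamard product: two vertices, two parallel edges from the input to the output
assert np.allclose(Z_g(2, [(0, 1), (0, 1)], inp=0, out=1, mats=[A, B]), A * B)
# transpose: one edge going from the output back to the input
assert np.allclose(Z_g(2, [(1, 0)], inp=0, out=1, mats=[A]), A.T)
# projection on the diagonal: a single vertex carrying a loop
assert np.allclose(Z_g(1, [(0, 0)], inp=0, out=0, mats=[A]), np.diag(np.diag(A)))
```

The normalized traces $\frac{1}{N}\esp\big[\Tr\big(Z_g(\cdots)\big)\big]$ defining the distribution of traffics can then be estimated by averaging `np.trace(Z_g(...)) / N` over independent samples.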
Those quantities appear quite canonically in investigations of random matrices and were first considered in [@Mingo2012]. The following theorem shows that unitary invariance is sufficient to deduce the convergence in distribution of traffics from the convergence in $*$-distribution. \[Th:Matrices\]For all $N{\geqslant}1$, let $\mbf A_N=(A_j)_{j\in J}$ be a family of random matrices in $\mrm{M}_N(\mbb C)$. We assume 1. The unitary invariance: for all $N{\geqslant}1$ and all $U\in \mrm{M}_N(\mbb C)$ which is unitary, $U\mbf A_NU^*:=(UA_jU^*)_{j\in J}$ and $\mbf A_N$ have the same law. 2. The convergence in $*$-distribution of $\mbf A_N$: for all indices $j_1,\ldots, j_K\in J$ and labels $\epsilon_1,\cdots, \epsilon_K\in \{1,\ast\}$, the quantity $(1/N)\esp[\Tr (A_{j_1}^{\epsilon_1}\cdots A_{j_K}^{\epsilon_K})]$ converges. 3. The factorization property: for all $*$-monomials $m_1,\ldots,m_k$, we have the following convergence $$\begin{gathered} \lim_{N\to \infty} \mathbb{E}\left[\frac{1}{N}\mathrm{Tr}\left(m_1(\mbf A_N)\right)\cdots \frac{1}{N}\mathrm{Tr}\left(m_k(\mbf A_N)\right)\right]\\=\lim_{N\to \infty} \mathbb{E}\left[\frac{1}{N}\mathrm{Tr}\left(m_1(\mbf A_N)\right)\right]\cdots \lim_{N\to \infty} \mathbb{E}\left[\frac{1}{N}\mathrm{Tr}\left(m_k(\mbf A_N)\right)\right].\end{gathered}$$ Then, $\mbf A_N$ converges in distribution of traffics: for all $K$-graph operations $g\in\mathcal{G}$, indices $j_1,\ldots, j_K\in J$ and labels $\epsilon_1,\cdots, \epsilon_K\in \{1,\ast\}$, the following quantity converges $$\frac{1}{N}\esp\Big[\Tr \big(Z_g(A_{j_1}^{\epsilon_1} \otimes \ldots \otimes A_{j_K}^{\epsilon_K})\big)\Big].$$ Let us note that a similar result about the convergence of observables related to traffic distributions, for unitarily invariant random matrices, was also proved independently by Gabriel in [@Gabriel2015]. More generally, the framework developed by Gabriel in [@Gabriel2015a; @Gabriel2015; @Gabriel2015c] is related to the framework of traffics, and will certainly lead to further investigations in order to understand the precise link between both theories. In practice, the limit of the distribution of traffics of $\mbf A_N$ depends explicitly on the limit of the noncommutative $*$-distribution of $\mbf A_N$. For example, a recent result of Mingo and Popa [@MingoPopa2014] shows that for every sequence of unitarily invariant random matrices $\mbf A_N$, the family $\mbf A_N^t$ of the transposes of $\mbf A_N$ has the same noncommutative $*$-distribution as $\mbf A_N$ and is asymptotically freely independent from $\mbf A_N$ (under assumptions stronger than those of Theorem \[Th:Matrices\] that also imply asymptotic free independence of second order). Thanks to the description of the limiting distribution of traffics of unitarily invariant matrices, we will get that for a family $\mbf A_N=(A_j)_{j\in J}$ as in Theorem \[Th:Matrices\], the families $\mbf A_N$, $\mbf A_N^t$ and $deg(\mbf A_N)$ are asymptotically freely independent, as well as $\mbf A_N \otimes \mbf A_N:=( A_j\otimes A_{j'})_{j,j'\in J}$, $deg( \mbf A_N \otimes \mbf A_N)$ and their transposes. Spaces of traffics and their free product ----------------------------------------- Recall that a non commutative probability space is a pair $(\mcal A, \Phi)$, where $\mcal A$ is a unital algebra and $\Phi$ is a trace, that is a unital linear form on $\mcal A$ such that $\Phi(ab)=\Phi(ba)$ for any $a,b \in \mcal A$.
A $^*$-probability space is a non commutative probability space equipped with an anti-linear involution $\cdot ^*$ satisfying $(ab)^*=b^*a^*$ and such that $\Phi$ is a state, that is $\Phi(a^*a){\geqslant}0$ for any $a\in \mcal A$. The $^*$-distribution of a family $\mbf a$ of elements of $\mcal A$ is the linear form $\Phi_{\mbf a}:P \mapsto \Phi\big(P(\mbf a)\big)$ defined for non commutative polynomials in elements of $\mbf a$ and their adjoint. The convergence in $^*$-distribution of a sequence $\mbf a_N$ is the pointwise convergence of $\Phi_{\mbf a_N}$. In [@Male2011], the notion of space of traffics was defined in an algebraic framework as a non-commutative probability space $(\mcal A,\tau)$, with a collection of $K$-linear map indexed by the $K$-graph operations in a consistent way. It allows to consider the additional operations for matrices as the *Hadamard* (entry-wise) product, or the *real transpose* for non commutative random variables, and hopefully will lead to new probabilistic investigations in the general theory of quantum probability theory. More precisely, the set of graph operation $\mcal G$ can be endowed naturally with a structure of operad, and we say that *the operad $\mcal G$ acts* on a vector space $\mcal A$ if to each $K$-graph operation $g\in \mcal G$, there is a linear map $$Z_g: \underbrace{\mathcal{A}\otimes \cdots \otimes \mathcal{A}}_{K\ \text{times}}\to \mathcal{A}$$(or equivalently a $K$-ary multilinear operation) subject to some requirements of compatibility (see Definition \[Def:Gaction\]). In Definition \[Def:Traffic\] of Section \[Sec:SpacesOfTraffics\], we go further defining a *space of traffics* as a $^*$-probability space $(\mcal A,\tau)$ on which acts the graph operations $\mcal G$, with two additional properties: the compatibility of the involution $\cdot ^*$ with graph operations, and a positivity condition on $\tau$ which is stronger than saying that it is a state. Moreover, in Section \[Sec:FreeProd\], we define the free product $(\ast_{j\in J}\mcal A_j , \star_{j\in J}\tau_j)$ of a family $(\mcal A_j ,\tau_j)_{j\in J}$ of algebraic spaces of traffics, in such a way that the algebras $A_j$ seen as subspaces of traffics of $\ast_{j\in J}\mcal A_j$ are traffic independent. The free product of spaces is compatible with the positivity condition for spaces of traffics, as the following theorem shows. \[Th:PosFreeProd\] The free product of distributions of traffics satisfies the positivity condition for spaces of traffics, i.e. the free product of a family of spaces of traffics is well-defined as a space of traffic. One may be surprised by this additional positivity condition for spaces of traffics. Let us give a short explanation. The fact that the traces $\tau_j$ are states is not sufficient to ensure that $\star_{j\in J}\tau_j$ is a state as well. One has to require a bit more on $\tau_j$ to get the positivity of $\star_{j\in J}\tau_j$. A consequence of Theorem \[Th:PosFreeProd\] of conceptual importance is that for any traffic $a$ there exists a space of traffics that contains a sequence of traffic independent variables distributed as $a$. As a byproduct of the proof of Theorem \[Th:PosFreeProd\], we get a new characterization of traffic independence (Theorem \[Equivalence Free product free independance\]) which is much more similar to the usual definition of free independence. 
We deduce from it a simple criterion to characterize the free independence of variables assuming their traffic-independence (proving that the criterion in [@Male2011 Corollary 3.5] is actually a characterization of free independence in that context). An example is a new proof of the free independence of the spectral distributions of the free product of infinite deterministic graphs [@Accardi2007]. A canonical lifting from $^*$-probability spaces to spaces of traffics ---------------------------------------------------------------------- We turn now to our last result, which was the first motivation of this article and whose demonstration uses both Theorem \[Th:Matrices\] and Theorem \[Th:PosFreeProd\]. It states that the $^*$-probability spaces of Voiculescu can be enlarged and equipped with the structure of space of traffics. Let us be more explicit. As explained, Theorem \[Th:Matrices\] in its full form gives a formula of the limiting distribution of traffics which involves only the limiting noncommutative distribution of the matrices. Replacing in this formula the limiting noncommutative distribution of matrices by an arbitrary distribution, we obtain a distribution of traffics which implies the following result. The difficulty consists in proving that this distribution satisfies the positivity condition. \[MainTh\] Let $(\mcal A,\Phi)$ be a $^*$-probability space. There exists a space of traffics $(\mcal {B},\tau)$ such that $\mcal A\subset \mcal {B}$ as $*$-algebras and such that the trace induced by $\tau$ restricted to $\mcal A$ is $\Phi$. Moreover, the distribution of traffics $\tau$ is canonical in the sense that 1. If $\mbf A_N$ is a sequence of random matrices that converges in $^*$-distribution to $\mbf a\in \mcal A$ as $N$ tends to $\infty$ and verifies the condition of Theorem \[Th:Matrices\], then $\mbf A_N $ converges in distribution of traffics to $\mbf a\in \mcal B$ as $N$ tends to $\infty$. 2. Two families $\mbf a$ and $\mbf b\in \mcal A$ are freely independent in $\mcal A$ if and only if they are traffic independent in $\mcal {B}$. Remark that, starting from an abelian non-commutative probability space $(\mcal A,\Phi)$, there exists another procedure described in [@Male2011] which allows to define a space of traffics $(\mcal {B},\tau)$ such that $\mcal A\subset \mcal {B}$ as $*$-algebras and such that the state induced by $\tau$ on $\mcal A$ is $\Phi$, and where two families $\mbf a$ and $\mbf b\in \mcal A$ are tensor independent in $\mcal A$ if and only if they are traffic independent in $\mcal {B}$. In other words, the free product of space of traffics leads to the tensor product or the free product of the probability spaces, depending on the way the $*$-distribution and the distribution of traffics of our random variables are linked. The rest of the article is organized as follows. In section \[Sec:SpacesOfTraffics\] we first recall the definition of algebraic spaces of traffics and define non-algebraic ones. Then we recall the definition of traffic independence. In Section \[Sec:FreeProd\] we define the free product of spaces of traffics. We state therein the new characterization of traffic independence and prove Theorem \[Th:PosFreeProd\]. In Section \[Sec:CanonicalExtension\], we prove Theorem \[MainTh\] on the canonical extension of $^*$-probability spaces and Theorem \[Th:Matrices\] on the distribution of traffics of unitarily invariant matrices. 
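Before moving to the formal definitions, here is a small Monte Carlo sanity check of Theorem \[Th:Matrices\] (ours, not part of the paper): GUE matrices are unitarily invariant, converge in $*$-distribution and satisfy the factorization property, so their traffic observables should stabilize as $N$ grows. The helper names `gue` and `traffic_observables`, and the three observables chosen (the second $*$-moment, $\frac{1}{N}\Tr\big(\Delta(A^2)^2\big)$, and a transpose-based moment), are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def gue(N):
    """One GUE sample, normalized so that (1/N) E[Tr(A^2)] = 1."""
    H = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    return (H + H.conj().T) / np.sqrt(2 * N)

def traffic_observables(A):
    N = A.shape[0]
    m2 = np.trace(A @ A).real / N                     # a *-moment (second moment)
    d = np.real(np.diag(A @ A))                       # diagonal of A^2, i.e. Delta(A^2)
    t_diag = np.mean(d ** 2)                          # (1/N) Tr(Delta(A^2)^2), a graph observable
    t_transp = np.trace(A @ A.T @ A @ A.T).real / N   # uses the transpose graph operation
    return m2, t_diag, t_transp

for N in (100, 200, 400):
    samples = [traffic_observables(gue(N)) for _ in range(20)]
    print(N, np.round(np.mean(samples, axis=0), 3))
# The three averages stabilize as N grows, as Theorem [Th:Matrices] predicts
# for unitarily invariant ensembles satisfying the factorization property.
```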
Definitions of spaces of traffics {#Sec:SpacesOfTraffics} ================================= $\mcal G$-algebras ------------------ We first recall and make more precise the definition of graph operations given in the introduction. For all $K{\geqslant}0$, a *$K$-graph operation* is a finite, connected and oriented graph with $K$ ordered edges, and two particular vertices (one input and one output). The set of $K$-graph operations is denoted by $\mcal G_K$, and the sequence $(\mcal G_K)_{K{\geqslant}0}$ is denoted by $\mcal G$. A $K$-graph operation can produce a new graph operation from $K$ different graph operations in the following way. Let us consider the *composition maps* $$\begin{aligned} \circ:\mcal G_K\times \mcal G_{L_1}\times \cdots \times \mcal G_{L_K}&\to \mcal G_{L_1+\cdots +L_K}\\ (g,g_1,\ldots, g_K)&\mapsto g\circ (g_1,\ldots, g_K)\end{aligned}$$ for $K{\geqslant}1$ and $L_i{\geqslant}0$, which consists in replacing the $i$-th edge of $g\in \mcal G_K$ by the $L_i$-graph operation $g_i$ (which leads at the end to a $(L_1+\cdots +L_K)$-graph operation). Let also consider the *action of the symmetric group* $S_K$ on $\mcal G_K$ by defining $g\circ \sigma$ to be the $K$-graph operation $g$ where the edges are reordered according to $\sigma\in S_K$ (if $e_1,\ldots ,e_K$ are the ordered edges of $g$, $e_{\sigma^{-1}(1)},\ldots ,e_{\sigma^{-1}(K)}$ are the ordered edges in $g_\sigma$). We introduce some important graph operations for later use: - the constant $0=(\cdot )\in \mcal G_1$ which consists in one vertex and no edges, - the identity $I=(\cdot \leftarrow \cdot )\in \mcal G_1$ which consists in two vertices and one edge from the input to the output, - the product $(\cdot \overset{1}{\leftarrow} \cdot \overset{2}{\leftarrow} \cdot)\in \mcal G_1$ which consists in three vertices and two successive edges from the input to the output, - the Hadamard product $h$, which consists in two vertices and two edges from the input to the output, - the diagonal $\Delta$, which consists in one vertex and one edge, - the degree $deg=\ _\cdot^\uparrow$, which consists in two vertices, where one is the input and the output, and an edge from the input/output to the other vertex. Endowed with those composition maps and the action of the symmetric groups, the sequence $\mcal G=(\mcal G_K)_{K{\geqslant}0}$ is an *operad*, in the sense that it satisfies 1. the *identity* property $g\circ(I,\ldots, I)=g=I\circ g$, 2. the *associativity* property $$\begin{aligned} & g \circ \big(g_1 \circ (g_{1,1}, \ldots, g_{1,k_1}), \ldots, g_K \circ (g_{K,1}, \ldots,g_{K,k_K})\big) \\ &= \big(g \circ (g_1, \ldots, g_K)\big) \circ (g_{1,1}, \ldots, g_{1,k_1}, \ldots, g_{K,1}, \ldots, g_{K,k_K})\end{aligned}$$ 3. the *equivariance* properties $ (g\circ \pi)\circ(g_{\pi^{-1}(1)},\ldots,g_{\pi^{-1}(K)}) = g\circ(g_1,\ldots,g_K); $ and $ g\circ(g_1\circ \sigma_1,\ldots,g_K\circ \sigma_K) = \big(g\circ(g_1,\ldots,g_K)\big)\circ(\sigma_1\times\ldots\times\sigma_K). $ Let us now define how a $K$-graph operation can produce a new element from $K$ elements of a vector space in a linear way. \[Def:Gaction\]An *action* of the operad $\mcal G=(\mcal G_K)_{K{\geqslant}0}$ on a vector space $\mcal A$ is the data, for all $K{\geqslant}0$ and $g\in \mcal G_K$, of a linear map $Z_g: \underbrace{\mathcal{A}\otimes \cdots \otimes \mathcal{A}}_{K\ \text{times}}\to \mathcal{A}$ such that 1. $Z_I=\Id_{\mcal A}$, 2. $Z_g \circ (Z_{g_1}\otimes \ldots \otimes Z_{g_K})=Z_{g\circ (g_1,\ldots, g_K)}$, 3. 
$Z_g(a_1\otimes \ldots\otimes a_K)=Z_{g\circ {\sigma}}(a_{\sigma^{-1}(1)}\otimes\ldots\otimes a_{\sigma^{-1}(K)})$ whenever all the objects and compositions are well-defined. By convention, for the graph $0$ with a single vertex and no edge, $Z_0$ is a map $\mbb C \to \mcal A$. We denote $\mbb I=Z_0(1)$ and call it the unit of $\mcal A$. A vector space on which acts $\mcal G$ is called a *$\mcal G$-algebra*. A $\mcal G$-subalgebra is a subvector space of a $\mcal G$ algebra stable by the action of $\mcal G$. A $\mcal G$-morphism between two $\mcal G$-algebras $\mcal A$ and $\mcal B$ is a linear map $f:\mcal A \to \mcal B$ such that $f\big( Z_g(a_1\etc a_K)\big)=Z_g\big( f(a_1) \etc f(a_K)\big)$ for any $K$-graph operation $g$ and $a_1\etc a_K \in \mcal A$. The graph operation $(\cdot \overset{1}{\leftarrow} \cdot \overset{2}{\leftarrow} \cdot)$ induces a linear map $Z_{\cdot \overset{1}{\leftarrow} \cdot \overset{2}{\leftarrow} \cdot}:\mathcal{A}\otimes \mathcal{A}\to \mathcal{A}$ which gives to $\mathcal{A}$ a structure of associative algebra over $\C$, with unit $\mbb I$. Every $\mcal G$-algebra is in particular a unital algebra. We represent graphically the element $Z_g(a_1\otimes \ldots\otimes a_K)$ as the graph where the ordered edges are labelled by $a_1, \ldots, a_K$, and the second condition of equivariance allows to forget about the order of the edges. Let us define also an *involution* $*:g\to g^*$ on graph operation $\mcal G$, where $g^*$ is obtained from $g$ by reversing the orientation of its edges and interchanging the input and the output. A $\mcal G^*$-algebra is a $\mcal G$-algebra $\mcal A$ endowed with an antilinear involution $*:\mathcal{A}\to \mathcal{A}$ which is compatible with the action of $\mcal G$: for all $K$-graph operation $g$ and $a_1,\ldots,a_K\in \mcal A$, we have $Z_g(a_1\otimes \ldots\otimes a_K)^*=Z_{g^*}(a_1^*\otimes \ldots\otimes a_K^*)$. A $\mcal G^*$-subalgebra is a $\mcal G$-subalgebra closed by adjointness. A $\mcal G^*$-morphism between $\mcal A$ and $\mcal B$ is a $\mcal G$-morphism $f:\mcal A \to \mcal B$ such that $f(a^*)=f(a)^*$ for any $a\in \mcal A$. \[Rk:DiagAlg\] Recall that $\Delta$ denotes the graph operation with one vertex and one edge. Any $\mcal G$-algebra $\mcal A$ can be written $\mcal A = \mcal A_0 \oplus B$, where $\mcal A_0:= \big\{ \Delta(a), \, a\in \mcal A \big\}$ is a commutative algebra. We call $\mcal A_0$ the diagonal algebra of $\mcal A$. \[Ex:MgAlg\] Denote $\mrm{M}_N ( \mbb C)$ the algebra of $N$ by $N$ matrices. For any $K{\geqslant}1$ and $g\in \mcal G_K$ with vertex set $V$ and ordered edges $(v_1,w_1) \etc (v_K,w_K)$, let us define $Z_g$ by setting, for all $A^{(1)},\ldots ,A^{(K)}\in \mrm{M}_N(\mbb C)$, the $(i,j)$-coefficient of $Z_g(A^{(1)}\otimes \ldots \otimes A^{(K)})$ as $$\left[Z_g(A^{(1)}\otimes \ldots \otimes A^{(K)})\right]_{ij}= \sum_{ \substack { k: V \to [N] \\ k(input) = j,\, k(output) = i}} A^{(1)}_{ k(w_1),k(v_1) }\cdots A^{(K)}_{k(w_K),k(v_K)}.$$ This defines an action of the operad $\mcal G=(\mcal G_K)_{K{\geqslant}0}$ on $\mrm{M}_N(\mbb C)$, compatible with the usual complex transpose of matrices, and so $\mrm{M}_N ( \mbb C)$ is a $\mcal G^*$-algebra. 
The product $Z_{\cdot \overset{1}{\leftarrow} \cdot \overset{2}{\leftarrow} \cdot}(A\otimes B)$ induced by this action coincides with the classical product of matrices, but we also have others operations like the Hadamard product $Z_h(A\otimes B)=(A_{ij}B_{ij})_{i,j=1}^N$, the projection on the diagonal $Z_\Delta(A)=(\delta_{ij}A_{ii})_{i,j=1}^N$, or the transpose $Z_{\cdot \rightarrow \cdot}(A)= (A_{ji})_{i,j=1}^N$. The diagonal algebra of $\mrm{M}_N ( \mbb C)$ defined in Remark \[Rk:DiagAlg\] is the algebra of diagonal matrices. \[Ex:MgAlgGraph\] Let $\mcal V$ be an infinite set and let $\trm{M}_{\mcal V}(\mbb C)$ denotes the set of complex matrices indexed by $\mcal V$, $A=(A_{v,w})_{v,w\in \mcal V}$ such that each row and column have a finite number of nonzero entries. For any $g\in \mcal G$ and $A^{(1)},\ldots ,A^{(K)}\in \trm{M}_{\mcal V}(\mbb C)$, we define $Z_g(A^{(1)}\otimes \ldots \otimes A^{(K)})$ by the same formula as in Example \[Ex:MgAlg\] with summation now over the maps $k: V \to \mcal V$. This defines as well a structure of $\mcal G^*$-algebra for $\trm{M}_{\mcal V}(\mbb C)$. When the entries of the matrices are non negative integers, they encode the adjacency of a locally finite directed graph: the graph associated to a matrix $A$ has $A(v,w)$ edges from a vertex $v\in V$ to a vertex $w\in V$ (see [@Male2011]). Space of traffics ----------------- Recall the definition from [@Male2011]. \[Def:Traffic\]An *algebraic space of traffics* is the data of a vector space $\mcal A$ with a linear functional $\Phi:\mathcal{A}\to \C$ such that - there exists an action of $\mcal G$ on $\mcal A$: $\mcal A$ is a $\mcal G$-algebra, - $\Phi$ is unital: $\Phi(\mbb I)=1$, - $\Phi$ is *input-independent*: for all $g\in \mcal G_n$, $\Phi \circ Z_g=\Phi \circ Z_{\Delta\circ g}$ and does not depend on the place of the input in $\Delta\circ g$. A homomorphism between two algebraic spaces of traffics $\mcal A$ and $\mcal B$ with respective linear functionals $\Phi$ and $\Psi$ is a $\mcal G$-morphism $f:\mcal A\to \mcal B$ such that $\Phi \circ f = \Psi$. The condition of input-independence for $\Phi$ implies that it is a trace for the structure of associative algebra of $\mcal A$ with product $(a,b)\mapsto Z_{\cdot \overset{1}{\leftarrow} \cdot \overset{2}{\leftarrow} \cdot}(a,b)$. Moreover, it is possible to describe completely $\Phi$ in terms of a functional defined on some graphs where the input and output are totally forgotten. For later purpose, let us define more generally a notion of $n$-graph monomial, where we outline $n{\geqslant}0$ particular vertices, instead of two. \[Def:nGraphPol\] A $0$-graph monomial indexed by a set $J$ (called test-graph in [@Male2011]) is a collection $t = (V,E,\gamma)$, where $(V,E)$ is a finite, connected and oriented graph and $\gamma:E\to J$ is a labeling of the edges by indices. For any $n{\geqslant}1$, a $n$-graph monomial indexed by $J$ is a collection $t = (V,E,\gamma,\mbf v)$, where $(V,E,\gamma)$ is a $0$-graph monomial and $\mbf v = (v_1 \etc v_n)$ is a $n$-tuple of vertices of $T$, considered as the outputs of $t$. For any $n{\geqslant}0$, we set $\mathbb{C}\mathcal{G}^{(n)}\langle J \rangle$ the vector space spanned by the $n$-graph monomials indexed by $J$, whose elements are called $n$-graph polynomials indexed by $J$. Let us fix an algebraic space of traffics $\mcal A$ with linear functional $\Phi:\mcal A \to \mbb C$, and consider a $0$-graph monomial $t=(V,E,\gamma)$ with labels on $ \mcal A$. 
Let us list arbitrarily the edges of $E=\{e_1,\ldots, e_K\}$, denote by $g$ the $K$-graph operation $(V,E)$ with the ordered edges $e_1,\ldots, e_K$, and choose arbitrarily a same vertex of $g$ for both input and output. Set \[Eq:DefTau\] $$\tau(t)=\Phi\Big( Z_g\big(\gamma(e_1)\otimes \cdots \otimes \gamma(e_K)\big)\Big),$$ which depends neither on the choice of the ordering of $e_1,\ldots, e_K$ nor on the input and output of $g$, thanks to the equivariance and the input-independence properties. This map extends to $\tau:\C\mcal G^{(0)}\langle \mcal A \rangle\to \C$ by linearity, and characterizes entirely the functional $\Phi:\mathcal{A}\to \C$, thanks to the relation $\Phi(a)=\tau(\circlearrowleft_a)$. Let $\mcal A$ be an algebraic space of traffics with linear functional $\Phi:\mcal A \to \mbb C$. The map $\tau:\C\mcal G^{(0)}\langle \mcal A \rangle\to \C$ defined above is called the distribution of traffics on $\mcal A$. Saying that $(\mcal A, \tau)$ is an algebraic space of traffics, we mean that $\tau$ denotes this functional, and we call $\Phi$ the associated trace on $\mcal A$. We now define the non-algebraic spaces of traffics. Let $\mathcal{A}$ be a set with an antilinear involution $*:\mathcal{A}\to \mathcal{A}$. Let $t,t'$ be two $n$-graph monomials indexed by $\mathcal{A}$. We denote by $t|t'$ the $0$-graph monomial obtained by merging the $i$-th outputs of $t$ and $t'$ for any $i=1 \etc n$. We extend the map $(t , t') \mapsto t|t'$ to a bilinear map $\mbb C \mcal G^{(n)}\langle \mathcal{A} \rangle^2 \to \mbb C \mcal G^{(0)}\langle \mathcal{A} \rangle$. Moreover, given an $n$-graph monomial $t = (V,E,\gamma, \mbf v)$ we set $t^*=(V,E^*,\gamma^*,\mbf v)$, where $E^*$ is obtained by reversing the orientation of the edges in $E$, and $\gamma^*$ is given by $e\mapsto \gamma(e)^*$. We extend the map $t \mapsto t^*$ to a linear map on $\mbb C \mcal G^{(n)}\langle \mathcal{A} \rangle$. \[Def:Positivity\]A *space of traffics* is an algebraic space of traffics $(\mcal A,\tau)$ such that: - $\mcal A$ is a $\mcal G^*$-algebra, - the distribution of traffics on $\mcal A$ satisfies the following *positivity condition*: for any $n$-graph polynomial $t$ indexed by $\mcal A$, \[eq:NonNegCond\] $$\tau\big[ t | t^* \big] {\geqslant}0.$$ A homomorphism between two spaces of traffics is a $\mcal G^*$-morphism which is a homomorphism of algebraic spaces of traffics. Note that for $n=2$ this is equivalent to saying that the trace $\Phi$ induced by $\tau$ is a state on the $*$-algebra $\mcal A$. As a consequence, the product graph operation $(\cdot \overset{1}{\leftarrow} \cdot \overset{2}{\leftarrow} \cdot)$ induces a linear map $Z_{\cdot \overset{1}{\leftarrow} \cdot \overset{2}{\leftarrow} \cdot}:\mathcal{A}\otimes \mathcal{A}\to \mathcal{A}$ which gives to $\mathcal{A}$ a structure of $^*$-probability space. Hence every space of traffics is in particular a $^*$-probability space. Theorem \[MainTh\] states that the converse is true. \[Ex:MtSpaces\] Let $(\Omega, \mcal F, \mbb P)$ be a probability space in the classical sense and let us consider the algebra $\mrm{M}_N\big(L^{\infty-}(\Omega, \mbb C )\big)$ of matrices whose coefficients are random variables with finite moments of all orders. Endowed with the action of the operad $\mathcal G$ described in Example \[Ex:MgAlg\], it is a $\mathcal G^*$-algebra.
The linear form $\Phi_N:= \esp\big[ \Tr \, \cdot \, \big]/N$ equips $\mrm{M}_N\big(L^{\infty-}(\Omega, \mbb C )\big)$ with the structure of an algebraic space of traffics and the distribution of traffics $\tau_N$ is given by: for any $0$-graph monomial $T=(V,E,M)$ indexed by $\mrm{M}_N\big(L^{\infty-}(\Omega, \mbb C )\big)$, where $M:E\to \mrm{M}_N\big(L^{\infty-}(\Omega, \mbb C )\big)$, \[eq:IntroDefTrace\] $$\tau_N\big[ T \big] = \esp\Big[ \frac 1 N \sum_{ k: V \to [N]} \prod_{e=(v,w)\in E} M(e)_{k(v), k(w) } \Big].$$ Moreover, $(\mrm{M}_N\big(L^{\infty-}(\Omega, \mbb C )\big), \tau_N)$ is actually a space of traffics since $\tau_N$ is positive. First, for any $n$-graph monomial $t=(V,E,M,\mbf v)$, we define a random tensor $T(t) \in (\mbb C^N)^{\otimes n}$ as follows. Let us denote by $\mbf v=(v_1\etc v_n)$ the sequence of outputs of $t$ and by $(\xi_i)_{i=1\etc N}$ the canonical basis of $\mbb C^N$. Then we set $$T(t) =\sum_{ k: V \to [N]} \Big(\prod_{e=(v,w)\in E} M(e)_{k(v), k(w) }\Big)\ \xi_{k(v_1)} \otimes \cdots \otimes \xi_{k(v_n)}.$$ We extend the definition by linearity to $n$-graph polynomials. Positivity is clear since one has $$\tau_N\big[ t | t^* \big] = \frac 1 N \, \esp\Big[ \sum_{\mbf i\in [N]^n} T(t)_{\mbf i}\, \overline{T(t)}_{\mbf i}\Big] {\geqslant}0.$$ \[Ex:Graphs2\] Let $\mcal V$ be an infinite set. A locally finite rooted graph on $\mcal V$ is a pair $(G,\rho)$ where $G$ is a directed graph such that each vertex has a finite number of neighbors (or equivalently an element of the space $\trm{M}_{\mcal V}(\mbb C)$ of Example \[Ex:MgAlgGraph\] with integer entries) and $\rho$ is an element of $\mcal V$. Recall briefly that the so-called weak local topology is induced by the sets of $(G,\rho)$ such that the subgraph induced by the vertices at a fixed distance from the root is given [@Male2011 Section 2.7.2]. The notion of locally finite random rooted graphs refers to the Borel $\sigma$-algebra given by this topology. Let $(\Omega, \mcal F, \mbb P)$ be a probability space, $\mcal V$ a set and $\rho\in \mcal V$. Let $\mbf G$ be a family of locally finite random rooted graphs on $\Omega$ with vertex set $\mcal V$ and common root $\rho$. Consider the $\mcal G$-subalgebra $\mcal A$ of $\trm{M}_{\mcal V}(\mbb C)$ induced by the adjacency matrices of $\mbf G$. In general, the linear form $\Phi_\rho(A) = \esp\big[ A(\rho,\rho)\big]$ is neither well defined nor input-independent. In [@Male2011], certain situations where $\Phi_\rho$ equips $\mcal A$ with the structure of an algebraic space of traffics were characterized: in particular, if the degrees of the vertices of the graphs $\mbf G$ are uniformly bounded, then $\Phi_\rho$ is well defined, and it is input-independent if and only if $\mbf G$ is *unimodular*. When $\Phi_\rho$ is well defined, the associated map $\tau_\rho$ always satisfies the positivity condition. Indeed, for any $n$-graph monomial $t$ we define a tensor $T(t) \in (\mbb C^\mcal V)^{\otimes n}$ with the same formula as for matrices, but with summation over $k:V \to \mcal V$ with $k(r)=\rho$, for an arbitrary vertex $r$ of $V$ and with $(\xi_i)_{i\in \mcal V}$ the canonical basis of $\mbb C^\mcal V$. The positivity of $\tau_\rho$ follows as well since $\tau_\rho\big[ t| t^* \big] :=\esp \big[ \sum_{\mbf i\in \mcal V^n:i_1=\rho} T(t)_{\mbf i}\, \overline{T(t)}_{\mbf i}\big]$ is nonnegative. Let $(\mathcal{A},\tau)$ be a space of traffics, with associated trace $\Phi$, $J$ an arbitrary index set, and $\mbf a=(a_j)_{j\in J}$ a family of elements in $\mathcal{A}$. 1.
The *distribution of traffics* of $\mbf a$ is the linear functional $\tau_{\mbf a}:\mbb C \mcal G^{(0)}\langle J\times \{1,\ast\} \rangle\to \mathbb{C}$ given by the distribution of traffics $\tau:\mbb C \mcal G^{(0)}\langle \mathcal{A} \rangle\to \mathbb{C}$ composed with the linear map $$\begin{array}{rl} \mbb C \mcal G^{(0)}\langle J\times \{1,\ast\} \rangle & \to \mbb C \mcal G^{(0)}\langle \mathcal{A} \rangle \\ (V,E,j\times \epsilon) &\mapsto (V,E,a_{j(\cdot)}^{\epsilon(\cdot)}), \end{array}$$ or in other words, for every $0$-graph monomial $T=(V,E,j\times \epsilon)\in \mbb C \mcal G^{(0)}\langle J\times \{1,\ast\} \rangle$, the quantity $\tau_{\mbf a}(T)$ is given by $\tau(t)$, where $t$ is the $0$-graph monomial $(V,E, \gamma)\in \mbb C \mcal G^{(0)}\langle \mathcal{A} \rangle$ such that $\gamma(e)=a_{j(e)}^{\epsilon(e)}$. 2. Let $(\mathcal{A}_N,\tau_N)$ be a sequence of spaces of traffics, with associated traces $\Phi_N$, $J$ an arbitrary index set, and, for each $N{\geqslant}1$, $\mbf a_N=(a_j)_{j\in J}$ a family of elements of $\mathcal{A}_N$. We say that the sequence $\mbf a_N$ *converges in distribution of traffics* to $\mbf a$ if the distribution of traffics of $\mbf a_N$ converges pointwise to the distribution of traffics of $\mbf a$ on $\mbb C \mcal G^{(0)}\langle J\times \{1,\ast\} \rangle$, or equivalently, if, for all $K$-graph operations $g\in\mathcal{G}$, indices $j_1,\ldots, j_K\in J$ and labels $\epsilon_1,\ldots, \epsilon_K\in \{1,\ast\}$, we have the following convergence $$\lim_{N\to \infty}\Phi_N\left[Z_g(a_{j_1}^{\epsilon_1} \otimes \ldots \otimes a_{j_K}^{\epsilon_K})\right]= \Phi\left[Z_g(a_{j_1}^{\epsilon_1} \otimes \ldots \otimes a_{j_K}^{\epsilon_K})\right].$$ \[Def:DistrTraff\] The distribution of traffics of a family $\mbf A_N=(A(j))_{j\in J}$ of random matrices is given, for every $0$-graph monomial $T=(V,E,j\times \epsilon)\in \mbb C \mcal G^{(0)}\langle J\times \{1,\ast\} \rangle$, by $$\tau_{\mbf A_N}[T]=\esp\Big[ \frac 1 N \sum_{ k: V \to [N]} \prod_{e=(v,w)\in E} (A(j(e)))^{\epsilon(e)}_{k(v), k(w) } \Big] .$$ Möbius inversion and injective trace ------------------------------------ In order to define traffic independence, we first need to define a transform of distributions of traffics. Recall that a poset is a set $\mcal X$ with a partial order ${\leqslant}$ (see [@NS Lecture 10]). If $\mcal X$ is finite, then there exists a map $\mrm{Mob_{\mcal X}}:\mcal X\times \mcal X\to \mbb C$, called the Möbius function on $\mcal X$, such that for two functions $F,G:\mcal X \to \mbb C$ the statement that $$F(x) = \sum_{ x'{\leqslant}x} G(x'), \ \forall x\in \mcal X$$ is equivalent to $$G(x) = \sum_{ x' {\leqslant}x} \mrm{Mob_{\mcal X}}(x',x) F(x'), \ \forall x\in \mcal X.$$ Hence the first formula implicitly defines the function $G$ in terms of $F$. For any set $V$, denote by $\mcal P(V)$ the poset of partitions of $V$ equipped with the inverse refinement order, that is $\pi'{\leqslant}\pi$ if the blocks of $\pi$ are included in blocks of $\pi'$. Let $(\mcal A, \Phi)$ be a non-commutative probability space and denote by $\mrm{N.C.}(K) \subset \mcal P(\{1\etc K\})$ the set of non-crossing partitions of $\{1\etc K\}$ [@NS Lecture 9]. Recall that the free cumulants are the multi-linear maps $\kappa$ given implicitly by \[Def:FreeCumm\] $$\Phi( a_1 \cdots a_K) = \sum_{\pi \in \mrm{N.C.}(K)} \ \prod_{ \{i_1< \cdots < i_L\} \in \pi} \kappa(a_{i_1} \etc a_{i_L}).$$ We now introduce a similar concept for traffics. Let $g$ be a $0$-graph monomial in $\mcal G^{(0)}\langle \mathcal{A} \rangle$, with vertex set $V$.
For any partition $\pi\in \mcal P(V)$ of $V$, we denote by $g^\pi$ the $0$-graph monomial obtained by identifying the vertices in a same block of $\pi$ (the edges link the associated blocks). Denote by $\mbf 1_{\mcal P(V)}$ the partition of $V$ with singletons only. \[Def:TraffCum\]Let $\mcal A$ be an ensemble and let $\tau : \mbb C \mcal G^{(0)}\langle \mathcal{A} \rangle \to \mbb C$ be a linear map (for instance $\tau$ is the distribution of traffics). The linear form $\tau^0$ on $\mcal G^{(0)}\langle \mathcal{A} \rangle$, called the *injective version* of $\tau$, is implicitly given by the following formula: for any $0$-graph monomial $t\in \mcal G^{(0)}\langle \mathcal{A} \rangle$, \[eq:TraffCum\] $$\tau \big[ t \big] = \sum_{\pi \in \mcal P(V)} \tau^0 \big[ t^\pi \big],$$ in such a way that for each $0$-graph monomial $t$ one has $$\tau^0 \big[ t\big] = \sum_{\pi \in \mcal P(V)} \mrm{Mob}(\pi, \mbf 1_{\mcal P(V)}) \tau\big[ t^\pi\big].$$ The injective version $\tau_N^0$ of the trace of $0$-graph monomials in random matrices of $M_N(\C)$ defined in \[eq:IntroDefTrace\] is given, for $T=(V,E,M)$ a $0$-graph monomial indexed by $\mrm{M}_N\big(L^{\infty-}(\Omega, \mbb C )\big)$, by \[eq:IntroDefTraceInj\] $$\tau_N^0 \big[ T \big] = \esp\Big[ \frac 1 N \sum_{ \substack{ k: V \to [N] \\ k \text{ injective}}} \prod_{e=(v,w)\in E} M(e)_{k(v), k(w) } \Big].$$ Definition of traffic independence {#Sec:DefFree} ---------------------------------- Let $J$ be a fixed index set and, for each $j\in J$, let $\mcal A_j$ be some set. Given a family of linear maps $\tau_j : \mathbb{C}\mathcal{G}^{(0)}\langle \mcal A_j \rangle \to \mbb C, $ $j\in J$, sending the graph with no edge to one, we shall define a linear map denoted $\star_{j\in J}\tau_j: \mathbb{C}\mathcal{G}^{(0)}\langle \, \bigsqcup_{j\in J} \mcal A_j \rangle$ with the same property, called the free product[^4] of the $\tau_j$’s. Therein, $\bigsqcup_{j\in J} \mcal A_j $ has to be thought of as the disjoint union of copies of the $\mcal A_j$, although the sets $\mcal A_j$ can originally intersect (they can even be equal). Let us consider a $0$-graph monomial $T$ in $\mathbb{C}\mathcal{G}^{(0)}\langle \, \bigsqcup_{j\in J}\mcal A_j \rangle$ and introduce the following undirected graph. We call colored components of $T$ with respect to the families $(\mcal A_j)_{j\in J}$ the maximal nontrivial connected subgraphs whose edges are labelled by elements of $\mcal A_j$ for some $j\in J$ (each such component is an element of $\mathbb{C}\mathcal{G}^{(0)}\langle \mcal A_j \rangle$). There is no ambiguity about the definition of colored components since $T$ is labeled in $\bigsqcup_{j\in J}\mathcal{A}_j $, where $\bigsqcup$ means that we distinguish the origin of an element that can come from several $\mcal A_j$’s. We call connectors of $T$ the vertices of $T$ belonging to at least two different colored components. The graph $\bar T$ defined below is called the graph of colored components of $T$ with respect to $(\mathcal{A}_j)_{j\in J}$: - the vertices of $\bar T$ are the colored components of $T$ and its connectors, - there is an edge between a colored component of $\bar T$ and a connector if the connector belongs to the component. \[Def:Freeness\] 1. For each $j\in J$, let $\mcal A_j$ be a set and $\tau_j : \mbb C \mcal G^{(0)}\langle \mathcal{A}_j \rangle \to \mbb C$ be a linear map sending the graph with no edges to one. The free product of the maps $\tau_j$ is the linear map $\star_{j\in J}\tau_j: \mathbb{C}\mathcal{G}^{(0)}\langle \, \bigsqcup \mcal A_j \rangle \to \mbb C$ whose injective version is given by: for any $0$-graph monomial $T$, $$(\star_{j\in J}\tau_j)^0 \big[T \big] = \one\big( \bar T \text{ is a tree} \big) \prod_{\substack{S \text{ colored component of } T \\ \text{with labels in } \mcal A_j}} \tau_j^0 \big[ S \big].$$ 2. Let $(\mcal A, \tau)$ be an algebraic space of traffics and let $J$ be a fixed index set.
For each $j\in J$, let $\mcal A_j \subset \mcal A$ be a $\mcal G$-subalgebra. The subalgebras $(\mcal A_j)_{j\in J}$ are called traffic independent whenever the restriction of $\tau$ to the $\mcal G$-subalgebra induced by the $\mcal A_j$ is $\star_{j\in J}\tau_j$. 3. Let $X_j, j\in J$ be subsets of $\mcal A$ and let $(\mbf a_j)_{j\in J}$ be a family of elements of $\mcal A$. Then $(X_j)_{j\in J}$ (resp. $(\mbf a_j)_{j\in J}$) are called traffic independent whenever the $\mcal G$-subalgebra induced by the $X_j$’s (resp. by the $\mbf a_j$’s) are traffic independent. 4. In the context of space of traffics, we say that $(X_j)_{j\in J}$ are traffic independent whenever the $\mcal G^*$-subalgebras are traffic independent. The free product of spaces of traffics {#Sec:FreeProd} ====================================== Free products of algebraic spaces of traffics --------------------------------------------- The free product $\ast_{j\in J}\mcal A_j$ of a family $(\mcal A_j)_{j\in J}$ of $\mcal G$-algebras will be a $\mcal G$-algebra made with ”graphs whose edges are labelled by elements“ from the $\mcal A_j$ and the free product of a family $(\mcal A_j, \tau_j)_{j\in J}$ of spaces of traffics is their free product of $\mcal G$-algebras $\ast_{j\in J} \mcal A_j$ equipped with the free product $\ast_{j\in J} \tau_j$ of their distributions of traffics. Let $J$ be a fixed index set and, for each $j\in J$, $\mcal A_j$ be some set. As in Section \[Sec:DefFree\], while considering a monomial $g$ in $\mathbb{C}\mathcal{G}^{(2)}\langle \bigsqcup_{j\in J}\mathcal{A}_j \rangle$ we mean that $g$ is the data of a finite connected graph $(V,E)$ with an input and an output, and that for each edge is associated an index $j\in J$ and then an element of $\mcal A_j$. \[Def:QuotProd\] For all family of $\mcal G$-algebras $(\mathcal{A}_j)_{j\in J}$, we denote by $\ast_{j\in J} \mathcal{A}_j$ the $\mcal G$-algebra $\mathbb{C}\mathcal{G}^{(2)}\langle \bigsqcup_{j\in J}\mathcal{A}_j \rangle$, quotiented by the space generated by the following relations: $$Z_g(\cdot \overset{Z_{g_1}(a_1\otimes \cdots\otimes a_{k})}{\longleftarrow} \cdot\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)=Z_g (Z_{g_1}(\cdot \overset{a_{1}}{\leftarrow} \cdot\otimes \cdots\otimes \cdot \overset{a_{k}}{\leftarrow} \cdot)\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)$$ whenever $a_1, \ldots, a_{k}$ are in a same algebra $\mathcal{A}_j$; which allows to consider the $\mcal G$-algebra homomorphisms $V_j:\mathcal{A}_j \to \ast_{j\in J} \mathcal{A}_j$ given by the image of $a\mapsto (\cdot\overset{a}{\leftarrow} \cdot)$ by the quotient map. The $\mcal G$-algebra $\ast_{j\in J} \mathcal{A}_j$ is the free product of the $\mcal G$-algebras in the following sense. Let $\mathcal{B}$ be a $\mcal G$-algebra, and $f_j:\mathcal{A}_j\to \mathcal{B}$ a family of $\mcal G$-morphism. There exists a unique $\mcal G$-morphism $\ast_{j\in J} f_j:\ast_{j\in J} \mathcal{A}_j\to \mathcal{B}$ such that $f_j=(\ast_{j\in J} f_j)\circ V_j$ for all $j\in J$. As a consequence, the maps $V_j$ are injective. 
The existence is given by the following definition of $\ast_{j\in J} f_j$ on $\ast_{j\in J} \mathcal{A}_j$: $$\ast_{j\in J} f_j(Z_g(\cdot \overset{a_1}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot))=Z_g(f_{j(1)}(a_1)\otimes\ldots \otimes f_{j(n)}(a_n))$$ whenever $a_1\in \mathcal{A}_{j(1)},\ldots,a_n\in \mathcal{A}_{j(n)}$; which obviously respects the relation defining $\ast_{j\in J} \mathcal{A}_j$. The uniqueness follows from the fact that $\ast_{j\in J} f_j$ is uniquely determined on $\bigcup_j V_j(\mathcal{A}_j)$ (indeed, $\ast_{j\in J} f_j(a)$ must be equal to $f_j(b)$ whenever $a=V_j(b)$) and that $\bigcup_j V_j(\mathcal{A}_j)$ generates $\ast_{j\in J} \mathcal{A}_j$ as a $\mcal G$-algebra. Let $(\mcal A_j,\tau_j)_{j\in J}$ be a family of algebraic spaces of traffics. The free product of distributions of traffics $\star_{j\in J}\tau_j:\mathbb{C}\mathcal{G}^{(0)}\langle \bigsqcup_{j\in J}\mathcal{A}_j \rangle\to \C$ of Definition \[Def:Freeness\] respects the quotient structure of $\ast_{j\in J}\mcal A_j$, and consequently yields an algebraic space of traffics $(\ast_{j\in J}\mcal A_j , \star_{j\in J}\tau_j)$. Furthermore, we have $\tau_i=(\star_{j\in J}\tau_j)\circ V_i$, where $V_i$ is the canonical injective algebra homomorphism from $\mathcal{A}_i$ to $\ast_{j\in J}\mcal A_j$. We first need to prove that we have: for any graph operations $g,g_1$, $$\begin{aligned}\star_{j\in J}\tau_j&\left(Z_g(\cdot \overset{Z_{g_1}(a_1\otimes \cdots\otimes a_{k})}{\longleftarrow} \cdot\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)\right)\\&\hspace{2 cm}=\star_{j\in J}\tau_j\left(Z_g (Z_{g_1}(\cdot \overset{a_{1}}{\leftarrow} \cdot\otimes \cdots\otimes \cdot \overset{a_{k}}{\leftarrow} \cdot)\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)\right).\end{aligned}$$ Let us prove the corresponding properties at the level of the injective trace. Let $\pi$ be a partition of the vertices $V$ of $g$. We denote by $V_1$ the vertices in $g_1$ different from the output or the input of $g_1$. We have $$\begin{gathered} (\star_{j\in J}\tau_j)^0\left(Z_g(\cdot \overset{Z_{g_1}(a_1\otimes \cdots\otimes a_{k})}{\longleftarrow} \cdot\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^\pi\right)\\=\sum_{\substack{\sigma\in \mathcal{P}(V\cup V_1)\\\sigma\setminus V_1=\pi}}(\star_{j\in J}\tau_j)^0\left(Z_g (Z_{g_1}(\cdot \overset{a_{1}}{\leftarrow} \cdot\otimes \cdots\otimes \cdot \overset{a_{k}}{\leftarrow} \cdot)\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^\sigma\right).\end{gathered}$$ Because the colored component containing $Z_{g_1}(a_1\otimes \cdots\otimes a_{k})$ has the same edges on both sides on the equation, and because $(\star_{j\in J}\tau_j)^0$ factorizes on colored component, it suffices to prove the lemma when only one color (let say $j_0$) is involved. In this case, $(\star_{j\in J}\tau_j)^0=\tau_{j_0}^0$, and we can compute in $(\mcal A_{j_0} , \tau_{j_0})$. Below, we denote by $\trm{Mob}(\, \cdot\, , \, \cdot \,)$ the Möbis function on $\mcal P(V)$. 
$$\begin{aligned}\tau_{j_0}^0&\left(Z_g(\cdot \overset{Z_{g_1}(a_1\otimes \cdots\otimes a_{k})}{\longleftarrow} \cdot\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^\pi\right)\\ &=\sum_{\pi{\leqslant}\pi'\in \mathcal{P}(V)}\trm{Mob}(\pi,\pi')\tau_{j_0}\left(Z_g(\cdot \overset{Z_{g_1}(a_1\otimes \cdots\otimes a_{k})}{\longleftarrow} \cdot\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^{\pi'}\right)\\ &=\sum_{\pi{\leqslant}\pi'\in \mathcal{P}(V)}\trm{Mob}(\pi,\pi')\tau_{j_0}\left(Z_g (Z_{g_1}(\cdot \overset{a_{1}}{\leftarrow} \cdot\otimes \cdots\otimes \cdot \overset{a_{k}}{\leftarrow} \cdot)\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^{(\pi'\cup 0_{V_2})}\right)\\ &=\sum_{\pi{\leqslant}\pi'\in \mathcal{P}(V)}\sum_{(\pi'\cup 0_{V_2}){\leqslant}\sigma\in \mathcal{P}(V\cup V_1)}\trm{Mob}(\pi,\pi')\tau^0_{j_0}\left(Z_g (Z_{g_1}(\cdot \overset{a_{1}}{\leftarrow} \cdot\otimes \cdots\otimes \cdot \overset{a_{k}}{\leftarrow} \cdot)\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^{\sigma}\right)\\ &=\sum_{\sigma\in \mathcal{P}(V\cup V_1)}\left(\sum_{\pi{\leqslant}\pi'{\leqslant}\sigma\setminus V_1 \in \mathcal{P}(V)}\trm{Mob}(\pi,\pi')\right)\tau^0_{j_0}\left(Z_g (Z_{g_1}(\cdot \overset{a_{1}}{\leftarrow} \cdot\otimes \cdots\otimes \cdot \overset{a_{k}}{\leftarrow} \cdot)\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^{\sigma}\right)\\ &=\sum_{\sigma\in \mathcal{P}(V\cup V_1)}\delta_{\pi,\sigma\setminus V_1 }\tau^0_{j_0}\left(Z_g (Z_{g_1}(\cdot \overset{a_{1}}{\leftarrow} \cdot\otimes \cdots\otimes \cdot \overset{a_{k}}{\leftarrow} \cdot)\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^{\sigma}\right)\\ &\sum_{\substack{\sigma\in \mathcal{P}(V\cup V_1)\\\sigma\setminus V_1=\pi}}(\tau_{j_0})^0\left(Z_g (Z_{g_1}(\cdot \overset{a_{1}}{\leftarrow} \cdot\otimes \cdots\otimes \cdot \overset{a_{k}}{\leftarrow} \cdot)\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^\sigma\right).\end{aligned}$$ Now we can conclude, since $$\begin{aligned}\star_{j\in J}&\tau_j\left(Z_g(\cdot \overset{Z_{g_1}(a_1\otimes \cdots\otimes a_{k})}{\longleftarrow} \cdot,\cdot\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)\right)\\ &=\sum_{\pi\in V}(\star_{j\in J}\tau_j)^0\left(Z_g(\cdot \overset{Z_{g_1}(a_1\otimes \cdots\otimes a_{k})}{\longleftarrow} \cdot,\cdot\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^\pi\right)\\ &=\sum_{\pi\in V}\sum_{\substack{\sigma\in \mathcal{P}(V\cup V_1)\\\sigma\setminus V_1=\pi}}(\star_{j\in J}\tau_j)^0\left(Z_g (Z_{g_1}(\cdot \overset{a_{1}}{\leftarrow} \cdot\otimes \cdots\otimes \cdot \overset{a_{k}}{\leftarrow} \cdot)\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^\sigma\right)\\ &=\sum_{\sigma\in \mathcal{P}(V\cup V_1)}(\star_{j\in J}\tau_j)^0\left(Z_g (Z_{g_1}(\cdot \overset{a_{1}}{\leftarrow} \cdot\otimes \cdots\otimes \cdot \overset{a_{k}}{\leftarrow} \cdot)\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^\sigma\right)\\ &=\star_{j\in 
J}\tau_j\left(Z_g (Z_{g_1}(\cdot \overset{a_{1}}{\leftarrow} \cdot\otimes \cdots\otimes \cdot \overset{a_{k}}{\leftarrow} \cdot)\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)\right).\end{aligned}$$ A new characterization of traffic independence ---------------------------------------------- Let $(\mcal A_j, \tau_j)_{j\in J}$ be spaces of traffics and $(\ast_{j\in J}\mcal A_j, \ast_{j\in J}\tau_j)$ their algebraic free product. in order to finish the construction of the free product of spaces of traffics, it remains to prove Theorem \[Th:PosFreeProd\], that is the positivity of the free product $ \ast_{j\in J}\tau_j$ of positive distributions of traffics $\tau_j$. We will reason as for the construction of the free product of $^*$-probability spaces [@NS Lecture 6] using a structure result for $(\ast_{j\in J}\mcal A_j, \ast_{j\in J}\tau_j)$. Before that, we shall first state in Proposition \[Equivalence Free product free independance\] a characterization of traffic independence whose statement is closer to the more familiar free-independence than Definition \[Def:Freeness\]. A *bigraph* is a finite, connected and bipartite graph $g$, endowed with a bipartition of its vertices into two sets $V_{\mathit{in}}(g)$ and $V_{\mathit{co}}(g)$, whose elements we call *inputs* and *connectors*. For all $L,n{\geqslant}0$, a *$(L,n)$-bigraph operation* is the data of a bigraph with exactly $L$ ordered inputs, together with an ordering of its edges around each input, and the data of an ordered subset $V_{\mathit{out}}(g)$ consisting in $n$ elements of the connectors $V_{\mathit{co}}(g)$ that we call output, and such that all connectors which are not an output have a degree larger than $2$. For all integer $(L,n)\ge 0$ and tuple $\mbf d=(d_1\etc d_L)\in (\N^*)^L,$ we denote by $\mcal G_{L,\mbf d}^{(n)}$ (if $L\not=0$ and by $\mcal G_0^{(n)}$ otherwise) the set of $(L,n)$-bigraph operations such that the $k$-th inputs have a degree $d_k.$ A $(L,n)$-bigraph operation with degrees $d_1,\ldots, d_L$ is to be thought as an operation that accepts $L$ objects with types $d_1,\ldots, d_L,$ and produces a new object of type $n$. In particular, a $(L,n)$-bigraph operation can produce a new $n$-graph monomial from $L$ different graphs monomials in the following way. See figure \[Fig:01\] ![Left: a bigraph with four inputs (squares), five connectors (circles) and three outputs (with links exiting the box). The ordering of adjacent connectors is noticed for an input. Substituing in an obvious way the inputs of the bigraph by graph monomials one get the rightmost $3$-graph monomial.[]{data-label="Fig:01"}](Fig01.pdf){width="100mm"} Let us consider $L$ graph monomials $t_1,\ldots, t_L$ on some set of labels $\mcal A$, with respective number of outputs given by $\mbf d\in (\N^*)^L$ (that is $t_\ell \in \C\mcal G^{(d_\ell)}\larac$), and a $(L,n)$-bigraph operation $g\in \mcal G_{L,\mbf d}^{(n)}.$ Replacing the $\ell$-th input of $g$ and its adjacent ordered edges $(e_1,\ldots,e_{d_{\ell}})$ by the graph of $t_\ell$, identifying for each $k\in[L],$ the connector attached to $e_k$ with the $k$-th output of $t_\ell$, yields a connected graph. We denote by $T_g(t_1\otimes \ldots \otimes t_L)\in \C\mcal G^{(n)}\larac$ the $n$-graph monomial whose labelling is induced by those of $t_1,\ldots, t_L$, and with outputs given by the outputs of $g$. 
We then define by linear extension $$\begin{aligned} T_g: \mbb C\mcal G^{(d_1)}\larac \otimes \dots \otimes \mbb C\mcal G^{(d_L)}\larac &\longrightarrow \mbb C\mcal G^{(n)}\larac \\ t_1\otimes\ldots\otimes t_L&\longmapsto T_g(t_1\otimes\ldots\otimes t_L).\end{aligned}$$ One can show that the set of bigraph operations defines an operad with a compatible action on $n$-graph polynomials. It acts on the tensors of order $n$ in a slight generalization of Example 2.6 of [@Jones] We do not use this fact here. Let $J$ be an index set and $(\mcal A_j)_{j\in J}$ be a family ensembles, and let $g\in \mcal G_{L,\mbf d}^{(n)}$ be a bigraph operation with $\mbf d=(d_1,\ldots,d_L)$. A sequence $t_1\in \C\mcal G^{(d_1)} \langle \mcal A_{j_1}\rangle,\ldots, t_L\in \C\mcal G ^{(d_L)}\langle \mcal A_{j_L}\rangle$ of graph polynomials is *$g$-alternated* if for all $p,q\in [L]$ such that the $p$-th and the $q$-th inputs are neighbors of a same connector, then $j_p\neq j_q.$ If $t_1,\ldots, t_L$ are graph monomials alternated along $g\in \mcal G_{L,\mbf d}^{(0)}$, then $T_{g}(t_1\otimes \ldots \otimes t_L)$ is a graph monomial with graph of colored components $g,$ and its colored components are $t_1,\ldots,t_L,$ (considered as graphs with no outputs). For any $n\ge 1,$ $\pi\in \mcal P_n$ and any $n$-graph monomial $g$ made of a finite graph with outputs $(v_1,\ldots, v_n)$, let us denote by $g^\pi$ the quotient graph obtained by identifying vertices $(v_1,\ldots, v_n)$ according to $\pi$, with outputs given by the images of $(v_1,\ldots,v_n)$ by the quotient map, so that edges of $g^\pi$ can be identified with the one of $g.$ This defines a linear map $\Delta_\pi: \mbb C\mcal G^{(n)} \larac\to \mbb C\mcal G^{(n)} \larac$ such that $\Delta_\pi (g) = g^\pi$ for $n$-graph monomials $g$. Denote respectively by $1_n$ and $0_n$ the partition of $n$ made of $n$ singletons and of $1$ single block. Let $\phi:\C\mcal G^{(1)} \larac\to\C$ be a linear form. A graph polynomial $t\in \C\mcal G^{(n)}\larac$ is called *reduced* with respect to $\phi$, if $n\ge 2$ and for any $\pi \in \mathcal P(n)\setminus \{1_n\},$ $\Delta_\pi(t)=0$ or $n=1$ and $\phi(t)=0.$ For any $t\in \mbb C G^{(n)}\larac,$ one has $\Delta_{1_n}(t)=t$. If $n=2$, then $\Delta_{0_2}(t)=\Delta (t),$ where we recall that $\Delta$ is the graph operation with one vertex and one edge (and so $t$ is reduced if and only if $\Delta(t)=0$.) Let $\mbf A_N$ be a family of matrices of size $N$ by $N$ and let $g$ be a $n$-graph polynomial. Recall that we defined in Example \[Ex:MtSpaces\] a tensor $Z_g(\mbf A_N) = (B_{\mbf i})_{\mbf i\in [N]^n}$ of order $n$. Then $T_g(\mbf A_N)$ is reduced if and only if $B_{{\mbf i}}=0$ as soon as two indices of $\mbf i$ are equal. In particular for $n=2$, a matrix is reduced whenever its diagonal is null. \[Equivalence Free product free independance\]Let $(\mcal A, \tau)$ be a space of traffics. Denote by $\phi:\mbb C\mcal G^{(1)}\langle \mcal A \rangle \to \mbb C$ the linear map given by $\phi(g) = \tau(\tilde g)$ where $\tilde g \in \mbb C\mcal G^{(0)}\langle \mcal A \rangle $ is obtained by forgetting the position of the output of $g$. Say that a graph polynomial is reduced when it is reduced with respect to $\phi$. 
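To connect the last remark with the criterion that follows, here is a brief numerical aside (ours, not from the paper; the helper names `gue` and `eta` are illustrative). For a GUE matrix $A$, the centered matrix $A-\Delta(A)$ has zero diagonal and is therefore reduced in the sense above, and the quantity $\eta(A)=\Phi(A^*\circ A)-|\Phi(A)|^2$ appearing in Lemma \[lemmafreeind\] below is of order $1/N$, in line with the remark in the introduction that traffic independence and free independence coincide for limits of unitarily invariant matrices.

```python
import numpy as np

rng = np.random.default_rng(2)

def gue(N):
    H = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    return (H + H.conj().T) / np.sqrt(2 * N)

def eta(A):
    """eta(A) = Phi(A^* o A) - |Phi(A)|^2, with o the Hadamard product and
    Phi = (1/N) Tr; for a single matrix this is the empirical variance of
    its diagonal entries."""
    N = A.shape[0]
    return (np.trace(A.conj().T * A) / N).real - abs(np.trace(A) / N) ** 2

for N in (100, 400, 1600):
    A = gue(N)
    B = A - np.diag(np.diag(A))         # B = A - Delta(A): zero diagonal, hence reduced (n = 2)
    assert np.allclose(np.diag(B), 0)
    print(N, round(N * eta(A), 3))      # N * eta(A) stays of order one, so eta(A) = O(1/N)
```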
Then, the $\mcal G$-subalgebras $(\mcal A_j)_{j\in J}$ are traffic independent if and only if for any bigraph $g\in \mcal G_{L, \mbf d}^{(0)}$ and any $g$-alternated sequence $(t_1,\ldots, t_L)$ of reduced graph polynomials in $ \mbb C \mcal G\larac$, one has $\tau\big[T_g(t_1\otimes\ldots\otimes t_L)\big]=0.$ This characterization shows that traffic independence is stronger than free independence in the following situation, which has to be compared with [@Male2011 Corollary 3.5] and will be satisfied in Section \[Sec:CanonicalExtension\]. Let $(\mcal A, \tau)$ be a space of traffics. Denote $\Phi$ the associated trace on $\mcal A$ and $\eta(a)=\Phi\big(\Delta(a^*)\Delta(a)\big)-|\Phi(a)|^2=\Phi (a^*\circ a )-|\Phi(a)|^2$. If for any $a\in \mcal A,$ $\eta(a)=0$ then any family of $\mcal G$-subalgebras that is traffic independent is free independent in the $^*$-probability space $(\mcal A, \Phi)$. Similarly, for any subalgebra $\mcal B$ of $\mcal A$, if $\eta(a)=0$ for all $a\in \mcal B$, then the free independence of families in $(\mcal B,\Phi_{|\mcal B})$ is a consequence of traffic independence in $(\mcal A, \tau)$.\[lemmafreeind\] The two statement are proved in a similar way, and we only prove the first one. Since the trace defined on $\mcal A$ is a state, the assumption implies, for every $a\in \mcal A$, that $\Delta(a)$ has the same $^*$-distribution as $\Phi(a)\mbb I$. Let $(\mcal A_j)_{j\in J}$ be traffic-independent $\mcal G$-subalgebras and let $a_1,\ldots,a_n \in \mcal A,$ such that for any $k\in [n],$ $\Phi(a_k) =0$ and $a_k\in \mcal A_{j_k},$ with $j_{k}\not= j_{k+1},$ whenever $k<n.$ Then, $$\Phi((a_1-\Delta(a_1)) \ldots(a_{n}-\Delta(a_{n})) )=\Phi((a_1-\Phi(a_1))\ldots (a_{n}-\Phi(a_{n})) )=\Phi(a_1\ldots a_n ).$$ Let $g$ be the bigraph with no outputs, $n$ inputs and $n-1$ connectors whose graph is a segment, with inputs vertices (alternating with the connectors) labeled consecutively from one side to the other, from $1$ to $n$. Then one has $$\Phi((a_1-\Delta(a_1)) \ldots(a_{n}-\Delta(a_{n})) )=\tau\Big(T_g( (a_1-\Delta(a_1)) \otimes\ldots\otimes(a_{n}-\Delta(a_{n})) )\Big),$$ and $(a_1-\Delta(a_1)) \otimes\ldots\otimes(a_{n}-\Delta(a_{n}))$ is a $g$-alternated reduced tensor, so that by Proposition \[Equivalence Free product free independance\] we get $\Phi(a_1\ldots a_n)=0$ as desired. Recall Example \[Ex:Graphs2\] of the $\mcal G$-algebra $\mcal A$ of locally finite rooted graphs on a set of vertices $\mcal V$. It is a classical fact that an element $A$ of $\mcal A$ which is both deterministic and unimodular is vertex-transitive (there exists automorphisms exchanging each pair of vertices). This property implies that the diagonal $\Delta(A) = \big(A(v,v) \one_{v=w}\big)_{v,w\in \mcal V}$ of $A$ is constant, and so one can apply the lemma. This gives a new proof of the free independence of the spectral distributions of the free product of infinite deterministic graphs of [@Accardi2007], thanks to [@Male2011 Proposition 7.2]. Proof of Proposition 3.11 {#seq:Equivalence Free product free independance} ------------------------- We start by stating two preliminary lemmas. \[Mobius\] Let $m$ a graph monomial with output set $\mcal O.$ For each partition $\pi$ of $\mcal O$, denote by $m^\pi$ the graph operation obtained by identifying the outputs of $m$ that belong to a same block of $\pi$. Let us denote by $\mrm{Mob}$ the Möbius function for the poset of partitions of $\mcal O$ and $0_{\mcal O}$ the partition of $\mcal O$ made of singletons. 
Then, $p(m)= \sum_{\pi\in \mcal P(\mcal O)} \mrm{Mob}(0_{\mcal O}, \pi)m^\pi$ is a reduced graph polynomial, and every reduced graph polynomial $m$ satisfies $m=p(m)$. For any $\nu \in \mcal P(\mcal O),$ $$\begin{aligned} \Delta_\nu \left(\sum_{\pi\in \mcal P(\mcal O)} \mrm{Mob}(0_{\mcal O}, \pi)m^\pi\right)=\sum_{\mu\in \mcal P(\mcal O)} \left(\sum_{\pi\in \mcal P(\mcal O):\pi\vee\nu=\mu } \mrm{Mob}(0_{\mcal O},\pi)\right) m^\mu. \end{aligned}$$ Now, for any $\mu\in \mcal P(\mcal O),$ $$\begin{aligned} \sum_{\pi\in \mcal P(\mcal O):\pi\vee\nu=\mu }\mrm{Mob}(0_{\mcal O},\pi)&=\sum_{\pi\le \mu}\sum_{\pi\vee \nu\le \sigma\le \mu}\mrm{Mob}(\sigma,\mu) \mrm{Mob}(0_{\mcal O},\pi)\\ &=\sum_{\nu\le \sigma\le \mu}\mrm{Mob}(\sigma,\mu) \left(\sum_{\pi\le \sigma}\mrm{Mob}(0_{\mcal O},\pi)\right)\\ &=\sum_{\nu\le \sigma\le \mu}\mrm{Mob}(\sigma,\mu)\delta_{\sigma,0_{\mcal O}}=\delta_{\nu,0_{\mcal O}}\,\mrm{Mob}(0_{\mcal O},\mu).\end{aligned}$$ Hence $\Delta_\nu\big(p(m)\big)=\delta_{\nu,0_{\mcal O}}\,p(m)$, so $p(m)$ is reduced; conversely, if $m$ is reduced then $m^\pi=0$ for every $\pi\neq 0_{\mcal O}$, so that $p(m)=\mrm{Mob}(0_{\mcal O},0_{\mcal O})\,m=m$.

\[Lem:DSD\] For any linear form $\phi$ on $\mbb C\mcal G^{(1)}\langle \bigsqcup_{j\in J} \mcal A_j \rangle$ sending the graph with no edges to one, and calling *reduced* the graph polynomials reduced with respect to $\phi$, one has $$\mbb C\mcal G^{(n)}\Big\langle \bigsqcup_{j\in J} \mcal A_j \Big\rangle \;=\; \mbb C\,\mbb I \;\oplus\; \bigoplus_{g,\gamma}\ \underbrace{\Span\Big\{ T_g(t_1\otimes\ldots\otimes t_L)\ :\ t_\ell\in \mbb C\mcal G^{(d_\ell)}\langle \mcal A_{\gamma(\ell)}\rangle \text{ reduced},\ (t_1,\ldots,t_L)\ g\text{-alternated}\Big\}}_{:=W_{g,\gamma}},$$ where the sum runs over bigraph operations $g\in \mcal G_{L,\mbf d}^{(n)}$ and colorings $\gamma:[L]\to J$ of their inputs.

Let us denote by $\mcal E$ the vector space spanned by the right hand side. For any integers $k{\geqslant}1, s\ge 0,$ let us consider the vector space $\mcal E_k^s$ spanned by the family of graph polynomials $T_g(\mbf t)$, where $g\in \mcal G_{L,\mbf d}^{(n)}$ has less than $k$ vertices and $\mbf t=t_1\otimes\ldots\otimes t_{L}$ is such that the number of $\ell\in [L]$ with $t_\ell$ reduced is at least $\max\{0,L-s\}.$ Let us set $\mcal E_k=\Span(\mcal E_k^s)_{s\ge 0}$ and prove by induction that for any $k\ge 1,$ $\mcal E_k\subset \mcal E$, which shall conclude the proof. To begin with, note that $\mcal E_1= \mbb C \mbb I \subset \mcal E.$ Let us assume the claim for $k\in\N$ and prove by induction on $s\ge 0$ that $\mcal E^s_{k+1}\subset \mcal E.$ First, $$\mcal E_{k+1}^0\subset \bigcup_{\substack{g\in \mcal G^{(n)}_{L,\mbf d}\\ \mbf m \text{ a } g\text{-alternated sequence}\\\text{of graph monomials}}}\{T_g(\mbf t): \mbf t \in \Lambda_{\mbf m}\text{ and } \mbf t\text{ is reduced}\}\subset\mcal E.$$ Let us assume that $\mcal E_{k+1}^s\subset \mcal E$ and consider $g$ a bigraph with $k+1$ connectors and $\mbf t=t_1\otimes\ldots \otimes t_L\in \Lambda_{\mbf m}$ a $g$-alternated tensor with $\max\{L-s-1,0\}$ reduced components. Let us assume that $t_1\in \mbb C\mcal G^{(d_1)}\langle \mcal A_j\rangle $ is not reduced, for some $j\in J, d_1\ge 1$. If $d_1=1,$ then $T_g\left((t_1-\tau_j(t_1))\otimes t_2\otimes \ldots\otimes t_L \right)\in \mcal E_{k+1}^s$ and $\tau_j(t_1)\, T_g\left(1\otimes t_2\otimes \ldots\otimes t_L\right)\in \mcal E_{k},$ so that $T_g(\mbf t)\in \mcal E$. If $d_1\ge 2,$ according to Lemma \[Mobius\], we can write $t_1=r+\sum_{i=1}^m x_i$, where $r\in \mbb C\mcal G^{(d_1)}\langle \mcal A_j\rangle$ is a reduced graph polynomial and $x_1,\ldots, x_m\in\mbb C\mcal G^{(d_1)} \langle\mcal A_j\rangle$ are graph monomials having two outputs equal to the same vertex.
Then, for any $i\in [m],$ $T_g\left(x_i\otimes t_2\ldots\otimes t_L\right)\in \mcal E_{k}$ and $T_g(r\otimes t_2\otimes \ldots \otimes t_L)\in \mcal E_{k+1}^s,$ so that $T_g(\mbf t)\in \mcal E.$ To prove Proposition \[Equivalence Free product free independance\] it is then sufficient to prove that if $(\mcal A_j)_{j\in J}$ are traffic independent in $(\mcal A, \tau)$ then for each bi-graph operation $g\in \mcal G^{(2)}_{L,\mbf d}$ and each $g$-alternated sequence $\mbf t$ of reduced graph polynomials one has $\tau\big[T_g(\mbf t)\big]=0$. Indeed, it implies that this property is true for the free product $\ast_{j\in J}\tau_j$ and so the reciprocal assertion follows from Lemma \[Lem:DSD\] since it implies that $\tau$ coincides with $\ast_{j\in J}\tau_j$. The formal difficulty is that we shall use Definition \[Def:TraffCum\] of the injective trace in order to prove that $\tau\big[T_g(\mbf t)\big]$ vanishes. Formula is only valid for graph monomials as the summation involves the vertex set of the graph, but $h$ is not a monomial because of *reduceness* of $\mbf t$. We fix from now a sequence $\mbf m=(m_1,\ldots, m_L)$ of graph monomials with respective vertex sets $V_1\etc V_L$, and define $$\Lambda_{\mbf m} = \Span\big\{ m_1^{\pi_1}\otimes \ldots \otimes m_L^{\pi_L}\, \big| \, \forall k\in [L],\pi_k\in \mcal P(V_{k})\big\}.$$ We claim that it suffices to prove that $\tau[h]=0$ for any $h = T_g(\mbf t)$, where $g$ is a bigraph operation and $\mbf t$ is $g$-alternated, reduced and belongs to $\Lambda_{\mbf m}$. Indeed, let $\mbf t=(t_1,\dots, t_L)$ be an arbitrary sequence of $g$-alternated, reduced graph polynomials and denote $\mbf t = \sum_i \alpha_i \mbf x_i$ where the $\mbf x_i$, are the sequences of graph monomials of $\mbf t$. Let $p$ be the projection of Lemma \[Mobius\] and denote $p(\mbf t) = \big( p(t_1) \etc p(t_L) \big)$. Then $t = p(t) = \sum_i \alpha_i p(\mbf x_i)$ where $p(\mbf x_i) \in \Lambda_{\mbf m_i}$ is reduced for each $i$. The interest in fixing the monomial $\mbf m$ is that each *monomial* $\mbf x \in \Lambda_\mbf m$ satisfies that $T_g(\mbf x) = T_g(\mbf m)^{\nu_{g,\mbf m}(\mbf x)}$ for a unique partition $\nu_{g,\mbf m}(\mbf x)$ of the set $V$ of vertices of $T_g(\mbf m)$. Denoting $\nu=\nu_{g,\mbf m}(\mbf x) $, Formula yields $\tau[ T_g(\mbf x) ] = \tau[T_g(\mbf m)^\nu] = \sum_{\pi{\geqslant}\nu \in \mcal P(V)} \tau^0[ T_g(\mbf m)^\pi]$, where we recall that $\pi{\geqslant}\nu$ means that $\pi$ refines the identifications made by $\nu$. We then define the linear form defined for *monomials* by $$\alpha_\pi: \mbf x\mapsto \one( \pi{\geqslant}\nu_{g,\mbf m}(\mbf x))$$ on $\Lambda_\mbf m$. By linearity of $\tau^0$, for any $\mbf t$ reduced and $g$-alternated graph polynomial we get $$\tau[h] = \tau[ T_g(\mbf t)] = \sum_{\pi \in \mcal P( V)} \tau^0\big[ \alpha_\pi(\mbf t) .T_g(\mbf m)^\pi\big].$$ Moreover, one can write $T_g(\mbf m)^\pi= T_{G_\pi}\big( F_\pi \big)$, where $G_\pi$ is the bigraph of colored components of $T_g(\mbf m)^\pi$ with an arbitrary choice of ordering of the inputs and of edges around inputs, and $F_\pi$ is the sequence of colored component of $T_g(\mbf m)^\pi$ (with ordering fixed by the previous choice). Both $G_\pi$ and $F_\pi$ depend implicitly on $g$ and $\mbf m$. \[Rk:Dist\] Making this operation $g\mapsto G_\pi$ can increase the number of connectors, so that $G_\pi$ is not a quotient of $g$. 
However, it cannot increase the number of colored components: the set $V_{\mathit{in}}(G_\pi)$ of inputs of $G_\pi$ is a quotient of $V_{\mathit{in}}(g)$. The mapping $p_\pi: V_{\mathit{in}}(g)\to V_{\mathit{in}}(G_\pi)$ induced by the quotient of $T_g(\mbf m)$ by $\pi$ respects the bipartition, is 1-Lipschitz for the graph distances, and is surjective. We set $F_\pi(\mbf t) = \alpha_\pi(\mbf t) . T_g(\mbf m)^\pi$ and obtain \[Eq:BlaBla1\] $$\tau[h] \;=\; \sum_{\pi\in \mcal P( V)} \tau^0\big[ F_\pi(\mbf t)\big].$$ Recall now that traffic independence means that for any bigraph $G$ and any $G$-alternated sequence of graph monomials $\mbf x = (x_\ell)_{\ell=1}^L$, one has \[Eq:BlaBla2\] $$\tau^0\big[ T_{G}(\mbf x)\big]\;=\;\one\big( G \mrm{ \ is \ a \ tree}\big)\, \prod_{\ell=1}^L \tau^0[ x_\ell],$$ where $x_\ell$ is considered as a $0$-graph monomial. This equality is then valid for $\tau^0[ T_{G}(\mbf t)]$ when $\mbf t$ is a $G$-alternated sequence of graph polynomials. We need the following lemma, whose proof is postponed to the end of the section.

\[arbres reduits\] Let $g\in \mcal G_{L,\mbf d}^{(0)}$ and let $\mbf m$ be a $g$-alternated sequence of graph monomials. Let $\mbf t\in \Lambda_{\mbf m}$ be a sequence of reduced graph polynomials and $\pi$ a partition of the vertex set $V$ of $T_g(\mbf m)$. With $G_\pi$ and $\alpha_\pi$ defined as above, if $G_\pi$ is a tree, then $G_\pi=g$ or $ \alpha_\pi(\mbf t) =0.$

Assuming for the moment Lemma \[arbres reduits\], we deduce from the two displays above that $$\tau[h] =\one( g\mrm{ \ is \ a \ tree \ }) \times \sum_{ \substack{ \pi \in \mcal P( V) \\ \mrm{s.t.} \ G_\pi=g}} \tau^0\Big[ T_g\big( F_\pi(\mbf t) \big) \Big],$$ which is zero if $g$ is not a tree. From now on, we assume that $g$ is a tree. Note that the partitions $\pi$ such that $G_\pi=g$ are those given by first considering a sequence $(\pi_1\etc \pi_L)\in \prod_{k=1}^L\mcal P(V_k)$ of partitions of the vertex sets of the monomials of $\mbf m$ such that $\pi_{|V_k}=0_{V_k}$ for all $k\in [L]$ (i.e. which do not identify the outputs of the $t_k$’s), and forming the smallest partition $\bar{\pi}$ of $V$ containing them. Moreover, for such $\pi =\bar \pi $ one has $F_\pi = (m_1^{\pi_1}, \dots , m_L^{\pi_L})$, and the linear map $\alpha_\pi$ factorizes as $\alpha_\pi(\mbf t) =\prod_\ell \alpha_{\pi_\ell}(t_\ell)$. By the factorization above we can therefore rewrite $$\tau[h] \;=\; \sum_{(\pi_1,\ldots,\pi_L)}\ \prod_{\ell=1}^L \tau^0\big[ \alpha_{\pi_\ell}(t_\ell)\, m_\ell^{\pi_\ell}\big],$$ where the sum is over the sequences of partitions described above and where in the r.h.s. we see $m_\ell^{\pi_\ell}$ as a $0$-graph monomial. By definition of $\alpha_\pi(\mbf t)$ and since the graph polynomials $t_\ell$ are reduced, we get $$\tau[h] \;=\; \prod_{\ell=1}^L\ \sum_{\pi_\ell\in \mcal P(V_\ell)} \tau^0\big[ t_\ell^{\pi_\ell}\big] \;=\; \prod_{\ell=1}^L \tau[ t_\ell],$$ where $t_\ell$ is also seen as a $0$-graph monomial. Since $g$ is a tree, it possesses a leaf, for which the reducedness condition implies $\tau[t_\ell]=0$. Hence we get $\tau[h]=0$ as desired.

The rest of this section is devoted to the proof of Lemma \[arbres reduits\].

\[chemins reduits\] Let $g\in \mcal G_{L,\mbf d}^{(0)}$ and let $\mbf m=(m_1\etc m_L)$ be a $g$-alternated sequence of graph monomials. Let $\mbf t\in \Lambda_{\mbf m}$ be a sequence of reduced graph polynomials and $\pi$ a partition of the vertex set $V$ of $T_g(\mbf m)$. Assume $G_\pi$ is a tree and that there exists a simple path $\omega$ on $g$ visiting exactly $R{\geqslant}3$ inputs of $g$, whose source and destination are identified in $G_\pi$. More precisely, denote the inputs that $\omega$ visits in consecutive order $v_{i_1},\ldots,v_{i_R}$, with $i_1,\ldots,i_R\in [L]$ (pairwise distinct by simplicity of $\omega$). Recall that $p_\pi$ is the map on the inputs of $g$ induced by the quotient by $\pi$, and assume $p_\pi(v_{i_1})=p_\pi(v_{i_R})$.
Then $\alpha_\pi(\mbf t)=0$ and we can allow $v_{i_1}=v_{i_R}$ without changing this conclusion. As $G_\pi$ is a tree it has two leaves and, since $\mbf m$ is $g$-alternated, there exists $1<r<R$ such that when $\omega$ enters and exit neighboring connectors $c^-$ and $c^+$ of $v_r$ that are identified. More precisely, let $\pi_{-,+}$ be the finest partition of the outputs of $t_{i_r}$ including $\{c^-,c^+\}$. Then, $\alpha_\pi(\mbf t) =\alpha_\pi (t_1\otimes \ldots\otimes \Delta_{\pi_{-,+}}t_{i_r}\otimes\ldots \otimes t_L)$. But $\Delta_{\pi_{-,+}}t_{i_r}=0$ since $t_{i_r}$ is reduced. Assume that $g$ is not a tree. Since $G_\pi$ is a tree, there exist two distinct inputs $\underline{v}, \overline{v}$ of $g$ with $p_\pi(\underline{v})=p_\pi(\overline{v}),$ so as $\mbf t$ is $g$-alternated, there exists a path $\omega$ in $g$ going through at least three inputs satisfying the condition of Lemma \[chemins reduits\], hence $\alpha_\pi(\mbf t)=0.$ \[Rk:chemins reduits\] The conclusion of Lemma \[chemins reduits\] remains valid when we relax the condition that $\mbf t$ is reduced and only assume that $\Delta_{\pi_{-,+}}t_{i_r}=0$ at each input $v_{i_r}$ with $1<r<R$. Moreover if exactly one graph polynomial $t_{i_r}$ does not satisfy $\Delta_{\pi_{-,+}}t_{i_r}=0$ then $\pi$ must identify the entering and exiting output of $t_{i_r}$ in order for $\alpha_\pi(\mbf t)$ not to vanish. Proof of Theorem 1.2 -------------------- For each $j\in J$ let $\tau_j$ be a distribution of traffics. It remains to prove that the free product $\tau:=\star_{j\in J}\tau_j$ is also a distribution of traffics, showing that it satisfies the positivity condition . Therefor, we reason as in [@NS Chapter 6] where is stated a structural result for the free product of unital algebras with identification of units [@NS Formula (6.2)]. Let us consider for $n{\geqslant}1$ a bigraph $g\in \mcal G_{L,\mbf d}^{(n)}$ and a $g$-alternated sequence $\mbf m=(m_1,\ldots, m_L)$ of graph monomials such that for any $k\in [L],$ $m_k\in \mbb C \mcal G^{(d_k)}\langle \mcal A_{\gamma(k)}\rangle$, where $\gamma(k)\in J$ and $d_k\in \{1,2,\dots\}$ for traffic independent $\mcal G$-subalgebras $\mcal A_j$, $j\in J$. Let us denote by $Aut_{g,\mbf m}$ the set of automorphisms $\sigma$ of the bigraph $g$ i.e. the set of maps from the vertex set of $g$ to itself preserving - the adjacency, the bipartition and the set of outputs of $g$, - the coloring of $g$ given by $\mbf m$, i.e. $\gamma\circ \sigma=\gamma$ on the inputs. It does not necessarily respect the ordering of the edges around inputs. Every $\sigma \in Aut_{g,\mbf m}$ and $\mbf t\in \Lambda_{\mbf m}$ induces a new $g$-alternated sequence of graph polynomials $\mbf t_\sigma=t_{1,\sigma}\otimes\cdots \otimes t_{L,\sigma}$: we define $t_{i,\sigma}$ to be $t_{\sigma(i)}$ with a reordering of labels of inputs and ordering of neighbor connectors in such a way that $T_g(\mbf t)=T_{g}(\mbf t_\sigma)$. We have the property $(\mbf t_{\sigma_1})_{\sigma_2}=\mbf t_{\sigma_2 \sigma_1}$ for all $\sigma_1, \sigma_2 \in Aut_{g,\mbf m}$. \[Lem:Structure\]Let us fix $n{\geqslant}1$, $g$ be a bigraph in $ \mcal G_{L,\mbf d}^{(n)}$ and $\mbf m$ be a $g$-alternated sequence of graph monomials. Let $g'$ be a bigraph in $ \mcal G_{L', \mbf d'}^{(n)}$ and $\mbf m'$ be a $g$-alternated sequence of graph monomials. Let $\mbf t\in \Lambda_{\mbf m}$ and $\mbf t'\in \Lambda_{\mbf m'}$ be reduced. 1. 
If $\tau[T_g(\mbf t)|T_{g'}(\mbf t')]\neq 0$, then $g$ is a tree, and $T_{g'}(\mbf t')=T_g(\mbf t'')$ for some reduced graph polynomials $\mbf t''\in \Lambda_{\mbf m''}$ and $\mbf m''$ some $g$-alternated sequence of graph monomials which have the same coloring as $\mbf m$. In particular the spaces $W_{g, \gamma}$ of Lemma \[Lem:DSD\] are orthogonal. 2. Assume that $g$ is a tree and that $\mbf m$ and $\mbf m'$ have the same coloring. Then we have $$\tau[T_g(\mbf t)|T_{g}(\mbf t')]= \sum_{\sigma\in Aut_{g,\mbf m}}\tau[t_1|t'_{1,\sigma}] \cdots \tau[t_L|t'_{L,\sigma}].$$ Assuming this lemma for the moment, let us deduce Theorem \[Th:PosFreeProd\]. By the same reasoning as in the previous section, it suffices to prove that $\tau[ h|h^*]{\geqslant}0$ for each finite combination $h = \sum_{i} \beta_i T_{g_i}(\mbf t^i)$ for bigraphs $g_i$ and sequences of reduced polynomials $\mbf t^i \in \Lambda_{\mbf m^i}$ where the $\mbf m^i$’s are fixed sequences of $g_i$-alternated monomials. Moreover the previous lemma allows to restrict our consideration to the case where all $g_i$ are equal to one particular tree $g$ and all $\mbf m^i$ have the same coloring (after a modification of the $\mbf t^i$s and $\mbf m^i$s if necessary). In this case, we denote by $Aut_{g,\mbf m}$ the sets $Aut_{g_i,\mbf m^i}$ (which are all equal), and we have $$\begin{aligned} \tau\Big[\sum_{i}\beta_iT_{g}(\mbf t^i) \big |\sum_j \bar{\beta}_jT_{g}(\mbf t^j)^*\Big] &=\sum_{ij}\beta_i\bar{\beta}_j\tau\big[T_{g}(\mbf t^i)\big|T_{g}({\mbf t^j}^*)\big]\\ &= \frac{1}{\sharp Aut_{g,\mbf m}}\sum_{\substack{ ij \\ \sigma\in Aut_{g,\mbf m}}}\beta_i\bar{\beta}_j \tau\big[T_{g}(\mbf t_\sigma^i)\big|T_{g}({\mbf t_\sigma^j}^*)\big],\\ &= \frac{1}{\sharp Aut_{g,\mbf m}}\sum_{\substack{ ij \\ \sigma,\sigma'\in Aut_{g,\mbf m}}}\beta_i\bar{\beta}_j \tau \big[t_{1,\sigma}^i \big|{t^j_{1,\sigma'\circ \sigma}}^*\big] \cdots \tau\big[ t_{L,\sigma}^i \big| {t^j_{L,\sigma'\circ \sigma}}^*\big],\\ &= \frac{1}{\sharp Aut_{g,\mbf m}}\sum_{\substack{ ij \\ \sigma,\sigma'\in Aut_{g,\mbf m}}}\beta_i\bar{\beta}_j\tau \big[t_{1,\sigma}^i \big| {t^j_{1,\sigma'}}^*\big] \cdots \tau\big[ t_{L,\sigma}^i \big| {t^j_{L,\sigma'}}^*\big].\\\end{aligned}$$ We can now see that the r.h.s. is nonnegative. First, the matrices $\big(\tau[t_{\ell,\sigma}^i|{t^j_{\ell,\sigma'}}^*]\big)_{i,j,\sigma,\sigma'}$, $\ell=1\etc L$ are positive definite since $\tau$ is positive on each $\mcal G$-subalgebra $\mcal A_j$. Moreover, their entry wise product (also called Schur product) $\big(\tau[t_{1,\sigma}^i|{t^j_{1,\sigma'}}^*] \cdots \tau[ t_{L,\sigma}^i|{t^j_{L,\sigma'}}^*]\big)_{i,j,\sigma,\sigma'}$ is also positive ([@NS Lemma 6.11]). This yields by consequence the positivity of the free product. We will prove this lemma by induction on the number of inputs $g$. If this number is $0$, this means that $g$ consists in a single connector (and the sequence of outputs of $g$ is constant and equal to this single connector) and then $\tau\big[T_g(\mbf t)|T_{g'}(\mbf t)\big]$ is proportional to $\tau\big[ T_{\tilde g'}(\mbf t)\big]$ where $\tilde g'$ is the bigraph with no outputs obtained by identifying the outputs of $g'$. Hence, by Proposition \[Equivalence Free product free independance\], $\tau\big[T_g(\mbf t)|T_{g'}(\mbf t)\big]$ is zero if $g'$ as one input or more and so the lemma is true. The hypotheses is that the number of colored components of $T_g(\mbf t)$ is larger than $1$ and that the lemma is true for all inferior numbers of colored components of $T_g(\mbf t)$. 
Let us assume that $\tau[T_g(\mbf t)|T_{g'}(\mbf t')]\neq 0$. Remark that we have $T_g(\mbf m)|T_{g'}(\mbf m')=T_{g|g'}(\mbf m\otimes\mbf m')$ for the bigraph $g|g'$ with no outputs and $L+L'$ inputs which consists in collapsing the outputs of $g$ with those of $g'$. Then, denoting by $V$ the vertex set of $T_{g|g'}(\mbf m\otimes\mbf m')$, for all $\pi\in \mcal P(V)$ we define the linear map $\alpha_\pi $, the bigraph $G_\pi$ and the sequence of monomials $F_\pi$ as in the previous section, namely - $T_{g|g'}(\mbf m \otimes \mbf m')^\pi = T_{G_\pi}(F_\pi)$ where $G_\pi$ is the bigraph of colored components of $T_{g|g'}(\mbf m \otimes \mbf m') $, - for each sequence of monomials $\mbf x \in \Lambda_m, \mbf x'\in \Lambda_{m'}$, one has $\alpha_\pi(\mbf x, \mbf x') = \one(\pi{\geqslant}\nu)$ where $T_{g|g'}(\mbf x, \mbf x') = T_{g|g'}(\mbf m, \mbf m')^\nu$. Denoting $F_\pi(\mbf t, \mbf t')= \alpha_\pi(\mbf t, \mbf t') \times T_{G_\pi}(F_\pi)$ we get $$\tau\big[ T_g(\mbf t) | T_{g'}(\mbf t')\big] = \sum_{\pi \in \mcal P(V)} \tau^0\Big[ T_{G_\pi}\big(F_\pi(\mbf t, \mbf t') \big)\Big],$$ and by definition of traffic independence the terms in the sum are possibly nonzero only if $G_\pi$ is a tree. We shall use the following Lemma. \[arbres reduits deux2\] With notations as above, let $\pi\in \mcal P(V)$ such that $G_\pi$ is a tree and $\alpha_\pi(\mbf t, \mbf t')\neq 0$. Then 1. $g$ and $g'$ are trees. 2. $\pi$ respects the decomposition of $T_{g}(\mbf m)$ and $T_{g'}(\mbf m')$ into colored components, in the sense that the image by $\pi$ of two vertices that belong to different colored components of $T_{g}(\mbf m)$ (resp. $T_{g'}(\mbf m')$) belong to different colored components of $T_{g|g'}(\mbf m\otimes \mbf m')^\pi$. 3. two different connectors of $g$ (resp. $g'$) are not identified by $\pi$ in $T_{g|g'}(\mbf m\otimes \mbf m')^\pi$. By Lemma \[chemins reduits\] and Remark \[Rk:chemins reduits\], if $G_\pi$ is a tree and there is a simple path $\omega$ on $g$ (resp. $g'$) visiting exactly in $V_{\mathit{in}}(g)$ (resp. $V_{\mathit{in}}(g')$) the vertices $v_{i_1},\ldots,v_{i_l},$ in consecutive order, with $l\ge 3$ and $i_1,\ldots,i_l\in [L]$, such that $p_\pi(v_{i_1})$ and $p_\pi(v_{i_l})$ belong to a same colored component of $T_{g|g'}(\mbf m \otimes \mbf m')^\pi$, then $\alpha_\pi(\mbf t, \mbf t')=0$. We can allow $v_{i_1}=v_{i_l}$ without changing the conclusion. Indeed, since the path $\omega$ is in $g$ the graph polynomials corresponding to the inputs it visits are reduced. If $g$ is not a tree, then there exists again a simple loop $\omega$ in $g$ from a colored component to itself which visit another colored component, which then satisfies the condition of Lemma \[chemins reduits\], and so $\alpha_\pi(\mbf t, \mbf t')= 0.$ Now, let us take two vertices $\underline v$ and $\overline v$ in different colored components of $T_g(\mbf m)$. If their images by $\pi$ belong to a same colored component, there exists a path $\omega$ in $g$ from $\underline{v}$ to $\overline{v}$ going through at least three inputs, satisfying the condition of Lemma \[chemins reduits\], so that $\alpha_\pi(\mbf t, \mbf t')= 0.$ Finally, let us take two different connectors $\underline{c}$ and $\overline{c}$ which are identified by $\pi$. Here again there exists a path $\omega$ in $g$ from $\underline{c}$ to $\overline{c}$ going through at least three inputs, satisfying the condition of Lemma \[chemins reduits\], yielding the same conclusion. We can deduce the same properties for $g'$ by interchanging $g$ and $g'$. 
We then can assume that $g$ and $g'$ are trees. Moreover, we know from Proposition \[Equivalence Free product free independance\] that $\tau\big[ T_g(\mbf t) | T_{g'}(\mbf t')\big]$ vanishes if $(\mbf m \otimes \mbf m') $ is $(g|g')$-alternated and reduced. Hence we can assume that there is a $k$ in $\{1\etc n\}$ such that the color of one particular neighbor input $v$ next to the $k$-th output of $T_g(\mbf m)$ is the same that the color of some neighbor input $v'$ next to the $k$-th output of $T_{g'}(\mbf m')$. Without loss of generality we assume that $v$ and $v'$ are neighbors of the first output of $T_g(\mbf m)$ and $T_{g'}(\mbf m')$, corresponding to the graph monomials $m_1$ and $m_1'$ respectively. We denote by $c_1 \etc c_m$ the connectors around $v$ in $T_g$ and by $s_1\etc s_m$ the connected components of $T_g$ when $v$ is removed, in such a way that $c_i$ belongs to $s_i$ for each $i=1\etc m$, with $c_i$ considered as an additional output. Some of these bigraphs have a single output (which is the corresponding $c_i$) and we assume that those bigraphs are $c_1 \etc c_p$ for $0{\leqslant}p {\leqslant}m$. Similarly we define $c'_i$, $s'_i$, $i=1\etc m'$ and $p'$ by considering $T_{g'}$ instead of $T_{g}$. Moreover, given $\sigma\in \Sigma(p)$ a permutation of $\{1\etc p'\}$, we denote by $t'_{1,\sigma}$ the graph polynomial obtained from $t'_1$ by permuting the outputs attached to the connectors $c_1\etc c_p$ according to $\sigma$. \[Lem:RecPos\] With the above notations, up to a reordering of the $s'_i$ for $i>p'$ and a reordering of its outputs different from $c_i$, $$\tau\big[ T_g(\mbf t) | T_g(\mbf t') \big] = \one(p=p') \one(m=m') \sum_{\sigma \in \Sigma(p)} \tau\big[ t_1 | t_{1,\sigma}\big] \times \prod_{i=1}^p \tau\big[ s_i | s'_{\sigma(i)}\big] \times \prod_{i=p+1}^m \tau\big[ s_i | s'_{i}\big].$$ Assuming momentarily this Lemma, let us finish the proof. Applying the induction hypothesis, we get that if $T_{g'}(\mbf t')$ cannot be written $T_g(\mbf t'')$ for some reduced graph polynomials $\mbf t''\in \Lambda_{\mbf m''}$ and $\mbf m''$ some $g$-alternated sequence of graph monomials which has the same coloring as $\mbf m$, then we would have a vanishing expression. For the second part of the lemma, let us assume that $g=g'$ and that $\mbf m'$ has the same coloring as $\mbf m$. Remark that an element of $Aut_{g,\mbf m}$ is nothing else than a bijection from $\{c_1,\ldots,c_m\}$ to itself, and an automorphism of each $s_i$. Using the induction hypothesis on each $s_i$, we see that a non-vanishing term is such that $s_i$ and $s'_{i,\sigma}$ for $i{\leqslant}p$, resp. $s_i$ and $s'_{i}$ for $i> p$, are of the same type of tree of colored component, and there exists an automorphism for each of these couples, which allows to define a global automorphism $\sigma \in Aut_{g,\mbf m}$. Hence by recurrence we get as expected $$\begin{aligned} \tau[T_g(\mbf t)|T_{g'}(\mbf t')]&= \sum_{\sigma\in Aut_{g,\mbf t}}\tau[t_1|t'_{1,\sigma}] \cdots \tau[t_L|t'_{L,\sigma}].\end{aligned}$$ Let $\pi \in \mcal P(V)$ such that $\alpha_\pi(\mbf t, \mbf t'\neq 0)$. Assume that $\pi$ identifies a vertex of $v_1$ of $s_i$ with a vertex of $v_2$ of $s'_{i'}$ for $i>p$ and $i'>p'$. Consider a simple path in $T_{g|g'}(\mbf m \otimes \mbf m')$ from (the colored component of) $v_1$ to (the one of) $v_2$, consisting in a sub-path in $T_g$ from $v_1$ to $c_1$, going through $t_1$ and $t'_1$, and finishing with a subpath from $c_2$ to $v_2$. 
By Lemma \[chemins reduits\] and the last sentence of Remark \[Rk:chemins reduits\] we get that $\pi$ identifies $c_i$ and $c'_{i'}$. By Lemma \[arbres reduits deux2\], $\pi$ does not identify $c_i$ and $c'_{i'}$ with an other connector of $T_{g|g'}$, and so up to a reordering we can assume $i'=i$. Assume now that an output $o$ of $s_i$ is not attached to any output of $s'_i$ in $T_{g|g'}$. Consider a simple path from (the colored component of) $v_1$ to (the one of) $v_2$, consisting in a sub-path in $T_g$ from $v_1$ to $o$ (which then does not visit $c_i$), continuing with a sub-path in $T_g'$ going $v_2$. While entering in $T_g'$ the path goes through a subgraph $s'_j$ for $j\neq i$ and goes through $t'_1$. Necessarily $\pi$ must identify a vertex of $s_i$ with a vertex of $s'_{j}$ (since otherwise $\alpha_\pi(\mbf t, \mbf t'\neq 0)$ by Lemma \[chemins reduits\] and the last sentence of Remark \[Rk:chemins reduits\]). But by the previous paragraph this implies that $c_i$ and $c'_j$ are identified. This is absurd since $c'_i$ and $c'_j$ cannot be identified by $\pi$. Hence, since the argument of this paragraph remains true by exchanging the roles of $s_i$ and $s'_i$, the outputs of $s_i$ and $s'_i$ are in correspondence and up to a reordering we can assume that the $k$-th output of $s_i$ is attached with the $k$-th output of $s'_i$. Moreover, since different connectors of $T_g$ (resp. $T_{g'}$) cannot be identified by $\pi$, there exists an injective partial function $\sigma_\pi$ from $\{1\etc p\}$ to $\{1\etc p'\}$ such that for each $i{\leqslant}p, i'{\leqslant}p'$, $\pi$ identifies $c_i$ and $c'_{i'}$ if and only if $i\in Dom(\sigma)$ and $\sigma(i)=i'$. With the same argument as a the beginning of the proof, vertices of $s_i$ for $i{\leqslant}p$ can only be identified with vertices $s'_{i'}$ for $i'{\leqslant}p'$ with $i \in Dom(\sigma)$ and $\sigma(i)=i'$. At last, a direct use of Lemma \[chemins reduits\] implies that a vertex of $t_1$ (resp. $t'_1$) cannot be identified with a vertex of any $c'_{i'}$ for any $i'=1\etc m'$ (resp. $c_{i}$ for any $i=1\etc m$). With a small abuse of notations, for each partial functions $\sigma: \{1\etc p\} \to \{1\etc p'\}$ denote by $t'_{1,\sigma}$ the graph polynomial $t'_1$ where outputs are ordered in such a way in $t_1 | t'_{1,\sigma}$ the outputs $c_i$ and $c'_{i'}$ for $i<p$ and $i'<p'$ are identified if $\sigma(i)=i'$, and are not identified if $i\notin Dom(\sigma)$ of $i' \notin Im(\sigma)$. The conclusion so far is that \_[s.t. \_=]{}\^0 &=&(m-p=m’-p’) \_[i=1m-p]{}\ & & \_ \_\_, where $\mbf 1$ stands for the graph with no edges. But $\tau[s_i|\mbf 1]=\tau[\mbf 1|s'_{i'}]=0$, thanks to Proposition \[Equivalence Free product free independance\]. Hence this sum is zero if $p\neq p'$ and otherwise this is equivalent to consider $\sigma$ as a permutation. This yields the Lemma and conclude the proof. Canonical extension of non-commutative spaces into traffic spaces {#Sec:CanonicalExtension} ================================================================= This section is dedicated to the proof of Theorem \[MainTh\]. We define, for all $^*$-probability space, a space of traffics $(\mcal {B},\tau)$ such that $\mcal A\subset \mcal {B}$ as $*$-algebras and such that the trace induced by $\tau$ restricted to $\mcal A$ is $\Phi$. The first step is to give this construction at the algebraic level, which is the aim of Section \[subsec:Def\], where we also prove the second item of Theorem \[MainTh\]. 
Then we prove in Section \[subsec:conv\] a version of Theorem \[Th:Matrices\] which yields the first item of Theorem \[MainTh\]. In Section \[subsec:MainTh\], we prove the positivity of the distribution of traffics we introduce now. Definition and properties {#subsec:Def} ------------------------- Let $\mathcal A$ be an algebra. We denote by $\mcal G(\mathcal{A})$ the $\mcal G$-algebra $\mathbb{C}\mathcal{G}^{(2)}\langle \mathcal{A} \rangle$, quotiented by the following relations: for all $g\in \mcal G_n$, $a_1,\ldots, a_n\in \mcal A$ and $P$ non-commutative polynomial in $n$ variables, we have $$Z_g(\cdot \overset{P(a_1,\ldots,a_k)}{\longleftarrow} \cdot\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)=Z_g(P(\cdot \overset{a_1}{\leftarrow} \cdot,\ldots,\cdot \overset{a_k}{\leftarrow} \cdot)\otimes \cdot \overset{a_{k+1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)\label{Firstrelation}$$ which allows to consider the algebra homomorphism $V:\mathcal{A} \to \mcal G(\mathcal{A})$ given by $a\mapsto (\cdot\overset{a}{\leftarrow} \cdot)$. The algebra $\mcal G(\mathcal{A})$ is the free $\mcal G$-algebra generated by the algebra $\mathcal{A}$ in the following sense. Let $\mathcal{B}$ be a $\mcal G$-algebra and $f:\mathcal{A}\to \mathcal{B}$ a algebra homomorphism. There exists a unique $\mcal G$-algebra homomorphism $f':\mcal G(\mathcal{A})\to \mathcal{B}$ such that $f=f'\circ V$. As a consequence, the algebra homomorphism $V:\mathcal{A} \to \mcal G(\mathcal{A})$ is injective. The existence is given by the following definition of $f'$ on $\mcal G(\mathcal{A})$: $$f'(Z_g(\cdot \overset{a_1}{\longleftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot))=Z_g(f(a_1)\otimes\ldots \otimes f(a_n))$$ for all $a_1,\ldots,a_n\in \mathcal{A}$; which obviously respects the relation defining $\ast_{j\in J} \mathcal{A}_j$. The uniqueness follows from the fact that $f'$ is uniquely determined on $ V(\mathcal{A})$ (indeed, $f'(a)$ must be equal to $f(b)$ whenever $a=V(b)$) and that $V(\mathcal{A})$ generates $\mcal G(\mathcal{A})$ as a $\mcal G$-algebra. For example, the free $\mcal G$-algebra generated by the variables $\mbf x=(x_i)_i\in J$ and $\mbf x^*=(x_i^*)_i\in J$ is the $\mcal G$-algebra $\mbb C \mcal G\lara$ of graphs whose edges are labelled by $\mbf x$ and $\mbf x^*$. For all $0$-graph monomial $T = (G,\gamma,\mbf v)$ indexed by $\mcal A$, we say that $T$ is a cactus whenever each edge belongs exactly to one cycle of $T$. See figure \[Fig:03\]. Equivalently, for all vertices $v_1$ and $v_2$ of $G$, the two following equivalent statement are true: - the minimum number of edges whose removal disconnect $v_1$ and $v_2$ is exactly $2$; - the maximum number of edge-disjoint paths from $v_1$ to $v_2$ is exactly $2$. ![A cactus whose cycle are oriented.[]{data-label="Fig:03"}](Fig03.pdf){width="35mm"} \[orientation\]For all non-commutative probability space $(\mathcal{A},\Phi)$, we define the linear functional $\tau_\Phi:\mathbb{C}\mathcal{G}^{(0)}\langle \mathcal{A}\rangle\to \C$ by it injective trace $\tau^0_\Phi: \mathbb{C}\mathcal{G}^{(0)}\langle \mathcal{A}\rangle \to \C$ given by: 1. $\tau^0_\Phi[C] = \kappa(a_1 \etc a_n)$ if $C$ is an oriented cycle with edges labelled by $a_1 \etc a_n$, where $\kappa$ denotes the free cumulants defined in . 2. $\tau_\Phi^0[T] = \prod_{C \in T} \tau_\Phi^0[C]$, if $T$ is a cactus with oriented cycles, where the product is over all cycles of $T$. 3. 
$\tau_\Phi^0[T] = 0,$ otherwise. The linear form $\Psi:\mathbb{C}\mathcal{G}^{(2)}\langle \mathcal{A}\rangle\to \C$ given by $\Psi(t)=\tau_\Phi(\tilde{\Delta}(t))$, where $\tilde{\Delta}(t)$ is $\Delta(t)$ where the input and output are forgotten, is invariant under the relations  defining $\mcal G(A)$, and consequently yields to an algebraic space of traffics $(\mcal G(A) , \tau_\Phi)$. Furthermore, we have $\Phi=\Psi\circ V$, where $\varphi$ is the canonical injective algebra homomorphism from $\mathcal{A}$ to $\mcal G(A)$.\[interestingproposition\] \[Rk:Motiv\]Before proving the Proposition, let us underline the motivation to introduce the distribution of traffics of Definition \[orientation\]: it gives a parallel between the relation moments-free cumulants of formula and the relation trace-injective trace of . Let $t$ be the graph consisting in a simple cycle labeled $a_1 \times \dots \times a_K$ along the orientation. One has $$\Phi( a_1 \times \dots \times a_K) = \tau [ t ] = \sum_{\pi\in \mcal P(V)} \tau [ t^\pi ] = \sum_{ \substack{ \pi\in \mcal P(V) \\ \mrm{s.t. \ } t^\pi \mrm{ \ cactus} } } \prod_{c \mrm{ \ cycle \ of \ } t^\pi}\tau^0 [ c ].$$ It can be seen that the partitions $\pi$ of the set of vertices for which $t^\pi$ is a cactus are the Kreweras dual of the non crossing partitions $\nu$ of the edges of the cycle, see figure \[Fig:04\]. The cycles of the cactus correspond to the blocks of $\nu$, so that getting from the above r.h.s. is a matter of change of variables. ![Left: A cycle of length nine, a non crossing partition $\nu$ of its edges (grey) and the Kreweras complement $\pi$ (dotted) of $\nu$. Right: the quotient of the cycle by $\pi$.[]{data-label="Fig:04"}](Fig04.pdf){width="60mm"} Proving that $\Psi$ is invariant under the relations  is equivalent to the prove the following: for all $0$-graph monomial $g$, with a slight abuse of notation, denoting $Z_g(\cdot \overset{a_{1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)$ the element of $\mathbb{C}\mathcal{G}^{(0)}\langle \mathcal{A}\rangle$ obtained by replacing the corresponding edges of $g$ by the $a_i$’s, one has 1. $\tau_{\Phi}(Z_g(\cdot \overset{a_1+\alpha a_2}{\longleftarrow} \cdot\otimes \cdot \overset{a_{3}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot))=\tau_{\Phi}(Z_g(\cdot \overset{a_1 }{\leftarrow} \cdot\otimes \cdot \overset{a_{3}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot))+\alpha \tau_{\Phi}(Z_g(\cdot \overset{ a_2}{\leftarrow} \cdot\otimes \cdot \overset{a_{3}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)),$ 2. $\tau_{\Phi}(Z_g(\cdot \overset{1_{\mathcal{A}}}{\longleftarrow} \cdot\otimes \cdot \overset{a_{1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot))=\tau_{\Phi}(Z_g(\cdot \otimes \cdot \overset{a_{1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)),$ 3. $\tau_{\Phi}(Z_g(\cdot \overset{a_1a_2}{\longleftarrow} \cdot\otimes \cdot \overset{a_{3}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot))=\tau_{\Phi}(Z_{g^+}(\cdot \overset{a_1}{\leftarrow} \cdot\otimes \cdot \overset{a_2}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)),$ where $g^+$ is the graph $g$ where the first edge is replaced by two consecutive edges. The first property is an immediate consequence of the linearity of the cumulants. 
Let us prove the others properties at the level of the injective trace. \[Lem:gplus\]Let $a_1,\ldots, a_n\in (\mathcal{A},\Phi)$ and $\pi$ be a partition of the vertices $V$ of $g$. We denote by $v_0$ the new vertex in $g^+$. We have 1. $\tau_{\Phi}^0(Z_g(\cdot \overset{1_\mathcal{A}}{\leftarrow} \cdot \otimes \cdot \overset{a_3}{\leftarrow} \cdot \otimes \ldots\otimes \cdot \overset{a_n}{\leftarrow} \cdot ))=\tau_{\Phi}^0(Z_g(\cdot \otimes \cdot \overset{a_3}{\leftarrow} \cdot \otimes \ldots\otimes \cdot \overset{a_n}{\leftarrow} \cdot ))$ if the goal and the source of $\cdot \overset{1_\mathcal{A}}{\leftarrow} \cdot$ are identified in $g$, and $\tau_{\Phi}^0(Z_g(\cdot \overset{1_\mathcal{A}}{\leftarrow} \cdot \otimes \cdot \overset{a_3}{\leftarrow} \cdot \otimes \ldots\otimes \cdot \overset{a_n}{\leftarrow} \cdot ))=0$ if not; 2. $\tau_{\Phi}^0(Z_g(\cdot \overset{a_1a_2}{\leftarrow} \cdot \otimes \ldots\otimes \cdot \overset{a_n}{\leftarrow} \cdot )^\pi)=\sum_{\substack{\sigma\in \mathcal{P}(V\cup\{v_0\})\\\sigma\setminus \{v_0\}=\pi}}\tau_{\Phi}^0(Z_{g^+}(\cdot \overset{a_1}{\leftarrow} \cdot \otimes \cdot \overset{a_2}{\leftarrow} \cdot \otimes \ldots \cdot \overset{a_n}{\leftarrow} \cdot )^\sigma).$ This implies the proposition, since it gives $$\begin{aligned} \tau_{\Phi}(Z_g(\cdot \overset{1_{\mathcal{A}}}{\longleftarrow} \cdot\otimes \cdot \overset{a_{1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot))=&\sum_{\pi\in \mcal P(V)}\tau_{\Phi}^0(Z_g(\cdot \overset{1_{\mathcal{A}}}{\longleftarrow} \cdot\otimes \cdot \overset{a_{1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^\pi)\\ =&\sum_{\substack{\pi\in \mcal P(V)\\ \text{the goal and the source of }\cdot \overset{1_{\mathcal{A}}}{\longleftarrow} \cdot\\\text{are identified in }g}}\tau_{\Phi}^0(Z_g(\cdot \otimes \cdot \overset{a_1}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^\pi)\\ =&\sum_{\pi\in \mcal P(V)}\tau_{\Phi}^0(Z_g( \cdot\otimes \cdot \overset{a_{1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)^\pi)\\ =&\tau_{\Phi}(Z_g(\cdot \otimes \cdot \overset{a_{1}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot),\end{aligned}$$ and $$\begin{aligned} \tau_{\Phi}(Z_g(\cdot \overset{a_1a_2}{\leftarrow} \cdot \otimes \ldots\otimes \cdot \overset{a_n}{\leftarrow} \cdot ))&=\sum_{\pi \in \mathcal{P}(V)} \tau_{\Phi}^0(Z_g(\cdot \overset{a_1a_2}{\leftarrow} \cdot \otimes \ldots\otimes \cdot \overset{a_n}{\leftarrow} \cdot )^\pi)\\ &=\sum_{\pi \in \mathcal{P}(V)}\sum_{\substack{\sigma\in \mathcal{P}(V\cup\{v_0\})\\\sigma\setminus \{v_0\}=\pi}}\tau_{\Phi}^0(Z_{g^+}(\cdot \overset{a_1}{\leftarrow} \cdot \otimes \cdot \overset{a_2}{\leftarrow} \cdot \otimes \ldots \cdot \overset{a_n}{\leftarrow} \cdot )^\sigma)\\ &=\sum_{\sigma \in \mathcal{P}(V)}\tau_{\Phi}^0(Z_{g^+}(\cdot \overset{a_1}{\leftarrow} \cdot \otimes \cdot \overset{a_2}{\leftarrow} \cdot \otimes \ldots \cdot \overset{a_n}{\leftarrow} \cdot )^\sigma)\\ &=\tau_{\Phi}(Z_{g^+}(\cdot \overset{a_1}{\leftarrow} \cdot \otimes \cdot \overset{a_2}{\leftarrow} \cdot \otimes \ldots \cdot \overset{a_n}{\leftarrow} \cdot )).\end{aligned}$$ Now, let $a\in \mcal A$. We can write $$\Psi(V(a))=\Psi(\cdot \overset{a}{\leftarrow} \cdot )=\tau_{\Phi}(\circlearrowleft_{a}))=\tau^0_{\Phi}(\circlearrowleft_{a})=\kappa(a)=\Phi(a)$$ which finishes the proof of the proposition. 
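For instance, as a quick illustration of Definition \[orientation\] and of the identity $\Phi=\Psi\circ V$ just established (a check in the smallest case, following Remark \[Rk:Motiv\]): for $a_1,a_2\in \mcal A$, the graph $\tilde{\Delta}\big(V(a_1)V(a_2)\big)$ is an oriented cycle of length two on a two-element vertex set. Its quotient by the singleton partition is the oriented $2$-cycle itself, and its quotient by the one-block partition is a single vertex carrying two loops; both are cacti with oriented cycles, so that $$\Psi\big(V(a_1)V(a_2)\big)\;=\;\tau_\Phi\Big[\tilde{\Delta}\big(V(a_1)V(a_2)\big)\Big]\;=\;\kappa(a_1,a_2)+\kappa(a_1)\,\kappa(a_2)\;=\;\Phi(a_1a_2),$$ in agreement with the moment-cumulant relation for a word of length two.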
The first item follows from the fact that a cumulant involving $1_{\mcal A}$ is equal to $0$, except $\kappa(1_{\mcal A})=1$ (see [@NS Proposition 11.15]). As a consequence, for any cactus $Z_g(\cdot \overset{1_{\mathcal{A}}}{\longleftarrow} \cdot\otimes \cdot \overset{a_{3}}{\leftarrow} \cdot\otimes\ldots\otimes \cdot\overset{a_n}{\leftarrow} \cdot)$ on which $\tau_{\Phi}^0$ is not vanishing, we can just remove the little cycle $\circlearrowleft_{1_{\mcal A}}$ of the cactus and it yields exactly $Z_g(\cdot \otimes \cdot \overset{a_3}{\leftarrow} \cdot \otimes \ldots\otimes \cdot \overset{a_n}{\leftarrow} \cdot )$. Let us prove the second item. We denote $Z_g(\cdot \overset{a_1a_2}{\leftarrow} \cdot \otimes \ldots\otimes \cdot \overset{a_n}{\leftarrow} \cdot )$ by $T$ and $Z_{g^+}(\cdot \overset{a_1}{\leftarrow} \cdot \otimes \cdot \overset{a_2}{\leftarrow} \cdot \otimes \ldots \cdot \overset{a_n}{\leftarrow} \cdot )$ by $T_+$. If $T^\pi$ is not a cactus, then the two side of the equation are equal to zero. Assume that $T^\pi$ is a cactus. We denote by $c$ the cycle of $\cdot \overset{a_1a_2}{\leftarrow} \cdot $ in $T^{\pi}$ and $a_1a_2,b_2,\ldots, b_{k-1}$ the elements of the cycle $c$ starting at $a_1a_2$. Let us consider a partition $\sigma\in \mathcal{P}(V\cup\{v_0\})$ such that $T_+^\sigma$ is a cactus and $\pi=\sigma\setminus \{v_0\}$. Then, we have two cases: 1. $v_0$ is of degree $2$ (this occurs for only one partition $\sigma$ given by $\pi\cup\{\{v_0\}\}$). Denoting by $c^+$ the cycle of $T_+^\sigma$ which contains $v_0$, we have $c^+=(a_2,b_2,\ldots, b_{k-1},a_1)$. The cycles of $T^\pi$ and $T_+^\sigma$ different from $c$ and $c^+$ are the same, and by consequence $$\tau^0_\Phi[T^\pi]/k(a_1a_2,b_2,\ldots, b_{k-1})=\tau^0_\Phi[T_+^\sigma]/k(a_2,b_2,\ldots, b_{k-1},a_1).$$ 2. $v_0$ is of degree $>2$. We denote by $c_1$ the cycle of $\cdot \overset{a_2}{\leftarrow} \cdot $ in $T_+^\sigma$, $c_2$ the cycle of $\cdot \overset{a_1}{\leftarrow} \cdot $ in $T_+^\sigma$ (of course, $c_1$ and $c_2$ are not equal, because if it is the case, $T^\pi$ would be disconnected, which is not possible). The cycles of $T^\pi$ different from $c$ are exactly the cycles of $T_+^\sigma$ different from $c_1$ or $c_2$. We have $c_1=(a_2,b_2,\ldots , b_l)$ and $c_2=(b_{l+1},\ldots , b_k,a_1)$ with $l$ the place of the vertex which is identified with $v_0$ in $T_+^\sigma$. By definition, we have $$\tau^0_\Phi[T^\pi]/k(a_1a_2,b_2,\ldots, b_{k-1})=\tau^0_\Phi[T_+^\sigma]/(k(a_2,b_2,\ldots , b_l)\cdot k(b_{l+1},\ldots , b_k,a_1)).$$ Conversely, for each vertex $v_1$ in the cycle $c$, we are in the above situation for $\sigma=\pi_{|v_0\simeq v_1}$. 
Finally, using [@NS Theorem 11.12] for computing $k(a_1a_2,b_2,\ldots, b_{k-1})$, we can compute $$\begin{aligned} \tau^0_\Phi[T^\pi]=&\tau^0_\Phi[T^\pi]/k(a_1a_2,b_2,\ldots, b_{k-1})\cdot k(a_1a_2,b_2,\ldots, b_{k-1})\\ =&\tau^0_\Phi[T^\pi]/k(a_1a_2,b_2,\ldots, b_{k-1})\\ &\ \ \cdot \left(k(a_2,b_2,\ldots, b_{k-1},a_1)+\sum_{1{\leqslant}l {\leqslant}k}k(a_2,b_2,\ldots , b_l)\cdot k(b_{l+1},\ldots , b_k,a_1)\right)\\ =&\tau^0_\Phi[T_+^{\pi\cup\{\{v_0\}\}}]+\sum_{\substack{\sigma\in \mathcal{P}(V\cup\{v_0\})\setminus \{\pi\cup\{\{v_0\}\}\}\\\sigma\setminus \{v_0\}=\pi}}\tau^0_\Phi[T_+^{\sigma}]\\ =&\sum_{\substack{\sigma\in \mathcal{P}(V\cup\{v_0\})\\\sigma\setminus \{v_0\}=\pi}}\tau_{\Phi}^0(T_+^\sigma).\end{aligned}$$ The algebras $\C\langle \cdot \overset{a}{\leftarrow} \cdot:a\in \mathcal{A}\rangle$, $\C\langle \cdot \overset{a}{\rightarrow} \cdot:a\in \mathcal{A}\rangle$ and $\C\langle \ ^{\uparrow a}_{\cdot}\ :a\in \mathcal{A}\rangle$ are free in the sense of Voiculescu in $(\mathbb{C}\mathcal{G}^{(2)}\langle \mathcal{A}\rangle,\tau_\Phi)$, or equivalently the algebras $\C\langle a:a\in \mathcal{A}\rangle$, $\C\langle ^ta:a\in \mathcal{A}\rangle$ and $\C\langle deg(a):a\in \mathcal{A}\rangle$ are free in the sense of Voiculescu in $(\mcal G(\mcal A),\tau_\Phi)$).\[freeness\] We first prove that $\C\langle \cdot \overset{a}{\leftarrow} \cdot:a\in \mathcal{A}\rangle$ and $\C\langle \cdot \overset{a}{\rightarrow} \cdot:a\in \mathcal{A}\rangle$ are free. Let us consider $2n$ elements $c_1,\ldots, c_{2n}$ alternatively in $\C\langle \cdot \overset{a}{\leftarrow} \cdot:a\in \mathcal{A}\rangle$ and $\C\langle \cdot \overset{a}{\rightarrow} \cdot:a\in \mathcal{A}\rangle$ such that $\tau_\Phi(c_1)=\ldots=\tau_\Phi(c_{2n})=0$. We want to prove that $\tau_\Phi(\Delta(c_1\ldots c_{2n}))=0$. Using Proposition \[interestingproposition\] in order to regroup consecutive edges which are oriented in the same direction, we can assume that the $c_i's$ are written as $\cdot \overset{a_i}{\leftarrow} \cdot$ with $a_i\in \mathcal{A}$ such that $\Phi(a_i)=0$, and $c_{i}$ and $c_{i+1}$ not oriented in the same direction. Consider now a partition $\pi$ such that $\tau_\Phi^0(\Delta (c_1\ldots c_{2n})^\pi)\neq 0$. Then, take a leaf of the oriented cactus $\Delta (c_1\ldots c_{2n})^\pi$. This leaf is a cycle of only one edge, because if not, the cycle cannot be oriented, since two consecutive edges in $\Delta (c_1\ldots c_{2n})$ are not oriented in the same way. This produces a term $\tau_\Phi^0(\Delta(c_{i}))=0$ in the product $\tau_\Phi^0(\Delta (c_1\ldots c_{2n})^\pi)$, which leads at the end to a vanishing contribution. Finally, $\tau_\Phi(c_1\ldots c_{2n})=0$ and we have the freeness wanted. Now, let us prove that $\C\langle \ ^{\uparrow a}_{\cdot}\ :a\in \mathcal{A}\rangle$ is free from $\C\langle \cdot \overset{a}{\leftarrow} \cdot, \cdot \overset{a}{\rightarrow} \cdot:a\in \mathcal{A}\rangle$. By the same argument as above, we can consider that we have a cycle $\Delta(c_1\ldots c_{n})$ which consists in an alternating sequence of $c_i's$ written as $\cdot \overset{a_i}{\leftarrow} \cdot$ with $a_i\in \mathcal{A}$ such that $\Phi(a_i)=0$, $\cdot \overset{a_i}{\rightarrow} \cdot$ with $a_i\in \mathcal{A}$ such that $\Phi(a_i)=0$, and $c_i \in \C\langle \ ^{\uparrow a}_{\cdot}\ :a\in \mathcal{A}\rangle$ such that $\tau_\Phi(c_i)=0$. We want to prove that $\tau_\Phi(\Delta(c_1\ldots c_{n}))=0$. 
If there is no term $c_i\in \C\langle \ ^{\uparrow a}_{\cdot}\ :a\in \mathcal{A}\rangle$, we are in the case of the previous paragraph. Let us assume that there exists at least one such term, say $c_1$. By linearity, we can consider that the term $c_1\in \C\langle \ ^{\uparrow a}_{\cdot}\ :a\in \mathcal{A}\rangle$ is written as $^{\uparrow b_1}_{\cdot}\cdots ^{\uparrow b_k}_{\cdot}-\tau_\Phi(^{\uparrow b_1}_{\cdot}\cdots ^{\uparrow b_k}_{\cdot})$, where $^{\uparrow b_1}_{\cdot}\cdots ^{\uparrow b_k}_{\cdot}$ is some vertex input/output from which start $k$ edges labelled by $b_1,\ldots, b_k \in \mathcal{A}$. Let us prove that $\tau_\Phi(\Delta((^{\uparrow b_1}_{\cdot}\cdots ^{\uparrow b_k}_{\cdot})c_2\ldots c_{n}))$ and $\tau_\Phi(^{\uparrow b_1}_{\cdot}\cdots ^{\uparrow b_k}_{\cdot})\tau_\Phi(\Delta(c_2\ldots c_{n}))$ are equal, which implies by linearity that $\tau_\Phi(\Delta(c_1\ldots c_{n}))=0$. Decomposing into injective trace, we are left to prove that for all partition $\pi$ of the vertices of $\Delta((^{\uparrow b_1}_{\cdot}\cdots ^{\uparrow b_k}_{\cdot})c_2\ldots c_{n})$ which do not respect the blocks $(^{\uparrow b_1}_{\cdot}\cdots ^{\uparrow b_k}_{\cdot})$ and $\Delta (c_2\ldots c_{n})$, $\tau_\Phi^0(\Delta((^{\uparrow b_1}_{\cdot}\cdots ^{\uparrow b_k}_{\cdot})c_2\ldots c_{n})^\pi)=0$. The same argument as previous paragraph works again. If one of the vertex of $(^{\uparrow b_1}_{\cdot}\cdots ^{\uparrow b_k}_{\cdot})$ is identified by $\pi$ with one of the vertex of $\Delta (c_2\ldots c_{n})$, and $\Delta((^{\uparrow b_1}_{\cdot}\cdots ^{\uparrow b_k}_{\cdot})c_2\ldots c_{n})^\pi$ is a cactus there exists a cycle not oriented or a leaf labelled by one $a_i$, which leads to a vanishing contribution. We can now prove the second item of Theorem \[MainTh\]. \[prop:freeness\] Let $(\mathcal{A},\Phi)$ be a non-commutative probability space $(\mathcal{A},\Phi)$. We define $(\mathcal G(\mathcal{A}),\tau_\Phi)$ as in Proposition \[interestingproposition\]. Two families $\mbf a$ and $\mbf b\in \mcal A$ are freely independent in $\mcal A$ if and only if they are traffic independent in $(\mathcal G(\mathcal{A}),\tau_\Phi)$. Let $\mbf a$ and $\mbf b\in \mcal A$ be freely independent in $\mcal A$. The mixed cumulants of $\mbf a$ and $\mbf b$ vanish (see [@NS Theorem 1.16]). Definition \[orientation\] of $\tau_\Phi^0$ implies in particular that, for all $0$-graph monomial $T$ indexed by $\mcal A$, $\tau_\Phi^0(T)=0$ whenever the graph of color component of $T$ is not a tree and is equal to the product of $\tau_\Phi^0$ applied on each color component in the other case. In other words, $\mbf a$ and $\mbf b$ are traffic independent in $(\mathcal G(\mathcal{A}),\tau_\Phi)$. Now, let $\mbf a$ and $\mbf b\in \mcal A$ be traffic independent in $(\mathcal G(\mathcal{A}),\tau_\Phi)$. We denote by $\Psi$ the trace on $\mathcal G(\mathcal{A})$ associated to $\tau_\Phi$. Lemma \[lemmafreeind\] says that, in order to prove that $\mbf a$ and $\mbf b$ are freely independent, it suffices to prove that $\Psi(\Delta(a^*)\Delta(a))-|\Psi(a)|^2=0$ for our variables in $\mbf a$ and $\mbf b$. We compute $$\begin{gathered} \Psi(\Delta(a^*)\Delta(a))-|\Psi(a)|^2=\tau_\Phi\left(a\lefttorightarrow \righttoleftarrow a^*\right)-|\tau_\Phi(\circlearrowleft_a)|^2\\=\tau_\Phi^0\left(a\lefttorightarrow \righttoleftarrow a^*\right)-|\tau_\Phi^0(\circlearrowleft_a)|^2=\Phi(a) \Phi(a^*)-|\Phi(a)|^2=0, \end{gathered}$$ which allows us to conclude. 
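For instance, the length-two case of the freeness established above can be checked directly from Definition \[orientation\] (an illustration with the notation of the proof): for any $a,b\in \mathcal{A}$, the graph $\Delta\big((\cdot \overset{a}{\leftarrow} \cdot)\,(\cdot \overset{b}{\rightarrow} \cdot)\big)$ has two vertices joined by two edges pointing in the same direction, so the only quotient which is a cactus with oriented cycles is the single vertex carrying two loops, and $$\tau_\Phi\Big(\Delta\big((\cdot \overset{a}{\leftarrow} \cdot)\,(\cdot \overset{b}{\rightarrow} \cdot)\big)\Big)\;=\;\kappa(a)\,\kappa(b)\;=\;\Phi(a)\,\Phi(b),$$ which is the value predicted by the freeness of $\C\langle \cdot \overset{a}{\leftarrow} \cdot:a\in \mathcal{A}\rangle$ and $\C\langle \cdot \overset{a}{\rightarrow} \cdot:a\in \mathcal{A}\rangle$.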
Proof of the convergence of random matrices (Theorem 1.1) {#subsec:conv} --------------------------------------------------------- The purpose of this section is to prove Theorem \[Th:Matrices\] in the following more precise form. \[Th:MatricesBis\]For all $N{\geqslant}1$, let $\mbf X_N=(X_j)_{j\in J}$ be a family of random matrices in $\mrm{M}_N(\mbb C)$ satisfying the hypothesises of Theorem \[Th:Matrices\]. Let $\mbf x=(x_j)_{j\in J}$ be a family in some non-commutative probability space $(\mathcal{A},\Phi)$ which is the limit in $*$-distribution of $\mbf X_N$. Let consider the algebraic space of traffic $(\mathcal G (\mathcal{A}),\tau_\Phi)\supset (\mathcal{A},\Phi)$ given by Proposition \[interestingproposition\]. Then $\mbf X_N$ converges in distribution of traffics to $\mbf x$.\[TrafficAdjunction\] In other words, for all $0$-graph polynomials $t$ in $\mbb C \mcal G^{(0)}\langle J\times \{1,\ast\} \rangle$, we have $$\tau_{\mbf X_N}[t]\limN \tau_{\mbf x}[t],$$ where $\tau_{\mbf x}[t]$ is given by Definition \[orientation\] . Let us first derive some consequences of this theorem. The first one is obtained as an application of Corollary \[freeness\], and generalize a recent result of Mingo and Popa [@MingoPopa2014]. \[Cor:FreeTransp\]For all family of random matrices $\mbf X_N$ satisfying the previous theorem, the family $\mbf X_N$, the family of the transposes $^t\mbf X_N$ and the family of the degrees $deg(\mbf X_N)$ are asymptotically free . Let $\mbf X_N = (X_j)_{j\in J}, \mbf Y_M=(Y_k)_{k\in K}$ be independent unitarily invariant families of random matrices of size $N$ and $M$ respectively. Assume that $\mbf X_N$ and $\mbf Y_M$ converge in $^*$-distribution as $N,M$ goes to infinity. Then $\mbf X_N \otimes \mbf Y_N = (X_j \otimes Y_k)_{j\in J, k\in K}$, seen as an element of $\big( \mrm M_{NM}(\mbb C), \esp [ \frac 1{NM} \Tr ]\big)$, converges in distribution of traffics. Moreover, in the set of $0$-graph monomials $T$ such that there exists a cycle visiting each edge once, the limiting distribution of $\mbf X_N \otimes \mbf Y_N$ has the form of the distribution in Definition \[orientation\]. In particular the conclusion of Corollary \[Cor:FreeTransp\] for this family of matrices holds true, namely, $\mbf X_N \otimes \mbf Y_N $ is asymptotically free from $\mbf X_N^t \otimes \mbf Y_N^t$. We index the entries of a matrix $X\otimes Y \in \mrm M_{NM}(\mbb C)$ by pairs of indices $\mbf i=(i,i'), \mbf j=(j,j') \in [N]\times [M]$ with the convention that the entry $(X\otimes Y)_{\mbf i, \mbf j}$ is $X_{i,j}Y_{i',j'}$. Let $T=(V,E,j\times \epsilon \in \mbb C \mcal G^{(0)}\langle J \times K\times \{1,*\}\rangle$. Then one has\ & = & 1 [NM]{} \_[ ]{}\ & = & 1 [NM]{} \_[ ]{} Denote by $\Lambda_T$ the set of pairs $(\pi_1,\pi_2) \in \mcal P(V)^2$ such that if two elements belong to a same block of $\pi_i$ then they belong to different blocks of $\pi_j$, $i\neq j \in \{1,2\}$. Denote also by $\mrm{ker} \phi_i$ the partition of $V$ such that $v\sim_{\mrm{ker} \phi_i} w $ if and only if $\phi_i(v) = \phi_i(w)$. Then we get \^0\_[X\_N Y\_M]{}\[ T\] & = & 1 [NM]{} \_[(\_1,\_2) \_T]{} \_\ & &    \_\ & = & \_[(\_1,\_2) \_T]{} \_N\^0\_N\^0. By Theorem \[Th:Matrices\] we get that $\mbf X_N \otimes \mbf Y_M$ converges in distribution of traffics. Moreover, the partitions $\pi_1,\pi_2$ which contribute in the limit are those such that $T^{\pi_1}$ and $T^{\pi_2}$ are cacti with oriented cycles. Recall that cacti are characterized by the fact that each edge belong exactly to one cycle. 
But for a graph $T'$ and partition $\pi'$ of its vertices, the number of cycles an edge of $T'$ belongs to can only increase in the quotient graph $(T')^{\pi'}$. Hence we deduce that $\tau^0_{\mbf X_N \otimes \mbf Y_M}[ T] $ does not vanishes at infinity only if each edge of $T$ belongs at most to one cycle. In particular, if there is a cycle visiting each edge of $T$ once, then $T$ must be a cactus. Assume from now on, that $T$ is a cactus and let $\pi\in \mcal P(v)$ such that $T^\pi$ is a cactus. Then denoting by $\pi_c$ the restriction of $\pi$ on a cycle $c$ of $T$, $\pi$ is the smallest partition that contains the blocks of the $\pi_c$ for any cycle $c$ of $T$ (otherwise there will exist an edge belonging to more than a cycle of $T^\pi$). Moreover, given a pair $(\pi_1, \pi_2)\in \mcal P(V)^2$ such that $T^{\pi_1}$ and $T^{\pi_2}$ are cacti, one has $(\pi_1, \pi_2)\in \Lambda_T$ if and only if for each cycle $c$ of $T$ the partitions $\pi_{1,c},\pi_{2,c}$ restricted to $c$ are such that $(\pi_{1,c},\pi_{2,c}) \in \Lambda_c$. Since $\tau^0\big[T^{\pi_1}(\mbf X_N)\big]$ and $\tau^0\big[ T^{\pi_2}(\mbf Y_N) \big]$ are asymptotically multiplicative with respect to the cycles of $T^{\pi_1},T^{\pi_2}$, we get \^0\_[X\_N Y\_M]{}\[ T\] & =& \_[ c T ]{} \_[(\_1,\_2) \_c]{} \_N\^0\_N\^0+ o(1). This proves the first part of the result. For the asymptotic freeness of the ensemble $\mbf X_N\otimes \mbf Y_N$ with its transpose it suffices to remark that the $^*$-distribution of an ensemble depends only on the distribution of traffics of this ensemble restricted to $0$-graph monomials such that there is a cycle visiting each edge once. In the following three paragraphs, we respectively review some results about the free cumulants, some results about the Weingarten function, and the links between those two objects in large dimension. #### The Weingarten function. To prove Theorem \[TrafficAdjunction\], we have to integrate polynomials against the $\UN$-Haar measure. Expressions for these integrals appeared in [@Weingarten1978] and were first proven in [@Collins2004] and given in terms of a function on symmetric group called the Weingarten function. We recall here its definition and some of its properties. For any $n \in\N^*$ and any permutation $\sigma\in \mcal S_n$, let us set $$\Omega_{n,N}(\sigma)=N^{\# \sigma},$$ where $\# \sigma$ is the number of cycles of $\sigma$. When $n$ is fixed and $N\to \infty,$ $N^{-n}\Omega_{n,N}\to \delta_{\Id_n}$. For any pair of functions $f,g: \mcal S_n\to \C$ and $\pi \in \mcal S_n$, let us define the convolution product $$f\star g (\sigma)= \sum_{\pi\preccurlyeq \sigma}f(\pi)g(\pi^{-1}\sigma),$$ Hence, for $N$ large enough, $\Omega_{n,N}$ is invertible in the algebra of function on $\mcal S_n$ endowed with convolution as a product. We denote by $\Wg_{n,N}$ the unique function on $\mcal S_n$ such that $$\Wg_{n,N}*\Omega_{n,N}= \Omega_{n,N}*\Wg_{n,N}=\delta_{\Id_n}.$$ Then, [@Collins2004 Corollary 2.4] says that, for any indices $i_1,i'_1,j_1,j'_1\ldots,i_n,i'_n,j_n,j'_n\in \{1,\ldots, N\}$ and $U=\big(U(i,j)\big)_{i,j=1\etc N}$ a Haar distributed random matrix on $\UN$, $$\label{IntegrationHaar} \esp[U(i_1,j_1)\ldots U(i_n,j_n)\overline{U}(i'_1,j'_1)\ldots \overline{U}(i'_n,j'_n)]= \sum_{\substack{\alpha,\beta\in \mcal S_n\\ i_{\alpha (k)}=i'_k, j_{\beta(k)}=j'_k}}\Wg_{n,N}(\alpha\beta^{-1}).$$ #### Free cumulants and the Möbius function $\mu$. 
As explained in [@Biane1997], it is equivalent to consider lattices of non-crossing partitions or sets of permutations endowed with an appropriate distance. For our purposes, it is more suitable to define the free cumulants using sets of permutations. Let us endow $\mcal S_n$ with the metric $d$, by setting, for any $\alpha,\beta\in \mcal S_n$, $$d(\alpha,\beta)= n- \#(\beta\alpha^{-1}),$$ where $\#(\beta\alpha^{-1})$ is the number of cycles of $\beta\alpha^{-1}$. We endow the set $\mcal S_n$ with the partial order given by the relation $\sigma_1\preccurlyeq\sigma_2$ if $d(\Id_n,\sigma_1)+ d(\sigma_1,\sigma_2)=d(\Id_n,\sigma_2)$, or equivalently if $\sigma_1$ lies on a geodesic between $\Id_n$ and $\sigma_2$. Given a state $\Phi: \C\langle x_j,x_j^*\rangle_{j\in J}\to \C,$ we define the free cumulants $(\kappa_n)_{n\in \N}$ recursively on $\C\langle x_j,x_j^*\rangle_{j\in J}$ by the system of equations \[Def:FreeCum\] $$\Phi(y_1\cdots y_n)=\sum_{\sigma\preccurlyeq (1\cdots n)}\ \prod_{\substack{(c_1\ldots c_k)\\ \text{cycle of }\sigma}}\kappa(y_{c_1},\ldots, y_{c_k}),\qquad \forall\, y_1,\ldots, y_n\in \C\langle x_j,x_j^*\rangle_{j\in J}.$$ Let us fix $y_1,\ldots, y_n\in \C\langle x_j,x_j^*\rangle_{j\in J}$ and denote respectively by $\phi$ and $k$ the functions from $\mcal S_n$ to $\C$ given by $$\phi(\alpha)= \prod_{\substack{(c_1\ldots c_k)\\\text{ cycle of }\alpha}}\Phi(y_{c_1}\ldots y_{c_k})\ \ \text{and}\ \ k(\alpha)= \prod_{\substack{(c_1\ldots c_k)\\\text{ cycle of }\alpha}}\kappa(y_{c_1},\ldots ,y_{c_k}),$$ which are such that $\phi((1\cdots n))=\sum_{\pi\preccurlyeq (1\cdots n)} k(\pi).$ In fact, we have more generally the relation $$\phi(\sigma)=\sum_{\pi\preccurlyeq \sigma} k(\pi).$$ Note that $\phi=k\star \zeta,$ where $\zeta$ is identically equal to one. The identically one function $\zeta$ is invertible for the convolution $\star$ (see [@Biane1997]), and its inverse $\mu$ is called the Möbius function. It allows us to express the free cumulants in terms of the trace: $$\label{cumtrace}k=\phi\star \mu.$$

#### Asymptotics of the Weingarten function.

One can observe that, for any pair of functions $f,g: \mcal S_n\to \C$ and any $\sigma \in \mcal S_n$, $$\sum_{\pi\in \mcal S_n}N^{d(\Id_n,\sigma)-d(\Id_n,\pi)-d(\pi,\sigma)}f(\pi)g(\pi^{-1}\sigma)=f\star g (\sigma)+o(1).$$ Defining the convolution $\star_N$ as $$f\star_N g\,(\sigma)=N^{n}\Omega_{n,N}^{-1}\big((N^{-n}\Omega_{n,N} f)*(N^{-n}\Omega_{n,N} g)\big)(\sigma)=\sum_{\pi\in \mcal S_n}N^{d(\Id_n,\sigma)-d(\Id_n,\pi)-d(\pi,\sigma)}f(\pi)g(\pi^{-1}\sigma),$$ it follows that $\star$ is the limit of $\star_N$. Because $\Wg_{n,N}$ is the inverse of $\Omega_{n,N}$ for the convolution $\ast$, we have $(N^{2n}\Omega_{n,N}^{-1}\Wg_{n,N})\star_N \zeta =N^{-n}\Omega_{n,N},$ from which we deduce that $(N^{2n}\Omega_{n,N}^{-1}\Wg_{n,N})\star \zeta=\delta_{\Id_n}+o(1)$, or equivalently that $$N^{2n}\Omega_{n,N}^{-1}\Wg_{n,N}=\mu+o(1).$$ More generally, if $f,f_N: \mcal S_n\to \C$ are such that $f_N=f+o(1)$, then $$\label{WeingartenCumulantsLibres} N^{n}\Omega_{n,N}^{-1}((\Omega_{n,N}f_N)*\Wg_{n,N})=(f_N)\star_N (\Wg_{n,N})=f\star \mu +o(1) .$$ We can now prove Theorem \[Th:Matrices\]. Let $\mbf X_N = (X_j)_{j\in J}$ be a family of unitarily invariant random matrices which converges in $^*$-distribution, as $N$ goes to infinity, to a family $\mbf x = (x_j)_{j\in J}$ in some noncommutative probability space $(\mathcal A,\Phi)$. We extend $ (\mathcal{A},\Phi)$ to a space of traffics $(\mathcal G (\mathcal{A}),\tau_\Phi)$. We fix a $*$-test graph $T=(V,E,j\times \epsilon)\in \mbb C \mcal G^{(0)}\langle J\times \{1,\ast\} \rangle$ and prove that $\tau_{\mbf X_N}[T]$ converges to $\tau_{\mbf x}[T]$ as $N\to \infty$.
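Before doing so, note that the smallest case of the Weingarten asymptotics above can be checked by hand (a sanity check for $n=1$, with the notation above): $$\Omega_{1,N}(\Id_1)=N,\qquad \Wg_{1,N}(\Id_1)=\frac{1}{N},\qquad \esp\big[U(i,j)\,\overline{U}(i',j')\big]=\frac{\delta_{i,i'}\,\delta_{j,j'}}{N},$$ so that $N^{2}\,\Omega_{1,N}^{-1}\Wg_{1,N}(\Id_1)=1=\mu(\Id_1)$, in agreement with $N^{2n}\Omega_{n,N}^{-1}\Wg_{n,N}=\mu+o(1)$ (with no error term in this case).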
By taking the real and the imaginary parts, we can assume that the matrices of $\mbf X_N$ are Hermitian and so assume $\eps(e)=1$ for any $e\in E$. We consider a random unitary matrix $U$, distributed according to the Haar distribution, and independent of $\mbf X_N$. By assumption $\mbf Z_N :=U\mbf X_N U^*\in M_N(\C)$ has the same distribution as $\mbf X_N$. We denote respectively by $\underline{e}$ and $\overline{e}$ the origin vertex and the goal vertex of $e$. Then $$\begin{aligned} \tau_{\mbf X_N}[T] &= \frac 1 N \sum_{\phi: V \to \{1,\ldots, N\}} \esp\left[ \prod_{e\in E} Z_{\gamma(e)}\big(\phi(\underline{e}),\phi(\overline{e})\big)\right]\\ &= \frac 1 N \sum_{\substack{\phi: V \to \{1,\ldots, N\}\\ \varphi,\varphi': E \to \{1,\ldots, N\}}} \esp\left[ \prod_{e\in E} U\big(\phi(\underline{e}),\varphi(e)\big)\, X_{\gamma(e)}\big(\varphi(e),\varphi'(e)\big)\,\overline{U}\big(\phi(\overline{e}),\varphi'(e)\big)\right] .\end{aligned}$$ In the integration formula (\[IntegrationHaar\]), the number $n$ of occurrences of the terms $U(i,j)$ is the cardinality of $E$, and the sum over permutations of $\{1,\ldots,n\}$ is replaced by a sum over the set $\mcal S_E$ of permutations of the edge set $E$. By identifying $E$ with the set of integers $\{1,\ldots, |E|\}$, we consider that $\Wg_{n,N}$ is defined on $\mcal S_E$ instead of $\mcal S_n$. Then, one has $$\begin{aligned} \tau_{\mbf X_N}[T] = \frac 1 N \sum_{\alpha,\beta\in \mcal S_E}\Wg_{n,N}(\alpha\beta^{-1})\sum_{\substack{\phi: V \to \{1,\ldots, N\}\\ \varphi,\varphi': E \to \{1,\ldots, N\}\\ \phi\left(\underline{\alpha(e)}\right)=\phi(\overline{e}) ,\varphi(\beta(e))=\varphi'(e)}}\esp\left[ \prod_{e\in E} X_{\gamma(e)}\big(\varphi(e),\varphi'(e)\big)\right].\end{aligned}$$ For any permutation $\alpha\in \mcal S_E$, let $\pi(\alpha)$ be the smallest partition of $V$ such that, for all $e\in E$, $\overline{e}$ is in the same block as $\underline{\alpha(e)}$. Summing over $\phi$ in the previous expression yields $$\begin{aligned} \tau_{\mbf X_N}[T]&=\sum_{\alpha,\beta\in \mcal S_E}N^{\# \pi(\alpha)-1}\Wg_{n,N}(\alpha\beta^{-1})\sum_{\substack{ \varphi,\varphi': E \to \{1,\ldots, N\}\\ \varphi(\beta(e))=\varphi'(e)}}\esp\left[ \prod_{e\in E} X_{\gamma(e)}\big(\varphi(e),\varphi'(e)\big)\right]\\ &=\sum_{\alpha,\beta\in \mcal S_E}N^{\# \pi(\alpha)-1}\Wg_{n,N}(\alpha\beta^{-1})\esp\left[\prod_{\substack{(e_1\ldots e_k)\\\text{ cycle of }\beta}}\Tr(X_{\gamma(e_1)}X_{\gamma(e_2)}\ldots X_{\gamma(e_k)})\right].\end{aligned}$$ To conclude, we will need the following lemma.

i\) For any permutation $\alpha\in \mcal S_E$, $\#\pi(\alpha)+\#\alpha\le \# E+1$, and the equality implies that the graph of $T^{\pi(\alpha)}$ is an oriented cactus.

ii\) The map $$\begin{aligned} \pi:\{\alpha: \#\pi(\alpha)+\#\alpha= \# E+1\}&\longrightarrow \{\pi : \text{the graph of }T^\pi \text{ is an oriented cactus}\}\end{aligned}$$ is a bijection whose inverse $\gamma$ is given, for all $\pi\in \mcal P(V)$ such that $T^\pi$ is an oriented cactus, by the permutation $\gamma(\pi)$ whose cycles are the biconnected components of $T^\pi$.\[lemmabijection\]

i\) Let $\alpha\in \mcal S_E$. Let us define a connected graph $G_\alpha$ whose vertices are the cycles of $\alpha$ together with the blocks of $\pi(\alpha)$, and whose edges are defined as follows: there is an edge between a cycle $c$ of $\alpha$ and a block $b$ of $\pi(\alpha)$ for each edge $e$ of $T$ such that $e\in c$ and $\overline{e}\in b$. This way, the edges of $G_\alpha$ are in bijective correspondence with the edges of $T$. Therefore, since $G_\alpha$ is connected, $\#\pi(\alpha)+\#\alpha\le \# E+1$, with equality if and only if $G_\alpha$ is a tree. In fact, each cycle of $\alpha$ yields a cycle in $T^{\pi(\alpha)}$, and in the case where $G_\alpha$ is a tree, there exist no other cycles in $T^{\pi(\alpha)}$. Consequently, the biconnected components of $T^{\pi(\alpha)}$ are exactly the cycles of $\alpha$, and $T^{\pi(\alpha)}$ is therefore an oriented cactus.
ii\) $\pi\circ\gamma$ and $\gamma\circ \pi$ are the identity functions: $\pi$ is one-to-one and its inverse is $\gamma$. For all $\alpha\in \mcal S_E$, set $$\phi_N(\alpha)=N^{-\# \alpha}\esp\left[\prod_{\substack{(e_1\ldots e_k)\\\text{ cycle of }\alpha}}\Tr(X_{\gamma(e_1)}X_{\gamma(e_2)}\ldots X_{\gamma(e_k)})\right]$$ and $$\phi(\alpha)=\prod_{\substack{(e_1\ldots e_k)\\\text{ cycle of }\alpha}}\Phi(x_{\gamma(e_1)}x_{\gamma(e_2)}\ldots x_{\gamma(e_k)})$$ in such a way that $\phi_N=\phi+o(1)$. Let us fix $\alpha\in \mcal S_E$. On the one hand, we have $$N^{\#\pi(\alpha)+\#\alpha-\# E-1}=\one_{\#\pi(\alpha)+\#\alpha= \# E+1}+o(1).$$ On the other hand, according to (\[WeingartenCumulantsLibres\]), we have $$\begin{aligned} \sum_{\beta\in \mcal S_E}N^{\# E-\#\alpha}\Wg_{n,N}(\alpha\beta^{-1})\esp\left[\prod_{\substack{(e_1\ldots e_k)\\\text{ cycle of }\beta}}\Tr(X_{\gamma(e_1)}X_{\gamma(e_2)}\ldots X_{\gamma(e_k)})\right]&=((\phi_N)\star_N \Wg_{n,N})(\alpha)\\ &=(\phi\star \mu)(\alpha)+o(1).\end{aligned}$$ It follows that $$\tau_{\mbf X_N}(T) =\sum_{\substack{\alpha \in \mcal S_E\\\#\pi(\alpha)+\#\alpha= \# E+1}}(\phi\star \mu)(\alpha)+o(1).$$ From (\[cumtrace\]), we know that $(\phi\star \mu)(\alpha)=k(\alpha)=\prod_{\substack{(e_1\ldots e_k)\\\text{ cycle of }\alpha}}\kappa(x_{\gamma(e_1)},\ldots, x_{\gamma(e_k)})$. Thanks to Lemma \[lemmabijection\], we can now write $$\begin{aligned} \tau_{\mbf X_N}(T) &=\sum_{\substack{\pi\in \mcal P(V)\\ T^\pi \text{ is an oriented cactus}}}\ \prod_{\substack{(e_1\ldots e_k)\\\text{ cycle of }\gamma(\pi)}}\kappa(x_{\gamma(e_1)},\ldots, x_{\gamma(e_k)})+o(1)\\ &=\sum_{\substack{\pi\in \mcal P(V)\\ T^\pi \text{ is an oriented cactus}}}\ \prod_{\substack{(e_1\ldots e_k)\\\text{ cycle of }T^{\pi}}}\kappa(x_{\gamma(e_1)},\ldots, x_{\gamma(e_k)})+o(1).\end{aligned}$$ In order to pursue the computation, let $t$ be the $0$-graph monomial $(V,E, \lambda(e))\in \mbb C \mcal G^{(0)}\langle \mathcal{G}(\mathcal{A}) \rangle$ such that $\lambda(e)=x_{\gamma(e)}$. By Definition \[orientation\], we get $$\begin{aligned} \tau_{\mbf X_N}(T) &=\sum_{\pi\in \mcal P(V)}\tau_\Phi^0[t^\pi]+o(1)\\ &=\tau_{\Phi}[t]+o(1)\\ &=\tau_{\mbf x}[T]+o(1)\end{aligned}$$ so that $\tau_{\mbf X_N}(T)$ converges towards the expected limit.

Proof of Theorem 1.3 {#Sec:positivitytauphi}
--------------------

\[subsec:MainTh\] Let $(\mathcal{A},\Phi)$ be a non-commutative probability space. We define $(\mathcal G(\mathcal{A}),\tau_\Phi)$ as in Proposition \[interestingproposition\], in such a way that $(\mathcal{A},\Phi)\subset (\mathcal G(\mathcal{A}),\Psi)$, where $\Psi$ denotes the trace induced by $\tau_\Phi$. To prove the full statement of Theorem \[MainTh\], it remains to prove that $\tau_\Phi$ satisfies the positivity condition and the two following items:

- If $\mbf A_N$ is a sequence of random matrices that converges in $^*$-distribution to $\mbf a\in \mcal A$ as $N$ tends to $\infty$ and verifies the condition of Theorem \[Th:Matrices\], then $\mbf A_N $ converges in distribution of traffics to $\mbf a\in \mathcal G(\mathcal{A})$ as $N$ tends to $\infty$ (already proved in Theorem \[TrafficAdjunction\]).

- Two families $\mbf a$ and $\mbf b\in \mcal A$ are freely independent in $\mcal A$ if and only if they are traffic independent in $G(\mathcal{A})$.

In other words, it remains to prove Theorem \[th:positivty\] and Proposition \[prop:freeness\] below.
\[th:positivty\]For all non-commutative probability space $(\mathcal{A},\Phi)$, the linear functional $\tau_\Phi:\mathbb{C}\mathcal{G}^{(0)}\langle \mathcal{A}\rangle\to \C$ given by Definition \[orientation\] satisfies the positivity condition . In the four steps of the proof, we will prove successively that $\tau_\Phi\big[t|t^*\big]{\geqslant}0$ for all $t = \sum_{i=1}^L \alpha_i t_i$ a $n$-graph polynomial such that 1. the $t_i$ are $2$-graph monomials without cycles and the leaves are outputs, that is chains of edges with possibly different orientations; 2. the $t_i$ are trees whose leaves are the outputs; 3. the $t_i$ are such that $t_i|t_i^*$ have no cutting edges (see definition below); 4. the $t_i$ are $n$-graph monomials. In the different steps, we will use those two direct corollaries of Menger’s theorem. Let $T$ be a graph and $v_1$ and $v_2$ two distinct vertices. Then the minimum number of edges whose removal disconnect $v_1$ and $v_2$ is equal to the maximum number of edge-disjoint paths from $v_1$ to $v_2$ (i.e. sharing no edges out of the $v_k$’s). A cutting edge of a graph $T$ is an edge whose removal disconnects $T$. A graph $T$ is two edge connected (t.e.c.) if it has no cutting edge. Let $T$ be a graph which is t.e.c. and two distinct vertices $v_1$ and $v_2$. Then, there exists two edge-disjoint simple paths between $v_1$ and $v_2$. \[Lem:LemPos3\]Let $T$ be a graph such that there exist two distinct vertices $v_1$ and $v_2$, and three edge-disjoint simple paths $\gamma_1$, $\gamma_2$ and $\gamma_3$ between $v_1$ and $v_2$. Then, $T$ is not a cactus. #### Step 1 Proposition \[interestingproposition\] shows the positivity if all the $t_i$’s consist in chains of edges all oriented in the same direction. Indeed, we can write $t_i=\cdot \overset{a_i}{\leftarrow} \cdot$ for all $i$ (or $t_i=\cdot \overset{a_i}{\rightarrow} \cdot$ for all $i$) and so, we get $$\tau_\Phi\big[t|t^*\big]=\tau_\Phi\big[\sum_{i,j=1}^L\alpha_i\bar \alpha_j t_it_j^*\big]=\Phi(\sum_{i,j=1}^L\alpha_i\bar \alpha_j a_ia_j^*){\geqslant}0.$$ We deduce that the trace $\Psi$ induced by $\tau_\Phi$ is positive on the algebras $\C\langle \cdot \overset{a}{\leftarrow} \cdot:a\in \mathcal{A}\rangle$ and $\C\langle \cdot \overset{a}{\rightarrow} \cdot:a\in \mathcal{A}\rangle$. From Corollary \[freeness\], we also know that $\Psi$ is also positive on the mixed algebra $(\C\langle \cdot \overset{a}{\leftarrow} \cdot,\cdot \overset{a}{\rightarrow} \cdot:a\in \mathcal{A}\rangle,\tau_\Phi)$ (the free product of positive trace is positive [@NS Lecture 6]). Finally, if the $t_i$’s consist in chains of edges indexed by element of $\mathcal{A}$, we know that $$\tau_\Phi\big[t|t^*\big]=\Psi\big[\sum_{i,j=1}^L\alpha_i\bar \alpha_j t_it_j^*\big]{\geqslant}0.$$ #### Step 2 Assume that the $t_i$’s are trees whose leaves are the outputs. Let us prove by induction on the number $D$ of all edges of the $t_i$’s that we have $\tau_\Phi\big[t|t^*\big]{\geqslant}0$. If the number of edges of the $t_i$’s is $0$, we have $\tau_\Phi\big[t|t^*\big]=\sum_{i,j}\alpha_i\alpha_j^*{\geqslant}0$. We suppose that $D{\geqslant}1$ and that this result is true whenever the number of edges of the $t_i$’s is less than $D-1$. We can remove one edge in the following way. Let us choose one leaf $v$ of one of the $t_i's$ which has at least one edge. 
It is an output, and for each tree $t_i$ we denote by $v^{(i)}$ the first node (or distinct leaf, if there is no node) of the tree $t_i$ encountered by starting from this output $v$, and by $t^{(i)}$ the branch of $t_i$ between this output $v$ and $v^{(i)}$. Of course, $v^{(i)}$ can be equal to $v$ and $t^{(i)}$ can be trivial, but at least one of the $t^{(i)}$’s is not trivial. Denote by $\breve t_i$ the $n$-graph obtained from $t_i$ after discarding $t^{(i)}$, and whose output $v$ is replaced by $v^{(i)}$. We claim that $$\tau_\Phi\big[ t_i| t_j^* \big] = \tau_\Phi\big[ \breve t_i| \breve t_j^* \big]\ \tau_\Phi\big[ t^{(i)}| t^{(j)*}\big].$$ Firstly, we can identify the pairs $v^{(i)}$ and $v^{(j)}$ in the computation of the left-hand side. Indeed, we write $\tau_\Phi\big[ t_i| t_j^* \big] = \sum_\pi\tau_\Phi^0\big[ ( t_i| t_j^*)^\pi \big]$, and consider a term in the sum for which $\pi$ does not identify $v^{(i)}$ and $v^{(j)}$. Because $ \breve t_i| \breve t_j^*$ is t.e.c., there exist two edge-disjoint paths between $v^{(i)}$ and $v^{(j)}$. But because $t^{(i)}| t^{(j)*}$ contains a third distinct path, by Corollary \[Lem:LemPos3\] the graph $( t_i| t_j^*)^\pi$ cannot be a cactus if $\pi$ does not identify $v^{(i)}$ and $v^{(j)}$, and so $ \tau^0_\Phi\big[ ( t_i| t_j^*)^\pi\big]$ is zero. Consider now a term in the sum $\sum_\pi\tau^0_\Phi\big[ (t_i| t_j^*)^\pi \big]$ for which $\pi$ identifies the pairs $v^{(i)}, v^{(j)}$. Assume that a vertex $v_1$ of $\breve t_i| \breve t_j^*$ is identified with a vertex $v_2$ which is not in $\breve t_i| \breve t_j^*$, and that $\pi$ does not identify $v^{(i)}$ with $v_1$ and $v_2$. Because $\breve t_i| \breve t_j^*$ is t.e.c., there exist two edge-disjoint paths between $v_1$ and $v^{(i)}$ out of $ t^{(i)}| t^{(j)*}$. But there also exists a path between $v_2$ and $v^{(i)}$ in $t^{(i)}| t^{(j)*}$, and $v_1$ is identified with $v_2$ in the quotient. By Corollary \[Lem:LemPos3\], we get that $ (t_i| t_j^*)^\pi$ is not a cactus and so $ \tau^0_\Phi\big[ ( t_i| t_j^*)^\pi\big]$ is zero. Hence, to determine which vertices of $\breve t_i| \breve t_j^*$ are identified with some vertices of $ t^{(i)}| t^{(j)*}$, one can first determine which vertices of $\breve t_i| \breve t_j^*$ are identified with $v^{(i)}= v^{(j)}$ and which vertices of $t^{(i)}| t^{(j)*}$ are identified with this vertex. Hence the sum over partitions $\pi$ of the set of vertices of $ t_i| t_j^*$ can be reduced to a sum over partitions $\pi_1$ of the set of vertices of $\breve t_i| \breve t_j^*$ and a sum over partitions $\pi_2$ of the set of vertices of the graph $ t^{(i)}| t^{(j)*}$. Moreover, by definition of $\tau_\Phi$, for two $0$-graph monomials $T_1$ and $T_2$, if $T$ is obtained by considering the disjoint union of $T_1$ and $T_2$ and merging one of their vertices, one has $\tau^0_\Phi[T] = \tau^0_\Phi[T_1] \times \tau^0_\Phi[T_2]$. Hence, the contribution of $\breve t_i| \breve t_j^*$ factorizes into $ \tau_\Phi\big[ \breve t_i| \breve t_j^* \big]$ and the contribution of $ t^{(i)}| t^{(j)*}$ factorizes into $\tau_\Phi\big[ t^{(i)}| t^{(j)*}\big]$, and we get the expected result. From Step 1, we know that $A=\left(\tau_\Phi\big[ t^{(i)}| t^{(j)*}\big]\right)_{i,j}$ is nonnegative. By the induction hypothesis, we know that $B=\left(\tau_\Phi\big[ \breve t_i| \breve t_j^* \big]\right)_{i,j}$ is also nonnegative.
We obtain as desired that the Hadamard product of $A$ and $B$ is nonnegative ([@NS Lemma 6.11]) and in particular, for all $\alpha_i$, we have $$\sum_{i,j}\alpha_i\bar{\alpha}_j \tau_\Phi\big[ t_i| t_j^*\big]{\geqslant}0.$$

#### Step 3

Let us prove that, for all $t_i$ such that $t_i|t_i^*$ have no cutting edges, we have $\tau_\Phi\big[t|t^*\big]{\geqslant}0$. For a graph $T$, let us call t.e.c. components the maximal subgraphs of $T$ with no cutting edges. The tree of t.e.c. components of $T$ is the graph whose vertices are the t.e.c. components of $T$ and whose edges are the cutting edges of $T$. First of all, our condition is equivalent to the condition that, for each $t_i$, any leaf of the tree of the t.e.c. components of $t_i$ is a component containing an output. Here again, we can proceed by induction. Let $D$ be the total number of t.e.c. components of the $t_i$’s which do not consist of a single vertex. If $D=0$, we are in the case of the previous step. Let us assume that $D>0$ and that the result is true up to the case $D-1$. We can remove one t.e.c. component in the following way. Let us choose a t.e.c. component $t^{(k)}$, not reduced to a single vertex, of a certain $n$-graph monomial $t_k$, for some $k$ in $\{1,\ldots,L\}$. We consider $t^{(k)}$ as a multi $^*$-graph monomial, where the outputs are the vertices which are attached to cutting edges. Let $\breve t_k$ be the $n$-graph monomial obtained from $t_k$ by replacing the component $t^{(k)}$ by one single vertex. We also define, for $i\neq k$, the $^*$-graph monomial $t^{(i)}$ to be the trivial leaf and set $\breve t_i=t_i$. We claim that $$\tau_\Phi\big[ t_i|t_j^* \big] = \tau_\Phi\big[ \breve t_i|\breve t_j^* \big]\ \tau_\Phi\big[ t^{(i)}\big]\ \tau_\Phi\big[ t^{(j)*}\big]$$ (of course, this equality is nontrivial only if we consider $i=k$ or $j=k$). Firstly, the outputs of $t^{(i)}$ can be identified. Indeed, consider $v_1,v_2$ two distinct outputs of $t^{(i)}$. Writing $\tau_\Phi\big[t_i|t_j^*\big] = \sum_\pi \tau_\Phi^0\big[ (t_i|t_j^*)^\pi\big]$, consider a term in the sum for which $\pi$ does not identify $v_1$ and $v_2$. Since $t^{(i)}$ is t.e.c., there exist two edge-disjoint simple paths $\gamma_1$ and $\gamma_2$ between $v_1$ and $v_2$. Consider a path from $v_2$ to $v_1$ that does not visit $t^{(i)}$ in $t_i|t_j^*$. Such a path exists as $v_1$ and $v_2$ belong to two subtrees of $t_i$ that are attached to outputs of $t_i$, themselves being attached to the connected graph $t_j^*$. The quotient by $\pi$ then yields three edge-disjoint paths between $v_1$ and $v_2$ in $(t_i|t_j^*)^\pi$, which implies that $(t_i|t_j^*)^\pi$ is not a cactus by Corollary \[Lem:LemPos3\]. Hence, by definition of $\tau_\Phi$, $\tau_\Phi^0\big[(t_i|t_j^*)^\pi\big]$ is zero. Thus, when we write $\tau_\Phi\big[t_i|t_j^*\big] = \sum_\pi \tau_\Phi^0\big[ (t_i|t_j^*)^\pi\big]$, we can restrict the sum to the partitions $\pi$ that identify $v_1$ and $v_2$; therefore, we can replace $t_i$ by the graph $\tilde t_i$ where we have identified $v_1$ and $v_2$. Hence we have $ \tau_\Phi\big[ t_i |t_j^*\big] = \tau_\Phi\big[ \tilde t_i| \tilde t_j^*\big]$, where in $\tilde t_i$ all the outputs of $t^{(i)}$ are identified to a single vertex, denoted by $w^{(i)}$. Let us write $\tau_\Phi\big[ \tilde t_i| \tilde t_j^*\big]= \sum_\pi \tau_\Phi^0\big[ ( \tilde t_i | \tilde t_j^*)^\pi\big]$. Let $\pi$ be as in the sum. Assume that a vertex $v_1$ of $t^{(i)}$ is identified by $\pi$ with a vertex $v_2$ which is not in $t^{(i)}$, and that $\pi$ does not identify $w^{(i)}$ with $v_1$ and $v_2$. Since $t^{(i)}$ is t.e.c., there exist two edge-disjoint paths between $v_1$ and $w^{(i)}$ in $t^{(i)}$. But $\breve t_i$ is connected and there exists a third path between $v_2$ and $w^{(i)}$.
As usual, this implies that $(\tilde t_i|\tilde t_j^*)^\pi$ is not a cactus and so $ \tau_\Phi^0\big[ (\tilde t_i| \tilde t_j^*)^\pi\big]$ is zero. Hence, to determine which vertices of $t^{(i)}$ are identified with some vertices out of $t^{(i)}$, one can first determine which vertices of $t^{(i)}$ are identified with $w^{(i)}$ and which vertices out of $t^{(i)}$ are identified with this vertex. Thus the sum over partitions $\pi$ of the set of vertices of $\tilde t_i| \tilde t_j^*$ can be reduced to a sum over partitions $\pi_1$ of the set of vertices of $t^{(i)}$ and a sum over partitions $\pi_2$ of the set of vertices of the graph with $t^{(i)}$ removed. Moreover, by definition of $\tau_\Phi$, for two $*$-test graphs $T_1$ and $T_2$, if $T$ is obtained by considering the disjoint union of $T_1$ and $T_2$ and merging one of their vertices, one has $\tau_\Phi^0[T] = \tau_\Phi^0[T_1] \times \tau_\Phi^0[T_2]$. Hence, the contribution of $T( t_i, t_j^*)$ factorizes into $ \tau_\Phi\big[ T(\breve t_i, t_j^*) \big]$ and the contribution of $ t^{(i)}$ factorizes into $\tau_\Phi\big[ t^{(i)}\big]$. We can do the same factorization for the $n$-graph monomial $t_j^*$, and we get the expected result. Now, setting $ \beta_i = \alpha_i \tau_\Phi\big[ t^{(i)}\big]$, we have $$\tau_\Phi\big[ t|t^*\big]=\sum_{i,j}\beta_i\bar{\beta}_j\ \tau_\Phi\big[ \breve t_i| \breve t_j^* \big],$$ which is nonnegative thanks to the induction hypothesis.

#### Step 4

We are not able to prove the positivity directly in the general case, so we prove it in an indirect way, using the positivity of the free product proved in Theorem \[Th:PosFreeProd\]. To bypass this difficulty, we define an auxiliary distribution of traffics $\tilde{\tau}$, equal to $\tau_\Phi$ on the $0$-graph monomials without cutting edges and equal to $0$ on the $0$-graph monomials with cutting edges. For $t_i$’s some $n$-graph monomials and $t=\sum_{i} \alpha_i t_i$, we have $$\tilde{\tau}\big[ t|t^*\big]=\sum_{i,j}\alpha_i \bar{\alpha}_j\tilde{\tau}\big[ t_i | t_j^* \big]=\sum_{\substack{i,j\\ t_i|t_i^*, \, t_j |t_j^*\\\text{ without cutting edges}}}\alpha_i \bar{\alpha}_j\tilde{\tau}\big[ t_i|t_j^*\big]=\sum_{\substack{i,j\\t_i|t_i^*, \, t_j|t_j^*\\\text{ without cutting edges}}}\alpha_i \bar{\alpha}_j\tau_\Phi\big[t_i|t_j^*\big]{\geqslant}0$$ using the result of the previous step. Therefore, $\tilde{\tau}:\C\mcal G^{(0)} \langle \mcal A \rangle \to \C$ is positive. Let us consider the Haar unitary traffic distribution $\tau_u:\C \mcal G^{(0)} \langle u,u^* \rangle \to \C$, already mentioned in [@Male2011], which is the (positive) limit of a random matrix distributed according to the Haar measure on the unitary group $U(N)$, and which is well-defined thanks to Theorem \[TrafficAdjunction\]. We do not need the precise form of $\tau_u$. Let us just say the following: $u$ is unitary as a limit of unitaries, which means that $uu^*=u^*u=1$, and Theorem \[TrafficAdjunction\] implies that $\tau_u$ is of the form given in Definition \[orientation\], that is to say supported on oriented cacti (see [@Male2011] for a precise formula). The traffic free product $(\tilde{\tau}\star\tau_u):\C \mcal G^{(0)} \langle \mcal A\cup \{u,u^*\} \rangle \to \C$ satisfies the positivity condition thanks to Theorem \[Th:PosFreeProd\]. For any $*$-test graph $T$ in $\C\mcal G^{(0)} \langle \mcal A \rangle$, we define $uTu^*$ as the $*$-test graph in $\C \mcal G^{(0)} \langle \mcal A\cup \{u,u^*\} \rangle$ obtained from $T$ by replacing each edge $\overset{a}{\longrightarrow}$ by $\overset{u}{\longrightarrow}\overset{a}{\longrightarrow}\overset{u^*}{\longrightarrow}$.
We claim that $$\tau_\Phi\big[T\big]=(\tilde{\tau}\star\tau_u)\big[uTu^*\big],\label{Stabilisation}$$ which implies of course the positivity of $\tau_\Phi$ because $(\tilde{\tau}\star\tau_u)$ is positive as a traffic independent product of positive distribution. By definition, there is a natural correspondence between the vertices of $uTu^*$, and $V\sqcup V_1\sqcup V_2$, where $V$ are the vertices of $T$ and $V_1$ and $V_2$ are two copies of the edges $E$ of $T$. Indeed, each edge of $T$ adds two vertices in $uTu^*$ (one at the beginning and one at the end), and we can denote by $V_1$ the vertices which appear at the beginning of an edge of $T$, and by $V_2$ the vertices which appear at the end of an edge of $T$. Moreover, there is a natural correspondence between the edges of $uTu^*$, and $E\sqcup E_1\sqcup E_2$. Indeed, each edge of $T$ adds two edges in $uTu^*$ (one at the beginning and one at the end), and we can denote by $E_1$ the edges with appears at the beginning of an edge of $T$, and by $E_2$ the edges which appears at the end of an edge of $T$. Let us denote by $\sigma$ the partition which is composed of $|V|$ blocks, each one containing one vertex $v\in V$ and all its adjacent vertices of $V_1\sqcup V_2$. For any partition $\pi\in \mcal{P}(V\sqcup V_1\sqcup V_2)$ which is greater than $\sigma$, let us set $\pi_{|V}\in \mcal{P}(V)$ the restriction of the partition $\pi$ to the set $V$. Les us fix $T\in \C\mcal G^{(0)} \langle \mcal A \rangle$ and write $$\begin{aligned} (\tilde{\tau}\star\tau_u)\big[uTu^*\big]&=\sum_{\pi' \in \mcal{P}(V\sqcup V_1\sqcup V_2) } (\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})\\ &=\sum_{\pi\in \mcal{P}(V)}\sum_{\substack{\pi' \in \mcal{P}(V\sqcup V_1\sqcup V_2) \\(\pi'\vee\sigma)_{|V}=\pi}} (\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'}).\end{aligned}$$ We claim that 1. For all $\pi'\in \mcal{P}(V\sqcup V_1\sqcup V_2)$ such that $(uTu^*)^{\pi'}$ is not an oriented cactus, we have $(\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})=0$. 2. For all $\pi\in \mcal{P}(V)$, we have $$\sum_{\substack{\pi' \in \mcal{P}(V\sqcup V_1\sqcup V_2) \\(\pi'\vee\sigma)_{|V}=\pi}} (\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})=\tau_\Phi^0\big[T^\pi\big].$$ <!-- --> 1. If $(uTu^*)^{\pi'}$ is not a cactus, either $(uTu^*)^{\pi'}$ has a cut edge, or $(uTu^*)^{\pi'}$ has three edge-disjoint simple paths $\gamma_1$, $\gamma_2$ and $\gamma_3$ between two distinct vertices $v_1$ and $v_2$. First of all, assume that $(uTu^*)^{\pi'}$ has a cut edge indexed by $u$ or $u^*$, then it is also a cut edge in its colored component when $(uTu^*)^{\pi'}$ is decomposed according to the traffic freeness, and so $(\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})$ vanishes because of the vanishing of the injective traffic distribution of $u$. Let us assume now that $(uTu^*)^{\pi'}$ has a cut edge indexed by an element of $\mcal A$, then it cuts $(uTu^*)^{\pi'}$ into two component, each one containing an odd number of $\{u,u^*\}$. Because of the traffic independence condition, it implies that there exists in the product an injective trace which contains an odd numbers of $\{u,u^*\}$, which is by consequence equal to $0$. Let us assume now that $(uTu^*)^{\pi'}$ has no cut edge but has three edge-disjoint simple paths $\gamma_1$, $\gamma_2$ and $\gamma_3$ between two distinct vertices $v_1$ and $v_2$. 
If $v_1$ and $v_2$ are in a same colored component of $(uTu^*)^{\pi'}$, we can assume that the edge-disjoint simple paths $\gamma_1$, $\gamma_2$ and $\gamma_3$ are also in this colored component, erasing each excursion which go outside of this component, and consequently, $(\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})$ vanishes because of the vanishing of the injective traffic distribution of $\tilde{\tau}$ and of $\tau_u$ on 2-edge connected graph which are not cactus. If $v_1$ and $v_2$ are not in a same colored component of $(uTu^*)^{\pi'}$, we can replace $v_2$ by the vertex $v_2'$ in the colored component of $v_1$ that each simple path from $v_1$ to $v_2$ has to visit, due to the tree condition. We can also replace $\gamma_1$, $\gamma_2$ and $\gamma_3$ by edge-disjoint simple paths $\gamma_1'$, $\gamma_2'$ and $\gamma_3'$ which are also in this colored component, erasing each excursion which go outside of this component, and stopping at the first visit of $v_2'$. We see at the end that there exists three edge-disjoint simple paths $\gamma_1'$, $\gamma_2'$ and $\gamma_3'$ between two distinct vertices $v_1'$ and $v_2'$ inside a colored component of $(uTu^*)^{\pi'}$ and consequently, $(\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})$ vanishes because of the vanishing of the injective traffic distribution of $\tilde{\tau}$ and of $\tau_u$ on 2-edge connected graph which are not cactus. Finally, let us assume that $(uTu^*)^{\pi'}$ is a cactus, but not oriented. Then, there exists two consecutive edges in the same cycle of $(uTu^*)^{\pi'}$ which are not oriented. Then $(\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})$ vanishes because of the vanishing of the injective traffic distribution of $\tilde{\tau}$ and of $\tau_u$ on cactus which are not oriented. 2. First of all, let us prove that, if $T^{\pi}$ is not a cactus, then $$\sum_{\substack{\pi' \in \mcal{P}(V\sqcup V_1\sqcup V_2) \\(\pi'\vee\sigma)_{|V}=\pi}} (\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})=0(=\tau_\Phi^0\big[T^\pi\big]).\label{EQ:annulationsipascactus}$$ Assume that $T^{\pi}$ is not a cactus. Either $T^{\pi}$ has a cut edge, or $T^{\pi}$ has three edge-disjoint simple paths $\gamma_1$, $\gamma_2$ and $\gamma_3$ between two distinct vertices $v_1$ and $v_2$. If $e\in E$ a cut edge of $T^{\pi}$, for all $\pi'\in \mcal{P}(V\sqcup V_1\sqcup V_2)$ such that $(\pi'\vee\sigma)_{|V}=\pi$, $e$ seen as an edge of $(uTu^*)^{\pi'}$ is also a cut edge, which means that $(\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})=0$ thanks to the first item. In the case where there exist three edge-disjoint simple paths $\gamma_1$, $\gamma_2$ and $\gamma_3$ between two distinct vertices $v_1$ and $v_2$ of $T^{\pi}$, it leads to three simple paths $\gamma_1'$, $\gamma_2'$ and $\gamma_3'$ in $(uTu^*)^{\pi'}$ between two distinct vertices $v_1$ and $v_2$ of $T^{\pi}$ which does not share any edges in $E$ (they can share edges in $E_1$ or $E_2$). Of course, because $v_1$ and $v_2$ are distinct in $T^{\pi}$, they are not in the same colored component of $(uTu^*)^{\pi'}$ when decomposed according to the traffic independence condition into a tree of colored component. 
It means that the $\gamma_1'$, $\gamma_2'$ and $\gamma_3'$ go trough all colored component of $(uTu^*)^{\pi'}$ between $v_1$ and $v_2$, and in particular, there is a component of $(uTu^*)^{\pi'}$ colored by elements of $\mcal A$ with three edge-disjoint simple paths $\beta_1$, $\beta_2$ and $\beta_3$ between two distinct vertices, and because of the first item, it leads also to $(\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})=0$. Finally, we have . Thus we can assume that $T^\pi$ is a cactus. For all $\pi'\in \mcal{P}(V\sqcup V_1\sqcup V_2)$ such that $(\pi'\vee\sigma)_{|V}=\pi$, the graph $T^\pi$ is the graph $(uTu^*)^{\pi'}$ where all edges labelled by $u$ or $u^*$ are removed. So, if one of the cycle of $T^\pi$ is unoriented, it comes from an unoriented cycle of $(uTu^*)^{\pi'}$, which means that $(\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})=0$ because of the first item. By consequence, we can also assume that $T^\pi$ is an oriented cactus. Let us consider $\pi'$ such that $(uTu^*)^{\pi'}$ is an oriented cactus. The computation of $(\pi'\vee\sigma)$ consists in contracting all edges labelled by $u$ and $u^*$, or equivalently contracting every colored component indexed by $u$ and $u^*$ in one vertex. Because $(uTu^*)^{\pi'}$ is a cactus, this contraction doesn’t change the cycles of $(uTu^*)^{\pi'}$ which are indexed by elements of $\mcal A$. In other words, the cycles indexed by elements of $\mcal A$ of $(uTu^*)^{\pi'}$ and $(uTu^*)^{\pi'\vee \sigma}$ are the same. But the cycles indexed by elements of $\mcal A$ of $(uTu^*)^{\pi'\vee \sigma}$ are exactly those of $T^{(\pi'\vee\sigma)_{|V}}$. Finally, if $(\pi'\vee\sigma)_{|V}=\pi$, the cycles of $(uTu^*)^{\pi'}$ indexed by elements of $\mcal A$ are those of $T^{\pi}$, and we have $$(\tilde{\tau}\star \tau_u)^0 ((uTu^*)^{\pi'})=\tilde{\tau}^0(T^{\pi})\cdot \prod_{\substack{\text{component }c\text{ of }(uTu^*)^{\pi'}\\\text{indexed by }u,u^*}}\tau_u^0(c)=\tau^0_{\Phi}(T^{\pi})\cdot \prod_{\substack{\text{component }c\text{ of }(uTu^*)^{\pi'}\\\text{indexed by}u,u^*}}\tau_u^0(c).$$ We are left to prove that $$\sum_{\pi' \in S_\pi}\prod_{\substack{\text{component }c\text{ of }(uTu^*)^{\pi'}\\\text{indexed by }u,u^*}}\tau_u^0(c)=1,$$ where $S_\pi=\{\pi'\in \mcal{P}(V\sqcup V_1\sqcup V_2):(\pi'\vee\sigma)_{|V}=\pi,(\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})\neq 0\}.$ Here is the good news: there exists a lower bound of $S_{\pi}$, that we will denote by $\Pi$, and which will allows us (with forthcoming justifications) to write: $$\sum_{\pi' \in S_\pi}\prod_{\substack{\text{component }c\text{ of }(uTu^*)^{\pi'}\\\text{indexed by }u,u^*}}\tau_u^0(c)=\prod_{\substack{\text{component }c\text{ of }(uTu^*)^{\Pi}\\\text{indexed by }u,u^*}}\tau_u(c)=1.$$ The partition $\Pi\in \mcal{P}(V\sqcup V_1\sqcup V_2)$ is the lower partition such that if $e$ and $e'$ are two consecutive edges of a cycle of $T^\pi$ (oriented $\overset{e}{\longrightarrow}\overset{e'}{\longrightarrow}$), then the source of $e'$ and the goal of $e$ viewed as edges of $uTu^*$ are in the same block. Remark that $\Pi$ is not necessary a cactus, and consequently, not necessary in $S_{\pi}$. However, for all $\pi'\in S_\pi$, we have $\Pi\preceq \pi'$. Indeed, because we proved that the cycles of $(uTu^*)^{\pi'}$ indexed by $\mcal A$ are those of $T^{\pi}$, the source of $e'$ and the goal of $e$ must be identified in $\pi'$. $\Pi$ consists in the cycles of $T^{\pi}$ linked by some nontrivial components labelled by $u$ and $u^*$. 
Of course, an identification of two vertices in two different colored component labelled by $u$ and $u^*$ would modify the traffic independence condition, and as a consequence, we know that every $\pi'\in S_\pi$ is obtained by a collection of separate identification in each $u$-colored component of $\Pi$ which transform it into an oriented cactus. Adding all the other vanishing terms (the identifications which do not lead to a cactus), we see that $$\sum_{\pi' \in S_\pi}\prod_{\substack{\text{component }c\text{ of }(uTu^*)^{\pi'}\\\text{indexed by }u,u^*}}\tau_u^0(c)=\prod_{\substack{\text{component }c\text{ of }(uTu^*)^{\Pi}\\\text{indexed by }u,u^*}}\sum_{c\preceq c'}\tau_u^0(c')=\prod_{\substack{\text{component }c\text{ of }(uTu^*)^{\Pi}\\\text{indexed by }u,u^*}}\tau_u(c),$$ (where the sum $c\preceq c'$ is the sum over $c^{\pi'}$ with $\pi'$ an identification of the vertices of $c$). It suffices to conclude to prove that $\tau_u(c)=1$ for every $u$-colored component $c$ of $(uTu^*)^{\Pi}$. This fact comes from the particular structure of $c$: it is a graph whose edges are of the form $\overset{u}{\longrightarrow}\overset{u^*}{\longrightarrow}$. Indeed, each $u^*$ in $c$ comes from some local structure $\overset{u}{\longrightarrow}\overset{e}{\longrightarrow}\overset{u^*}{\longrightarrow}$ of $uTu^*$, and if we consider the consecutive edge $e'$ of $e$ in the cycle of $T^\pi$, we know that the source of $e'$ and the goal of $e$ are identified in $(uTu^*)^{\Pi}$, which means also that the source of $u^*$ in $\overset{u}{\longrightarrow}\overset{e}{\longrightarrow}\overset{u^*}{\longrightarrow}$ is identified with the goal of $u$ in $\overset{u}{\longrightarrow}\overset{e'}{\longrightarrow}\overset{u^*}{\longrightarrow}$, and leads to a local structure $\overset{u}{\longrightarrow}\overset{u^*}{\longrightarrow}$ in $c$ (with no other identifications for the vertex in the middle of $\overset{u}{\longrightarrow}\overset{u^*}{\longrightarrow}$). Similarly, each $u$ in $c$ can be seen in a local structure $\overset{u}{\longrightarrow}\overset{u^*}{\longrightarrow}$ in $c$ (with no other identifications for the vertex in the middle of $\overset{u}{\longrightarrow}\overset{u^*}{\longrightarrow}$). Finally, every $u$-colored component $c$ of $(uTu^*)^{\Pi}$ is composed of a graph whose edges are $\overset{u}{\longrightarrow}\overset{u^*}{\longrightarrow}$, and it is of public notoriety that it implies that $\tau_u(c)=1$ (use once again Proposition \[interestingproposition\] to replace each occurence of $\overset{u}{\longrightarrow}\overset{u^*}{\longrightarrow}$ by $\overset{u^*u}{\longrightarrow}=\overset{1}{\longrightarrow}$, and finally by $\cdot$, which leads to $\tau_u(c)=\tau_u(\cdot)=1$). This lemma allows us to conclude the proof, since $$\begin{aligned} &(\tilde{\tau}\star\tau_u)\big[uTu^*\big]=\sum_{\pi' \in \mcal{P}(V\sqcup V_1\sqcup V_2) } (\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})\\ &=\sum_{\pi\in \mcal{P}(V)}\sum_{\substack{\pi' \in \mcal{P}(V\sqcup V_1\sqcup V_2) \\(\pi'\vee\sigma)_{|V}=\pi}} (\tilde{\tau}\star \tau_u)^0 ( (uTu^*)^{\pi'})=\sum_{\pi\in \mcal{P}(V)}\tau_\Phi^0\big[T^\pi\big]=\tau_\Phi\big[T\big].\end{aligned}$$ [^1]: Saarland university. G. Cébron is supported by the ERC advanced grant “Noncommutative distributions in free probability", held by Roland Speicher. [^2]: Supported in part by RTG 1845 and EPSRC grant EP/I03372X/1. 
[^3]: Université Paris Descartes UMR 8145 and CNRS, partially supported by the Fondation Sciences Mathématiques de Paris [^4]: The terminology *free* product should be understood as *canonical* product, and may not be confused with the terminology *free* independence
{ "pile_set_name": "ArXiv" }
--- abstract: 'By generalising the cosymplectic setting for time-dependent Lagrangian mechanics, we propose a geometric framework for the Lagrangian formulation of classical field theories with a Lagrangian depending on the independent variables. For that purpose we consider the first order jet bundles $J^1\pi$ of a fiber bundle $\pi:E\to {\mathbb R}^k$ where ${\mathbb R}^k$ is the space of independent variables. Generalized symmetries of the Lagrangian are introduced and the corresponding Noether Theorem is proved.' author: - | Lucia Búa\ Departamento de Xeometría e Topoloxía Facultad de Matemáticas, USC, Spain\ e-mail: lucia.bua@usc.es\ Ioan Bucataru\ Faculty of Mathematics, Alexandru Ioan Cuza University, Iaşi, Romania\ e-mail: bucataru@uaic.ro\ Manuel de León\ ICMAT CSIC, Madrid, Spain\ e-mail: mdeleon@icmat.es\ Modesto Salgado\ Departamento de Xeometría e Topoloxía Facultad de Matemáticas, USC, Spain\ e-mail: modesto.salgado@usc.es\ Silvia Vilariño\ Centro Universitario de la Defensa $\&$ I.U.M.A., 50090 Zaragoza, Spain\ e-mail: silviavf@unizar.es title: Symmetries in Lagrangian Field Theory --- [**Keywords:**]{} Symmetries, Cartan theorem, Noether Theorem, conservation laws, jet bundles. Introduction ============ As it is well-know, the natural arena for studying mechanics is symplectic geometry. One interesting problem is to extend this geometric framework for the case of classical field theories. Several different geometric approaches are well known: the polysymplectic formalisms developed by Sardanashvily et al. [@sarda2; @sarda3; @sarda1], and by Kanatchikov [@Kana], as well as the n-symplectic formalism of Norris [@no1; @no2], and the $k$-cosymplectic of de León et al. [@mod1; @mod2]. Let us remark that the multisymplectic formalism is the most ambitious program for developing the Classical Field Theory (see for example [@CCI91; @bar1; @bar2; @GIM1; @GIM2; @KijTul], and references quoted therein). The aims of this paper are - To give a new Lagrangian description of first order Classical Field Theory, by considering a fibration $E\to {\mathbb R}^k$, which has as particular cases the cosymplectic setting for time-dependent Lagrangian mechanics, and it is related with the $k$-cosymplectic [@mod2] and multisymplectic formalisms. Let us observe that although every fiber bundle over $ {\mathbb R}^k$ is a trivial bundle since ${\mathbb R}^k$ is contractible (Steenrod 1951 [@steenrod]), we do not use this fact to develop our Lagrangian description. - To introduce and to study the generalized symmetries in first order Lagrangian Field Theory. For time-dependent Lagrangian Mechanics this was done by J.F. Cariñena et al. [@CFNM]. In the present paper we present a new approach for Lagrangian Field Theory, working with the first order jet bundle $J^1\pi$ of a fiber bundle $\pi:E\to {\mathbb R}^k$, where $E$ is $(n+k)$-dimensional. The crucial point is that each $1$-form on ${\mathbb R}^k$ defines a tensor field of type $(1,1)$ on $J^1\pi$, see Saunders [@saundersJet]. The paper is organized as follows. The main tools to be used are those of vector fields, $k$-vector fields and forms along maps, the general definitions of which are given in Section \[pre\]. In Section \[jets\] we introduce the geometric elements on $J^1\pi$ necessary to develop the geometric formulation of the Euler-Lagrange field equations in Section \[lagfield\] and to the study of symmetries and conservations laws. 
The principal tools, here described, are the canonical vector fields, the $k$ vertical endomorphisms and a kind of $k$-vector fields, known as [sopde]{}s, which describe systems of second order partial differential equations. This machinery is later used to discuss symmetries in this context, extending some previous results (see [@BBS12; @mrsv]). The geometric formulation of the Euler-Lagrange field equations is given in Section \[lagfield\], see Theorem \[relvectorphisol\]. For this purpose we introduce the $k$ Poincaré-Cartan $1$-forms using the Lagrangian and the $k$ vertical endomorphisms. Our formulation is a natural extension of the $k$-cosymplectic formalism developed in [@mod2] as we show in Section \[lagfield-cosy\]. Section \[sim\] is devoted to discussing symmetries and conservation laws. We introduce symmetries of the Lagrangian and we give a Noether Theorem. Preliminaries {#pre} ============= $k$-vector fields {#pre-k-vectors} ----------------- A system of first-order ordinary differential equations, on a manifold $M$, can be geometrically described as a vector field on $M$. Accordingly, a system of first-order partial differential equations on $M$ can be geometrically described as a $k$-vector field on $M$, for some $k>1$. Particularly, we can identify a system of second-order partial differential equations ([sopde]{}) with some special $k$-vector fields on the manifold $J^1\pi$, for some $k>1$. We briefly recall the correspondence between systems of first-order partial differential equations and $k$-vector fields. Let us denote by $T^1_kM$ the Whitney sum $TM\oplus \stackrel{k}{\dots} \oplus TM$ of $k$ copies of $TM$ and $\tau_M:T^1_kM \rightarrow M$ the canonical projection. [Definition]{} A [*$k$-vector field*]{} on an arbitrary manifold $M$ is a section ${\bf X}: M \longrightarrow T^1_kM$ of the canonical projection $\tau_M:T^1_kM \rightarrow M$. Each $k$-vector field ${\bf X}$ defines a family of $k$ vector fields $X_{1}, \dots, X_{k}\in\mathfrak{X}(M)$ by projecting ${\bf X}$ onto every factor; that is, $X_\alpha=\tau_{\alpha}\circ {\bf X}$, where $\tau_{\alpha}\colon T^1_kM \rightarrow TM$ is the canonical projection on the $\alpha^{th}$-copy $TM$ of $T^1_kM$. [Definition]{} \[integsect\] An [*integral section*]{} of the $k$-vector field ${\bf X}=(X_{1}, \dots, X_{k})$, passing through a point $x\in M$, is a map $\psi\colon U\subset {\mathbb R}^{k} \rightarrow M$, defined on some open neighborhood $U$ of $0\in {\mathbb R}^{k}$, such that $$\label{integcond} \psi(0)=x, \ \psi_*(x)\left(\frac{\partial}{\partial x^\alpha}\Big\vert_x\right)=X_{\alpha}(\psi (x))\in T_{\psi(x)}M, \quad \forall x\in U, \ 1\leq \alpha\leq k.$$ A $k$-vector field $ {\bf X}=(X_1,\ldots , X_k)$ on $M$ is said to be [*integrable*]{} if there is an integral section of ${\bf X}$ passing through every point of $M$. From Definition \[integsect\] we deduce that $\psi$ is an integral section of ${\bf X}=(X_{1}, \dots, X_{k})$ if, and only if, $\psi$ is a solution to the system of first-order partial differential equations $$X_\alpha^i(\psi(x))= \frac{\partial \psi^i}{\partial x^\alpha}\Big\vert_x, \quad 1\leq \alpha \leq k, \ 1\leq i \leq \dim M,$$ where $X_\alpha=X^i_{\alpha} {\partial }/{\partial y^i}$ on a coordinate system $(U,y^i)$ on $M$, $y^i\circ \psi=\psi^i$, and $x^{\alpha}$ are coordinates on ${\mathbb R}^k$. 
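As a simple illustration (an example added here for concreteness, not taken from the references above), let $M=\mathbb{R}^2$ with coordinates $(y^1,y^2)$, $k=2$, and consider the $2$-vector field ${\bf X}=(X_1,X_2)$ given by $$X_1=\frac{\partial}{\partial y^1}+\frac{\partial}{\partial y^2}\,,\qquad X_2=\frac{\partial}{\partial y^2}\,,$$ for which $[X_1,X_2]=0$. The map $\psi:{\mathbb R}^2\to M$, $\psi(x^1,x^2)=(y^1_0+x^1,\,y^2_0+x^1+x^2)$, satisfies $$\psi_*(x)\left(\frac{\partial}{\partial x^1}\Big\vert_x\right)=X_1(\psi(x)),\qquad \psi_*(x)\left(\frac{\partial}{\partial x^2}\Big\vert_x\right)=X_2(\psi(x)),$$ so it is an integral section of ${\bf X}$ passing through the point $(y^1_0,y^2_0)$.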
For a $k$-vector field ${\bf X}=(X_1,\dots,X_k)$ on $M$ we require the integrability condition $[X_\alpha,X_\beta]=0, \forall \alpha,\beta \in \{1,\dots,k\}$, as it has been considered in [@Maranon], see also [@kosambi35; @kosambi48]. Sections along a map {#pre-sections along map} -------------------- Given a fiber bundle $\pi:B\to M$ and a differentiable mapping $f:N\to M$, [*a section of $\pi$ along $f$*]{} is a differentiable map $\sigma:N\to B$ such that $\pi\circ \sigma=f$ (see e. g. [@Poor]). When $\pi$ is a vector bundle, then the set of such sections can be endowed with a structure of $C^\infty(N)$-module. In the case of $\pi:B\to M$ being the tangent bundle of $M$, $\tau_M:TM\to M$, or the cotangent bundle, $\pi_M:T^*M\to M$, the sections along $f$ will be called [*vector fields along $f$*]{} and [*$1$-forms along $f$, respectively*]{}. The notion of sections along a map has been shown to be very fruitful. Many objects commonly used in Physics find their suitable geometric representative by means of this concept and a related one of $f$-derivations ([@PidTul; @saundersJet]), but they have only recently been introduced in Physics ([@CFNM91; @CLM89; @GraciaPons]). Let $X$ be a vector field along $f:N\to M$, say $$\xymatrix@=10mm{ & TM\ar[d]^-{\tau}\\ N \ar[ru]^-{X}\ar[r]^-{f} & M }$$ then we can define an $f$-derivation $i_X:\Lambda^p(M) \to \Lambda^{p-1}(N)$ of degree $-1$ (and type $i_*$) as follows: $i_Xg=0$ for all $g \in C^\infty(M)$ and $$\label{ixalpha} (i_X\omega)(x)\left(v_{1_x}, \ldots , v_{(p-1)_x}\right)=\omega\left(f(x)\right)\left(X(x),f_*(x)v_{1_x}, \ldots , f_*(x)v_{(p-1)_x}\right)$$ where $v_{1_x}, \ldots , v_{(p-1)_x}\in T_xN$. There is another related $f$-derivation $d_X$ defined by $$\label{dd} d_X=i_X\circ d +d \circ i_X,$$ where $d$ stands for the operator of exterior differentiation. This derivation is of degree $0$ (and type $d_*$), i.e., $d_X\circ d_{(M)}=d_{(N)}\circ d_X$. Note that when $X\in \mathfrak{X}(M)$, then the $id_M$-derivations $i_X$ and $d_X$ are nothing but the inner product or contraction $i_X$ and the Lie derivative $\mathcal{L}_X$, respectively. Let us observe that $d_{X}$ is an $f^*$-derivation associated to $X$ in the sense of Pidello and Tulczyjew [@PidTul]. If $X$ is a vector field along $f:N\to M$, from (\[ixalpha\]) and (\[dd\]) we deduce that the map $$\begin{array}{ccccl} d_{X}&:& C^\infty( M )& \to & C^\infty(N)\\ \noalign{\medskip} & & F& \to & d_{X}F \end{array}$$ is given by $$\label{dx} (d_{X}F)(x)=(i_X\, dF)(x)=dF(f(x))(X(x))=X(x)(F),\,\, \forall x \in N.$$ Finally, if $\pi:B\to M$ is a differentiable fibre bundle, we associate to each $\pi$-semi-basic $p$-form $\alpha$ on $B$ a $p$-form $\alpha^V$ along $\pi$, as follows: $$\label{levvert} \alpha^V(b)({v_1}_{\pi(b)}, \ldots,{v_p}_{\pi(b)})= \alpha(b)({w_1}_{b}, \ldots, {w_p}_{b}), b\in B$$ where ${v_i}_{\pi(b)}\in T_{\pi(b)}M$, $i=1, \ldots , p$, and ${w_i}_{b}\in T_{b}B$ are such that $\pi_*(b) {w_i}_{b}= {v_i}_{\pi(b)} $. This type of form $\alpha^V$ will be used in Section \[sim-gen\]. The geometry of jet bundles {#jets} =========================== In this paper, we work with the first and second order jet bundles $J^1\pi$ and $J^2\pi$ of a fiber bundle $\pi:E\to {\mathbb R}^k$, where $E$ is an $(n+k)$-dimensional manifold. 
If $(x^{\alpha})$ are local coordinates on ${\mathbb R}^k$ and $(x^\alpha, q^i)$ are local fiber coordinates on $E$, we consider the induced standard jet coordinates $(x^{\alpha}, q^i, v^i_{\alpha})$ on $J^1\pi$ and $(x^{\alpha}, q^i, v^i_{\alpha}, v^i_{\alpha\beta})$ on $J^2\pi$, where $1\leq i\leq n$, $1\leq \alpha, \beta\leq k$. The induced jet coordinates are given by $$\begin{array}{cl} x^{\alpha}(j^1_x\phi) = x^{\alpha}(x)=x^{\alpha}, & q^i(j^1_x\phi)=q^i(\phi(x)), \\ \noalign{\medskip} v^i_{\alpha}(j^1_x\phi)=\displaystyle\frac{\partial \phi^i}{\partial x^{\alpha}}\Big\vert_{x}, & v^i_{\alpha\beta}(j^2_x\phi) = \displaystyle\frac{\partial^2\phi^i}{\partial x^{\alpha}\partial x^{\beta}}\Big\vert_{x}. \end{array}$$ Here, $\phi$ is a section for $\pi$ and $j^1_x\phi$, $j^2_x\phi$ are the $1$-jet and $2$-jet on $x$, respectively. For the canonical projections, we use the usual notations $$\begin{array}{ccccccccl} J^2\pi & \stackrel{\pi_{2,1}}{\longrightarrow} & J^1\pi & \stackrel{\pi_{1,0}}{\longrightarrow} & E & & J^1\pi & \stackrel{\pi_{1}}{\longrightarrow} & {\mathbb R}^k \\ \noalign{\medskip} j^2_x\phi & \to & j^1_x\phi & \to & \phi(x) & & j^1_x\phi & \to & x. \end{array}$$ Canonical vector fields along the projections $\pi_{1,0}$ and $\pi_{2,1}$ {#jets-can} ------------------------------------------------------------------------- The vector field $T^{(0)}_{\alpha}$ on $E$ along $\pi_{1,0}$ and the vector field $ T^{(1)}_{\alpha}$ on $J^1\pi$ along $\pi_{2,1}$ $$\xymatrix@=10mm { & T(J^1\pi)\ar[d]^-{\tau_{J^1\pi}}& TE\ar[d]^-{\tau_E}\\ J^2\pi \ar[ru]^-{T^{(1)}_\alpha}\ar[r]^-{\pi_{2,1}} & J^1\pi \ar[ru]^-{T^{(0)}_\alpha}\ar[r]^-{\pi_{1,0}} & E }$$ are defined respectively by $$\begin{aligned} \label{t0t1} T^{(0)}_{\alpha}(j^1_x\phi)=\phi_*(x) \left( \frac{\partial}{\partial x^{\alpha}}\Big\vert_{x}\right) \in T_{\phi(x)}E \\ \noalign{\medskip} T^{(1)}_{\alpha} (j^2_x\phi) = (j^1\phi)_*(x) \left( \frac{\partial}{\partial x^{\alpha}}\Big\vert_{x} \right)\in T_{j^1_x\phi}(J^1\pi),\end{aligned}$$ as we can see from the above diagram. Their local expressions are given by $$\label{loct0t1}\begin{array}{ccl} T^{(0)}_\alpha & = &\displaystyle\frac{\partial}{\partial x^\alpha}\circ\pi_{1,0} + v^i_\alpha \displaystyle\frac{\partial}{\partial q^i}\circ\pi_{1,0} \\ \noalign{\medskip} T^{(1)}_\alpha & = &\displaystyle\frac{\partial}{\partial x^\alpha}\circ\pi_{2,1} +v^i_\alpha \displaystyle\frac{\partial}{\partial q^i}\circ\pi_{2,1}+v^i_{\alpha\beta} \displaystyle\frac{\partial}{\partial v^i_\beta}\circ\pi_{2,1}. \end{array}$$ Using (\[dx\]) we have the maps $d_{T^{(0)}_\alpha}$ and $d_{T^{(1)}_\alpha}$ $$\begin{array}{lllll} d_{T^{(0)}_\alpha} &:& C^\infty(E) & \longrightarrow & C^\infty( J^1\pi ) \\ \noalign{\medskip} d_{T^{(1)}_\alpha}&:& C^\infty( J^1\pi ) & \longrightarrow & C^\infty( J^2\pi )\end{array}$$ defined by $ T^{(0)}_\alpha, \, T^{(1)}_\alpha$, respectively. From (\[dx\]) and (\[loct0t1\]) one obtains $$\label{0thetzeta40} d_{T^{(0)}_\alpha}F=\frac{\partial F}{\partial x^\alpha}\circ\pi_{1,0} +v^i_\alpha \, \frac{\partial F}{\partial q^i}\circ\pi_{1,0} \, ,$$ and $$\label{thetzeta4} d_{T^{(1)}_\alpha}G=\frac{\partial G}{\partial x^\alpha}\circ\pi_{2,1} +v^i_\alpha \, \frac{\partial G}{\partial q^i}\circ\pi_{2,1}+v^i_{\alpha\beta} \, \frac{\partial G}{\partial v^i_\beta}\circ\pi_{2,1} \, .$$ where $F\in C^\infty(E)$ and $G\in C^\infty( J^1\pi )$. 
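For instance (a direct computation from (\[0thetzeta40\]), included here as an illustration), for the coordinate functions on $E$ one obtains $$d_{T^{(0)}_\alpha}x^\beta=\delta^\beta_\alpha\,,\qquad d_{T^{(0)}_\alpha}q^i=v^i_\alpha\,,$$ which shows that, on functions of $E$, the operator $d_{T^{(0)}_\alpha}$ acts as the total derivative with respect to $x^\alpha$.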
[Remark]{} Let us observe that the vector fields $ T^{(0)}_\alpha$ and $ T^{(1)}_\alpha$ are $(\pi_{2,1},\pi_{1,0})$-related in the following sense $(\pi_{1,0})_*\circ T^{(1)}_\alpha=T^{(0)}_\alpha\circ \pi_{2,1}\quad . $ Prolongations of vector fields {#jets-prolong} ------------------------------ We recall the prolongations of vector fields from $E$ to $J^1\pi$ and the prolongation of a vector field along $\pi_{1,0}$ , see Saunders [@saundersJet] (Sections $4.4$ and Section $6.4$) . Let $X$ be a vector field on $E$ locally given by $$X= X_\alpha(q,v) \frac{\partial}{\partial x^\alpha}+X^i(q,v)\frac{\partial}{\partial q^i},$$ then its prolongation $X^1$ is the vector field on $J^1\pi$ whose local expression is $$\label{locX1gen} X^1=X_\alpha\frac{\partial}{\partial x^\alpha}+X^i \frac{\partial}{\partial q^i} + \left( \frac{dX^i}{d x^\alpha}- v_\beta^i \frac{dX_\beta}{d x^\alpha}\right) \frac{\partial}{\partial v_\alpha^i},$$ where $d/dx^\alpha$ denotes the total derivative, that is, $$\frac{d}{dx^\alpha}=\frac{\partial}{\partial x^\alpha}+ v^j_\alpha\frac{\partial }{\partial q^j} \qquad 1\leq \alpha\leq k.$$ Let us observe that $\displaystyle\frac{dF}{dx^\alpha}=T^{(0)}_\alpha(F)$ where $F\in C^\infty(J^1\pi) $. Let $X$ be a vector field along $\pi_{1,0}$ and $X^{(1)}$ its first prolongation along $\pi_{2,1}$, which means $$\xymatrix@=10mm{ & T(J^1\pi)\ar[d]^-{\tau_{J^1\pi}}& TE\ar[d]^-{\tau_E}\\ J^2\pi \ar[ru]^-{X^{(1)}}\ar[r]^-{\pi_{2,1}} & J^1\pi \ar[ru]^-{X}\ar[r]^-{\pi_{1,0}} & E } \quad ,$$ see Saunders [@saundersJet], Section $6.4$. If $X$ has the local expression $$X = X_\alpha(x,q,v)\, \frac{\partial}{ \partial x^\alpha}\circ \pi_{1,0} +X^i(x,q,v) \frac{\partial }{\partial q^i}\circ\pi_{1,0}$$ then $X^{(1)}$ is locally given by $$\begin{array}{ccl} X^{(1)} & = & X_\alpha\circ\pi_{2,1} \displaystyle\frac{\partial }{\partial x^\alpha}\circ\pi_{2,1}+X^i\circ\pi_{2,1} \displaystyle\frac{\partial }{\partial q^i}\circ\pi_{2,1} \\ \noalign{\medskip} & & + \left( d_{T^{(1)}_\alpha}(X^i\circ\pi_{2,1}) - d_{T^{(1)}_\alpha}(X_\beta\circ\pi_{2,1}) v^i_\beta \right)\frac{\partial }{\partial v^i_\alpha}\circ\pi_{2,1} \,\,\, . \end{array}$$ If $X$ is a vector field along $\pi_{1,0}$ and $\pi$-vertical then it is locally given by $$\label{locz1v} X^{(1)}=\left(X^i\frac{\partial }{\partial q^i}\right)\circ\pi_{2,1}+\left(\frac{ \partial X^i}{\partial x^\alpha}\circ\pi_{2,1}+ v^j_\alpha\frac{ \partial X^i}{\partial q^j}\circ\pi_{2,1} + v^j_{\alpha\beta}\frac{\partial X^i}{\partial v^j_\beta}\circ\pi_{2,1} \right)\frac{\partial }{\partial v^i_\alpha}\circ\pi_{2,1}$$ According to (\[dx\]), we know that the vector field $X^{(1)}$ along $\pi_{2,1}$ defines a map $$\begin{array}{ccccl} d_{X^{(1)}}&:& C^\infty( J^1\pi )& \to & C^\infty( J^2\pi )\\ \noalign{\medskip} & & F& \to & d_{X^{(1)}}F \, \, . \end{array}$$ with local expression $$\label{thetzeta40} d_{X^{(1)}}F=X_\alpha\circ \pi_{2,1} \frac{\partial F}{\partial x^\alpha}\circ \pi_{2,1}+X^i\circ \pi_{2,1} \frac{\partial F}{\partial q^i}\circ \pi_{2,1} + \left( d_{T^{(1)}_\alpha}X^i - v^i_\beta d_{T^{(1)}_\alpha}X_\beta \right)\frac{\partial F}{\partial v^i_\alpha}\circ \pi_{2,1}\, .$$ [Remark]{} Let $X$ be a $\pi$-vertical vector field on $E$. 
The vector field $X\circ \pi_{1,0}$ along $\pi_{1,0}$ $$\xymatrix@=10mm{ & TE\ar[d]\\ J^1\pi\ar[ru]^-{X\circ \pi_{1,0}}\ar[r]^-{\pi_{1,0}} & E }$$ satisfies that $$\label{xx} ( X\circ \pi_{1,0})^{(1)}=X^1\circ \pi_{2,1} .$$ Vertical endomorphisms {#jets-vert} ---------------------- Each $1$-form $dx^\alpha, \, 1\leq \alpha\leq k$, defines a canonical tensor field $S_{ dx^\alpha}$ on $J^1\pi$ of type $(1,1)$, see Saunders [@saundersJet], page $156$, with local expression $$\label{locsalf} S_{ dx^\alpha}\equiv (dq^i-v^i_\beta dx^\beta)\otimes\frac{\partial}{\partial v^i_\alpha}\quad .$$ Throughout the paper, we denote $S_{ dx^\alpha}$ by $S^\alpha$. The vector valued $k$-form $S$ on $J^1\pi$, defined in [@saunders1; @saundersJet], whose values are vertical vectors over $E$, is given in coordinates by $$\label{localS} S=\left((dq^i-v^i_\beta dx^\beta)\wedge d^{k-1}x_\alpha \right)\otimes\frac{\partial }{\partial v^i_\alpha}$$ where $$d^{k-1}x_{\alpha}=i_{{\partial}/{\partial x^{\alpha}}}d^kx = (-1)^{\alpha-1}dx^1\wedge \cdots \wedge dx^{\alpha-1}\wedge dx^{\alpha+1} \wedge \cdots \wedge dx^k$$ and $d^kx=dx^1 \wedge \cdots \wedge dx^k$ is the standard volume form on $\mathbb{R}^k$. From (\[locsalf\]) and (\[localS\]) we deduce that $S$ and $\{S_{dx^1},\ldots , S_{dx^k}\}$ are related by the formula $$S=S^\alpha \wedge d^{k-1}x_{\alpha}.$$ We also have $dx^{\alpha}\wedge S=-S^\alpha \wedge d^kx.$ Contact structures and second order partial differential equations {#jets-sodes} ------------------------------------------------------------------ Let us consider the *Cartan distribution*, which is the $(k+nk)$-dimensional distribution, given by $$C(J^1\pi) = \, Ker\, S^{1}\cup \ldots \cup \, Ker\, S^{k}\, .$$ From (\[locsalf\]) we deduce that $X\in C(J^1\pi) $ if, and only if, $$(dq^i-v^i_{\alpha}dx^{\alpha})(X)=0,$$ and thus $X$ is locally given by $$X=X_{\alpha}\left(\frac{\partial}{\partial x^{\alpha}} + v^i_{\alpha}\frac{\partial}{\partial q^i}\right) + X^i_{\alpha}\frac{\partial}{\partial v^i_{\alpha}} .$$ Therefore, a local basis for $C(J^1\pi)$ is given by the local $k+nk$ vector fields $$\frac{\partial }{\partial x^\alpha}+v^i_\alpha\,\frac{\partial }{\partial q^i} \quad , \quad \frac{\partial }{\partial v^i_\alpha} .$$ We also consider the *contact codistribution*, which is the $n$-dimensional distribution that represents the annihilator of the Cartan distribution and is given by $$\begin{aligned} \label{contact} \Lambda^1_C(J^1\pi)\, =\, \{\theta \in \Lambda^1(J^1\pi),\, \, (j^1\phi)^*\theta=0,\, \, \forall \phi \in \, Sec\, (\pi)\} . \nonumber \end{aligned}$$ A local basis for $\Lambda^1_C(J^1\pi)$ is the set of canonical $1$-forms $$\delta q^i=dq^i-v^i_\beta dx^\beta, \quad i=1, \ldots, n.$$ [Definition]{}\[sode2\] A $k$-vector field ${\bf \Gamma}=(\Gamma_1,\dots,\Gamma_k)$ on $J^1\pi$ is said to be a second order partial differential equation ([sopde]{} for short) if $$dx^\alpha(\Gamma_\beta)=\delta_\beta^\alpha , \quad S^\alpha(\Gamma_\beta)=0,$$ or equivalently, $$dx^\alpha(\Gamma_\beta)=\delta_\beta^\alpha , \quad \delta q^i(\Gamma_\beta)=0\quad$$ for all $i=1 \ldots n$, $\alpha, \beta=1 \ldots k.$ Every vector field $\Gamma_\alpha$ of a [sopde]{} belongs to the Cartan distribution. 
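In the particular case $k=1$ (an illustrative remark added here, consistent with the cosymplectic setting for time-dependent Lagrangian mechanics mentioned in the Introduction), writing $(t,q^i,v^i)$ for the coordinates on $J^1\pi$, the tensor (\[locsalf\]) reduces to the canonical vertical endomorphism $$S^1=(dq^i-v^i\,dt)\otimes\frac{\partial}{\partial v^i}$$ of time-dependent mechanics, the contact forms are $\delta q^i=dq^i-v^i\,dt$, and Definition \[sode2\] recovers the usual notion of a second order differential equation field: a single vector field $\Gamma$ on $J^1\pi$ with $dt(\Gamma)=1$ and $S^1(\Gamma)=0$.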
From Definition \[sode2\] and the expression (\[locsalf\]) of $S^\alpha$, we obtain that the local expression of a [sopde]{} $(\Gamma_1 ,\ldots,\Gamma_k) $ is $$\label{localxia} \Gamma_\alpha(x^\beta,q^i,v^i_\beta)=\frac{\partial}{\partial x^\alpha}+v^i_\alpha\frac{\displaystyle \partial} {\displaystyle \partial q^i}+ \Gamma^i_{\alpha \beta} \frac{\displaystyle\partial} {\displaystyle \partial v^i_\beta},\quad 1\leq \alpha \leq k$$ where $\Gamma^i_{\alpha \beta} $ are functions locally defined on $J^1\pi$. As a direct consequence of the above local expressions, we deduce that the family of vector fields $\{\Gamma_1, \ldots , \Gamma_k\}$ are linearly independent. [Definition]{}\[de652\] Let $\phi:{\mathbb R}^k \rightarrow E$ be a section of $\pi$, locally given by $\phi(x^\alpha)=(x^\alpha,\phi^i(x^\alpha))$, then the [first prolongation]{} $j^1\phi$ of $\phi$ is the map $$\label{locj1phi} \begin{array}{rcl} j^1\phi:{\mathbb R}^k & \longrightarrow & J^1\pi \\ x & \longrightarrow & j^1_x\phi\equiv\left( x^1, \dots, x^k, \phi^i (x^1, \dots, x^k), \frac{\displaystyle\partial\phi^i}{\displaystyle\partial x^\alpha} (x^1, \dots, x^k)\right) \end{array}$$ for all $\alpha=1,\ldots,k$ and for all $x\in Domain \,\phi$. We will see that the integral sections of a [sopde]{} are prolongations of sections. The following proposition has been also proved in Saunders [@saundersJet]. [Proposition]{}\[RelSecCont\] A section $\psi$ of $\pi_1$ is the $1$-jet prolongation of a section of $\pi$ (in other words it is a *holonomic field*) if, and only, if $\psi^*\theta=0$ for all $\theta \in \Lambda^1_c(J^1\pi)$. Proof: Consider $\psi: {\mathbb R}^k \to J^1\pi$, $\psi(x^{\alpha})=(x^{\alpha}, \psi^i(x^{\alpha}), \psi^i_{\beta}(x^{\alpha}))$ a section of $\pi_1$ and $\theta=\theta_i(dq^i-v^i_{\beta}dx^{\beta}) \in \Lambda^1_c(J^1\pi)$. It follows that $$\psi^*\theta=\theta_i \circ \psi \left(\frac{\partial \psi^i}{\partial x^\beta} - \psi^i_\beta \right)dx^\alpha \, .$$ Therefore, $\psi^*\theta = 0$ for all $\theta \in \Lambda^1_c(J^1\pi)$ if and only if $\psi^i_{\beta}= {\partial \psi^i}/{\partial x^{\beta}}$. ------------------------------------------------------------------------ Now we characterize the integral sections of a [sopde]{}. [Proposition]{} Let ${\bf \Gamma}$ be an integrable $k$-vector field. Then the following three properties are equivalent: i\) ${\bf \Gamma}$ is a [sopde]{} ii\) The integral sections of ${\bf \Gamma}$ are $1$-jets prolongations of sections of $\pi$. iii\) There exists a section $\gamma$ for $ \pi_{2,1}$ such that $\Gamma_\alpha=T_\alpha^{(1)}\circ \gamma$. Proof: Let ${\bf \Gamma}$ be an integrable $k$-vector field and $\psi\colon \mathbb{R}^k \to J^1\pi$ an integral section for ${\bf \Gamma}$. Then from (\[integcond\]) and the local expression (\[localxia\]) of $\Gamma_\alpha$, we obtain $$\begin{array}{ccl} (\pi_1\circ \psi)_*(x)\left(\displaystyle\frac{\partial }{\partial x^\alpha}\Big\vert_x\right)&= &(\pi_1)_*(\psi(x))\left(\psi_*(x)\left(\displaystyle\frac{\partial }{\partial x^\alpha}\Big\vert_x\right)\right) = (\pi_1)_*(\psi(x))\Gamma_\alpha(\psi(x)) \\ \noalign{\medskip} &=&\displaystyle\frac{\partial }{\partial x^\alpha}\Big\vert_{\pi_1(\psi(x))} \end{array}$$ which means $$\frac{\partial }{\partial x^\alpha}\Big\vert_x(x^\beta\circ\pi_1\circ\psi)= \delta^\alpha_\beta$$ so $\pi_1\circ\psi=id_{\mathbb{R}^k }$ ,i.e., $\psi$ is a local section for $\pi_1$. We must prove that $\psi=j^1\phi$ where $\phi$ is a section of $\pi$. 
To this end we use Proposition \[RelSecCont\], showing that $\psi^*\theta=0$ for all $\theta\in \Lambda^1_C(J^1\pi)$. Let us assume that $\psi$ is an integral section passing through $j_x^1\varphi$, that is $\psi(x)=j_x^1\varphi$. Now, since ${\bf \Gamma}$ is a [sopde]{} and $\theta\in \Lambda^1_C(J^1\pi)$, then $i_{\Gamma_\alpha}\theta=0$. Thus, $$\begin{array}{ccl} 0&=&i_{\Gamma_\alpha}\theta(j^1_x\varphi)=\theta(j^1_x\varphi)\Gamma_\alpha(j^1_x\varphi)=\theta(j^1_x\varphi)\Gamma_\alpha(\psi(x))\\ \noalign{\medskip} &=& \theta(\psi(x))\left( \psi_*(x)\left(\displaystyle\frac{\partial }{\partial x^\alpha}\Big\vert_x\right)\right) =(\psi^*\theta)(x)\left(\displaystyle\frac{\partial }{\partial x^\alpha}\Big\vert_x\right) \end{array}$$ So, we have proved $\psi^*\theta=0$ for all $\theta \in \Lambda^1_C(J^1\pi)$. We define $$\begin{array}{cccl} \gamma \colon &J^1\pi &\to &J^2\pi \\ \noalign{\medskip} &j^1_x\sigma&\to &\gamma(j^1_x\sigma)=j^2_x\varphi, \end{array}$$ where $j^1\varphi$ is an integral section of ${\bf \Gamma}$ passing through $j^1_x\sigma$ (i.e., $j^1_x\varphi=j^1\varphi(x)=j^1_x\sigma$). Then $$(\pi_{2,1}\circ\gamma)(j^1_x\sigma)=\pi_{2,1}(\gamma(j^1_x\sigma))=\pi_{2,1}(j^2_x\varphi)=j^1_x\varphi=j^1_x\sigma \, .$$ It follows that the map $\gamma$ so defined is a section for $\pi_{2,1}$. Moreover, the vector field $\Gamma_\alpha$ can be expressed as $\Gamma_\alpha=T^{(1)}_\alpha\circ\gamma$, in fact, $$\Gamma_\alpha(j^1_x\sigma)=(j^1\varphi)_*(x)\left(\frac{\partial }{\partial x^\alpha}\Big\vert_x\right) =T^{(1)}_\alpha(j^2_x\varphi)=(T^{(1)}_\alpha\circ\gamma)(j^1_x\sigma)\, .$$ Let $\gamma$ be a section for $\pi_{2,1}$. We must prove that $\gamma$ defines a [sopde]{} by composition with $T^{(1)}_\alpha$. Since $$\tau_{J^1\pi}\circ\Gamma_\alpha=\tau_{J^1\pi}\circ T^{(1)}_\alpha\circ\gamma=\pi_{2,1}\circ\gamma=id_{J^1\pi},$$ where $\tau_{J^1\pi} \colon T(J^1\pi)\to J^1\pi$ is the canonical projection, then $\Gamma_\alpha=T^{(1)}_\alpha\circ\gamma$ is a vector field on $J^1\pi$. Moreover, ${\bf \Gamma}$ is a [sopde]{} $$dx^\beta(\Gamma_\alpha)(j^1_x\sigma)=dx^\beta(T^{(1)}_\alpha\circ\gamma)(j^1_x\sigma)=\delta^\alpha_\beta$$ and $$\begin{array}{ccl} \delta q^i(\Gamma_\alpha)(j^1_x\sigma)&=&\delta q^i(j^1_x\sigma)\left((T^{(1)}_\alpha\circ\gamma)(j^1_x\sigma)\right)= \delta q^i(j^1_x\sigma)\left(T^{(1)}_\alpha(j^2_x\varphi)\right)\\ \noalign{\medskip} &=& v^i_\alpha(j^2_x\varphi)-v^i_\alpha(j^1_x\sigma)=0, \end{array}$$ where the last identity is true because $\gamma$ is a section for $\pi_{2,1}$ so $j^1_x\sigma=(\pi_{2,1}\circ\gamma)(j^1_x\sigma)=\pi_{2,1}(j^2_x\varphi)=j^1_x\varphi$. ------------------------------------------------------------------------ If $j^1\phi$ is an integral section of a [sopde]{} ${\bf \Gamma}$, then $\phi$ is called [*a solution of*]{} ${\bf \Gamma}$. From (\[integcond\]) and (\[localxia\]), we deduce that $\phi$ is a solution of ${\bf \Gamma}$ if, and only if, $q^i\circ \phi=\phi^i$ is a solution to the following system of second order partial differential equations $$\label{xisol} \frac{\partial^2 \phi^i} {\partial x^\alpha \partial x^\beta}\Big\vert_x=\Gamma^i_{\alpha \beta} \left(x,\phi^i(x),\frac{\partial \phi^i}{\partial x^\gamma}\right).$$ where $1\leq i\leq n$ and $1\leq \alpha,\beta \leq k$. The integrability conditions for the system (\[xisol\]) requires that the $k$-dimensional distribution induced by the [sopde]{} ${\bf \Gamma} $, $H_0=\, span\, \{\Gamma_1,..., \Gamma_k\}$ is integrable. 
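Before turning to these integrability conditions in more detail, the correspondence just described can be checked on a toy case. The sketch below (an illustration added here, not taken from the paper) takes $k=2$, $n=1$ and constant, symmetric coefficients $\Gamma^1_{\alpha\beta}=c_{\alpha\beta}$ — which automatically satisfy the integrability conditions discussed next — and verifies that $\phi(x)=\tfrac{1}{2}c_{\alpha\beta}x^\alpha x^\beta$ solves the second order system and that its first prolongation is an integral section of the corresponding [sopde]{}:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
c11, c12, c22 = sp.symbols('c11 c12 c22')          # constant, symmetric Gamma^1_{alpha beta}
c = {(1, 1): c11, (1, 2): c12, (2, 1): c12, (2, 2): c22}
xs = {1: x1, 2: x2}

# candidate solution phi(x) = (1/2) c_{alpha beta} x^alpha x^beta
phi = sp.Rational(1, 2) * (c11*x1**2 + 2*c12*x1*x2 + c22*x2**2)

# components of the first prolongation j^1 phi : x -> (x, phi(x), d phi/dx^1, d phi/dx^2)
j1phi = {'q': phi, 'v1': sp.diff(phi, x1), 'v2': sp.diff(phi, x2)}

for a in (1, 2):
    for b in (1, 2):
        # the second-order system (xisol): d^2 phi / dx^alpha dx^beta = Gamma_{alpha beta}
        assert sp.simplify(sp.diff(phi, xs[a], xs[b]) - c[(a, b)]) == 0
        # integral-section condition: d/dx^alpha of the v_beta-component of j^1 phi
        # reproduces Gamma_{alpha beta} along j^1 phi
        assert sp.simplify(sp.diff(j1phi[f'v{b}'], xs[a]) - c[(a, b)]) == 0
    # ... and d/dx^alpha of the q-component reproduces v_alpha along j^1 phi
    assert sp.simplify(sp.diff(j1phi['q'], xs[a]) - j1phi[f'v{a}']) == 0

print("phi solves (xisol) and j^1 phi is an integral section of the SOPDE")
```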
The local expression (\[localxia\]) for a [sopde]{} ${\bf \Gamma}$ shows that $[\Gamma_{\alpha}, \Gamma_{\beta}]=A^{\gamma}_{\alpha\beta}\Gamma_{\gamma}$ if, and only if, $A^{\gamma}_{\alpha\beta}=0$. Therefore, for a [sopde]{} ${\bf \Gamma}$, we assume throughout the paper the following integrability conditions $$\begin{aligned} \Gamma^i_{\alpha\beta}=\Gamma^i_{\beta\alpha}, \quad \Gamma_{\alpha}(\Gamma^i_{\beta\gamma})=\Gamma_{\beta}(\Gamma^i_{\alpha\gamma}), \label{symxi}\end{aligned}$$ which are equivalent to the condition $[\Gamma_{\alpha}, \Gamma_{\beta}]=0$ for all $\alpha,\beta=1, \ldots, k$. LAGRANGIAN FIELD THEORY {#lagfield} ======================= Poincarè-Cartan 1-forms {#lagfield-pc forms} ----------------------- Let $L:J^1\pi \to \mathbb{R}$ be a Lagrangian. For each $\alpha=1 \ldots k$ we define the [*Poincarè-Cartan 1-forms*]{} $\Theta_L^\alpha$ on $J^1\pi$ as the $1$-forms $$\Theta^{\alpha}_{L}= Ldx^{\alpha} + dL\circ S^{\alpha} \, , \quad 1\leq \alpha\leq k .$$ Their local expressions are given by $$\label{thetal} \Theta^\alpha_L= \left(L\delta^{\alpha}_{\beta} -\frac{\partial L}{\partial v^i_{\alpha}}v^i_{\beta}\right) dx^{\beta} + \frac{\partial L}{\partial v^i_{\alpha}} dq^i,$$ or equivalently $$\label{thetal1} \Theta^\alpha_L= \ \frac{\partial L}{\partial v^i_{\alpha}} (dq^i-v^i_\beta dx^\beta)+Ldx^\alpha= \frac{\partial L}{\partial v^i_{\alpha}}\delta q^i+Ldx^\alpha\,\, .$$ Let us observe that the fact of working with a fiber bundle over ${\mathbb R}^k$ allows us to introduce the tensors $S^\alpha$ and consequently the $1$-forms $\Theta^\alpha_L$. It is known that a Lagrangian $L$ induces a $k$-form $\Theta_L$, called the Cartan form in first-order field theories, Saunders [@saunders1; @saundersJet], which is given by $$\Theta_L=Ld^kx+ dL\circ S. \label{tl}$$ with local expression $$\Theta_L= \frac{\partial L}{\partial v^i_{\alpha}} (dq^i-v^i_\beta dx^\beta)\wedge(i_{\frac{\partial }{\partial x^\alpha}}\wedge d^kx)+Ld^kx \, \, .$$ The relationship between the Cartan form and the Poincarè-Cartan 1-forms is given by the following identity $$\label{relat} \Theta_L =\Theta_L ^\alpha \wedge d^{k-1}x_\alpha+ (1-k)L d^kx\quad .$$ Euler-Lagrange field equations {#lagfield-EL} ------------------------------- Let ${\bf \Gamma}=(\Gamma_1,\dots,\Gamma_k)$ be an integrable [sopde]{}. A direct computation, using (\[localxia\]), (\[symxi\]) and (\[thetal1\]), gives the formula $$\label{lthetl} \mathcal{L}_{\Gamma_\alpha}\Theta^{\alpha}_L=dL+ \left(\Gamma_\alpha \left( \frac{\partial L}{\partial v^i_\alpha} \right) - \frac{\partial L}{\partial q^i}\right) (dq^i-v^i_\beta dx^\beta) ,$$ and proves the following lemma. [Lemma]{} Let ${\bf \Gamma}=(\Gamma_1,\dots,\Gamma_k)$ be an integrable [sopde]{} satisfying $$\Gamma_\alpha \left(\frac{\partial L}{\partial v^i_\alpha}\right)-\frac{\partial L}{\partial q^i} =0\,,\quad i=1,\ldots , n\, .$$ If $j^1\phi$ is an integral section of ${\bf \Gamma}$, then $\phi$ is a solution to the Euler -Lagrange equations $$\label{e-l-2} \frac{\partial^2 L}{\partial x^\alpha \partial v^i_\alpha}\Big\vert_{j^1_x\phi}+ \frac{\partial \phi^j}{\partial x^\alpha}\Big\vert_{x} \frac{\partial^2 L}{\partial q^j \partial v^i_\alpha}\Big\vert_{j^1_x\phi}+ \frac{\partial^2 \phi^j}{\partial x^\alpha \partial x^\beta}\Big\vert_{x} \frac{\partial^2 L}{\partial v^j_\beta \partial v^i_\alpha}\Big\vert_{j^1_x\phi}= \frac{\partial L}{\partial q^i}\Big\vert_{j^1_x\phi},$$ where $x\in Domain \, \phi$. 
Equations (\[e-l-2\]) are usually written as follows $$\frac{\partial }{\partial x^\alpha}\Big\vert_x\left(\frac{\partial L}{\partial v^i_\alpha}\circ j^1\phi \right)=\frac{\partial L}{\partial q^i}\Big\vert_{j^1_x\phi}, \quad 1\leq i\leq k \, .$$ From (\[lthetl\]) we deduce the following proposition: [Proposition]{}\[solELvect\] Let ${\bf \Gamma}$ be an integrable [sopde]{}, then $$\label{lthetl1} \mathcal{L}_{\Gamma_\alpha}\Theta^{\alpha}_L=dL$$ if and only if $$\Gamma_\alpha \left(\frac{\partial L}{\partial v^i_\alpha}\right) - \frac{\partial L}{\partial q^i} =0\,,\quad i=1, \ldots, n.$$ Thus, we have the following result [Corollary]{} Let ${\bf \Gamma}$ be an integrable [sopde]{}; if $j^1\phi$ is solution to (\[lthetl1\]), that is $ \mathcal{L}_{\Gamma_\alpha}\Theta^{\alpha}_L(j^1\phi)=dL(j^1\phi)$, then $\phi$ is solution to the Euler-Lagrange equations. From (\[lthetl1\]) and the identity $i_{\Gamma_\alpha}\Theta^\alpha_L=kL$ we deduce the following proposition: [Proposition]{}\[relsopdeELeq\] Let ${\bf \Gamma}$ be a [sopde]{}, then the equation (\[lthetl1\]) is equivalent to the equation $$i_{\Gamma_\alpha} \Omega^\alpha_L=(k-1)dL \, ,$$ where $\Omega^\alpha_L=-d\Theta^\alpha_L$ will be called the Poincaré-Cartan $2$-forms. Taking into account the above results we are able to prove a theorem that gives us a new Lagrangian field formulation for a bundle $E\to \mathbb{R}^k $. First we recall that a Lagrangian $L:J^1\pi \to \mathbb{R}$ is said to be regular if the matrix $$\left(\frac{\partial^2L}{\partial v^j_\alpha \partial v^i_\beta}\right)\quad 1\leq\, \alpha,\beta\, \leq k, \, \,\, 1\leq\, j,i\, \leq n$$ is not singular, where $1\leq\alpha,\beta\leq k, \, 1\leq j,i\leq n$. [Theorem]{}\[relvectorphisol\] Let ${\bf X}=(X_1, \ldots, X_k)$ be a $k$-vector on $J^1\pi$ such that $$\label{geoverel} dx^\alpha(X_\beta)= \delta^\alpha_\beta \quad , \quad i_{X_\alpha} \Omega^\alpha_L=(k-1)dL \, ,$$ 1. If $L$ is regular then ${\bf X}=(X_1, \ldots, X_k)$ is a [sopde]{}. If ${\bf X}$ is integrable and $\phi\colon U_0 \subset \mathbb{R}^k \to E$ is a solution of ${\bf X}$ then $\phi$ is a solution to the Euler-Lagrange equations (\[e-l-2\]). 2. Now, if ${\bf X}=(X_1, \ldots, X_k)$ is integrable and $j^1\phi$ is an integral section of ${\bf X}$, then $\phi$ is a solution to the Euler-Lagrange equations (\[e-l-2\]). Proof: $(i)$ If we write each $X_\alpha$ in local coordinates as $$X_\alpha = \frac{\partial}{\partial x^\alpha}+X^i_\alpha\frac{\displaystyle \partial} {\displaystyle \partial q^i}+ X^i_{\alpha \beta} \frac{\displaystyle\partial} {\displaystyle \partial v^i_\beta},\quad 1\leq \alpha \leq k,$$ then, from (\[thetal\]) and (\[geoverel\]), we obtain $$\begin{array}{cccl} (1) & X_\alpha \left (L \delta ^\alpha_\beta - \displaystyle\frac{\partial L}{\partial v^i_{\alpha}} v^i_{\beta}\right) + (v^i_{\alpha} -X^i_{\alpha}) \displaystyle\frac{\partial^2 L}{\partial x^\beta \partial v^i_\alpha} & = & \displaystyle\frac{\partial L}{\partial x^\beta} \\ \noalign{\medskip} (2) & X_\alpha \left(\displaystyle\frac{\partial L}{\partial v^j_\alpha}\right) + (v^i_\alpha - X^i_{\alpha}) \displaystyle\frac{\partial^2 L}{\partial q^j \partial v^i_\alpha} &=& \displaystyle\frac{\partial L}{\partial q^j} \\ \noalign{\medskip} (3) & (v^i_\alpha - X^i_\alpha) \displaystyle\frac{\partial^2L}{\partial v^j_\alpha \partial v^i_\beta}&=& 0 . \end{array}$$ Using that $L$ is regular we deduce from $(3)$ that $X^i_\alpha= v^i_\alpha$. 
Since $\mathbf{X}$ is a [sopde]{} and we assume it to be integrable, its integral sections are prolongations $j^1\phi$ of sections $\phi$ of $\pi$. Then, from $$X_\alpha(j^1_x\phi)=(j^1\phi)_*(x) \left(\frac{\partial }{\partial x^\alpha}\Big\vert_{x}\right)$$ we deduce $$\label{tec1} X^i_\alpha\circ j^1\phi=v^i_\alpha\circ j^1\phi= \frac{\partial \phi^i}{\partial x^\alpha}\, , \quad X^i_{\alpha\beta}\circ j^1\phi =\frac{\partial^2\phi^i}{\partial x^\alpha \partial x^\beta}\, ,$$ and from $(2)$ we obtain $$\frac{\partial^2 L}{\partial x^\alpha \partial v^i_\alpha} + \frac{\partial \phi^j}{\partial x^\alpha} \frac{\partial^2 L}{\partial q^j \partial v^i_\alpha} + \frac{\partial^2\phi^j}{\partial x^\alpha \partial x^\beta} \frac{\partial^2 L}{\partial v^j_\beta \partial v^i_\alpha} = \frac{\partial L}{\partial q^i},$$ which means that $\phi$ is a solution to the Euler-Lagrange equations (\[e-l-2\]).

$(ii)$ If $j^1\phi$ is an integral section of ${\bf X}$, then from the local expression (\[locj1phi\]) of $j^1\phi$ we obtain the equations (\[tec1\]). Thus, from equation $(2)$ and (\[tec1\]) we deduce that $\phi$ is a solution to the Euler-Lagrange equations (\[e-l-2\]).

------------------------------------------------------------------------

[Remark]{} For $k=1$, that is, when the fibration is $E\to \mathbb{R}$, we obtain the well-known dynamical equation $i_\Gamma d\Theta=0$, where ${\bf \Gamma}$ is a second order differential equation [sode]{}, see [@CFN1993]. In the case $E=\mathbb{R}\times Q\to \mathbb{R}$, equations (\[geoverel\]) can be written as the dynamical equations $$dt(X)=1, \quad i_X \Omega_L=0$$ where $\Omega_L$ is the Poincaré-Cartan $2$-form defined by the Lagrangian $L\colon \mathbb{R}\times TQ\to \mathbb{R}$. These equations coincide with the cosymplectic formulation of non-autonomous mechanics, see [@clm; @lr]. The relationship between this formalism and the $k$-cosymplectic formalism, described in [@mod2] (see also [@EMR; @krupkova97; @msv; @RRSV11]), will be analyzed in the next section.

[Example]{} (The Klein-Gordon equation.) The equation of a scalar field $\phi$ (for instance the gravitational field) which acts on the four-dimensional space-time is [@KijTul] $$\label{scalar} (\Box+m^2)\phi=F'(\phi)$$ where $m$ is the mass of the particle over which the field acts, $F$ is a scalar function such that $F(\phi)-\frac{1}{2}m^2\phi^2$ is the potential energy of the particle of mass $m$, and $\Box$ is the Laplace-Beltrami operator given by $$\Box\phi:={\rm div}\,{\rm grad}\, \phi=\frac{1}{\sqrt{-g}}\frac{\partial}{\partial x^\alpha}\left(\sqrt{-g}\,g^{\alpha\beta}\frac{\partial\phi}{\partial x^\beta}\right)\,,$$ $(g_{\alpha\beta})$ being a pseudo-Riemannian metric tensor in the four-dimensional space-time of signature $(-+++)$. Now we consider the trivial bundle $\pi:E=\mathbb{R}^4\times \mathbb{R} \to \mathbb{R}^4$, with coordinates $(x^1,\ldots, x^4,q)$ on $E$, and $(x^1,\ldots, x^4,q,v_1,\ldots,v_4)$ the induced coordinates on $J^1\pi$. Let $L$ be the Lagrangian $L:J^1\pi \to \mathbb{R}$ defined by $$L(x^1,\ldots, x^4,q, v_1,\ldots, v_4)=\sqrt{-g}\left(F(q)-\frac{1}{2}m^2q^2+\frac{1}{2}g^{\alpha\beta}v_\alpha v_\beta\right),$$ which is regular.
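The regularity claim can be verified directly: the Hessian of $L$ with respect to the velocities is $\partial^2 L/\partial v_\alpha\partial v_\beta=\sqrt{-g}\,g^{\alpha\beta}$, whose determinant equals $\det(g_{\alpha\beta})\neq 0$, so $L$ is regular for any pseudo-Riemannian metric. A minimal symbolic check (added here for illustration, assuming for simplicity the flat Minkowski metric $g_{\alpha\beta}=\mathrm{diag}(-1,+1,+1,+1)$, so that $\sqrt{-g}=1$) is:

```python
import sympy as sp

q, m = sp.symbols('q m')
v = sp.symbols('v1 v2 v3 v4')
F = sp.Function('F')(q)

g_inv = sp.diag(-1, 1, 1, 1)     # g^{alpha beta} for the Minkowski metric, signature (-+++)
sqrt_minus_g = 1                 # sqrt(-det g) = 1 in these coordinates

L = sqrt_minus_g * (F - sp.Rational(1, 2)*m**2*q**2
                    + sp.Rational(1, 2)*sum(g_inv[a, b]*v[a]*v[b]
                                            for a in range(4) for b in range(4)))

# Hessian of L with respect to the velocities v_alpha
hess = sp.Matrix(4, 4, lambda a, b: sp.diff(L, v[a], v[b]))

assert hess == sqrt_minus_g * g_inv      # equals sqrt(-g) g^{alpha beta}
assert hess.det() != 0                   # nondegenerate, hence L is regular
print("Hessian =", hess.tolist(), ";  det =", hess.det())
```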
Let us assume that ${\bf X}=(X_1,X_2,X_3,X_4)$ is an integrable $4$-vector field on $J^1\pi $ solution to the equations (\[geoverel\]), that is $$\label{sc} dx^\alpha(X_\beta)=\delta^\alpha_\beta \,\, ,\quad i_{X_1}\Omega^1_L+ i_{X_2}\Omega^2_L +i_{X_3}\Omega^3_L+ i_{X_4}\Omega^4_L = 3dL$$ From (\[sc\]) we deduce $$X_\alpha\left(\frac{\partial L}{\partial v_\alpha}\right)=\frac{\partial L}{\partial q},$$ which is equivalent to $$X_\alpha ( \sqrt{-g}g^{\alpha\beta}v_\beta )=\frac{\partial L}{\partial q}=\sqrt{-g}(F'(q)-m^2q)\, .$$ Since $L$ is regular, then ${\bf X}$ is a [sopde]{}, and if $j^1\psi$ is an integral section of ${\bf X}$, then $$(j^1\psi)_*(x)\left(\frac{\partial }{\partial x^\alpha}\Big\vert_x\right)\left(\sqrt{-g}g^{\alpha\beta}v_\beta \right)= \sqrt{-g}(F'(\psi(x))-m^2\psi(x))$$ which can be written as $$0=\sqrt{-g}\frac{\partial }{\partial x^\alpha}\left(g^{\alpha\beta} \frac{\partial \psi}{\partial x^\beta}\right)-\sqrt{-g}(F'(\psi)-m^2\psi)$$ and thus we obtain that $\psi$ is a solution to the scalar field equation (\[scalar\]). [Remark]{} Some particular cases of the scalar field equation (\[scalar\]) are: 1. If $F=0$, we obtain the linear scalar field equation. 2. If $F(q)=m^2q^2$, we obtain the Klein-Gordon equation, see [@saletan], $$(\Box+m^2)\phi=0\,.$$ Relationship with the $k$-cosymplectic formalism {#lagfield-cosy} ------------------------------------------------- We shall show the difference between the corresponding Poincaré-Cartan forms in this new formalism and the Poincaré-Cartan forms of the k-cosymplectic formalism. Let us remark that this new approach is closer to the multysimplectic description, see identity (\[relat\]), than to the k-cosymplectic one. Throught this section we consider the trivial bundle $E=\mathbb{R}^k \times Q \to \mathbb{R}^k $. In this case, the manifold $J^1\pi$ of 1-jets of sections of the trivial bundle $\pi:\mathbb{R}^k \times Q \to \mathbb{R}^k $ is diffeomorphic to $\mathbb{R}^k \times T^1_kQ$, where $T^1_kQ$ denotes the Whitney sum $TQ\oplus\stackrel{k}{\ldots}\oplus TQ$ of $k$ copies of $TQ$. The diffeomorphism is given by $$\begin{array}{rcl} J^1\pi & \to & \mathbb{R}^k \times T^1_kQ \\ \noalign{\medskip} j^1_x\phi= j^1_x(Id_{\mathbb{R}^k },\phi_Q) & \to & ( x,v_1, \ldots ,v_k) \end{array}$$ where $\phi_Q: \mathbb{R}^k \stackrel{\phi}{\to} \mathbb{R}^k \times Q \stackrel{\pi_Q}{\to}Q $, and $$v_\alpha=(\phi_Q)_*(x)\left(\frac{\partial}{\partial x^\alpha}\Big\vert_x\right)\, , \quad 1\leq \alpha \leq k \, .$$ Now we recall the $k$-cosymplectic Lagrangian formalism, beginning with the necessary geometric elements - The Liouville vector field $\Delta$ on $\mathbb{R}^k \times T^1_kQ$ is the infinitesimal generator of the following flow $$\begin{array}{ccl} \mathbb{R} \times(\mathbb{R}^k\times T^1_kQ) & \to & \mathbb{R}^k\times T^1_kQ \\ \noalign{\medskip} (s,(t,v_{1_q},\cdots,v_{k_q})) & \to & (t, e^sv_{1_q},\cdots,e^sv_{k_q}) \end{array}$$ and in local coordinates it has the form $\Delta=v^i_\alpha \displaystyle\frac{\partial }{\partial v^i_\alpha} \, .$ - The canonical vector field $\Delta^\alpha_\beta\,,\,\, 1\leq \alpha,\beta \leq k$ is the vector field on $\mathbb{R}^k \times T^1_kQ$ defined by $$\Delta^\alpha_\beta(x,{v_1}_q,\ldots , {v_k}_q)= \frac{d}{ds}\Big\vert_{0}\left(x,{v_1}_q, \ldots, {v_{\alpha-1}}_q, {v_\alpha}_q+s{v_\beta}_q,{v_{\alpha+1}}_q,\ldots, {v_{k}}_q\right)$$ with local expression $\Delta^\alpha_\beta=v^i_\beta \displaystyle\frac{\partial }{\partial v^i_\alpha}$. 
Let us observe that $\Delta=\Delta^\alpha_\alpha$. - For a Lagrangian $L:\mathbb{R}^k \times T^1_kQ \to \mathbb{R}$, the [*energy function*]{} is defined as $E_L=\Delta(L)-L $ and its local expression is $$\label{LagranEner} E_L=v^i_\alpha\frac{\partial L}{\partial v^i_\alpha}-L \quad .$$ - The canonical $k$-tangent structure on $T^1_kQ$ is the family $\{J^1,\ldots, J^k\} $ of tensor fields locally given by $$J_\alpha= \frac{\partial }{\partial v^i_\alpha}\otimes dq^i \, .$$ The [*natural extension*]{} $J^\alpha$ of the tensor fields $J^\alpha$ on $T^1_kQ$ to $\mathbb{R}^k\times T^1_kQ$ will be denoted by $\widetilde{J}^\alpha$. - The Poincaré-Cartan $1$-forms introduced in [@mod2] are defined as follows $$\theta_L^\alpha=dL \circ \widetilde{J}^\alpha \quad 1\leq \alpha \leq k \, ,$$ and they have the local expression $$\label{am20} \theta_L^\alpha= \frac{\displaystyle\partial L}{\displaystyle\partial v^i_\alpha} \, dq^i\,.$$ The corresponding Poincaré-Cartan $2$-forms are $\omega^\alpha_L=-d\theta_L^\alpha$. From (\[thetal\]) and (\[am20\]) we deduce that the relationship between the Poincaré-Cartan $1$-forms $\Theta^\alpha_L$ and $\theta^\alpha_L$ is given by the following equation $$\label{am21}\Theta^\alpha_L=\theta^\alpha_L+\left(\delta^\alpha_\beta L -\Delta^\alpha_\beta(L)\right)dx^\beta .$$ As a consequence of (\[am21\]), the solutions $(X_1, \ldots,X_k)$ of our geometric field equations $$dx^\alpha(X_\beta)= \delta^\alpha_\beta \quad , \quad i_{X_\alpha} \Omega^\alpha_L=(k-1)dL \,$$ coincide with the solutions of the $k$-cosymplectic field equations $$\begin{array}{l} dx^\alpha(X_\beta)= \delta^\alpha_\beta, \quad 1 \leq \alpha , \beta \leq k\, ,\\ \noalign{\medskip} i_{X_\alpha} \omega_L^\alpha = dE_L + \displaystyle\frac{\partial L}{\partial x^\alpha}dx^\alpha \, . \end{array}$$ introduced in [@mod2], and also the corresponding integral sections, if they exist. SYMMETRIES AND CONSERVATION LAWS {#sim} ================================ The set of $k$-vector fields solution to the equation (\[geoverel\]) will be denoted by $\mathfrak{X}^k_L(J^1\pi)$. As a consequence of Propositions \[solELvect\] and \[relsopdeELeq\] we have that an integrable [sopde]{} ${\bf \Gamma}$ belongs to $\mathfrak{X}^k_L(J^1\pi)$ if, and only if, $\mathcal{L}_{\Gamma_\alpha}\Theta^{\alpha}_L=dL$. [Definition]{} A [conservation law]{} (or a [conserved quantity]{}) for the Euler-Lagrange equations (\[e-l-2\]) is a map ${\it G}=(G^1 , \ldots , G^k)\colon J^1\pi \to {\mathbb R}^k$ such that the divergence of $${\it G}\circ j^1\phi=(G^1 \circ j^1\phi, \ldots , G^k \circ j^1\phi)\colon U\subset{\mathbb R}^k \to {\mathbb R}^k$$ is zero for every section $\phi:U\subset \mathbb{R}^k \to E$, solution to to the Euler-Lagrange equations (\[e-l-2\]), which means that for all $x\in U\subset {\mathbb R}^k$ we have $$0 = [Div({\it G}\circ j^1\phi)](x)= \frac{\partial (G^\alpha \circ j^1\phi)}{\partial x^\alpha}\Big\vert_{x}= j^1\phi_*(x)\Big(\frac{\partial}{\partial x^\alpha}\Big\vert_{x}\Big)(G^\alpha)=T^{(1)}_\alpha(j^2_x\phi)(G^\alpha).$$ We can characterize conservation laws for the Euler-Lagrange equations in terms of the SOPDEs in $\mathfrak{X}^k_L(J^1\pi)$. 
[Proposition]{}\[caractconslaw\] The map ${\it G}=(G^1 , \ldots , G^k)\colon J^1\pi \to {\mathbb R}^k$ defines a conservation law for the Euler-Lagrange equations (\[e-l-2\]) if, and only if, for every integrable [sopde]{} $ {\bf \Gamma}=(\Gamma_1,\dots,\Gamma_k)\in \mathfrak{X}^k_L(J^1\pi)$ we have that $${\mathcal L}_{\Gamma_\alpha}G^\alpha=0.$$

Proof: Let $j^1_x\phi$ be an arbitrary point of $J^1\pi$. Since ${\bf \Gamma}$ is an integrable [sopde]{}, let us denote by $j^1\psi$ the integral section of ${\bf \Gamma}$ passing through $j^1_x\phi$, which means $$j^1\psi(0)=j^1_0\psi=j^1_x\phi, \quad \Gamma_\alpha(j^1_x\psi)=(j^1\psi)_*(x)\left(\frac{\partial }{\partial x^\alpha}\Big\vert_x\right), \quad x\in \mbox{Domain}\, \psi\,.$$ Since ${\bf \Gamma} \in \mathfrak{X}^k_L(J^1\pi)$ and $j^1\psi$ is an integral section of ${\bf \Gamma}$, $\psi$ is a solution to the Euler-Lagrange equations (\[e-l-2\]). Since $ {\it G}=(G^1 , \ldots , G^k)$ is a conservation law, by hypothesis $$\frac{\partial (G^\alpha \circ j^1\psi)}{\partial x^\alpha}\Big\vert_{0}=0,$$ and therefore we deduce $${\mathcal L}_{\Gamma_\alpha}G^\alpha(j^1_x\phi)=\Gamma_\alpha(j^1_0\psi)(G^\alpha)=(j^1\psi)_*(0) \left(\frac{\partial }{\partial x^\alpha}\Big\vert_0\right)(G^\alpha)= \frac{\partial (G^\alpha \circ j^1\psi)}{\partial x^\alpha}\Big\vert_{0}=0.$$

Conversely, we must prove that $$\frac{\partial (G^\alpha \circ j^1\phi)}{\partial x^\alpha}\Big\vert_{x}=0,$$ for all sections $\phi:W\subset\mathbb{R}^k \to E$, which are solutions to the Euler-Lagrange equations (\[e-l-2\]). Since $j^1\phi\Big\vert_{W} :W\subset\mathbb{R}^k \to J^1\pi$ is an injective immersion ($j^1\phi$ is a section and hence its image is an embedded submanifold), we can define a $k$-vector field ${\bf X}= (X_1,\ldots, X_k)$ on $j^1\phi(W)$ as follows: $$X_\alpha(j^1_x\phi)=(j^1\phi)_*(x)\left(\frac{\partial }{\partial x^\alpha}\Big\vert_x\right)=\frac{\partial }{\partial x^\alpha}\Big\vert_{j^1_x\phi}+\frac{\partial\phi^i}{\partial x^\alpha}\frac{\partial }{\partial q^i}\Big\vert_{j^1_x\phi}+\frac{\partial^2\phi^i}{\partial x^\alpha\partial x^\beta}\frac{\partial }{\partial v^i_\beta}\Big\vert_{j^1_x\phi}$$ and so $j^1\phi $ is an integral section of $\mathbf{X}$ and $\mathbf{X}$ is an integrable [sopde]{} on $j^1\phi(W)$. Now we prove that $\mathbf{X}\in \mathfrak{X}^k_L(j^1\phi(W))$. A direct computation shows that $$\left(X_\alpha\left(\frac{\partial L}{\partial v^i_\alpha}\right) -\frac{\partial L}{\partial q^i} \right)\Big\vert_{j^1\phi(W)}=0 \, .$$ Now, since $\mathbf{X}$ is an integrable [sopde]{}, from Proposition \[solELvect\] and Proposition \[relsopdeELeq\] we deduce that $\mathbf{X}$ is a solution to the equations (\[geoverel\]), and then $\mathbf{X}\in \mathfrak{X}^k_L(j^1\phi(W))$. The following identities finish the proof: $$\frac{\partial (G^\alpha \circ j^1\phi)}{\partial x^\alpha}\Big\vert_{x}=(j^1\phi)_*(x)\left(\frac{\partial }{\partial x^\alpha} \Big\vert_x\right)(G^\alpha)=X_\alpha(j^1_x\phi)(G^\alpha)= {\mathcal L}_{X_\alpha}G^\alpha(j^1_x\phi)=0.$$

------------------------------------------------------------------------

Generalized symmetries. Noether's Theorem {#sim-gen}
-----------------------------------------

In this section, we introduce the (generalized) symmetries of the Lagrangian and we prove a Noether theorem which associates to each symmetry a conservation law.
The following proposition can be seen as a motivation of the condition (\[luc00\]) in the definition of generalized symmetry, and it is also a generalization of Proposition 3.15 in [@rsv07]. [Proposition]{}\[600\] Let $X$ be a $\pi$-vertical vector field on $E$. If there exist functions $g^\alpha:E\to \mathbb{R} \,\,, 1\leq \alpha\leq k$ such that $$X^1(L) = d_{T^{(0)}_\alpha}g^\alpha$$ then the functions $\,\, G^\alpha=(\pi_{1,0})^*g^\alpha-\Theta^\alpha_L(X^1) \,\,$ define a conservation law. Proof: Let us observe that locally $$G^\alpha=g^\alpha\circ \pi_{1,0}-(X^i\circ \pi_{1,0})\frac{\partial L}{\partial v^i_\alpha}\, .$$ Then, taking into account (\[0thetzeta40\]) and (\[locX1gen\]), we deduce that for every solution $\phi$ of the Euler-Lagrange equations (\[e-l-2\]) we have $$\frac{\partial (G^\alpha \circ j^1\phi)}{\partial x^\alpha}\Big\vert_x= \frac{\partial }{\partial x^\alpha}\Big\vert_x\left(g^\alpha\circ \phi - (X^i\circ \phi) \,\,(\frac{\partial L}{\partial v^i_\alpha}\circ j^1\phi) \right) = [ d_{T^{(0)}_\alpha} g^\alpha - X^1(L)](j^1_x\phi)= 0$$ so, $(G^1,\dots, G^k)$ defines a conservation law. ------------------------------------------------------------------------ The Euler-Lagrange form $\delta L$ is the $1$-form on $J^2\pi$ given by $$\delta L=d_{T^{(1)}_\alpha}\Theta_L^\alpha - \pi_{2,1}^*dL$$ with local expression $$\delta L=\left( T^{(1)}_\alpha \left(\frac{\partial L}{\partial v^i_\alpha}\right) - \frac{\partial L}{\partial q^i}\circ \pi_{2,1}\right) (dq^i-v^i_\beta dx^\beta).$$ This is a $\pi_{2,0}$-semi-basic form, and we consider its associated form $(\delta L)^V$ along $\pi_{2,0}$, see (\[levvert\]), with local expression $$\label{locdeltal} \left( T^{(1)}_\alpha(\frac{\partial L}{\partial v^i_\alpha})-\frac{\partial L}{\partial q^i}\circ \pi_{2,1}\right) (dq^i\circ \pi_{2,0} -v^i_\beta dx^\beta\circ \pi_{2,0}) \, .$$ The $1$-forms $\Theta^{\alpha}_{L}$ are $\pi_{1,0}$-semi-basic $1$-form and its associated forms $(\Theta_L^\alpha)^V$ along $\pi_{1,0}$ are locally given by $$\label{locthetahat} (\Theta_L^\alpha)^V=(L\delta^{\alpha}_{\beta} -\frac{\partial L}{\partial v^i_{\alpha}}v^i_{\beta})\, dx^\beta \circ \pi_{1,0} +\frac{\partial L}{\partial v^i_\alpha} \, dq^i\circ \pi_{1,0} \, .$$ The following lemma will be useful in the study of generalized symmetries. [Lemma]{}\[lemsimgen\] Let $X$ be a $\pi$-vertical vector field along $\pi_{1,0}$. Then 1. If there exists functions $G^\alpha:J^1\pi \to \mathbb{R}$, $\alpha =1 ,\ldots , k$, such that $$d_{T_\alpha^{(1)}}G^\alpha(j^1\phi)=-(\delta L)^V(X\circ \pi_{2,1})(j^1\phi)$$ for any $ \phi$ solution to the Euler-Lagrange equations, then $(G^1, \ldots , G^k)$ is a conservation law. 2. The following identity holds $$\label{iden00} d_{X^{(1)}}L=-(\delta L)^V(X\circ \pi_{2,1})+ d_{T_\alpha^{(1)}} \left[ (\Theta_L^\alpha)^V(X) \right] \, .$$ Proof: 1. From (\[t0t1\]) we have $$\label{lem01} d_{T_\alpha^{(1)}}G^\alpha(j^2_x\phi)=\frac{\partial(G^\alpha\circ j^1\phi)}{\partial x^\alpha}\Big\vert_{x}$$ for any $j^2_x\phi\in J^2\pi$. 
Since $X$ is locally given by $$X=X^i(x,q,v) \frac{\partial }{\partial q^i}\circ \pi_{1,0}$$ then $ X\circ \pi_{2,1}$ is locally given by $$X\circ \pi_{2,1}=\left(X^i(x,q,v) \frac{\partial }{\partial q^i}\circ \pi_{1,0}\right) \circ \pi_{2,1} =X^i(x,q,v) \circ \pi_{2,1} \, \frac{\partial }{\partial q^i}\circ \pi_{2,0} \, .$$ From the above local expressions of $X\circ \pi_{2,1}$ and the local expression (\[locdeltal\]) of $ (\delta L)^V$ we obtain $$\label{lem02} -(\delta L)^V(X\circ \pi_{2,1})(j^2_x\phi)= - \left( \frac{\partial }{\partial x^\alpha}\Big\vert_x\left(\frac{\partial L}{\partial v^i_\alpha}\circ j^1\phi \right)-\frac{\partial L}{\partial q^i}\Big\vert_{j^1_x\phi}\right) X^i (j^1_x\phi)$$ Now from (\[lem01\]) and (\[lem02\]) we obtain that if $\phi$ is a solution to the Euler-Lagrange equations then $$\frac{\partial (G^\alpha\circ j^1\phi)}{\partial x^\alpha}\Big\vert_{x}=0\, .$$ 2. A direct computation using (\[locz1v\]), (\[thetzeta40\]), (\[locdeltal\]) and (\[locthetahat\]) proves the identity (\[iden00\]). ------------------------------------------------------------------------ Some classes of symmetries depend only on the variables (coordinates) in $E$. In this section we consider $v^i_\alpha$-dependent infinitesimal transformations which can be regarded as vector fields $X$ along $\pi_{1,0}$. The following definition is motivated by Proposition \[600\]. [Definition]{} A $\pi$-vertical vector field $X$ along $\pi_{1,0}$ is called a (generalized) symmetry if there exists a map $ (F^1, \ldots , F^k)\colon J^1\pi \to \mathbb{R}^k $ such that $$\label{luc00} \,\, d_{X^{(1)}}L(j^2_x\phi)=d_{T^{(1)}_\alpha}F^\alpha (j^2_x\phi)\,\,$$ for every solution $\phi$ to the Euler-Lagrange equations. The following version of Noether’s Theorem associates to each symmetry of the Lagrangian, in the sense given above, [*a conservation law*]{}. [Theorem]{}\[coche\] Let $X$ be a symmetry of the Lagrangian $L$ then the map $${\it G}=(G^1, \ldots , G^k)\colon J^1\pi \to \mathbb{R}^k$$ given by $$\,\, G^\alpha=F^\alpha- (\Theta_L^\alpha)^V(X)\,\,$$ defines a conservation law. Proof: Let $X$ be a symmetry of the Lagrangian $L$. Then from (\[iden00\]) we get $$d_{T_\alpha^{(1)}}\left[ F^\alpha - (\Theta_L^\alpha)^V(X) \right]= -(\delta L)^V(X\circ \pi_{2,1})$$ and from Lemma \[lemsimgen\], the functions $G^\alpha=F^\alpha- (\Theta_L^\alpha)^V(X)$ define a conservation law. ------------------------------------------------------------------------ [Example]{} Consider the homogeneous isotropic $2$-dimensional wave equation $$\label{waveeq} \partial_{11}\phi -c^2\partial_{22}\phi-c^2\partial_{33}\phi=0,$$ where $\phi:\mathbb{R}^3 \to \mathbb{R}$ is a solution, and defines a section of the trivial bundle $\pi \colon E=\mathbb{R}^3\times \mathbb{R} \to \mathbb{R}^3$. Since $J^1\pi=\mathbb{R}^3\times T^1_3\mathbb{R}$, equation (\[waveeq\]) can be described as the Euler-Lagrange equation for the Lagrangian $ L \colon \mathbb{R}^3\times T^1_3\mathbb{R} \to \mathbb{R}$ given by $$L(x,q,v)=\frac{1}{2}\left((v_1)^2-c^2(v_2)^2- c^2(v_3)^2\right).$$ In this case, for simplicity we consider the case $c=1$. 
With the vector field $X=v_1 \displaystyle\frac{\partial }{\partial q}\circ\pi_{1,0}$ along $ \pi_{1,0}$ and the functions on $J^1\pi$ $$F^1(v_1,v_2,v_3) = -c^2(v_2)^2-c^2 (v_3)^2, \ \ F^2(v_1, v_2, v_3)= c^2 v_1v_2, \ \ F^3(v_1, v_2, v_3)=c^2v_1v_3$$ using Theorem \[coche\], we deduce that the following functions $$\begin{array}{ccccl} G^1&=&F^1- (\Theta^1_L)^V(X)&=&-c^2(v_2)^2-c^2 (v_3)^2-(v_1)^2 \\ \noalign{\medskip} G^2&=&F^2-(\Theta^2_L)^V(X)&=&2 c^2 v_1v_2 \\ \noalign{\medskip} G^3&=&F^3-(\Theta^3_L)^V(X)&=& 2c^2v_1v_3 \end{array}$$ define a conservation law. Variational symmetries {#sim-var} ---------------------- In this section we consider the trivial bundle $\pi:E=\mathbb{R}^k \times Q \to \mathbb{R}^k $, and we recall some results of variational symmetries of the Euler-Lagrange equations that can be found in Olver’s book [@Olver]. Let us remember that the solution of the Euler-Lagrange equation (\[e-l-2\]) can be obtained as the extremals of the functional $$\mathcal{L}(\phi)=\int_{\Omega_0}(L\circ j^1\phi)(x)d^kx\, ,$$ where $d^kx=dx^1\wedge\cdots\wedge dx^k$ is the volume form on $\mathbb{R}^k $. Roughly speaking, a variational symmetry is a diffeomorphism that leaves the variational integral $\mathcal{L}$ unchanged. [Definition]{} 1. A [*variational symmetry* ]{} is a diffeomorphism $\Phi\colon E=\mathbb{R}^k \times Q\to E=\mathbb{R}^k \times Q$ verifying the following conditions: 1. It is a fiber-preserving map for the bundle $\pi\colon E\to \mathbb{R}^k $; that is, $\Phi$ induces a diffeomorphisms $\varphi\colon \mathbb{R}^k \to \mathbb{R}^k $ such that $\pi\circ \Phi=\varphi\circ \pi$ 2. If $\tilde{x}=\varphi(x)$ for each $x\in \mathbb{R}^k $ $$\int_{\tilde{\Omega} }(L\circ j^1(\Phi\circ \phi\circ \varphi^{-1} ))(\tilde{x})d^k\tilde{x}= \int_{\Omega}(L\circ j^1\phi)(x)d^kx\,$$ where $\tilde{\Omega}=\varphi(\Omega)$. 2. An infinitesimal variational symmetry is a vector field $X\in \mathfrak{X}(\mathbb{R}^k \times Q)$ whose local flows are variational symmetries. The following results can be seen in Theorem $4.12$ and in Corollary $4.30$ [@Olver] . [Theorem]{}\[consolver0\] i) A vector field $X$ on $\mathbb{R}^k \times Q$ is a variational symmetry if, and only if, $X^1(L)+ L \, d_{T_\alpha^{(0)}}X_\alpha =0$, where $X_\alpha=dx^\alpha(X)$. ii\) If $X$ is a variational symmetry then $ \Theta^\alpha_L(X^1)$ defines a conservation law. [Example]{} We consider again the homogeneous isotropic $2$-dimensional wave equation (\[waveeq\]). The rotation group $X=-x^3\displaystyle\frac{\partial }{\partial x^2}+x^2\displaystyle\frac{\partial }{\partial x^3}$ is a variational symmetry, and then the corresponding conservation law $(\Theta^1_L(X^1),\Theta^2_L(X^1), \Theta^3_L(X^1))$ is given by the functions $$\left(x^3v_1v_2-x^2v_1v_3, \, -\frac{1}{2} x^3 \, u +x^2v_2v_3, \, -\frac{1}{2} x^2\, u-v_3v_2x^3\right)$$ where $u= (v_1)^2+(v_2)^2+(v_3)^2$. [Example]{} We consider again $Q=\mathbb{R}$, and let $$0=\left(1+(\partial_2\phi)^2\right)\partial_{11}\phi-2\partial_1\phi\, \partial_2\phi\, \partial_{12}\phi+\left(1+(\partial_1\phi)^2\right)\partial_{22}\phi$$ be the equation of minimal surfaces, which is the Euler-Lagrange equations for the Lagrangian $L \colon \mathbb{R}^2\times T^1_2\mathbb{R} \to \mathbb{R}$ defined by $L(x^1,x^2,q,v_1,v_2)=\sqrt{1+(v_1)^2+(v_2)^2}$. 
The vector field $X=-q\displaystyle\frac{\partial }{\partial x^1}-q\displaystyle\frac{\partial }{\partial x^2}+(x^1+x^2)\displaystyle\frac{\partial }{\partial q}$ is a variational symmetry, and then the corresponding conservation law $(\Theta^1_L(X^1),\Theta^2_L(X^1))$ is given by the functions $$\left(\frac{-q(1+(v_2)^2-v_1v_2)+(x^1+x^2)v_1}{\sqrt{1+(v_1)^2+(v_2)^2}},\, \, \frac{-q(1+(v_1)^2-v_1v_2)+(x^1+x^2)v_2}{\sqrt{1+(v_1)^2+(v_2)^2}}\right)$$ Now we describe some relationships between the above symmetries. [Theorem]{} i) Let $X$ be a $\pi$-vertical variational symmetry. Then the vector field $ X\circ \pi_{1,0}$ along $\pi_{1,0}$ is a generalized symmetry. ii\) The conservation law induced by $X$ and $ X\circ \pi_{1,0}$ coincide. Proof: i) Since $X $ is a $\pi$-vertical variational symmetry, then locally $X=X^i(x,q)\frac{\partial }{\partial q^i}$, and from Theorem \[consolver0\] we know that $X^1(L)=0$. From (\[xx\]) we know that $(X\circ \pi_{1,0})^{(1)}=X^1\circ \pi_{2,1}$. Then, $$d_{(X\circ \pi_{1,0})^{(1)}}L(j^2_x\phi)=(X\circ \pi_{1,0})^{(1)}(j^2_x \phi)(L)=X^1(j^1_x\phi)(L)=0$$ and thus $X$ is a generalized symmetry. ii\) It is a consequence of $\Theta^\alpha_L(X^1)= (\Theta_L^\alpha)^V(X\circ \pi_{1,0}).$ ------------------------------------------------------------------------ Noether symmetries {#sim-noe} ------------------ In the paper [@mrsv] we introduced the following definition: [Definition]{} A vector field $Y \in \mathfrak{X}(\mathbb{R}^k\times T^1_kQ)$ is an infinitesimal Noether symmetry if $$\mathcal{L}_Y\omega_L^\alpha=0, \quad i_Y dx^\alpha=0, \quad \mathcal{L}_Y E_L=0.$$ [Theorem]{} Let $X$ be a $\pi$-vertical vector field on $\mathbb{R}^k\times Q$ such that $X^1$ is an infinitesimal Noether symmetry, then $X=X\circ\pi_{1,0}$ is a generalized symmetry. 
Proof: Using (\[locX1gen\]) and (\[LagranEner\]), from the local expression $X =X^i\displaystyle\frac{\partial }{\partial q^i} $ and the condition $\mathcal{L}_{X^1}E_L=0$ we deduce that $$\label{yy}(X^i\circ\pi_{1,0})\frac{\partial L}{\partial q^i}= v^i_\alpha X^1\left(\frac{\partial L}{\partial v^i_\alpha}\right)\quad .$$ From the condition $\mathcal{L}_{X^1}\omega^\alpha_L=0$ we obtain $d(\mathcal{L}_{X^1}\theta^\alpha_L)=0$ and so there exist (locally defined) functions $F^\alpha \colon U\subset\mathbb{R}^k\times T^1_kQ \to \mathbb{R}$ such that $$\mathcal{L}_{X^1}\theta^\alpha_L=dF^\alpha \, , \quad 1\leq \alpha \leq k \, .$$ With these identities we obtain the following relations $$\label{fat} \begin{array}{ll} \displaystyle\frac{\partial F^\alpha}{\partial x^\beta}=\displaystyle\frac{\partial L}{\partial v^i_\alpha}\displaystyle\frac{\partial X^i}{\partial x^\beta}\circ \pi_{1,0} & \displaystyle\frac{\partial F^\alpha}{\partial q^j}=\displaystyle\frac{\partial L}{\partial v^i_\alpha}\displaystyle\frac{\partial X^i}{\partial q^j}\circ \pi_{1,0}-X^1\left(\displaystyle\frac{\partial L}{\partial v^j_\alpha}\right) \\ \noalign{\bigskip} \displaystyle\frac{\partial F^\alpha}{\partial v^j_\beta}=\displaystyle\frac{\partial L}{\partial v^i_\alpha}\displaystyle\frac{\partial X^i}{\partial v^j_\beta}\circ \pi_{1,0}=0 & \end{array}$$ From (\[xx\]), (\[yy\]) and (\[fat\]), we deduce that $$\begin{array}{ccl} d_{(X\circ\pi_{1,0})^{(1)}}L(j^2_x\phi)&=&X^i(\phi(x))\displaystyle\frac{\partial L}{\partial q^i}\Big\vert_{j^1_x\phi}+\left(\displaystyle \frac{\partial X^i}{\partial x^\alpha}\Big\vert_{\phi(x)}+v^j_\alpha(j^1_x\phi)\displaystyle\frac{\partial X^i}{\partial q^j}\Big\vert_{j^1_x\phi}\right)\displaystyle\frac{\partial L}{\partial v^i_\alpha}\Big\vert_{j^1_x\phi} \\ \noalign{\medskip} &=& X^i(\phi(x))\displaystyle\frac{\partial L}{\partial q^i}\Big\vert_{j^1_x\phi}+\displaystyle\frac{\partial F^\alpha}{\partial x^\alpha}\Big\vert_{j^1_x\phi} + v^j_\alpha(j^1_x\phi)\left(\displaystyle\frac{\partial F^\alpha}{\partial q^j}\Big\vert_{j^1_x\phi} +X^1(j^1_x\phi)\left(\displaystyle\frac{\partial L}{\partial v^j_\alpha}\right)\right) \\ \noalign{\medskip} &=& \displaystyle\frac{\partial F^\alpha}{\partial x^\alpha}\Big\vert_{j^1_x\phi} +v^j_\alpha(j^1_x\phi)\displaystyle\frac{\partial F^\alpha}{\partial q^j}\Big\vert_{j^1_x\phi}=d_{T^{(1)}_\alpha}F^\alpha(j^2_x\phi) \end{array}$$ for any $j^2_x\phi$. This proves that $X\circ\pi_{1,0}$ is a generalized symmetry.

------------------------------------------------------------------------

Conclusions {#conclusions .unnumbered}
===========

In this paper, we have discussed a new geometric formalism for fiber bundles over Euclidean spaces; this new formalism allows us to understand better the similarities and differences between the multisymplectic and $k$-cosymplectic settings. Even if such a fiber bundle is topologically trivial, it has some interest from the geometric point of view. Indeed, it is a way to understand better the multisymplectic formalism (and, moreover, it is a usual case in Continuum Mechanics). The current paper is a first step towards more tractable ways of working with the field equations when the base space (the space-time manifold in physical contexts) is not trivial, or not even a parallelizable manifold. We are considering such more general situations, for which the present work should be very useful.
Acknowledgments {#acknowledgments .unnumbered} =============== We acknowledge the financial support of the Ministerio de Ciencia e Innovación (Spain), projects MTM2011-22585, MTM2011-15725-E, MTM2010-21186-C02-01, the European project IRSES-project “Geomech-246981” and the ICMAT Severo Ochoa project SEV-2011-0087. [99]{} L. Bua, I. Bucataru and M. Salgado: Symmetries, Newtonoid vector fields and conservation laws on the Lagrangian k-symplectic formalism, [*Reviews in Mathematical Physics*]{} [**24**]{}, 1250030 (2012). J.F. Cariñena, M. Crampin and L.A. Ibort: On the multisymplectic formalism for first order field theories, [ *Differential Geometry and its Applications*]{}, [**1**]{}, 345–374 (1991). J.F. Cariñena and J. Fernández-Núñez: Geometric Theory of Time-Dependent Singular Lagrangians, [ *Fortschr. Phys.*]{}, **41**, 517–552 (1993). J.F. Cariñena, J. Fernández-Núñez and E. Martínez: A geometric approach to Cartan’s Second Theorem in time-dependent Lagrangian Mechanics, [*Lett. Math. Phys.*]{}, [**23**]{}, 51 (1991). J.F. Cariñena, J. Fernández-Núñez and E. Martínez: Noether’s theorem in time-dependent Lagrangian Mechanics, [*Reports in Mathamatical Physics*]{}, [**31**]{}, 189–203 (1992). J.F. Cariñena, C. López and E. Martínez: A new approach to the converse of Cartan’s theorem, [*J. Phys A: Math. Gen.*]{}, [**22**]{}, 4777 (1989). F. Cantrijn, M. de León and E.A. Lacomba: Gradient vector fields on cosymplectic manifolds, [*J. Phys. A*]{}, [**25**]{}, 175–188 (1992). A. Echeverría-Enríquez, M.C. Muñoz-Lecanda and N. Román-Roy: Geometrical setting of time-dependent regular systems: Alternative models, [*Rev. Math. Phys.*]{}, [**3**]{}, 301–330 (1991). A. Echeverría-Enríquez, M.C. Muñoz-Lecanda and N. Román-Roy: Geometry of Lagrangian first-order classical field theories. [*Forts. Phys.*]{}, [**44**]{}, 235–280 (1996). A. Echeverría-Enríquez, M.C. Muñoz-Lecanda and Román-Roy: Geometry of Multisymplectic Hamiltonian First-order Field Theories, [*J. Math. Phys.*]{}, [**41**]{}, 7402–744 (2000). G. Giachetta, L. Mangiarotti and G. Sardanashvily: [ *New Lagrangian and Hamiltonian Methods in Field Theory*]{}, World Scientific, Singapore 1997. G. Giachetta, L. Mangiarotti and G. Sardanashvily: Covariant Hamilton equations for field theory, [*J. Phys. A*]{}, [**32**]{}, 6629–6642 (1999). M.J. Gotay, J. Isenberg, J.E. Marsden and R. Montgomery: [*Momentum Maps and Classical Relativistic Fields. Part I: Covariant Field Theory*]{}, arXiv:physics/9801019v2 (2004). M.J. Gotay, J. Isenberg and J.E. Marsden: [*Momentun maps and classical relativistic fields, Part II: Canonical analysis of field theories*]{}, arXiv:math-ph/0411032v1 (2004). X. Gracia and J.M. Pons, J.M: On an evolution operator connecting lagrangian and hamiltonian formalisms, [ *Lett. Math. Phys.*]{}, [**17**]{}, 175–180 (1989). J.V. José and Saletan, E.J.: [*Classical Dynamics, A Contemporary Approach*]{}, Cambridge University Press, Cambridge 1988. I.V. Kanatchikov: Canonical structure of classical field theory in the polymomentum phase space, [ *Rep. Math. Phys.*]{}, [**41**]{}, 49–90 (1998). J. Kijowski and W.M. Tulczyjew: [*A symplectic framework for field theories*]{}, Lecture Notes in Physics, 107, Springer-Verlag, New York 1979. D.D. Kosambi: Systems of Differential Equations of the Second Order, [*Quart. J. Math.*]{}, [**6**]{}, 1–12 (1935). D.D. Kosambi: Systems of partial differential equations of the second order, [*Quart. J. Math.*]{}, [**19**]{}, 204–219 (1948). O. 
Krupková: [*The geometry of ordinary variational equations*]{}, Springer-Verlag, Berlin 1997. M. de León, E. Merino, J.A. Oubiña, P.R. Rodrigues and M. Salgado: Hamiltonian systems on $k$-cosymplectic manifolds, [*J. Math. Phys.*]{}, [**39**]{}, 876–893 (1998). M. de León, E. Merino and M. Salgado: $k$-cosymplectic manifolds and Lagrangian field theories, [ *J. Math. Phys.*]{}, [**42**]{}, 2092–2104 (2001). M. de León and P.R. Rodrigues: [*Methods of differential geometry in analytical mechanics*]{}, North-Holland Mathematics Studies, 158, North-Holland., Amsterdam 1989. H. Marañón: [*Simetries d’equacions diferencials. Aplicació als sistemes k-simplèctics*]{}, Treball Fi de Master Matematica Aplicada, Department of Applied Mathematics IV. Technical University of Catalonia (UPC), 2008. J.C. Marrero, N. Román-Roy, M. Salgado and S. Vilariño: On a kind of Cartan symmetries and conservation laws in k-cosymplectic Field Theory, [*Journal of Mathematical Physics*]{}, [**52**]{}, 022901 (2011). M. McLean and L.K. Norris: Covariant field theory on frame bundles of fibered manifolds, [*J. Math. Phys.*]{}, [**41**]{}, 6808–6823 (2000). M.C. Muñoz Lecanda, M. Salgado and S. Vilariño: $k$-symplectic and $k$-cosymplectic Lagrangian field theories: some interesting examples and applications, [*Journal of Mathematical Physics*]{}, [**52**]{}, 022901 (2011). L.K. Norris: [*Generalized symplectic geometry on the frame bundle of a manifold*]{}, in Proc. Symp. Pure Math., Part 2 (Amer. Math. Soc., Providence RI, 54, 435–465 (1993). P.J. Olver: [*Applications of Lie groups to differential equations*]{}, Graduate Texts in Mathematics, 107, Springer-Verlag, New York, 1986. G. Pidello and W.M. Tulczyjew: Derivations of differential forms on jet bundles, [*Annali di Mat. Pura ed Apl.*]{}, [**147**]{}, 249–265 (1987). W.A. Poor: [*Differential Geometric structures*]{}, Mc-Graw-Hill, 1981. N. Román-Roy, A.M. Rey, M. Salgado and S. Vilariño: On the $k$-symplectic, $k$-cosymplectic and multisymplectic formalism of classical field theories, [ *J. Geom. Mechanics*]{}, [**3**]{}, 113–137 (2011). N. Román-Roy, M. Salgado and S. Vilarino: Symmetries and Conservation Laws in Günter k-symplectic formalism of Field Theory, [*Reviews in Mathematical Physics*]{}, [**19**]{} 1117-1147 (2007). G. Sardanashvily: [*Generalized Hamiltonian Formalism for Field Theory. Constraint Systems. I*]{}, World Scientific, Singapore, 1995. N. Steenrod, [*The Topology of Fibre Bundles*]{}, Princeton University Press, 1951. D.J. Saunders: The Cartan Form in Lagrangian field theories, [*J. Phys. A: Math. Gen.*]{}, [**20**]{}, 339–349 (1987) D.J. Saunders: [*The Geometry of Jet Bundles*]{}, London Math. Soc. Lecture Notes Series, 142, Cambridge Univ. Press, Cambridge, 1989.
--- abstract: | A gauge model with chiral color symmetry is considered and possible effects of the color $G'$-boson octet predicted by this symmetry are investigated in dependence on two free parameters, the mixing angle $\theta_G$ and $G'$ mass $m_{G'}$. The allowed region in the $m_{G'} - \theta_G$ plane is found from the Tevatron data on the cross section $\sigma_{t\bar{t}}$ and forward-backward asymmetry $A_{\rm FB}^{p \bar p}$ of the $t\bar{t}$ production. The mass limits for the $G'$-boson are shown to be stronger than those for the axigluon. A possible effect of the $G'$-boson on the $t\bar{t}$ production at the LHC is discussed and the mass limits providing for the $G'$-boson evidence at the LHC are estimated in dependence on  $\theta_G$. Keywords: Beyond the SM; chiral color symmetry; axigluon; massive color octet; top quark physics. PACS number: 12.60.-i author: - | M.V. Martynov[^1], A.D. Smirnov[^2]\ [Division of Theoretical Physics, Department of Physics,]{}\ [Yaroslavl State University, Sovietskaya 14,]{}\ [150000 Yaroslavl, Russia.]{} title: | Chiral color symmetry\ and possible $G'$-boson effects\ at the Tevatron and LHC --- The search for new physics beyond the Standard Model (SM) induced by higher symmetries (such as supersymmetry, left-right symmetry, etc.) is one of the modern research directions in elementary particle physics. The Large Hadron Collider (LHC) will allow the exploration of the existence of new physics at the TeV energy scale with very large statistics [@Butterworth:2007bi]. Top physics is a very promising place to look for new physics effects [@Hill:1993hs] and a top factory such as the LHC is expected to be a goldmine for studying the SM as well as beyond the SM physics [@Beneke:2000hk]. There are models extending the standard color gauge group $SU_c(3)$ to the group of the chiral color symmetry $$\label{chiral_group} G_c=SU_L(3)\!\times \! SU_R(3) \to SU_c(3),$$ which is assumed to be valid at high energies and is broken to usual QCD $SU_c(3)$ at low energy scale. Such chiral color theories [@Pati:1975ze; @Hall:1985wz; @Frampton:1987ut; @Frampton:1987dn] in addition to the usual massless gluon $G_\mu$ predict in the simplest case of $g_L=g_R$ the existence of a new color-octet gauge boson, the axigluon $G^A_\mu$ with mass $m_{G_A}$. The axigluon couples to quarks with an axial vector structure and with the same strong interaction coupling strength as QCD. It has a width $\Gamma_{G_A}\approx 0.1 m_{G_A}$ [@Bagger:1988]. Since it is the colored gauge particle with axial vector coupling to quarks, the axigluon should immediately result in the increase of the hadronic cross section and in the appearance of a forward-backward asymmetry of order $\alpha_s^2$ [@rodrigo-2008]. The CDF data on the cross section of the dijet production at the Tevatron [@Aaltonen:2008dn] exclude at 95% C.L. the axigluon mass region $260$ GeV$ < m_{G_A} < 1.250$ TeV and the Tevatron data on asymmetry sets the lower mass limit for the axigluon at $m_{G_A} > 1.2, 1.4$ TeV [@antunano-2007; @rodrigo-2008]. The massive color octet with arbitrary vector– and axial-vector–quark coupling constants has been considered phenomenologicaly in ref. [@ferrario-2008]. But it is also interesting to consider the color octet as the gauge boson induced by the chiral color symmetry of a general type. In the present paper we consider the color-octet boson induced by the gauge chiral color symmetry  in general case of $g_L\neq g_R$. 
We calculate the possible contributions of this boson to the cross section and to the forward-backward asymmetry of the $Q \bar{Q}$ production in $p \bar{p}$ and $pp$ collisions in dependence on the free parameters of the model. We compare the results with the Tevatron data on the $t \bar{t}$ production and discuss a possible effect of this boson in the $t \bar{t}$ production at the LHC.

To reproduce the usual quark-gluon interaction of QCD the gauge coupling constants $g_L, \, g_R$ of the gauge group (\[chiral_group\]) must satisfy the relation $$\begin{aligned} \frac{g_L g_R}{\sqrt{(g_L)^2+(g_R)^2}} = g_{st} \label{eq:gLgRgst}\end{aligned}$$ where $g_{st}$ is the strong interaction coupling constant. The basic gauge fields $G^L_\mu$ and $G^R_\mu$ are mixed and form the usual gluon field $G_\mu$ and the field $G'_\mu$ of an additional $G'$-boson as $$\begin{aligned} G_\mu&= \frac{g_R G^L_\mu + g_L G^R_\mu}{\sqrt{(g_L)^2+(g_R)^2}} \equiv s_G \, G^L_\mu + c_G \, G^R_\mu,\\ G'_\mu&= \frac{g_L G^L_\mu - g_R G^R_\mu}{\sqrt{(g_L)^2+(g_R)^2}} \equiv c_G \, G^L_\mu - s_G \, G^R_\mu,\end{aligned}$$ where $G^{L,R}_{\mu} = G^{L,R}_{i\mu} t_i$, $G_{\mu} = G^i_{\mu} t_i$, $G'_{\mu} = G'^i_{\mu} t_i$, $i=1,2,...,8$, $t_i$ are the generators of the $SU_c(3)$ group, $s_G =\sin\theta_G, \, c_G =\cos\theta_G$, $\theta_G$ is the $G^{L} - G^{R}$ mixing angle, $\tan\theta_G=g_R/g_L$. The symmetry (\[chiral_group\]) can be softly broken by the scalar field $\Phi_{\alpha \beta}$, which transforms according to the $(3_L, \bar{3}_R)$ representation of the group (\[chiral_group\]) and has the VEV $\langle \Phi_{\alpha \beta} \rangle = \delta_{\alpha \beta} \, \eta /(2 \sqrt{3}) $, where $\alpha, \beta = 1,2,3$ are the $SU_L(3)$ and $SU_R(3)$ indices. After such symmetry breaking the gluons are still massless and the $G'$-boson acquires the mass $$m_{G'} = \frac{g_{st}}{s_G c_G} \, \frac{\eta}{\sqrt{6}} . \label{eq:MG1}$$

The interaction of the $G'$-boson with quarks can be written in the model-independent form $$\mathcal{L}_{G'qq}=g_{st} \, \bar{q} \gamma^\mu (v + a \gamma_5) G'_\mu q$$ where $v$ and $a$ are the phenomenological vector and axial-vector coupling constants. The gauge symmetry (\[chiral_group\]) gives for $v, \, a$ the expressions $$v = \frac{c_G^2-s_G^2}{2 s_G c_G} = \cot(2\theta_G), \,\,\,\, a = \frac{1}{2 s_G c_G} = 1 / \sin(2\theta_G) . \label{eg:va}$$ So, in the general case of the gauge chiral color symmetry (\[chiral_group\]) the mass of the $G'$-boson is defined by expression (\[eq:MG1\]), and the vector and axial-vector coupling constants of the $G'$-boson with quarks (in contrast to the phenomenological approach of ref. [@ferrario-2008]) depend on the single parameter $\theta_G$, which is defined by the gauge coupling constants $g_L, \, g_R$ satisfying relation (\[eq:gLgRgst\]). This circumstance reduces the possible region of the parameters $v, \, a$ and allows one to study the phenomenology of the $G'$-boson in more detail in dependence on the two free parameters of the model, $m_{G'}$ and $\theta_G$.

In the particular case of $g_L=g_R$ ($\theta_G=45^\circ$, $v=0$, $a=1$) the $G'$-boson coincides with the axigluon. In the general case, with decreasing $\theta_G$ the coupling constants increase according to (\[eg:va\]), so that, for example, for $\theta_G= 15^\circ, \, 10^\circ$ the perturbation theory parameters take the values $\alpha_s v^2 / \pi \approx \alpha_s a^2 / \pi \approx 0.14, \, 0.3$ respectively. In further considerations we restrict ourselves to the mixing angle region $10^\circ\lesssim\theta_G\leq 45^\circ$.
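For orientation, the growth of the couplings (\[eg:va\]) with decreasing $\theta_G$ can be tabulated with a few lines of code. The sketch below is only an illustration added to the text; the value $\alpha_s\simeq 0.108$ (roughly $\alpha_s$ at the top-quark scale) is an assumption made here, chosen so that the quoted perturbation parameters $\approx 0.14,\,0.3$ for $\theta_G=15^\circ,\,10^\circ$ are reproduced:

```python
import math

alpha_s = 0.108     # assumed value of the strong coupling, roughly alpha_s(m_t)

for theta_deg in (45, 30, 20, 15, 10):
    th = math.radians(theta_deg)
    v = 1.0 / math.tan(2.0 * th)        # v = cot(2 theta_G), eq. (eg:va)
    a = 1.0 / math.sin(2.0 * th)        # a = 1 / sin(2 theta_G)
    print(f"theta_G = {theta_deg:2d} deg:  v = {v:5.2f}, a = {a:4.2f}, "
          f"alpha_s v^2/pi = {alpha_s * v**2 / math.pi:.2f}, "
          f"alpha_s a^2/pi = {alpha_s * a**2 / math.pi:.2f}")
```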
The hadronic width of the $G'$-boson can be written as $$\begin{aligned} \Gamma_{G'} = \sum_{Q} \Gamma (G' \to Q\overline{Q}) \label{GammaG'}\end{aligned}$$ where $$\begin{aligned} \Gamma (G' \to Q\overline{Q}) = \frac{\alpha_{s}\, m_{G'}}{6} \Bigg[ \, v^2 \left(1+\frac{2m_Q^{2}}{m_{G'}^{2}}\right) + a^2 \left(1-\frac{4m_Q^{2}}{m_{G'}^{2}}\right) \Bigg] \sqrt{1-\frac{4m_Q^{2}}{m_{G'}^{2}}} \label{GammaG'QQ}\end{aligned}$$ is the width of the $G'$-boson decay into a $Q\overline{Q}$ pair. If the masses of the light quarks (all except the $t$-quark) are neglected, the result (\[GammaG'\]), (\[GammaG'QQ\]) agrees with that of ref. [@ferrario-2008]. Using the coupling constants (\[eg:va\]) in (\[GammaG'\]), (\[GammaG'QQ\]) we obtain the following estimates for the relative width of the $G'$-boson: $$\Gamma_{G'}/m_{G'}=0.11, \; 0.18, \; 0.41, \; 0.75, \; 1.71 \label{gammaG1}$$ for $ \theta_G=45^\circ, \; 30^\circ, \; 20^\circ, \; 15^\circ, \; 10^\circ $ respectively.

Since it is a strongly interacting particle, the $G'$-boson can give significant contributions to the production of quark–antiquark pairs in $p \bar{p}$ and $pp$ collisions. The differential partonic cross section of the process $q\bar{q} \rightarrow Q \bar{Q}$, including the $G'$-boson and gluon contributions within the tree approximation, has been calculated (in agreement with ref. [@ferrario-2008]) and can be written as $$\begin{aligned} \nonumber &&\frac{ d\sigma(q\bar{q} \stackrel{\,g,\,G'}{\rightarrow} Q \bar{Q}) }{d\cos \hat{\theta}} = \frac{\alpha_s^2\pi \beta}{9\hat{s}} \bigg \lbrace f^{(+)}+\frac{2 \hat{s} (\hat{s}-m_{G'}^2)} {(\hat{s}-m_{G'}^2)^2+m_{G'}^2 \Gamma_{G'}^2} \Big[ \, v^2 f^{(+)} + 2 a^2 \beta c \, \Big] + \\ \label{diffsect} && + \frac{\hat{s}^2} {(\hat{s}-m_{G'}^2)^2+m_{G'}^2 \Gamma_{G'}^2} \Big[ \left( v^2 + a^2 \right) \big( v^2 f^{(+)}+ a^2 f^{(-)} \big) + 8 a^2v^2 \beta c \, \Big] \bigg \rbrace, \end{aligned}$$ where $f^{(\pm)}=(1+\beta^2 c^2\pm 4m_Q^2/\hat{s})$, $c = \cos \hat{\theta}$, $\hat{\theta}$ is the scattering angle of the $Q$-quark in the parton center of mass frame, $\hat{s}$ is the invariant mass of the $Q \bar{Q}$ system, and $\beta = \sqrt{1-4m_Q^2/\hat{s}}$. Integration of (\[diffsect\]) over the angle gives the corresponding total cross section in the form $$\begin{aligned} \nonumber \sigma(q\bar{q} \stackrel{\,g,\,G'}{\rightarrow} Q \bar{Q}) &=& \frac{4\pi \alpha_s^2\beta}{27\hat{s}} \bigg \lbrace 3-\beta^2 - \frac{2\hat{s}m_{G'}^2v^2(3-\beta^2)}{(\hat{s}-m_{G'}^2)^2+\Gamma_{G'}^2 m_{G'}^2}+\\ &+&\frac{\hat{s}^2 \big[ \, (v^4+2v^2)(3-\beta^2) + v^2 a^2 (3+\beta^2) + 2a^4\beta^2 \, \big]} {(\hat{s}-m_{G'}^2)^2 + \Gamma_{G'}^2 m_{G'}^2} \bigg \rbrace. \label{sect}\end{aligned}$$

In the tree approximation the $G'$-boson does not contribute to the $g g \rightarrow Q \bar{Q}$ process of $Q \bar{Q}$ production in gluon fusion. The differential and total SM partonic cross sections of this process are well known and have the form $$\frac{d\sigma(gg\rightarrow Q \bar{Q})}{d\cos \hat{\theta}} = \alpha_s^2 \: \frac{\pi \beta}{6 \hat{s}} \left(\frac{1}{1-\beta^2c^2}-\frac{9}{16}\right) \left(1 + \beta^2 c^2 +2(1-\beta^2)-\frac{2 (1-\beta^2)^2}{1-\beta^2 c^2}\right), \label{difcsggQQ}$$ $$\sigma(gg\rightarrow Q \bar{Q}) = \frac{\pi \alpha_s^2 }{48 \hat{s}} \left[ \left(\beta ^4-18 \beta ^2+33\right) \log \left(\frac{1+\beta }{1-\beta }\right)+ \beta \left( 31 \beta ^2-59 \right) \right]. \label{totcsggQQ}$$ Taking into account the parton densities and the values of the $\beta$ parameter, one can see from (\[sect\]) that the contribution of the $G'$-boson to $Q \bar{Q}$ production is most significant for $t\bar{t}$ production.
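The relative widths (\[gammaG1\]) quoted above can be cross-checked numerically from (\[GammaG'\]) and (\[GammaG'QQ\]). In the sketch below (an illustration, not part of the original analysis) the light-quark masses are neglected, and the inputs $\alpha_s=0.108$, $m_t=173$ GeV and $m_{G'}=2$ TeV are assumptions made here; with them the quoted values are reproduced to within a few percent:

```python
import math

alpha_s, m_t, m_Gp = 0.108, 173.0, 2000.0     # assumed inputs (GeV for the masses)

def partial_width(v, a, mQ, mG):
    """Gamma(G' -> Q Qbar) of eq. (GammaG'QQ)."""
    r = (mQ / mG) ** 2
    return (alpha_s * mG / 6.0
            * (v**2 * (1.0 + 2.0 * r) + a**2 * (1.0 - 4.0 * r))
            * math.sqrt(1.0 - 4.0 * r))

for theta_deg in (45, 30, 20, 15, 10):
    th = math.radians(theta_deg)
    v, a = 1.0 / math.tan(2.0 * th), 1.0 / math.sin(2.0 * th)
    # five (almost) massless flavours plus the top quark, as in eq. (GammaG')
    width = 5.0 * partial_width(v, a, 0.0, m_Gp) + partial_width(v, a, m_t, m_Gp)
    print(f"theta_G = {theta_deg:2d} deg:  Gamma_G'/m_G' = {width / m_Gp:.2f}")
```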
The $t\bar{t}$ production is well studied at the Tevatron and the recent CDF result for the $t\bar{t}$ production cross section is [@Lister:2008it] $$\sigma_{t\bar{t}} = 7.0 \pm 0.3 (stat) \pm 0.4 (syst) \pm 0.4 (lumi) \; pb . \label{expcspptt}$$ We have calculated the cross section $\sigma(p \bar{p} \rightarrow t \bar{t})$ of $t\bar{t}$-pair production in $p\bar{p}$-collisions at the Tevatron energy using the parton cross sections (\[sect\]), (\[totcsggQQ\]) and the parton densities AL'03 [@alekhin] (NLO, fixed-flavor-number, $Q^2=m_t^2$) with the appropriate K-factor $K=1.24$ [@campbell-2007-70]. The allowed region in the $m_{G'} - \theta_G$ plane (the unshaded region), which is compatible with the data (\[expcspptt\]) within $2 \sigma$, is shown in Fig.1; the $1 \sigma$ region is marked by the dashed line. From Fig.1 and by comparing the calculated cross section $\sigma(p \bar{p} \rightarrow t \bar{t})$ with the CDF result (\[expcspptt\]) we find that the $G'$-boson with masses $$m_{G'}[TeV]>0.91 (0.94), \, \; 1.03 (1.05), \, \; 1.20 (1.23), \, \; 1.39 (1.44), \, \; 1.73 (1.82) \label{eq:mG1limcs}$$ is compatible with the data (\[expcspptt\]) within $2 \sigma$ ($1 \sigma$) for $ \theta_G=45^\circ, \; 30^\circ, \; 20^\circ, \; 15^\circ, \; 10^\circ$ respectively. The first value in (\[eq:mG1limcs\]) coincides with the known mass limit for the axigluon [@choudhury-2007], whereas the next ones are the mass limits for the $G'$-boson in dependence on the mixing angle $\theta_G$.

The $G'$ boson can generate, at tree level, a forward-backward asymmetry through the interference of the $q\bar{q} \stackrel{\,G'}{\rightarrow} t\bar{t}$ and $q\bar{q} \stackrel{\,g}{\rightarrow} t\bar{t}$ amplitudes [@antunano-2007; @Sehgal:1987wi; @choudhury-2007]. From (\[diffsect\]) we find that the $G'$ boson induces a forward-backward difference in the $q\bar{q}\to Q\bar{Q}$ cross section of the form $$\begin{aligned} \Delta_{FB}(q\bar{q}\to Q\bar{Q})&=& \sigma(q\bar{q}\rightarrow Q \bar{Q}, \, \cos \theta > 0)- \sigma(q\bar{q}\rightarrow Q \bar{Q}, \, \cos \theta < 0)= \nonumber \\ &=& \frac{4\alpha_s^2\pi \beta^2 a^2}{9} \Bigg( \frac{ \hat{s}-m_{G'}^2 +2 v^2 \hat{s}} {(\hat{s}-m_{G'}^2)^2+m_{G'}^2 \Gamma_{G'}^2} \Bigg). \label{eq:deltaFBqq}\end{aligned}$$ As seen from (\[eq:deltaFBqq\]), depending on the values of $\hat{s}$, $m_{G'}^2$ and $v$, the $G'$ boson contribution to the forward-backward asymmetry $A_{\rm FB}^{p \bar p}$ in $p \bar p$ collisions can take positive as well as negative values. Concerning gluon-gluon fusion, one can see from (\[difcsggQQ\]) that this process does not contribute to the forward-backward asymmetry at tree level.

The forward-backward asymmetry of top quarks has been measured at the Tevatron. The latest CDF analysis [@aaltonen-2008], based on 1.9 $fb^{-1}$ integrated luminosity, gives $$A_{\rm FB}^{p \bar p} = \frac{N_t (\cos \theta >0)-N_t (\cos \theta <0)} {N_t(\cos \theta >0)+N_t(\cos \theta <0)}=0.17 \pm 0.07~(\rm{stat}) \pm 0.04~(\rm{sys}). \label{AFBpptt}$$ Using (\[eq:deltaFBqq\]) (or (\[diffsect\])) and the parton densities one can calculate the forward-backward asymmetry $A_{\rm FB}^{p \bar p}$ in dependence on $m_{G'}$ and $\theta_G$. The allowed region in the $m_{G'} - \theta_G$ plane (the undashed region), which is compatible with the data (\[AFBpptt\]) within $2 \sigma$, is shown in Fig.1. The border of the allowed $1 \sigma$ region is shown by the dashed line. One can see that the $1 \sigma$ region allowed by the $A_{\rm FB}$ data (\[AFBpptt\]) is excluded by the cross section data (\[expcspptt\]). Nevertheless, there is a region in the $m_{G'} - \theta_G$ plane that is compatible with the data (\[expcspptt\]) and (\[AFBpptt\]) simultaneously within $2 \sigma$ (the clean region).
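The sign structure behind this behaviour of $A_{\rm FB}^{p \bar p}$ is already visible at the partonic level: the numerator of (\[eq:deltaFBqq\]) vanishes at $\hat{s}=m_{G'}^2/(1+2v^2)$, so the difference is negative below this point and positive above it. A small numerical illustration follows (the inputs $m_{G'}=1.5$ TeV and $\Gamma_{G'}=0.2\,m_{G'}$ are assumptions chosen only for this sketch):

```python
import math

alpha_s, m_t = 0.108, 173.0
m_Gp, Gamma_Gp = 1500.0, 0.2 * 1500.0         # GeV, illustrative inputs

def delta_FB(s_hat, theta_deg):
    """Partonic forward-backward difference of eq. (eq:deltaFBqq), in GeV^-2."""
    th = math.radians(theta_deg)
    v, a = 1.0 / math.tan(2.0 * th), 1.0 / math.sin(2.0 * th)
    beta2 = 1.0 - 4.0 * m_t**2 / s_hat
    num = s_hat - m_Gp**2 + 2.0 * v**2 * s_hat
    den = (s_hat - m_Gp**2)**2 + (m_Gp * Gamma_Gp)**2
    return 4.0 * alpha_s**2 * math.pi * beta2 * a**2 / 9.0 * num / den

for theta_deg in (45, 20):
    th = math.radians(theta_deg)
    v = 1.0 / math.tan(2.0 * th)
    s_zero = m_Gp**2 / (1.0 + 2.0 * v**2)      # zero of the numerator of (eq:deltaFBqq)
    print(f"theta_G = {theta_deg:2d} deg: sign change at sqrt(s_hat) = {math.sqrt(s_zero):4.0f} GeV; "
          f"Delta_FB = {delta_FB(1.0e6, theta_deg):+.2e} (1 TeV), "
          f"{delta_FB(4.0e6, theta_deg):+.2e} (2 TeV)  [GeV^-2]")
```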
Comparing the calculated $A_{\rm FB}^{p \bar p}$ asymmetry with the data  and accounting for the mass limits  from Fig.1 we find that the $G'$-boson with masses $$m_{G'}>1.44\, TeV, \; 1.56 \, TeV, \; 1.76 \, TeV \label{eq:mG1limcsAFB1}$$ for $ \theta_G=45^\circ, \; 30^\circ, \; 20^\circ $ as well as with masses $$m_{G'} = 1.20-1.32 \, TeV, \; > 1.39 \, TeV, \; > 1.73 \, TeV \label{eq:mG1limcsAFB2}$$ for $ \theta_G=20^\circ, \; 15^\circ, \; 10^\circ $ is compatible with data  and  simultaneusly within $2 \sigma$. The first value in  is close to the known mass limit for the axigluon [@antunano-2007] resulting from the $A_{\rm FB}^{p \bar p}$ data whereas the other values in  and  are the new mass limits for the $G'$-boson resulting from the data  and  simultaneusly in dependence on the mixing angle $\theta_G$. In $pp$ collisions at the LHC the $q\bar{q}$ fluxes are essentially smaller than the $gg$ fluxes, and $t\bar{t}$ production is dominated by the contribution from the $gg$ initial state. However, by increasing the $t\bar{t}$ invariant masses this dominance becomes to be less significant and it is reasonable to search for the $G'$-boson through its effect on the $t\bar{t}$ invariant mass distribution. The large number of top pairs expected to be produced at the LHC (8 million pairs for 10 $fb^{-1}$ integrated luminosity) makes a study of such a differential distribution meaningful. Using the parton cross sections ,  and integrating them with the parton densities [@alekhin] over the final $t$ quark rapidity $y$, we have obtained the $t\bar{t}$ invariant mass distribution $d\sigma_s(pp \to t\bar{t})/dM_{t\bar{t}}$, which can be expected at the LHC when taking into account the $G'$-boson contribution. The background distribution $d\sigma_b(pp \to t\bar{t})/dM_{t\bar{t}}$ is obtained analogously but by neglecting the $G'$-boson contribution in . The former distribution exceeds the latter one and has the peak from the $G'$-boson defined by the mass $m_{G'}$ and width  of the $G'$-boson. To distinguish the signal and background events we use the significance estimator [@Bartsch:824351] $$\label{Significance} \mathcal{S}=\sqrt{ 2 \big[ \, (N_s+N_b)\ln{\left(1+N_s/N_b \right)}-N_s \, \big]} \, ,$$ where $N_s$ and $N_b$ are number of signal and background events in the $t\bar{t}$ invariant mass region $m_{G'}\pm \Delta M$. These numbers can be calculated as $$N_{s,b}= L\sigma_{s,b}(m_{G'},\Delta M), \qquad \sigma_{s,b}(m_{G'},\Delta M)=\int_{m_{G'}-\Delta M}^{m_{G'}+\Delta M} \frac{d\sigma_{s,b}(pp \to t\bar{t})}{dM_{t\bar{t}}} dM_{t\bar{t}},$$ where $L$ is integrated luminosity and the integration mass region $\pm \Delta M$ is chosen to maximize the significance estimator $\mathcal{S}$. Below we take $\Delta M=1.28 \, \Gamma_{G'}$, which corresponds to the $3 \sigma$ width in the case of a Gaussian distribution. We have calculated and analysed the integrated luminosity which is necessary for the evidence of $G'$-boson at the LHC. The integrated luminosity at $3\sigma$ significance ($\mathcal{S}=3$) in dependence on $G'$ mass for different $\theta_G$ is shown in Fig.2. 
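Since $N_s$ and $N_b$ both scale linearly with $L$, the estimator gives the luminosity required for a given significance in closed form, $L=\mathcal{S}^2/\big(2[(\sigma_s+\sigma_b)\ln(1+\sigma_s/\sigma_b)-\sigma_s]\big)$. A minimal sketch is given below; the windowed cross sections $\sigma_{s,b}(m_{G'},\Delta M)$ are treated as externally supplied numbers, and the example values are simply the signal and background counts quoted in the next paragraph for the $\theta_G=45^\circ$ point at $L=10\,fb^{-1}$.

```python
import math

def significance(N_s, N_b):
    """Likelihood-ratio significance estimator used in the text."""
    return math.sqrt(2.0 * ((N_s + N_b) * math.log(1.0 + N_s / N_b) - N_s))

def luminosity_for(target, sigma_s, sigma_b):
    """Integrated luminosity giving S = target, with N_{s,b} = L * sigma_{s,b};
    cross sections in fb give the luminosity in fb^-1."""
    return target ** 2 / (2.0 * ((sigma_s + sigma_b) * math.log(1.0 + sigma_s / sigma_b)
                                 - sigma_s))

# Event counts quoted below for L = 10 fb^-1 at the theta_G = 45 deg working point
N_s, N_b = 3.2, 0.4
print(f"S = {significance(N_s, N_b):.2f}")                       # ~3, i.e. 3 sigma evidence
print(f"L(3 sigma) = {luminosity_for(3.0, N_s / 10.0, N_b / 10.0):.1f} fb^-1")
```

The closed-form inversion reproduces the $\sim 10\,fb^{-1}$ working point that is read off from Fig. 2 in the next paragraph.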
From this figure we find that for $ \theta_G=45^\circ, \; 30^\circ, \; 20^\circ, \; 15^\circ $ the $G'$-boson with masses $$m_{G'} < 6.5 \, TeV, \; 7.0 \, TeV, \; 7.9 \, TeV, \; 9.8 \, TeV \label{eq:mG1LHC}$$ can be evident in $t\bar{t}$ events at the LHC at integrated luminosity $L=10\,fb^{-1}$ with $3\sigma$ significance and expected numbers of signal (background) events $N_s(N_b)=3.2(0.4), \,$ $ 3.1(0.3), \,$ $ 3.9(0.7), \,$ $ 7.0(3.6)$ respectively. The first value in  corresponds to the case of the axigluon. It should be noted that the chiral extension  of the usual $SU_c(3)$ color symmetry and its unification with the electroweak symmetry by the group $G_c \!\times \! SU_L(2) \! \times \! U(1) $ naturally extends the Higgs sector. For giving the masses to the up and down quarks and to the leptons one needs two scalar doublets $(\Phi^{(1,2)}_{a})_{\alpha \beta}$ with the SM hypercharges $Y^{SM}=\mp 1 $ and VEVs $\langle (\Phi^{(b)}_{a})_{\alpha \beta} \rangle = \delta_{\alpha \beta} \,\delta_{ab} \,\eta_{b}/(2\sqrt{3})$ and a colorless doublet $\Phi^{(3)}_{a}$ with VEV $\langle \Phi^{(3)}_{a} \rangle = \delta_{a2} \,\eta_{3}/\sqrt{2}$, here $a=1,2$ is the $SU_L(2)$ index and $\sqrt{\eta_{1}^2 + \eta_{2}^2 + \eta_{3}^2} = \eta_{SM} \approx 250 \, GeV $ is the SM VEV. The doublets $(\Phi^{(1,2)}_{a})_{\alpha \beta}$ break the chiral symmetry  but their VEVs $\eta_{1}, \eta_{2}$ are insufficient to give the necessary masses ,  to the $G'$-boson and by this reason one needs an additional scalar field $\Phi^{(0)}_{\alpha \beta}$ which does not interact with fermions and has the VEV $\langle \Phi^{(0)}_{\alpha \beta} \rangle = \delta_{\alpha \beta} \, \eta_0 /(2 \sqrt{3}) $. In this case the $G'$ mass can be given by expression  with $\eta = \sqrt{\eta_{1}^2 + \eta_{2}^2 + \eta_{0}^2}$ and from ,  we find that the VEV of the chiral color symmetry breaking $\eta_0$ can be relatively small, $\eta_0 \gtrsim 800 \, GeV$. Because of the decomposition $(3_L, \bar{3}_R) = 1_{SU_c(3)} + 8_{SU_c(3)}$ the multiplets $(\Phi^{(1,2)}_{a})_{\alpha \beta}$, $\Phi^{(0)}_{\alpha \beta}$ after the chiral color symmetry breaking give rise to the $SU_c(3)$ octets $(\Phi^{(1,2;8)}_{a})_{\alpha \beta} = \Phi^{(1,2)}_{ia} (t_i)_{\alpha \beta} $, $\Phi^{(0;8)}_{\alpha \beta} = \Phi^{(0)}_{i} (t_i)_{\alpha \beta} $ and to the color singlets $(\Phi^{(1,2;0)}_{a})_{\alpha \beta} = \Phi^{(1,2)}_{0a} \, \delta_{\alpha \beta} / \sqrt{6} $, $\Phi^{(0;0)}_{\alpha \beta} = \Phi^{(0)}_{0} \, \delta_{\alpha \beta} / \sqrt{6} $. The colorless doublets $\Phi^{(1,2)}_{0a}$, $\Phi^{(3)}_{a}$ form the SM Higgs doublet $\Phi^{(SM)}_{a}$ with the SM VEV $\eta_{SM}$ and two additional doublets $\Phi'_{a}$, $\Phi''_{a}$. So, the chiral color symmetry in addition to the new gauge $G'$-boson predicts the new scalar fields: the colorless doublets $\Phi'_{a}$, $\Phi''_{a}$, two doublets of color octets $\Phi^{(1,2)}_{ia}$, the color octet $\Phi^{(0)}_{i}$ and the colorless $SU_L(2)$ singlet $\Phi^{(0)}_{0} = (\eta_{0} + \chi^{(0)}_0 + i \, \omega^{(0)}_0)/\sqrt{2} $ with the VEV $\eta_0$. It should be noted that scalar octets of the different origin are predicted also in a number of models [@popov-2005-20; @PPSmPhAN2007; @MW; @GrWise; @Perez; @Perez2; @Choi]. 
Since they are colored particles, the color scalar octets due to their interactions with gluons can be produced in $pp$ collisions and the phenomenology of such particles at the LHC is under active discussion now [@MW; @GrWise; @Perez; @Perez2; @Choi; @Gerbush; @Zerwekh; @MartSmMPLA23; @Dobrescu; @Idilbi]. As concerns the field $\Phi^{(0)}_{0}$ its real part $\chi^{(0)}_0$ after the chiral color symmetry breaking acquires the mass of order of $\eta_{0}$ whereas the imaginary part $\omega^{(0)}_0$ is still massless in the tree approximation and in the unitary gauge is not ruled out by a gauge transformation. The features of these new fields will be discussed in more details elsewhere. In conclusion, we summarize the results found in this work. The gauge model with the chiral color symmetry of quarks as a possible extension of the Standard Model is considered, and possible effects of the color $G'$-boson octet predicted by this symmetry at the Tevatron and LHC energies are investigated. The hadronic width of the $G'$-boson and the $G'$-boson contributions to the cross section $\sigma_{t\bar{t}}$ and to the forward-backward asymmetry $A_{\rm FB}^{p \bar p}$ of $t\bar{t}$ production at the Tevatron are calculated and analysed in dependence on two free parameters of the model, the mixing angle $\theta_G$ and $G'$ mass $m_{G'}$. The allowed region in the $m_{G'} - \theta_G$ plane is found from the Tevatron data on $\sigma_{t\bar{t}}$ and $A_{\rm FB}^{p \bar p}$. The mass limits for the $G'$-boson are shown to be stronger than those for the axigluon due to the specific dependence of the $G'$-boson coupling constants on $\theta_G$. A possible effect of the $G'$-boson on the $t\bar{t}$-pair production at the LHC is discussed and the mass limits providing for the $G'$-boson evidence at the LHC with $3\sigma$ significance at the integrated luminosity $L=10\,fb^{-1}$ are estimated in dependence on $\theta_G$. [10]{} url \#1[`#1`]{}titlet \#1urlprefixhref \#1\#2[\#2]{} \#1[\#1]{} J. M. Butterworth, AIP Conf. Proc. **957**, 197–200 (2007). [](http://arxiv.org/abs/0709.2547), C. T. Hill, S. J. Parke, Phys. Rev. **D49**, 4454–4462 (1994). [](http://arxiv.org/abs/hep-ph/9312324), M. Beneke, et al., CERN-TH/2000-100, [](http://arxiv.org/abs/hep-ph/0003033). J. C. Pati, A. Salam, Phys. Lett. **B58**, 333–337 (1975). L. J. Hall, A. E. Nelson, Phys. Lett. **B153**, 430 (1985). P. H. Frampton, S. L. Glashow, Phys. Rev. Lett. **58**, 2168 (1987). P. H. Frampton, S. L. Glashow, Phys. Lett. **B190**, 157 (1987). J. Bagger, C. Schmidt, S. King, Phys.Rev.D **37**, 1188 (1988). G. Rodrigo, PoS RADCOR **2007**, 010 (2007). [](http://arxiv.org/abs/0803.2992). C. Amsler, et al. (PDG), Phys. Lett. B **667**, 1 (2008). T. Aaltonen, et al., FERMILAB-PUB-08-572-E, [](http://arxiv.org/abs/0812.4036). O. Antunano, J. H. Kuhn, G. Rodrigo, Phys. Rev. **D77**, 014003 (2008). [](http://arxiv.org/abs/0709.1652), P. Ferrario, G. Rodrigo, Phys.Rev.D **78**, 094018 (2008). [](http://arxiv.org/abs/0809.3354). A. Lister, FERMILAB-CONF-08-474-E, [](http://arxiv.org/abs/0810.3350). S. Alekhin, Phys.Rev.D **67**, 014002 (2003). J. M. Campbell, J. W. Huston, W. J. Stirling, Rept. Prog. Phys. **70**, 89 (2007). [](http://arxiv.org/abs/hep-ph/0611148), L. M. Sehgal, M. Wanninger, Phys. Lett. **B200**, 211 (1988). D. Choudhury, R. M. Godbole, R. K. Singh, K. Wagh, Phys. Lett. **B657**, 69–76 (2007). [](http://arxiv.org/abs/0705.1499), T. Aaltonen, et al. (CDF Collaboration), Phys. Rev. Lett. **101**, 202001 (2008). 
[](http://arxiv.org/abs/0806.2472), V. Bartsch, G. Quast, CMS-NOTE-2005-004, CERN, Geneva (Feb 2005). P. Yu. Popov, A. V. Povarov, A. D. Smirnov, Mod. Phys. Lett. A [**20**]{}, 3003 (2005), hep-ph/0511149. A. V. Povarov, P. Yu. Popov, A. D. Smirnov, Yad. Fiz. [**70**]{}, 771 (2007); (Physics of Atomic Nuclei [**70**]{}, 739 (2007)). A. V. Manohar, M. B. Wise, Phys. Rev. D [**74**]{}, 035009 (2006), hep-ph/0606172. M. I. Gresham, M. B. Wise, Phys. Rev. D [**76**]{}, 075003 (2007), arXiv:0706.0909. P. F. Perez, H. Iminniyaz, G. Rodrigo, Phys. Rev. D [**78**]{}, 015013 (2008), arXiv:0803.4156. P. F. Perez et al., arXiv:0809.2106. S. Y. Choi et al., arXiv:0812.3586. M. Gerbush et al., Phys. Rev. D [**77**]{}, 095003 (2008), arXiv:0710.3133. A. R. Zerwekh, C. O. Dib, R. Rosenfeld, Phys. Rev. D [**77**]{}, 097703 (2008), arXiv:0802.4303. M. V. Martynov, A. D. Smirnov, Mod. Phys. Lett. A [**23**]{}, 2907 (2008), arXiv:0807.4486. B. A. Dobrescu, K. Kong, R. Mahbubani, Phys. Lett. **B670**, 119 (2008), arXiv:0709.2378. A. Idilbi, C. Kim, T. Meher, arXiv:0903.3668.

[**Figure captions**]{}

> Fig. 1. The $m_{G'} - \theta_G$ regions compatible within $2 \sigma$ with CDF data on $\sigma_{t\bar{t}}$ (the unshaded region) and on $A_{\rm FB}^{p \bar p}$ (the undashed region). The dashed lines denote the corresponding $1 \sigma$ regions.

> Fig. 2. The integrated luminosity $L$ needed for $3\sigma$ evidence of the $G'$-boson at the LHC in dependence on the $G'$ mass for different $\theta_G$. The horizontal dashed line denotes $L=10\,fb^{-1}$.

[^1]: E-mail: martmix@mail.ru

[^2]: E-mail: asmirnov@univ.uniyar.ac.ru
{ "pile_set_name": "ArXiv" }
--- author: - 'Mathieu Vrard,$^{1}$ Margarida S. Cunha$^{1}$' bibliography: - 'Glitches\_mvrard.bib' title: 'Influence of structural discontinuities present in the core of red-giant stars on the observed mixed-mode pattern and characterization of their properties' --- Introduction ============ Since the launch of the space-borne photometric missions CoRoT [@2006ESASP1306...33B] and $\Kepler$ [@2010Sci...327..977B], many important studies have been carried out. Among the observed stars, red giants have shown particularly complex spectra, exhibiting pressure modes as well as mixed modes [@2009Natur.459..398D]. The latter are the results of modes behaving as gravity modes in the core of the star and as pressure modes in their envelope thus allowing to obtain information on their stellar core. They were used to distinguish core-helium burning stars (clump stars) from hydrogen-shell burning red-giant branch stars and also provide observational constraints on the stellar core rotation . The pressure modes present in red-giant star spectra result from acoustic waves stochastically excited by convection in the outer layers of the star. The observed pressure mode pattern has been depicted in a canonical form, called the universal red-giant oscillation pattern . This canonical form describes the regularity of the pressure mode pattern characterized by two quantities: the frequency $\numax$ of maximum oscillation and the mean frequency difference $\Dnu$ between consecutive pressure modes of same angular degree. We can consider the observed pattern to be the translation of the second-order asymptotic pattern described in @1980ApJS...43..469T at low radial order . The mixed modes show a more complicated pattern than pressure modes and gravity modes, which, for the latter, are evenly spaced in period. However, their oscillation pattern can be asymptotically described and the deviations from this pattern can, therefore, be characterized. It has long been predicted that structural discontinuities exist in stellar interiors and that they affect the observed solar-like oscillations by inducing regular deviations from the classical mode pattern [e.g. @1990LNP...367..283G]. The influence of structural discontinuities, usually called glitches, on the pressure mode pattern was also investigated for red-giant stars [@2014MNRAS.tmp..576B; @2014MNRAS.445.3685C] and observed in $\Kepler$ data . However, few studies investigated the influence of structural discontinuities on the gravity waves for the same type of objects. These type of studies were mainly done for gravity-mode pulsators, like white dwarfs [e.g. @1991ApJ...378..326W; @1992ApJS...80..369B], $\gamma$-Doradus stars [e.g. @2008MNRAS.386.1487M] and hot B subdwarfs stars [@2000ApJS..131..223C; @2018MNRAS.474.4709K] which are essentially the cores of previous red giants. Only one recent study transposed the previous work done on the aforementioned gravity-mode pulsators to the inner g-mode resonant cavity of red-giant stars and, thus, deduced the influence of the structural discontinuities located in their deeper layer on their mixed-mode pattern [@2015ApJ...805..127C]. The signature of these discontinuities was then claimed to have been discovered by but the precise characterization of these signatures was not performed. The aim of this work is to use the analytic model describing the influence of glitches on the mixed-mode pattern that was derived by @2015ApJ...805..127C [in preparation] and use it to characterize the discontinuity characteristics. 
First, we will describe this analytical model and adapt it to estimate the frequency position of the mixed modes as a function of the discontinuity characteristics. Second, we will use this analytical description to measure the glitch characteristics in several red-giant clump stars observed with $\Kepler$. Description of the glitch influence on the red-giant mixed-mode pattern ======================================================================= Analytical description ---------------------- Asymptotically, gravity modes are approximately evenly spaced in period following the asymptotic period spacing $\Delta\Pi_{\ell}$. This asymptotic value is defined by the integration of the Brunt-Väisäla radial profile $\Brunt$ inside the radiative inner regions $\Radreg$. For $\ell = 1$ modes, it writes $$\deltapi = \frac{2\pi^{2}}{\sqrt{2}}\left(\int_{\mathcal{\Radreg}} \frac{\Brunt}{r}\; {\mathrm{d}}r\right)^{-1}. \label{Brunt_equation}$$ Its value is related to the size of the inner radiative region [@2013EPJWC..4303002M]. As stated before, gravity waves propagate only in regularly stratified medium, which correspond here to the stellar radiative core. Consequently, gravity modes will not be observed directly in red-giant star spectra. However, in red-giant stars, a coupling occur between gravity and pressure waves giving rise to the so-called mixed-modes. An asymptotic relation describing the mixed modes behavior was provided by @Shibahashi1979 and @Unno1989. In this framework, eigenfrequencies are derived from an implicit equation relating the coupling of the p and g waves through an evanescent region [Eq. (16.50) of @Unno1989]: $$\label{asympt_coupling} \tan \Phasep = q \tan \Phaseg,$$ where $\Phasep$ and $\Phaseg$ are the p- and g-wave phases. The dimensionless coefficient q corresponds to the coupling between the p- and g-waves and measures the level of mixture of the p and g phases. Following @Unno1989, the p- and g-wave phases can be written as, for $\ell = 1$ modes: $$\label{PhaseP_obs} \Phasep = \pi \left(\frac{\nu-\nu_{n,\ell=1}}{\Dnu} \right),$$ $$\label{PhaseG_obs} \Phaseg = \pi\left(\frac{1}{\nu\deltapi} - \eps_g \right),$$ where $\nu$ corresponds to the mixed-mode frequencies, $\nu_{n,\ell=1}$ represents the pure pressure $\ell = 1$ modes and $\eps_g$ is the gravity offset. In the case of the JWKB approximation applied to non-radial adiabatic oscillations, $\Phasep$ and $\Phaseg$ can also be express as the following [@Shibahashi1979; @Unno1989]: $$\label{Description_PhaseP} \Phasep = \int_{\mathcal{\RadP}} \waveN^{2}\; {\mathrm{d}}r$$ $$\label{Description_PhaseG} \Phaseg = \int_{\mathcal{\Radreg}} \waveN^{2}\; {\mathrm{d}}r,$$ where $\RadP$ correspond to the pressure waves resonant cavity region and $\waveN$ is the radial wavenumber. This quantity can be approximated by the following expression: $$\label{Description_PhaseG} \waveN^{2} \approx \frac{\omega^{2}}{c_s^{2}} \left(\frac{\Lamb^{2}}{\omega^{2}} - 1 \right) \left(\frac{\Brunt^{2}}{\omega^{2}} - 1 \right),$$ where $\omega$ is the wave angular frequency, $c_s$ is the sound speed and $\Lamb$ is the Lamb frequency for modes of degree $\ell$, expressed as $\Lamb = \sqrt{\ell(\ell+1)}c_s/r$. To understand the impact of a glitch on the oscillation frequencies, we will consider a single discontinuity appearing at a specific position in radius in the buoyancy frequency (hereafter named $r^{*}$). 
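Before the glitch is introduced, it is worth noting that Eqs. (\[asympt\_coupling\])–(\[PhaseG\_obs\]) already fix the glitch-free mixed-mode frequencies once $\Dnu$, $\deltapi$, $q$, $\eps_g$ and the pure dipole pressure-mode frequencies are specified. A minimal numerical sketch is given below, in which the roots of $\tan\Phasep = q\tan\Phaseg$ are located through the pole-free combination $\sin\Phasep\cos\Phaseg - q\cos\Phasep\sin\Phaseg$; the values of $\Dnu$, $\deltapi$ and $q$ are those adopted for the test star discussed later, while $\eps_g$ and the pure p-mode frequency (taken here near $\numax$) are placeholder inputs.

```python
import numpy as np

def mixed_modes_no_glitch(nu_p, dnu, dpi1, q, eps_g=0.0, n_grid=50000):
    """Glitch-free dipole mixed-mode frequencies around one pure p mode.

    nu_p and dnu in microHz, dpi1 in s; returns frequencies in microHz.
    Roots of tan(theta_p) = q tan(theta_g) are found as sign changes of
    F = sin(theta_p) cos(theta_g) - q cos(theta_p) sin(theta_g),
    which, unlike the tangents, has no poles.
    """
    nu = np.linspace(nu_p - 0.5 * dnu, nu_p + 0.5 * dnu, n_grid)
    theta_p = np.pi * (nu - nu_p) / dnu
    theta_g = np.pi * (1.0 / (nu * 1e-6 * dpi1) - eps_g)   # nu converted to Hz
    F = np.sin(theta_p) * np.cos(theta_g) - q * np.cos(theta_p) * np.sin(theta_g)
    i = np.where(np.sign(F[:-1]) != np.sign(F[1:]))[0]
    # linear interpolation of each crossing
    return nu[i] - F[i] * (nu[i + 1] - nu[i]) / (F[i + 1] - F[i])

# Placeholder pure p-mode frequency; dnu, dpi1 and q as adopted for the test case
modes = mixed_modes_no_glitch(nu_p=207.37, dnu=16.23, dpi1=85.0, q=0.13)
print(len(modes), "dipole mixed modes within one large separation of nu_p")
```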
At first, we model the glitch using a Gaussian-like function which we defined as the following for $\ell = 1$ modes: $$\label{Description_PhaseG} \Brunt^{2} = \Brunt_0^{2} \left[1 + \frac{\Amp_G}{\sqrt{2\pi}\width} \left(\frac{r}{\sqrt{2}\Brunt_0}\right)^{\frac{1}{2}} \exp \left(\frac{-(\cavity-\Period)^{2}}{2\width^{2}} \right) \right],$$ where $\Brunt_0$ is the glitch-free buoyancy frequency. The constants $\Amp_G$ and $\width$ measure respectively the amplitude and width of the glitch while the parameters $\cavity$ and $\Period$ correspond respectively to the buoyancy depth and the buoyancy depth at the glitch position. The buoyancy depth of a specific discontinuity corresponds to the position of the glitch in the considered resonant cavity as seen by the gravity waves. By defining $r_1$ and $r_2$ as the lower and upper turning points, respectively, that define the propagation cavity of the g-waves, the buoyancy depth can be expressed as: $\cavity = \sqrt{2} \int_{r}^{r_2} \frac{\Brunt}{r}\; {\mathrm{d}}r$. This quantity can be related to the gravity-mode period spacing ($\deltapi$) through the total buoyancy radius [@1980ApJS...43..469T]: $$\label{Relation_deltapi_buoyancyradius} \rescavity \approx \frac{2\pi^{2}}{\deltapi} = \cavity + \sqrt{2} \int_{r_1}^{r} \frac{\Brunt}{r}\; {\mathrm{d}}r.$$ The buoyancy depth at the glitch position will be described as $\Period = \sqrt{2} \int_{r^{*}}^{r_2} \frac{\Brunt}{r}\; {\mathrm{d}}r$. The glitch is therefore defined by three parameters: $\Amp_G$, $\width$ and $\Period$. The mode frequencies are determined by finding which oscillation eigenfrequencies are allowed by the boundary conditions that are imposed to the problem [e.g. @2007AN....328..273G; @2015ApJ...805..127C]. The work of @2015ApJ...805..127C demonstrated that the eigenvalue condition in the presence of mode coupling and a glitch is given by: $$\label{Equation_deltapi_glitch} \int_{r_1}^{r_2} \waveN\; {\mathrm{d}}r = \pi\left(n - \frac{1}{2} \right) - \coupling - \glitch,$$ where $n$ is a positive integer, $\coupling \approx \arctan\left(\frac{q}{\tan(\Phasep)} \right)$ is the coupling phase and $\glitch$ represents the glitch frequency-dependent phase. The $\coupling$ and $\glitch$ parameters correspond to, respectively, the coupling and glitch influence on the mode frequencies. If we assume that the coupling and glitch influence correspond to small perturbations, then we can write $\waveN = \waveN^{0} + \delta_k$ where $\waveN^{0}$ corresponds to the radial wavenumber without accounting for any glitch or coupling and $\delta_k$ is the perturbation due to the coupling and glitches. Since without glitch and coupling, it has been shown that the eigenvalue conditions translates to $\int_{r_1}^{r_2} \waveN^{0}\; {\mathrm{d}}r = \pi\left(n - \frac{1}{2} \right)$, we can therefore write: $$\label{Equation_glitch} \int_{r_1}^{r_2} \delta_k\; {\mathrm{d}}r = - \coupling - \glitch.$$ Moreover, following @Unno1989 $\delta_k$ can be express as a function of the g phase: $\int_{r_1}^{r_2} \delta_k\; {\mathrm{d}}r = \Phaseg$. By using Eq. (\[PhaseG\_obs\]) we can therefore write: $$\label{Relation_deltapi_buoyancyradius} \pi\left(\frac{1}{\nu\deltapi} - \eps_g \right) = - \coupling - \glitch.$$ After substituting the coupling phase $\coupling$ in this equation and using Eq. 
(\[PhaseP\_obs\]) to express the p-wave phase, we find the following relation: $$\label{modes_GlitchG} \nu = \nu_{n,\ell=1} + \frac{\Dnu}{\pi} \arctan \left( q \tan \left[\frac{\pi}{\nu\deltapi} - \eps_g + \glitch_G \right] \right),$$ which is close from the expression derived by , having only the addition of the glitch phase. For the case of a glitch modeled by a Gaussian-like function, Cunha et al. (in preparation) have demonstrated that the glitch phase $\glitch = \glitch_G$ is defined by: $$\label{GlitchG_description} \glitch_G = {arcCot}\left[\frac{2\pi\nu}{a_G {e}^{-\frac{\width^{2}}{8(\pi\nu)^{2}}}\sin^{2}(\beta_2)} - \frac{\cos(\beta_1)}{2\sin^{2}(\beta_2)} \right],$$ where $\beta_1 = 2\frac{\Period}{2\pi\nu} + 2\coupling + 2\varepsilon$, $\beta_2 = \frac{\Period}{2\pi\nu} + \frac{\pi}{4} + \coupling + \varepsilon$ and $a_G = \Amp_G \left(\frac{r^{*}}{\sqrt{2}\Brunt_0^{*}} \right)^{1/2}$. $\varepsilon$ is a phase value and $\Brunt_0^{*}$ corresponds to the glitch-free buoyancy frequency at the glitch position in radius. It has to be noted that this formulation is only valid for a glitch situated in the outer cavity, with $\Period/\rescavity > 0.5$. The boundary conditions on the left and right side of the cavity are indeed not the same when p- and g-waves coupling happens. With Eq. (\[modes\_GlitchG\]) and Eq. (\[GlitchG\_description\]) it is possible to derive the mixed-mode frequencies in the presence of a glitch. It has to be noted that this expression is only valid for a discontinuity described as a Gaussian-like function. Other discontinuity shapes can be present in stellar interiors, like in some stellar models with discontinuities appearing as discontinuous step-functions [@2008MNRAS.386.1487M; @2015MNRAS.453.2290B]. In this case we model the glitch by the following function: $$\Brunt = \left\{ \begin{array}{l} \Brunt {_{\mathrm{in}}} \ \ \ \mathrm{for} \ \ \ r<r^{*} \\ \Brunt {_{\mathrm{out}}} \ \ \ \mathrm{for} \ \ \ r>r^{*} \end{array} \right. \label{Equation:step_function}$$ with $\Brunt$ varying by $\Delta\Brunt = \Brunt {_{\mathrm{in}}} - \Brunt {_{\mathrm{out}}}$. The glitch is therefore characterized by two parameters: its amplitude $\Amp_{ST} = [\Brunt {_{\mathrm{in}}}/\Brunt {_{\mathrm{out}}}]_{r^{*}} - 1$ and its radial position $r^{*}$. For this specific case, it has been shown that the glitch phase $\glitch = \glitch_{ST}$ is defined by (Cunha et al., in preparation): $$\glitch_{ST} = {arcCot}\left[ - \frac{2 + 2\Amp_{ST}\cos^{2}(\beta_2)}{ \Amp_{ST}\cos(\beta_1)} \right], \label{GlitchST_description_inner}$$ if $\Period/\rescavity < 0.5$, which correspond to a glitch located in the inner half of the cavity (measured in terms of buoyancy radius), while $$\glitch_{ST} = {arcCot}\left[ - \frac{2 + 2\Amp_{ST}\sin^{2}(\beta_2)}{ \Amp_{ST}\cos(\beta_1)} \right], \label{GlitchST_description_outer}$$ if $\Period/\rescavity > 0.5$, which correspond to a glitch located in the outer half of the cavity. Glitch influence on period echelle diagram\[Glitch\_influence\] --------------------------------------------------------------- In order to evaluate the impact of the glitch parameters on the red-giant mixed-mode pattern, we computed the mixed-mode frequencies for a typical red-giant star using Eq. (\[modes\_GlitchG\]) with different glitch parameters and description. We used as an input, the global seismic parameters that were measured for the star KIC002692629. 
This red-giant branch star has a large separation and frequency of maximum oscillation equal, respectively, to $\Dnu = 16.23 \mu$Hz and $\numax = 207.37\mu$Hz. We settled the gravity-mode period spacing and coupling parameter to, respectively, $\deltapi = 85$s and $q = 0.13$ which correspond to typical values of these parameters for this kind of star . The frequencies of pure $\ell = 1$ pressure modes were determined by the use of the universal pattern which describes the pure p-mode eigenfrequency pattern. To isolate the effect of the glitch on the mixed-mode pattern, we modified the mode frequencies following the stretching technique as described in Section 2.2 of . In practice, each frequency is modified according to the difference expressed by the ratio between the mixed-mode period spacing (which corresponds to the difference between mixed-modes of consecutive radial order) and the pure gravity mode period-spacing ($\deltapi$). The stretched periods are therefore represented by a new variable: $\taufreq$. It corresponds to the mode periods for which the influence of the coupling between gravity and pressure waves has been removed. Following this statement, if the original mixed-mode pattern had no deviations from the analytic mixed-mode pattern the stretched mixed-mode frequencies would be regularly spaced in period. Since we initially introduce a glitch in our computed mixed-mode pattern, this will not be the case. However, using the stretching technique allows us to isolate the glitch influence and evaluate the impact of its different characteristics. ![image](Synth_85glitch_width010Amplitude00050Test.png){width="0.28\linewidth"} ![image](Synth_85glitch_width020Amplitude00050Test.png){width="0.28\linewidth"} ![image](Synth_85glitch_width055Amplitude00050Test.png){width="0.28\linewidth"} The results for a glitch modeled by a Gaussian with different discontinuity positions in the gravity waves resonant cavity can be seen in Fig. \[fig:Fig\_width\], represented as a stretched period echelle diagram. With this representation, if there is no deviations from the asymptotic mixed-mode pattern, we expect to see the mode aligned on a straight line. Here, the glitch influence appears clearly as a periodic modulation in the mixed-mode frequencies around the expected mixed-mode pattern as shown in previous works . As can be seen on Fig. \[fig:Fig\_width\], the period of the modulation is affected by the glitch position inside the resonant cavity ($r^{*}$). A discontinuity present near the extremity of the resonant cavity induces a modulation with long period while a discontinuity close to the center of the resonant cavity results in a short-period modulation. The width of the discontinuity ($\width$) affects the amplitude of the modulation: it produces a decrease of the modulation amplitude with decreasing mode frequencies. The wider the glitch is, the stronger this decrease will be. Finally, the amplitude ($\Amp_G$) and phase ($\varepsilon$) of the discontinuity affect, respectively, the modulation amplitude and phase. The impact of a discontinuity modeled by a step-function is very similar, as can be seen on Fig. \[fig:Fig\_step\]. However, since the shape of the discontinuity does not have a specific width, the amplitude of the observed modulation will have no variation as a function of the frequency. ![Stretched period echelle diagram of computed asymptotic mixed modes for which a buoyancy glitch modeled by a step-function was included. 
The glitch amplitude, phase and radial position was settled to, respectively, $\Amp_{ST} = 0.005$, $\varepsilon = 0.0$ and $\Period/\rescavity = 0.20$.[]{data-label="fig:Fig_step"}](Synth_85glitch_step020Amplitude05000.png){width="0.45\linewidth"} Characterization of the discontinuities ======================================= In this section, the analytical expression providing the mixed-mode frequencies in the presence of a glitch in the stellar core will be used to characterize the deviations from the classical mixed-mode pattern in observed red-giant star spectra. Data selection -------------- Long-cadence data from $\Kepler$ up to the quarter Q$17$ were used, which correspond to $44$ months of photometric observations. We focused our sample on the $6111$ stars for which the signal-to-noise ratio was sufficient to determine their evolutionary status, following . Among them, we selected a few stars that show clear periodic deviations from the classical mixed-mode pattern as revealed in the period echelle diagram. We checked that these deviations do not correspond to the signature expected for core rotation. The stars showing these features were all belonging to the clump. Measurement of the mixed-mode frequencies ----------------------------------------- To identify the mixed-mode frequency positions in the spectra, the measurement of the seismic global parameters is essential. We used the envelope autocorrelation function in order to obtain estimates of the values of $\Dnu$, $\numax$ and the parameters of the Gaussian envelope power excess produced by the oscillations. After this, the $\Dnu$ values were refined by using the universal pattern in order to enhance the accuracy of the determination of $\Dnu$ and to precisely locate the different oscillation modes. In a second step, for a more precise estimation of the global seismic parameters, we performed a global fit of the background and the power following the bayesian fitting technique described by . We used the $\numax$ and the Gaussian envelope power excess parameters deduced from the previous measurement as priors for this fit. In order to obtain realistic priors for the background parameters, we used the scaling relations between these parameters and $\numax$ as obtained by . Following this, we also measured the $\deltapi$ and $q$ parameters using the automated method described in . We checked afterwards that the results given by the automated measurement were consistent and that there was no other possible solution. Since we are only interested in the mixed-mode characteristics, we selected the part of the spectra near the $\nu_{n,\ell=1}$ mode frequencies where they mostly appear. In practice, it consists on avoiding the frequency regions where radial and quadrupole modes are present. Since the positions of these modes are known through the red-giant universal pattern, the frequency selection only depends on the value of the large separation: we keep part of the spectrum with a second-order reduced frequency $\red$ verifying: $$\label{eqt-conditions} \red = {\nu \over \Dnu} -\left(\np+\epsilonp +{\alpha\over 2} (\np-\nmax)^2 \right) \in [0.06,0.80] ,$$ where $\epsilonp$ is the pressure asymptotic offset, $\np$ the pressure radial order, $\nmax = \numax/\Dnu-\epsilonp$ is the non-integer order at the frequency $\numax$ of maximum oscillation signal, and $\alpha$ is a term corresponding to the second order of the asymptotic expansion . 
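As an illustration of this selection, the short sketch below computes, for a few pressure radial orders around $\numax$, the frequency interval retained for the mixed-mode search; the values of $\Dnu$, $\numax$, $\epsilonp$ and $\alpha$ used here are placeholder inputs, roughly representative of a clump star (in practice $\epsilonp$ and $\alpha$ follow from the universal pattern).

```python
def dipole_search_windows(dnu, numax, eps_p, alpha, lo=0.06, hi=0.80, n_orders=3):
    """Frequency intervals (same unit as dnu and numax) kept for the
    mixed-mode search, i.e. the reduced-frequency cut lo <= x <= hi,
    for n_orders radial orders on each side of numax."""
    n_max = numax / dnu - eps_p
    n_c = int(round(n_max))
    windows = []
    for n_p in range(n_c - n_orders, n_c + n_orders + 1):
        base = n_p + eps_p + 0.5 * alpha * (n_p - n_max) ** 2
        windows.append(((base + lo) * dnu, (base + hi) * dnu))
    return windows

# Placeholder global parameters, roughly representative of a clump star
for w_lo, w_hi in dipole_search_windows(dnu=4.1, numax=35.0, eps_p=0.95, alpha=0.01):
    print(f"{w_lo:6.2f} - {w_hi:6.2f} muHz")
```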
In order to identify the different mixed modes present in each portions of the spectra, we smoothed the power density spectrum with a low-pass filter with a width equal to $\Dnu/100$ and located the local maxima. These local peaks are considered to be significant if their heights exceed a threshold corresponding to the rejection of the pure noise hypothesis with a confidence level of $99.9\%$. Since only $\ell = 3$ modes with very low visibility can be present at those frequencies, all the significant peaks are assumed to be $\ell = 1$ mixed modes. Finally, we fit all the modes that were identified on each portion of the spectra using the bayesian method described in Part $2.2$ of . Two types of peak profiles were used to fit the modes following the fact that resolved and unresolved mixed modes both exists in those spectra. Resolved peaks are modes for which the observing time is higher or equivalent to the mode lifetime and unresolved peaks are modes for which the mode lifetime is largely higher than the observing time. In the former case, the oscillation peak profile is represented as a sinus cardinal function [@2004SoPh..220..137C] otherwise, the adopted profile is that of a Lorentzian [e.g. @1990ApJ...364..699A]. In order to know if the different peaks correspond to resolved modes or not, we analyzed the number of frequency bins, which belongs to the peaks, that are above $8$ times the background level, thus corresponding to the presence of a signal with a confidence level of $99.9\%$. If there is more than one frequency bin that reach this level for a specific peak, then the corresponding mode is considered as being resolved otherwise it is identified as unresolved. An example of the performed adjustment is shown in Fig. \[fig:mode\_fitting\] for one portion of the star spectrum KIC$1995859$. The fit of the different portions of the spectra following these criteria allowed us to extract the frequencies of all significant modes for the selected objects. ![Power density spectrum for the star KIC1995859 (black line) as a function of frequency. The fit of the modes is shown with the solid green line. Here it consists of several Lorentzians, since all modes are identified as being resolved, fitted over the previously adjusted background.[]{data-label="fig:mode_fitting"}](Sectrapart_KIC001995859peakbagging.png){width="1.00\linewidth"} Gravity-mode glitches identification and characterization --------------------------------------------------------- In order to precisely characterize the glitches, we performed the stretching of the frequencies as described in Section \[Glitch\_influence\]. As stated before, it allows us to isolate the glitch influence from the coupling influence on the mixed-mode pattern. The results for the stars KIC$1995859$ and KIC$9332840$ are presented in Fig. \[fig:Stretched\_diagram\]. For the star KIC$1995859$, the modes exhibit a single nearly vertical ridge, showing that there is no obvious regular deviations. However, it is not the case for other stars like KIC$9332840$ for which a long modulation is observed as what was observed for the asymptotic development in Section \[Glitch\_influence\]. ![Stretched period echelle diagram of the stars KIC$1995859$ (left) and KIC$9332840$ (right). Error bars correspond to the $1-\sigma$ uncertainties. 

The dashed orange line indicates the stretched period reference expected if there were no glitch-induced shift from the asymptotic pattern.[]{data-label="fig:Stretched_diagram"}](ObsKIC001995859_322.png "fig:"){width="0.48\linewidth"} ![Stretched period echelle diagram of the stars KIC$1995859$ (left) and KIC$9332840$ (right). Error bars correspond to the $1-\sigma$ uncertainties. The dashed orange line indicates the stretched period reference expected if there were no glitch-induced shift from the asymptotic pattern.[]{data-label="fig:Stretched_diagram"}](ObsKIC009332840_300.png "fig:"){width="0.48\linewidth"}

Among the clump stars considered in the sample, we identified $15$ showing a clear regular modulation. In order to characterize the parameters of the modulation, we performed a fit based on Eq. (\[modes\_GlitchG\]). The glitch model we used is the step-function one (Eq. (\[GlitchST\_description\_inner\]) and Eq. (\[GlitchST\_description\_outer\])), since it has fewer free parameters. Moreover, the frequency range over which modes are detected is too narrow to reveal the decrease of the modulation amplitude with frequency expected for a glitch modeled by a Gaussian. The fit was performed with a $\chi^{2}$ estimator. The error bars were deduced from the inversion of the Hessian matrix. The fit was successful for $10$ stars; the results for three of them are shown in Fig. \[fig:Stretched\_diagram\_fit\]. The detailed results are listed in Table \[tab:table\_wide\].

![image](ObsKIC002714397_324fit.png){width="0.30\linewidth"} ![image](ObsKIC002995656_306fit.png){width="0.30\linewidth"} ![image](ObsKIC009332840_300fit.png){width="0.30\linewidth"}

| KIC number | $\deltapi$ (s) | $q$ | $\Period/\rescavity$ | $\Amp_{ST}$ | $\varepsilon$ |
|:----------:|:--------------:|:---:|:--------------------:|:-----------:|:-------------:|
| $1724879$ | $293.8$ | $0.22$ | $0.044\pm0.005$ | $2.09\pm0.12$ | $-2.34\pm0.03$ |
| $1726211$ | $322.0$ | $0.36$ | $0.024\pm0.010$ | $2.09\pm0.47$ | $1.86\pm0.05$ |
| $2156988$ | $280.3$ | $0.31$ | $0.009\pm0.025$ | $3.00\pm0.26$ | $-1.04\pm0.02$ |
| $2303367$ | $307.7$ | $0.31$ | $0.034\pm0.025$ | $2.40\pm0.03$ | $-3.04\pm0.03$ |
| $2583651$ | $280.2$ | $0.29$ | $0.016\pm0.005$ | $4.00\pm0.13$ | $-1.04\pm0.01$ |
| $2714397$ | $324.5$ | $0.39$ | $0.034\pm0.025$ | $3.00\pm0.06$ | $2.36\pm0.01$ |
| $2995656$ | $306.1$ | $0.29$ | $0.024\pm0.005$ | $2.90\pm0.18$ | $-0.34\pm0.02$ |
| $3117024$ | $248.0$ | $0.21$ | $0.020\pm0.005$ | $3.00\pm0.32$ | $0.56\pm0.02$ |
| $3946270$ | $304.0$ | $0.39$ | $0.036\pm0.010$ | $4.00\pm1.26$ | $-2.74\pm0.06$ |
| $9332840$ | $300.0$ | $0.24$ | $0.030\pm0.005$ | $2.70\pm0.14$ | $-1.54\pm0.01$ |

: Fitted glitch parameters for the $10$ clump stars with a successful fit.[]{data-label="tab:table_wide"}

Discussion and conclusion
=========================

As can be seen in Fig. \[fig:Stretched\_diagram\_fit\], the modulations we discovered for these clump stars have very long periods. Therefore, these periods correspond to a discontinuity located close to one of the extremities of the gravity waves resonant cavity. In red-giant clump stars, there are few discontinuities that could theoretically produce such a glitch. One of them is the region of the hydrogen-burning shell. However, during the clump evolutionary stage this region is situated at the middle of the gravity waves resonant cavity and is usually too broad to produce a sharp discontinuity [@2015ApJ...805..127C]. The second discontinuity we can consider is the signature of the first dredge-up.
However, like for the hydrogen-burning shell, this discontinuity is situated closer to the middle of the resonant cavity in intermediate-mass clump stars and it disappears during the red-giant branch stellar evolution phase for low-mass stars [@2015ApJ...805..127C]. The last physical process that can produce a discontinuity is the influence of the convective core on the radiative part of the stellar core. There are indeed several physical processes induced by the development of a convective core during the clump phase like overshooting, penetrative convection [@2017MNRAS.469.4718B] or chemical mixing that will influence the base of the radiative region and produce discontinuities [@2015MNRAS.453.2290B]. More broadly, the influence of chemical discontinuities, present at the base of the radiative zone of the stars and produced by the chemical burning, on the gravity-mode pattern, has been theoretically predicted for other types of pulsators like $\gamma$-Doradus stars [e.g. @2008MNRAS.386.1487M] and hot B subdwarfs stars [@2013EPJWC..4304005C; @2013ASPC..479..263C]. For the latter, recent observations have confirmed the theoretical predictions [@2018MNRAS.474.4709K]. These observations exhibit a modulation of the period echelle diagram that is close to what we observed in our study. Since these discontinuities are close to the inner limit of the gravity waves resonant cavity they could correspond to the ones we observed in this study. To conclude, we can say that we have conducted an analysis on the impact of core structural discontinuities on the mixed-mode pattern of red-giant stars. First, we performed a theoretical analysis, based on the asymptotic expansion developed by @2015ApJ...805..127C [2019], allowing us to predict the mixed-mode frequency positions in the presence of a discontinuity for different glitch models. We then selected a sample of red-giant stars and extracted the precise mixed-mode frequency positions in the oscillation spectra of those objects and fitted the theoretical glitch model on this observed mixed-mode pattern. We found a dozen red-giant clump stars exhibiting clear periodic frequency shifts in their mixed-mode pattern that correspond to the expected behavior of glitches. The characterization of the glitch parameters was possible for $10$ stars through the use of a fitting technique. We found that these observed glitches belong to a discontinuity situated near the extremity of the gravity waves resonant cavity and that they likely correspond to the influence of the stellar convective core. These results will have to be confirmed by increasing the sample size and improving the fitting of the glitch parameters in further studies. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalização in the context of the grants: PTDC/FIS-AST/30389/2017 & POCI-01-0145-FEDER-030389 and UID/FIS/04434/2013 & POCI-01-0145-FEDER-007672. MV also thanks Stéphane Charpinet for fruitful discussions.
{ "pile_set_name": "ArXiv" }
NBI-HE-98-30 DFNT-T 06-98 MPS-RR-98-10

[ **Crumpled Triangulations and Critical Points\ in $4D$ simplicial quantum gravity**]{}

[$\,^{a,}$[^1], [*M. Carfora*]{}$\,^{b,}$[^2], [*D. Gabrielli*]{}$\,^{c,}$[^3] and [*A. Marzuoli*]{}$\,^{b,}$[^4] ]{}

$^a$ The Niels Bohr Institute,\ Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark,

$^b$ Dipartimento di Fisica Nucleare e Teorica,\ Università degli Studi di Pavia,\ via A. Bassi 6, I-27100 Pavia, Italy,\ and\ Istituto Nazionale di Fisica Nucleare, Sezione di Pavia,\ via A. Bassi 6, I-27100 Pavia, Italy

$^c$ S.I.S.S.A.-I.S.A.S.,\ Via Beirut 2-4, 34013 Trieste, Italy

[**Abstract**]{} We estimate analytically the critical coupling separating the weak and the strong coupling regime in $4D$ simplicial quantum gravity to be located at $k_2^{crit}\simeq 1.3093$. By carrying out a detailed geometrical analysis of the strong coupling phase we argue that the distribution of dynamical triangulations with singular vertices and singular edges, dominating in such a regime, is characterized by distinct sub-dominating peaks. The presence of such peaks generates volume-dependent pseudo-critical points: $k_2^{crit}(N_4=32000)\simeq1.25795$, $k_2^{crit}(N_4=48000)\simeq1.26752$, $k_2^{crit}(N_4=64000)\simeq1.27466$, etc., which appear to be in good agreement with available Monte Carlo data. Under a certain scaling hypothesis we analytically characterize the (canonical) average value, $c_1(N_4;k_2)=<N_0>/N_4$, and the susceptibility, $c_2(N_4;k_2)=(<N_0^2>-<N_0>^2)/N_4$, associated with the vertex distribution of the $4$-D triangulations considered. Again, the resulting analytical expressions are found to be in quite good agreement with their Monte Carlo counterparts.

Introduction
============

In this article we shall characterize analytically the critical point separating the weak and the strong coupling regime in $4D$-simplicial quantum gravity by locating it at $k_2^{crit}\simeq1.3093$. The elementary techniques we develop here will allow us to get a rather detailed understanding of the geometry and the physics of the strong coupling phase of the theory. In particular we will show that the dynamics in such a phase is influenced by the presence of peaks in the distribution of singular triangulations. The latter are combinatorial manifolds characterized by the presence of vertices shared by a number of simplices diverging linearly with the volume of the triangulation, and possibly connected by a sub-singular edge. The peaks in question are parameterized by the fraction of total volume which is allocated around such singular vertices. In order of decreasing entropic relevance, the peaks are found, according to a well-defined pattern, at $k_2\simeq1.24465$, $k_2\simeq1.2744$, $k_2\simeq1.2938$, $k_2\simeq1.30746$, $k_2\simeq1.31762$, $k_2\simeq1.32545$, etc., asymptotically fading towards the weak coupling regime. By exploiting simple entropic arguments drawing from our recent work [@Carfora], [@Houches] and by making use of a certain scaling hypothesis, we show how such a collection of sub-dominating sets of singular triangulations significantly affects the dynamics of the transition between weak and strong coupling. We hope that our work offers the possibility of making progress in understanding the nature of the transition, one of the major issues controlling the validity of dynamical triangulations as the basis of a regularization scheme for gravity. Before embarking on this analysis we offer some general motivation for such a study.
The Model --------- Let $M$ be a closed n-dimensional, ($n\geq 2$), manifold of given topology. Let $Riem(M)$ and $Diff(M)$ respectively denote the space of Riemannian metrics $g$ on $M$, and the group of diffeomorphisms on $M$. In the continuum formulation of Euclidean quantum gravity one attempts to give meaning to a formal path integration over $Diff(M)$ equivalence classes of metrics in $Riem(M)$: 0.5 cm $$\begin{aligned} Z(\Lambda, G,M)=\int_{Riem(M)/Diff(M)}{\cal D}[g(M)]e^{-S_{g}[\Lambda,G,\Sigma]} \label{uno}\end{aligned}$$ 0.5 cm where $S_{g}[\Lambda,G,\Sigma]$ is the Einstein-Hilbert action associated with the Riemannian manifold $(M,g)$, [*viz.*]{}, 0.5 cm $$\begin{aligned} S_g[\Lambda,G,\Sigma]= \Lambda\int_M d^n\xi\sqrt{g}-\frac{1}{16\pi{G}}\int_{M} d^n\xi\sqrt{g}\, R\end{aligned}$$ 0.5 cm and ${\cal D}[g(M)]$ is some [*a priori*]{} distribution on $Riem(M)/Diff(M)$ describing the strong coupling statistics ($\Lambda\to0$, $G\to\infty$) of the set of Riemannian manifolds $\{(M,g)\}$ considered. We avoid here discussing well-known specific pathologies in dealing with (\[uno\]) and about which the reader can find abundant literature, and simply recall that in the Dynamical Triangulations approach to quantum gravity one attempts to give meaning to (\[uno\]) by replacing the continuum Riemannian manifold $(M,g)$ with a Piecewise-Linear manifold (still denoted by $M$) endowed with a triangulation $T_a\to{M}$ generated by gluing a (large) number of equilateral $n$-simplices $\sigma^n$. One approximates Riemannian structures by means of such triangulated manifolds by using a representative metric where each simplex $\sigma^n$ is a Euclidean equilateral simplex with sides of length $a$, (typically we set $a=1$). This metric is locally Euclidean everywhere on the $PL$-manifold except near the $(n-2)$ sub-simplices $\sigma^{n-2}$, (the [*bones*]{}), where the sum of the dihedral angles, $\theta(\sigma^n)$, of the incident $\sigma^n$’s is in excess (negative curvature) or in defect (positive curvature) with respect to the $2\pi$ flatness constraint, the corresponding deficit angle $r$ being defined by $r=2\pi-\sum_{\sigma^n}\theta(\sigma^n)$. If $K^{n-2}$ denotes the $(n-2)$-skeleton of $T\to{M^n}$, then $M^n\backslash{K^{n-2}}$ is a flat Riemannian manifold, and any point in the interior of an $r$- simplex $\sigma^r$ has a neighborhood homeomorphic to $B^r\times{C}(link(\sigma^r))$, where $B^r$ denotes the ball in ${\Bbb R}^n$ and ${C}(link(\sigma^r))$ is the cone over the link $link(\sigma^r)$, (the product $link(\sigma^r)\times[0,1]$ with $link(\sigma^r)\times\{1\}$ identified to a point). Note that for dynamical triangulations the deficit angles are generated by the string of integers, the [*curvature assignments*]{}, $\{q(k)\}_{k=0}^{N_{n-2}-1}$ providing the numbers of top-dimensional simplices incident on the $N_{n-2}$ distinct bones, [*viz.*]{}, $r(i)=2\pi-q(i)\arccos(1/n)$. 
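As a concrete illustration of this bookkeeping, the following sketch (a toy example, not an actual triangulation) evaluates the deficit angles generated by a short string of invented curvature assignments in dimension $n=4$, together with the average number of top-dimensional simplices incident on a bone, $b(n,n-2)=\frac{1}{2}n(n+1)N_n/N_{n-2}$; the only constraint imposed on the toy assignments is the incidence identity $\sum_i q(i)=10\,N_4$, which holds because each $4$-simplex contains $10$ triangular bones.

```python
import math

def deficit_angles(q_list, n=4):
    """Deficit angles r(i) = 2*pi - q(i)*arccos(1/n) generated by the
    curvature assignments q(i) of an equilateral dynamical triangulation."""
    dihedral = math.acos(1.0 / n)
    return [2.0 * math.pi - q * dihedral for q in q_list]

def average_incidence(N_n, N_nm2, n=4):
    """Average incidence b(n, n-2) = n(n+1)/2 * N_n / N_{n-2}."""
    return 0.5 * n * (n + 1) * N_n / N_nm2

# Invented curvature assignments over N_2 = 8 bones, chosen so that
# sum(q) = 10 * N_4 (each 4-simplex contains 10 triangular bones).
q = [3, 5, 5, 6, 4, 9, 5, 3]
print([round(r, 3) for r in deficit_angles(q)])   # positive = defect, negative = excess
N_4 = sum(q) / 10.0
print(average_incidence(N_4, len(q)))             # equals the mean of q, here 5.0
```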
0.5 cm By specializing to this setting the standard Regge calculus, the formal path integration (\[uno\]) is replaced on a dynamically triangulated PL manifold $M$, (of fixed topology), by the (grand-canonical) partition function [@Regge],[@Houches],[@Frohlich] $$\begin{aligned} Z[k_{n-2},k_n]=\sum_{T\in {\cal T}(M)} \frac{1}{C_T}\; e^{-k_nN_n+k_{n-2}N_{n-2}} \label{grandpartition}\end{aligned}$$ where $k_{n-2}$ and $k_n$ are two (running) couplings, the former proportional to the inverse gravitational coupling $1/G$, while the latter is a linear combination of $1/16\pi{G}$ and of the cosmological constant $\Lambda$. The summation in (\[grandpartition\]) is extended to the set $\{{\cal T}(M)\}$ of all distinct dynamical triangulations the PL-manifold $M$ can support, and it is weighted by the symmetry factor, $C_T$, of the triangulation: the order of the automorphism group of the graph associated with the triangulation $T$. Since symmetric triangulations are the exception rather then the rule, we shall assume $C_T=1$ in the estimates of the partition functions below. Thus, in the following we will omit the symmetry factor when writing the partition function. 0.5 cm One can introduce also the [*canonical*]{} partition function defined by $$\begin{aligned} W(k_{n-2})_{eff}=\sum_{T\in {\cal T}(N_n)} e^{k_{n-2}N_{n-2}}, \label{can}\end{aligned}$$ where the summation is extended over all distinct dynamical triangulations with given $N_n$, ([*i.e.*]{}, at fixed volume), of a given PL-manifold $M$. Finally, we shall consider the [*micro-canonical*]{} partition function $$\begin{aligned} W[N_{n-2},b(n,n-2)]=\sum_{T\in {\cal T}(N_n;N_{n-2})} 1,\end{aligned}$$ where the summation is extended over all distinct dynamical triangulations with given $N_n$ and $N_{n-2}$, [*i.e.*]{}, at fixed volume and fixed [*average incidence*]{} $$\begin{aligned} b(n,n-2)=\frac{1}{2}n(n+1)(N_n/N_{n-2}),\end{aligned}$$ of a given PL-manifold $M$. The micro-canonical partition function is simply the number of distinct dynamical triangulations with given volume ($\propto{N_n}$) and fixed average curvature ($\propto{b(n,n-2)}$), of a given PL- manifold $M$. In other words, $W[N_{n-2},b(n,n-2)]$ is the [*entropy function*]{} for the given set of dynamical triangulations: it provides the discretized counterpart of the a priori distribution ${\cal D}[g(M)]$ describing the strong coupling statistics ($k_n\to0$, $k_{n-2}\to0$) of the set of Riemannian manifolds $\{(M,g)\}$. 0.5 cm Recently there have been a number of significant advances in $3$- and $4$-dimensional simplicial quantum gravity that fit together in a coherent whole; roughly speaking, these results are related to: (i) a deeper understanding of the geometry of $n\geq3$-dimensional dynamical triangulations; (ii) the study of simpler models mimicking quite accurately the critical structure of simplicial quantum gravity; (iii) more refined computer simulations of the phase structure of the $4$- dimensional theory. These results imply that both in dimension $n=3$ and $n=4$, simplicial quantum gravity has two geometrically distinct phases parameterized by the value of the inverse gravitational coupling $k_{n-2}$. In the weak coupling phase (large values of $k_{n- 2}$) we have a dominance of PL-manifolds which collapse to branched polymer structures with an Hausdorff dimension $d_H=2$ and an entropy exponent (analogous to the string susceptibility of the $2$- dimensional theory) $\gamma=1/2$. 
In this phase the theory has a well defined continuum limit which is independent of any fine tuning of the (inverse) gravitational constant $k_{n-2}$. We are not really interested in this continuum limit, even if it exists. The situation is not unusual from the point of view of lattice theories. For instance in compact $U(1)$ gauge theories one hits the trivial Coulomb phase for all $\beta>\beta_0$. In the strong coupling phase (small values of $k_{n-2}$), we have a dominance of crumpled manifolds: the typical configuration sampled by the computer simulations is that of a triangulation with a few vertices on which most of the top- dimensional simplices are incident, such presence of [*singular vertices*]{}, typically connected by a sub-singular edge, seems to be a signature of the strong coupling phase [@Catterall]. Note that the word [*singular vertex*]{} is used in DT theory with a meaning quite different from the accepted meaning adopted in PL geometry. What is actually meant is that a metric ball around any such a vertex, of radius equal to the given lattice spacing, has a volume that grows proportionally to the volume of the whole PL-manifold. This behavior indicates that the Hausdorff dimension of the typical triangulation in the strong coupling phase is very large if not infinite. There is strong evidence that the transition between weak and strong coupling, marked by a critical value $k_{n- 2}^{crit}$, is of a first order nature in the $n=3$-dimensional case. In dimension $n=4$, the original numerical simulations seemed to indicate a second order nature of the transition, a result that invited positive speculations on the possibility that simplicial quantum gravity could indeed provide a reliable regularization of Euclidean quantum gravity. However, recent and more accurate analyses [@Krzywicki] of the Monte Carlo simulations seem rather to point toward a first order nature of the transition. These results are not definitive since the latent heat at the critical point is very small as compared to the $3$D-case, so that the question remains open whether any further increase in the sophistication of the simulations will definitively establish, within the limit of the reached accuracy, the nature of the transition. In any case, all recent numerical results [@singedge] strongly indicates that the phase transition in $4$-dimensional simplicial gravity is associated with the creation of singular geometries. These results have put to the fore the basic problems of theory, and in this sense important questions abound: What is the geometrical nature of the crumpled phase? What is the mechanism driving the transition from the polymer phase to the crumpled phase? Is there any geometrical intuition behind the first or higher order nature of the transition? Is it possible to take dynamical control over the occurrence of singular vertices and edges which otherwise would entropically dominate? Some of these questions can be systematically addressed by a detailed but otherwise elementary discussion of the geometry of dynamical triangulations along the lines of [@Carfora], [@Houches]. This geometrical viewpoint has turned out to be useful and interesting in terms of providing an analytical framework within which discuss simplicial quantum gravity while at the same time maintaining a strong contact with computer simulations. 
In this paper we discuss in detail how this approach can be further exploited to estimate analytically the value of $k_2^{crit}$, and describing the geometrical properties of the strong coupling phase of the $4$- dimensional theory. Large volume asymptotics of the partition functions =================================================== For the convenience of the reader we recall in this section a few basic results of [@Carfora] that we are going to use later on. The discretized distribution $W[N_{n-2},b(n,n-2)]$ is one of the objects of main interest in simplicial quantum gravity, and for $n\geq3$ an exact evaluation of $W[N_{n-2},b(n,n-2)]$ is an open and very difficult problem. However, since in (\[grandpartition\]) and (\[can\]) one is interested in the large volume limit, what really matters, as far as the criticality properties of (\[grandpartition\]) are concerned, is the asymptotic behavior of $W[N_{n-2},b(n,n-2)]$ for large $N_n$. This makes the analysis of $W[N_{n-2},b(n,n-2)]$ somewhat technically simpler, and according to [@Carfora] one can actually estimate its leading asymptotics with the relevant sub-leading corrections. If we consider an $n$-dimensional ($n\geq2$) PL-manifold $M$ of given fundamental group $\pi_1(M)$), then the distribution $W[N_{n-2},b(n,n-2)]$ of distinct dynamical triangulations, with given $N_{n-2}$ bones and average curvature $b\equiv b(n,n-2)$, factorizes according to $$\begin{aligned} W[N_{n-2},b(n,n-2)]=p_{N_{n-2}}^{curv} {\langle}Card\{T_a^{(i)}\}_{curv}{\rangle}, \label{rulesums}\end{aligned}$$ where $p_{N_{n-2}}^{curv}$ is the number of possible distinct curvature assignments $\{q(\alpha)\}_{\alpha=0}^{N_{n-2}}$ for triangulations $\{T_a\}$ with $N_{n-2}$ bones and given average incidence $b(n,n-2)$, [*viz.*]{}, $$\begin{aligned} \{q(\alpha)\}_{\alpha=0}^{N_{n-2}}\not= \{q(\beta)\}_{\beta=0}^{N_{n-2}} \not=\{q(\gamma)\}_{\gamma=0}^{N_{n-2}}\not=\ldots,\end{aligned}$$ while $<Card\{T_a^{(i)}\}_{curv}>$ is the average (with respect to the distinct curvature assignments) of the number of distinct triangulations sharing a common set of curvature assignments, (for details, see section 5.2 pp.102 of [@Carfora]). This factorization allows a rather straightforward asymptotic analysis of $W[N_{n- 2},b(n,n-2)]$, and in the limit of large $N_n$ we get[@Carfora] (theorem 6.4.2, pp. 131 with the expression $<Card\{T_a^{(i)}\}_{curv}>$ provided by theorem 5.2.1, pp. 106) $$\begin{aligned} W[N_{n-2},b(n,n-2)]&\simeq& \frac{W_{\pi}}{\sqrt{2\pi}}\cdot e^{(\alpha_nb(n,n-2)+\alpha_{n-2})N_{n-2}} \nonumber\\ &&\cdot\sqrt{\frac{(b-\hat{q}+1)^{1-2n}}{(b-\hat{q})^3}} \cdot {\left [ \frac{(b-\hat{q}+1)^{b-\hat{q}+1}}{(b-\hat{q})^{b-\hat{q}}} \right ] }^{N_{n-2}} \label{asintotica}\\ &&e^{[-m(b(n,n-2))N^{1/n_H}_n]} \left(\frac{b(n,n-2)}{n(n+1)}N_{n-2}\right)^{D/2} {N_{n-2}}^{\tau(b)-\frac{2n+3}{2}}. \nonumber \label{notation}\end{aligned}$$ The notation here is the following: $W_{\pi}$ is a topology dependent parameter of no importance for our present purposes (see [@Carfora] for its explicit expression), $\alpha_{n-2}$ and $\alpha_n$ are two constants depending on the dimension $n$, (for instance, for $n=4$, $\alpha_n=-\arccos(1/n)|_{n=4}$, $\alpha_{n-2}=0$); $\hat{q}$ is the minimum incidence order over the bones (typically $\hat{q}=3$); $D{\doteq}dim[Hom(\pi_1(M),G)]$ is the topological dimension of the representation variety parameterizing the set of distinct dynamical triangulations approximating locally homogeneous $G$-geometries, ($G\subset SO(n)$). 
Finally, $m(b)\geq0$ and $\tau(b)\geq0$ are two parameters depending on $b(n,n-2)$ which, together with $n_H>1$, characterize the sub-leading asymptotics of $W[N_{n-2},b(n,n-2)]$. In particular, note that $$\begin{aligned} e^{[-m(b(n,n-2))N^{1/n_H}_n]} (\frac{b(n,n-2)}{n(n+1)}N_{n-2})^{D/2} {N_{n-2}}^{\tau(b)}. \label{asicurv}\end{aligned}$$ is the asymptotics associated with ${\langle}Card\{T^{(i)}\}_{curv}{\rangle}$. The remaining part of (\[asintotica\]) is the leading exponential contribution coming from the large $N_n$ behavior of the distribution $p_{N_{n-2}}^{curv}$ of the possible curvature assignments. This latter term provides the correct behavior of the large volume limit of dynamically triangulated manifolds, an asymptotics that matches nicely with the existing Monte Carlo simulations, ([*e.g.*]{}, see [@Carfora], section 7.1, pp.160). 0.5 cm While the exponential asymptotics is basically under control, it must be stressed that some of the most delicate aspects of the theory are actually contained in ${\langle}Card\{T^{(i)}\}_{curv} {\rangle}$. Roughly speaking, the set of triangulations $\{T^{(i)}\}_{curv}$ can be rather directly interpreted as a finite dimensional approximation of the [*moduli space*]{} of constant curvature metrics in smooth Riemannian geometry, (the standard example being the parameterization of the moduli space of surfaces of genus $\geq2$ with the set of inequivalent constant curvature $(=-1)$ metrics admitted by such surfaces; due to rigidity phenomena, such example is to be taken with care for $n\geq3$). With this geometric interpretation in mind we can consider $$\begin{aligned} \frac{\ln {\langle}Card\{T^{(i)}\}_{curv} {\rangle}}{\ln{N_n}} {\simeq} - \frac{m(b(n,n-2))N_n^{1/n_H}}{\ln{N_n}}+ \frac{D}{2}+\tau(n)~~~~~\mbox{for}~N_n\to \infty \label{covdim}\end{aligned}$$ as the (formal) covering dimension of such a moduli space. Clearly, whenever $m(b)>0$ (and $n_H$ finite or $O(\ln{N_n})$), such covering dimension is singular. This signals the fact that in the corresponding range of the parameter $b$, dynamical triangulations fail to approximate in the large volume limit any smooth Riemannian manifold. The critical incidence {#critica} ---------------------- The parameters $m(b)$, $\tau(b)$, and $n_H$, characterizing the covering dimension and the large volume asymptotics (\[asicurv\]) are not yet explicitly provided by the analytical results of [@Carfora]. By exploiting geometrical arguments, one can only prove[@Carfora] (theorem 5.2.1, pp. 106) an existence result to the effect that if $n\geq3$, there is a [*critical value*]{} $b_0(n)$, of the average incidence $b(n,n-2)$, to which we can associate a critical value $k_{n-2}^{crit}$ of the inverse gravitational coupling, such that $$\begin{aligned} m(b)=0, \label{muno}\end{aligned}$$ for $b(n,n-2)\leq b_0(n)$; whereas $$\begin{aligned} m(b)>0, \label{mdue}\end{aligned}$$ for $b_0(n)< b(n,n-2)$. In other words, for $b<b_0(n)$ the sub-leading asymptotics in (\[asintotica\]) is at most polynomial, whereas for $b>b_0(n)$ this asymptotics becomes sub-exponential as $N_n$ goes to infinity, (note that in the $2$-dimensional case (\[asintotica\]) has always a sub-leading polynomial asymptotics). This change in the sub-leading asymptotics qualitatively accounts for the jump from the strong to the weak coupling phase observed in the real system during Monte Carlo simulations. However, the lack of an explicit expression for $m(b)$ hampers a deeper analysis of the nature of this transition. 
In particular, one is interested in the way the parameter $m(b(n,n-2))$ approaches $0$ as $b(n,n-2)\to{b_0(n)}$, since adequate knowledge in this direction would provide the order of the phase transition. It is clear that a first necessary step in order to discuss the properties of $m(b(n,n-2))$ is to provide a constructive geometrical characterization of the critical average incidence $b_0(n)$, and not just an existence result. As far as the other parameter $\tau(n)$ is concerned, the situation is on more firm ground. $\tau(n)$ characterizes the sub-leading polynomial asymptotics in the weak coupling phase, and recently[@Gabrielli], an analysis of the geometry of dynamical triangulations in this phase has provided convincing analytical evidence that $\tau(n)-(2n+3)/2+3=1/2$. As expected, this corresponds to a dominance, in the weak coupling phase, of branched polymers structures. Canonical Averages and the Curvature Susceptibility {#canone} --------------------------------------------------- If we consider the weighted distribution $W[N_{n-2},b(n,n- 2)]\exp[k_{n-2}N_{n-2}]$, characterizing the canonical partition function (\[can\]), then it is straightforward to check that, as $N_n\to\infty$, this distribution is strongly peaked around triangulations with an average incidence $b^*(n,n-2;k_{n-2})$ given by (see [@Carfora], eqn. (6.39), pp. 134) $$\begin{aligned} b^*(n,n-2;k_{n-2})=3(\frac{A(k_{n-2})}{A(k_{n-2})-1}), \label{solution}\end{aligned}$$ where for notational convenience we have set $$\begin{aligned} A(k_{n-2})&\doteq& { \left [\frac{27}{2}e^{k_{n-2}}+1+ \sqrt{(\frac{27}{2}e^{k_{n-2}}+1)^2-1} \right ]}^{1/3}+ \nonumber\\ &&{ \left [\frac{27}{2}e^{k_{n-2}}+1- \sqrt{(\frac{27}{2}e^{k_{n-2}}+1)^2-1} \right ]}^{1/3}-1.\end{aligned}$$ This remark allows to compute, via a uniform Laplace estimation, the large volume asymptotics of the canonical partition function $W(k_{n- 2})_{eff}=\sum_{T\in {\cal T}(N_n)} e^{k_{n-2}N_{n-2}}$. A discussion of this asymptotics at the various orders is rather delicate and the reader can find the details in [@Carfora], (chapter 6, theorem 6.6.1). For our purposes it is sufficient to consider the leading order expression which can be readily obtained, starting from the micro-canonical partition function, by means of a standard saddle point evaluation, [*viz.*]{} $$\begin{aligned} W(N_n, k_{n-2})_{eff}&=& c_n\left( \frac{(A(k_{n-2})+2)}{3A(k_{n-2})} \right)^{-n} N_n^{\tau(n)+D/2-n- 1}e^{[-m(b^*(n,n-2;k_{n-2}))N^{1/n_H}_n]} \cdot\nonumber\\ &&\cdot e^{\left[[\frac{1}{2}n(n+1)\ln\frac{A(k_{n-2})+2}{3}]N_n \right]}\Big(1+O(N^{-3/2}_n)\Big), \label{regular}\end{aligned}$$ where $c_n$ is a scaling factor not depending from $k_2$. 0.5 cm As the inverse gravitational coupling $k_{n-2}$ varies, the average curvature correspondingly changes according to (\[solution\]). It follows that there is a well defined critical value, $k_{n-2}^{crit}$, solution of the equation $$\begin{aligned} b_0(n)=3(\frac{A(k_{n-2})}{A(k_{n-2})-1}), \label{criteqn}\end{aligned}$$ for which $b^*(n,n-2;k_{n-2})=b_0(n)$, where $b_0(n)$ is the critical average incidence, (see (\[muno\]) and (\[mdue\])). This $k_{n-2}^{crit}$ describes the transition between the strong coupling phase ($k_{n-2}<k_{n- 2}^{crit}$) and the weak coupling phase ($k_{n-2}>k_{n-2}^{crit}$) associated with the two distinct sub-leading asymptotics regimes of (\[regular\]). 
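Since the expressions above will eventually be compared with Monte Carlo data, it is worth noting that they are trivial to evaluate numerically. The following minimal Python sketch (ours, not part of the analysis of [@Carfora]; the value fed in for the critical incidence $b_0$ is a purely illustrative input, its geometric characterization being the subject of the next sections) evaluates $A(k_{2})$ and the saddle-point incidence $b^*(4,2;k_2)$ of (\[solution\]), and solves (\[criteqn\]) for $k_{2}^{crit}$ by bisection, using the fact that $b^*$ is monotonically decreasing in $k_2$.

\begin{verbatim}
# Minimal numerical sketch: A(k_2), b*(4,2;k_2) of eq. (solution), and a
# bisection solve of eq. (criteqn) once an (external, here illustrative)
# value of the critical incidence b_0 is supplied.
import math

def A(k2):
    g = 13.5 * math.exp(k2) + 1.0
    s = math.sqrt(g * g - 1.0)
    return (g + s) ** (1.0 / 3.0) + (g - s) ** (1.0 / 3.0) - 1.0

def b_star(k2):
    a = A(k2)
    return 3.0 * a / (a - 1.0)

def k2_crit(b0, lo=-10.0, hi=10.0):
    # b_star decreases monotonically with k2, so plain bisection suffices
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if b_star(mid) > b0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    for k2 in (0.0, 0.5, 1.0, 1.5):
        a = A(k2)
        # consistency check: the Cardano-type closed form above implies
        # (A-1)(A+2)^2 = 27 exp(k2)
        assert abs((a - 1.0) * (a + 2.0) ** 2 - 27.0 * math.exp(k2)) < 1e-8
        print(f"k2={k2:4.1f}  A={a:.6f}  b*(4,2)={b_star(k2):.6f}")
    b0 = 110.0 / 27.0   # illustrative input only; b_0 is characterized below
    print("k2_crit for this b0:", k2_crit(b0))
\end{verbatim}

With the illustrative input $b_0=110/27$ the bisection returns $k_2^{crit}\simeq1.24$; we stress that this number is nothing more than the numerical inversion of (\[criteqn\]) for that particular input.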
0.5 cm The explicit geometric characterization of $b_0(n)$ and the evaluation of the corresponding critical value $k_{2}^{crit}$ in the $4$-dimensional case are among the most important issues that we discuss in this paper. In order to compare the geometrical results we obtain with the data coming from recent Monte Carlo simulation it will be useful to have handy the expressions of the free energy $\ln{W_{eff}(N_4,k_2)}$, of the (canonical) average of the number of bones $<N_2>=\partial\ln{W_{eff}(N_4,k_2)}/\partial{k_2}$, and of the associated [*curvature-curvature*]{} correlator $[{\langle}N_2^2{\rangle}- {\langle}N_2{\rangle}^2]=\partial^2\ln{W_{eff}(N_4,k_2)}/\partial{k_2}^2$. 0.5 cm The large volume asymptotics of the (canonical) free energy is readily obtained from (\[regular\]); by setting $n=4$ and discarding the inessential constant terms we get (in the saddle-point approximation used above) $$\begin{aligned} \ln{W(N, k_{2})_{eff}}= 10N_4\ln\frac{A(k_{2})+2}{3}-m(k_2)N^{1/n_H}. \label{frenergy}\end{aligned}$$ 0.5 cm The canonical average $<N_2>$ follows from differentiating (\[frenergy\]) with respect to the inverse gravitational coupling $k_2$, $$\begin{aligned} {\langle}N_2{\rangle}=10\frac{A_1(k_2)}{A(k_2)+2}N_4- N_4^{1/n_H}\frac{\partial{m(k_2)}}{\partial{k_2}}, \label{firstderiv}\end{aligned}$$ where we have set $$\begin{aligned} A_1(k_2) &\doteq& \frac{\partial{A(k_2)}}{\partial{k_2}}\nonumber\\ &= &\frac{9}{2}e^{k_2} \left[1+\frac{\frac{27}{2}e^{k_{2}}+1}{\sqrt{(\frac{27}{2}e^{k_{2}}+1) ^2-1}}\right]\cdot { \left [\frac{27}{2}e^{k_{2}}+1+ \sqrt{(\frac{27}{2}e^{k_{2}}+1)^2-1} \right ]}^{-2/3}+ \nonumber\\ && +\frac{9}{2}e^{k_2} \left[1- \frac{\frac{27}{2}e^{k_{2}}+1}{\sqrt{(\frac{27}{2}e^{k_{2}}+1)^2- 1}}\right]\cdot { \left [\frac{27}{2}e^{k_{2}}+1- \sqrt{(\frac{27}{2}e^{k_{2}}+1)^2-1} \right ]}^{-2/3}.\end{aligned}$$ 0.5 cm Note that for a $4$-dimensional PL manifold $M$ of Euler characteristic $\chi(M)$, we have $N_0=\frac{N_2}{2}-N_4+\chi$. Thus, from ${\langle}N_2{\rangle}$ we immediately get the expression for the first normalized cumulant of the distribution of the number of vertices of the triangulation, [*viz.*]{} $$c_1(N_4;k_2)\doteq\frac{{\langle}N_0{\rangle}}{N_4}= \frac{5A_1(k_2)}{A(k_2)+2}-1 -N_4^{1/n_H- 1}\frac{\partial{m(k_2)}}{\partial{k_2}} \label{cumulant1}$$ 0.5 cm This is a typical quantity monitored in Monte Carlo simulation, and later on we will discuss how (\[cumulant1\]) actually compares with respect existing numerical data. 
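As a check on the transcription of the rather long expression for $A_1(k_2)$, one can compare it with a central finite-difference derivative of $A(k_2)$; the sketch below (ours) does this and also prints the $A$-dependent part of (\[cumulant1\]), with the subleading $m$-dependent term deliberately left out.

\begin{verbatim}
# Sketch of a transcription check (assumption: central finite differences
# are accurate enough at step h=1e-6): compares the analytic A_1(k_2)
# quoted above with a numerical derivative of A(k_2), and prints the
# A-dependent part 5 A_1/(A+2) - 1 of the first cumulant (cumulant1).
import math

def A(k2):
    g = 13.5 * math.exp(k2) + 1.0
    s = math.sqrt(g * g - 1.0)
    return (g + s) ** (1.0 / 3.0) + (g - s) ** (1.0 / 3.0) - 1.0

def A1_analytic(k2):
    g = 13.5 * math.exp(k2) + 1.0
    s = math.sqrt(g * g - 1.0)
    pref = 4.5 * math.exp(k2)                     # (9/2) e^{k2}
    return (pref * (1.0 + g / s) * (g + s) ** (-2.0 / 3.0)
            + pref * (1.0 - g / s) * (g - s) ** (-2.0 / 3.0))

def A1_numeric(k2, h=1e-6):
    return (A(k2 + h) - A(k2 - h)) / (2.0 * h)

if __name__ == "__main__":
    for k2 in (0.0, 0.8, 1.2):
        a, a1 = A(k2), A1_analytic(k2)
        assert abs(a1 - A1_numeric(k2)) < 1e-6
        print(f"k2={k2:3.1f}  A1={a1:.6f}  5A1/(A+2)-1={5*a1/(a+2)-1:.6f}")
\end{verbatim}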
Finally, the curvature susceptibility $$\begin{aligned} \frac{{\langle}N_2^2{\rangle}- {\langle}N_2{\rangle}^2}{N_4} =\frac{1}{N_4}\; \frac{\partial^2\, \ln{W_{eff}(N_4,k_2)}}{\partial{k_ 2}^2}\end{aligned}$$ is explicitly computed as $$\begin{aligned} 4c_2(N_4;k_2)&=& \frac{{\langle}N_2^2{\rangle}-{\langle}N_2{\rangle}^2}{N_4}\nonumber\\ &=& 10\frac{(A(k_2)+2)A_2(k_2)-A_1(k_2)^2}{(A(k_2)+2)^2} -N_4^{1/n_H-1}\frac{\partial^2{m(k_2)}}{\partial{k_2}^2}, \label{cumulant2}\end{aligned}$$ where $c_2(N_4;k_2)\doteq\frac{{\langle}N_0^2{\rangle}-{\langle}N_0{\rangle}^2}{N_4}$ is the second normalized cumulant of the distribution of the number of vertices of the triangulation, and where we have set $$\begin{aligned} && A_2(k_2)\doteq\frac{\partial^2{A(k_2)}}{\partial{k_2}^2}= A_1(k_2)+\nonumber\\ && + \frac{\frac{243}{4}e^{2k_{2}}} {[(\frac{27}{2}e^{k_{2}}+1)^2-1]^{3/2}}{ \left[\frac{27}{2}e^{k_{2}}+1- \sqrt{(\frac{27}{2}e^{k_{2}}+1)^2-1} \right]}^{-2/3}-\nonumber\\ && - \frac{\frac{243}{4}e^{2k_{2}}} {[(\frac{27}{2}e^{k_{2}}+1)^2-1]^{3/2}}{\left[\frac{27}{2}e^{k_{2}}+1+ \sqrt{(\frac{27}{2}e^{k_{2}}+1)^2-1} \right]}^{-2/3} -\nonumber\\ && -\frac{81}{2}e^{2k_2}{ \left[\frac{27}{2}e^{k_{2}}+1+ \sqrt{(\frac{27}{2}e^{k_{2}}+1)^2-1} \right]}^{- 5/3}\left[1+\frac{\frac{27}{2}e^{k_{2}}+1}{\sqrt{(\frac{27}{2}e^{k_{2} }+1)^2-1}}\right]^2-\nonumber\\ && -\frac{81}{2}e^{2k_2}{\left[\frac{27}{2}e^{k_{2}}+1- \sqrt{(\frac{27}{2}e^{k_{2}}+1)^2-1} \right]}^{-5/3}\left[1- \frac{\frac{27}{2}e^{k_{2}}+1}{\sqrt{(\frac{27}{2}e^{k_{2}}+1)^2- 1}}\right]^2 \label{der2}\end{aligned}$$ A scaling hypothesis -------------------- Clearly the above expressions for $c_1(N_4;k_2)$ and $c_2(N_4;k_2)$ are useless if we do not specify how $m(k_2)$ depends on the inverse gravitational coupling $k_2$. Since according to theorem 5.2.1 of [@Carfora], $m(k_2)\to0$ as $b(k_2)$ approaches a critical incidence $b_0(4)$, (henceforth denoted by $b_0$), the simplest [*hypothesis*]{} we can make is that, for $(b(k_2)-b_0)\to0^+$, $m(k_2)$ scales to zero according to a power law given by $$m(k_2)=\frac{1}{\nu}(\frac{1}{b(k_2)}-\frac{1}{b_0})^{\nu}, \label{assumption}$$ where $0<\nu<1$ is a critical exponent to be determined; (the factor $1/\nu$ is inserted for later convenience). The expression (\[regular\]) for the canonical partition function $W[N_{n-2}, b(n,n-2)]$ is a large volume ($N_4$) asymptotics evaluated at fixed volume. It contains a non- trivial subleading asymptotics governed by $m(k_2)$. The net effect of this subleading term is clearly visible in the expressions (\[cumulant1\]) and (\[cumulant2\]) of the two cumulants, and shows that in order to capture the behavior of $c_1(N_4;k_2)$ and $c_2(N_4;k_2)$ as $k_2\to{k_2}^{crit}$, (\[assumption\]) is not sufficient. It must be combined with a finite scaling hypothesis telling us how $m(k_2)$ scales with the volume $N_4$, as $b(k_2)\to{b_0}$. From the asymptotics (\[regular\]), and the expression (\[cumulant1\]) for the first cumulant, it easily follows that the simplest, if not the most natural, hypothesis we can make is to assume that $m(k_2)$ scales asymptotically with the volume according to $$\begin{aligned} \lim_{ \matrix{{\scriptstyle N_4\to\infty}, \cr {\scriptstyle (k_2-{k_2^{crit}})\to0^-}}} |\frac{1}{b(k_2)}-\frac{1}{b_0}|^{\nu-1}\cdot{N_4}^{\frac{1}{n_H}- 1}=1, \label{scaling}\end{aligned}$$ where according to theorem 5.2.1 of [@Carfora] $n_H>1$. 
This implies that, when $b(k_2)\to{b}_0$, $m(k_2)$ scales as $$\begin{aligned}
m(k_2)\simeq N_4^{-\frac{\nu(n_H-1)}{n_H(1-\nu)}}.\end{aligned}$$ Together with $m(k_2)N_4^{1/n_H}\to0$ as $b(k_2)\to{b_0}$, (\[scaling\]) yields $(1/n_H)<\nu<1$, a finite-size scaling relation connecting the critical exponents $\nu$ and $n_H$. Introducing this ansatz in (\[cumulant1\]) and (\[cumulant2\]) we explicitly obtain $$\label{Rumulant1}
c_1(N_4;k_2)\doteq\frac{{\langle}N_0 {\rangle}}{N_4}
=\frac{5A_1(k_2)}{A(k_2)+2}-\frac{1}{3}\frac{A_1(k_2)}{A^2(k_2)}-1,$$ and $$\begin{aligned}
c_2(N_4;k_2)&=&\frac{{\langle}N_0^2{\rangle}-{\langle}N_0{\rangle}^2}{N_4}\label{Rumulant2}\\
&=& \frac{5}{2}\frac{(A(k_2)+2)A_2(k_2)-A_1(k_2)^2}{(A(k_2)+2)^2}-
\frac{1}{12}\frac{A_2(k_2)A(k_2)-2A_1(k_2)}{A^3(k_2)}+\nonumber\\ &&
+\frac{|\nu-1|}{36}\;\frac{A_1^2(k_2)}{A^4(k_2)}\;
{\left\vert
\frac{A(k_2)-1}{3A(k_2)}-\frac{1}{b_0}
\right\vert}^{-1}.
\nonumber\end{aligned}$$ Note that in this latter expression the only undetermined parameters are the critical exponent $\nu$ and the critical incidence $b_0$. In the following paragraphs we provide an explicit geometric characterization of such $b_0$. The only remaining unknown quantity is then $\nu$. Anticipating the conclusion of the paper, it turns out that it is possible to choose a value of $\nu$ ($\approx 0.94$) which leads to quite a good agreement between (\[Rumulant1\])–(\[Rumulant2\]) and the available Monte Carlo data for the distributions of these two cumulants.

The Geometry of the strong coupling phase
=========================================

A PL manifold endowed with a dynamical triangulation is a particular example of an Alexandrov space, [*i.e.*]{}, a finite-dimensional inner metric space with a lower curvature bound in the distance-comparison sense, (a brief introduction with the relevant references can be found in [@Carfora], section 3.2). The natural topology specifying in which sense dynamical triangulations approximate Riemannian manifolds is associated with a Hausdorff-like distance introduced by Gromov [@Gromov], which is a direct generalization of the classical Hausdorff distance between compact subsets of a metric space. The role of this topology stems from the fact that there are many geometric constructions in dynamical triangulations theory that are close in Gromov-Hausdorff topology, but not in smooth Riemannian geometry. In [@Carfora] we proved that every Riemannian manifold (of bounded geometry) can be uniformly approximated in this topology by dynamical triangulations, ([@Carfora], section 3.3, Th.3.3.1); the converse question, namely whether every dynamical triangulation approximates, as the number of simplices goes to $\infty$, an $n$-dimensional Riemannian manifold, is deeply tied to understanding the structure of the thermodynamical behavior of the large volume limit of the set of possible dynamical triangulations. It is interesting to note that the geometry of the set of all possible dynamical triangulations of a manifold of given topology is a subject of which little is actually known, even in dimension two. Recently, in a remarkable paper [@Thurston], W. Thurston has shed some light on the two-dimensional case by showing that the space of triangulations (of positive curvature) has the rich geometric structure of a complex hyperbolic manifold.
We do not need to reach, in this work, such a level of sophistication and in order to determine the critical average incidence, $b_0$, we discuss mostly the kinematical properties of the space of all possible dynamical triangulations admitted by an $n$- dimensional PL manifold $M$ of given topology. 0.5 cm Let $(M,T_a)$ be a dynamically triangulated manifold, then the $f$- vector of the triangulation is the string of integers $(N_0(T_a),N_1(T_a),\ldots,N_n(T_a))$, where $N_i(T_a)\in {\Bbb N}$ is the number of $i$-dimensional sub- simplices $\sigma^i$ of $T_a$. This vector is constrained by the Dehn- Sommerville relations $$\begin{aligned} \sum_{i=0}^n(-1)^iN_i(T)=\chi(T), \label{eleven}\end{aligned}$$ $$\begin{aligned} \sum_{i=2k-1}^n(-1)^i\frac{(i+1)!}{(i-2k+2)!(2k-1)!}N_i(T)=0, \label{twelve}\end{aligned}$$ if $n$ is even, and $1\leq k\leq n/2$. Whereas if $n$ is odd $$\begin{aligned} \sum_{i=2k}^n(-1)^i\frac{(i+1)!}{(i-2k+1)!2k!}N_i(T)=0, \label{thirteen}\end{aligned}$$ with $1\leq k\leq (n-1)/2$, and where $\chi(T)$ is the Euler- Poincaré characteristic of $T$. It is easily verified that the relations (\[eleven\]), (\[twelve\]), (\[thirteen\]) leave $\frac{1}{2}n-1$, ($n$ even) or $\frac{1}{2}(n-1)$ unknown quantities among the $n$ ratios $N_1/N_0,\ldots,N_n/N_0$, [@Dav4]. Thus, in dimension $n=2,3,4$, the datum of $N_n$, and of the number of bones $N_{n-2}$, fixes, through the Dehn-Sommerville relations all the remaining $N_i(T)$. These extremely simple and perhaps even naive-sounding remarks turn out to be quite powerful in providing information on the global metrical properties of the underlying PL-manifold. Not only, as is obvious, on the volume ($\propto{N_n(T_a)}$), and on the average curvature ($\propto\frac{1}{2}n(n+1){N_n(T_a)/N_{n-2}(T_a)}$), but, corroborated by a few more elementary facts, also on the genesis of singular vertices and edges. 0.5 cm An elementary but geometrically significant result of this type is provided by the range of variation of the possible average incidence $b(n,n-2)$. One gets (see[@Carfora] (lemma 2.1.1)) Let $T_a\to M^n$ a triangulation of a closed $n$-dimensional $PL$- manifold $M$, with $2\leq{n}\leq4$, then for $N_n(T_a)\to\infty$, we get \(i) For $n=2$: $$\begin{aligned} b(2,0)=6;\end{aligned}$$ (ii) for $n=3$: $$\begin{aligned} \frac{9}{2}\leq b(3,1)\leq6; \label{bitre}\end{aligned}$$ (iii) for $n=4$: $$\begin{aligned} 4\leq b(4,2)\leq 5. \label{biquattro}\end{aligned}$$ \[walkup\] The $2$-dimensional case as well as the upper bounds for $n=3$ and $n=4$ are well-known trivial consequences of the Dehn- Sommerville relations. The lower bounds $b(3,1)\geq9/2$ and $b(4,2)\geq4$ are instead related to a rather sophisticated set of results proved by Walkup[@Walkup] in the sixties concerning the proof of some conjectures for $3$- and $4$-dimensional PL manifolds; (apparently, these results went unnoticed by researchers in simplicial quantum gravity). Walkup’s theorems have important implications for understanding the geometry both of the strong and of the weak coupling phase of simplicial gravity. 
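Before stating Walkup's results, it is perhaps useful to record an elementary numerical check (our own, with the binomial $f$-vector of $\partial\sigma^5$ hard-coded) of the relations (\[eleven\])–(\[twelve\]) in dimension $n=4$, carried out on the simplest closed example, the boundary of the $5$-simplex, for which $f=(6,15,20,15,6)$ and $\chi({\Bbb S}^4)=2$.

\begin{verbatim}
# Illustrative check of the Dehn-Sommerville relations (eleven)-(twelve)
# for n=4 on the boundary of the 5-simplex.
from math import comb, factorial

def dehn_sommerville_ok(f, chi):
    n = len(f) - 1                      # here n = 4
    if sum((-1) ** i * f[i] for i in range(n + 1)) != chi:
        return False
    for k in range(1, n // 2 + 1):      # relation (twelve), k = 1,...,n/2
        s = sum((-1) ** i * factorial(i + 1)
                // (factorial(i - 2 * k + 2) * factorial(2 * k - 1)) * f[i]
                for i in range(2 * k - 1, n + 1))
        if s != 0:
            return False
    return True

if __name__ == "__main__":
    f_boundary_simplex = tuple(comb(6, i + 1) for i in range(5))  # (6,15,20,15,6)
    print(f_boundary_simplex, dehn_sommerville_ok(f_boundary_simplex, chi=2))
\end{verbatim}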
In dimension $n=3$, we have the following result[@Walkup]: There exists a triangulation $T\to{\Bbb S}^3$ of the $3$-sphere ${\Bbb S}^3$ with $N_0$ vertices and $N_1$ edges if and only if $N_0\geq5$ and $$\begin{aligned}
4N_0-10\leq N_1 \leq \frac{N_0(N_0-1)}{2}.\end{aligned}$$ Moreover $T$ is a triangulation of ${\Bbb S}^3$ satisfying $N_1=4N_0-10$ if and only if $T$ is a stacked sphere, whereas $T$ is a triangulation of ${\Bbb S}^3$ satisfying $N_1 =\frac{N_0(N_0-1)}{2}$ if and only if $T$ is a $2$-neighborly triangulation, namely if every pair of vertices is connected by an edge.

A stacked sphere $({\Bbb S}^n,T)$ is a triangulation $T\to{\Bbb S}^n$ of a sphere which can be constructed from the boundary $\partial\sigma^{n+1}\simeq{\Bbb S}^n$ of a simplex $\sigma^{n+1}$ by successively adding pyramids over some facets. More explicitly, the boundary complex of any abstract $(n+1)$-simplex $\sigma^{n+1}$ is by definition a stacked sphere, and if $T$ is a stacked sphere and $\sigma^n$ is any $n$-simplex of $T$, then $\hat{T}$ is a stacked sphere if $\hat{T}$ is any complex obtained from $T$ by removing $\sigma^n$ and adding the join of the boundary $\partial\sigma^n$ with a new vertex distinct from the vertices of $T$. Note also that a triangulated $PL$-manifold is called $k$-neighborly if $$\begin{aligned}
N_{k-1}(T)=\frac{N_0!}{k!(N_0-k)!}.\end{aligned}$$ We are referring explicitly to $3$- and $4$-spheres ${\Bbb S}^n$, because the majority of Monte Carlo simulations have been carried out in these cases (for a recent discussion of more general topologies, see[@Bialas]). However, it must be stressed that the above definitions, as well as Walkup’s theorems, can be naturally extended (with suitable modifications[@Walkup]) to any $n$-dimensional PL manifold $M$. Note in particular that every triangulable $3$-manifold $M$ can be triangulated so that the closed star of some edge contains all the vertices and every pair of vertices is connected by an edge.

In dimension $n=4$ we have a somewhat weaker characterization of the possible set of triangulations: If $T\to{M}$ is a triangulation of a closed connected $4$-manifold, then $$\begin{aligned}
N_1(T)\geq 5N_0(T)-\frac{15}{2}\chi(T),
\label{fourstacked}\end{aligned}$$ and equality holds if and only if $(M,T)$ is a stacked sphere. Note that actually one has a stronger statement, in the sense that equality in (\[fourstacked\]) holds if and only if all vertex links in the $4$-manifold $M$ are stacked $3$-spheres. Contrary to what happens for $3$-manifolds, $2$-neighborly triangulations ([*i.e.*]{}, triangulations where every pair of vertices is connected by an edge), are not [*generic*]{} for $4$-dimensional PL manifolds, and as a matter of fact, the above theorem immediately implies[@Kuhnel] that for any such $(M,T)$ $$\begin{aligned}
N_0(T)(N_0(T)-11)\geq -15\chi(M),
\label{neigh}\end{aligned}$$ where the equality implies that $(M,T)$ is $2$-neighborly. Thus, the equality is not possible for large and arbitrary values of $N_0(T)$, but, (depending on topology)[@Kuhnel], only in the cases $N_0(T)\equiv 0$, $5$, $6$, or $11\;({\rm mod}\;15)$. Even if $2$-neighborly triangulations are not generic, one can easily construct voluminous ([*i.e.*]{}, with $N_4(T)$ arbitrarily large) triangulations of the $4$-sphere in which every pair of vertices but one is connected by an edge. In order to realize such triangulations, consider a $2$-neighborly triangulation $T(3)$ of the $3$-sphere ${\Bbb S}^3$ with $f$-vector $[N_0(T(3)),N_1(T(3)),N_2(T(3)),N_3(T(3))]$.
If we take the [*Cone*]{}, $C({\Bbb S}^3)$, on such $({\Bbb S}^3,T(3))$, [*viz.*]{}, the product ${\Bbb S}^3\times[0,a]$ with ${\Bbb S}^3\times\{a\}$ identified to a point, then we get a triangulation of a $4$-dimensional ball $B^4$ with $f$-vector given by $$\begin{aligned} f(B^4)&= &(N_0(T(3))+1, N_1(T(3))+N_0(T(3)),\nonumber\\ && N_2(T(3))+ N_1(T(3)), N_3(T(3))+N_2(T(3)), N_3(T(3))).\end{aligned}$$ By gluing two copies of such a cone $C({\Bbb S}^3)$ along their isometric boundary $\partial{C}({\Bbb S}^3)\simeq{\Bbb S}^3$, we get a triangulation of the $4$-sphere ${\Bbb S}^4$ with $f$-vector $$\begin{aligned} f({\Bbb S}^4) &= & (N_0(T(3))+2, N_1(T(3))+2N_0(T(3)),\nonumber\\ && N_2(T(3))+ 2N_1(T(3)), N_3(T(3))+2N_2(T(3)), 2N_3(T(3))).\end{aligned}$$ It is trivially checked that corresponding to such a triangulation we get $$\begin{aligned} N_1({\Bbb S}^4)=\frac{N_0({\Bbb S}^4)(N_0({\Bbb S}^4)-1)}{2}-1,\end{aligned}$$ where the $-1$ accounts for the missing edge between the two cone- vertices in $C({\Bbb S}^3)\cup_{S^3}{C}({\Bbb S}^3)$. 0.5 cm When applied to simplicial quantum gravity, the existence of such $2$- neighborly (or almost $2$-neighborly) triangulations implies that there are dynamical triangulations of the $n$-sphere ${\Bbb S}^n$, $n=3$, $n=4$, where all vertices are singular. Corresponding to such configurations we have that $b(n,n- 2)|_{n=3}=6$, and $b(n,n-2)|_{n=4}=5$. Thus, not surprisingly, for such triangulations the kinematical upper bound for the average incidence $b(n,n-2)$ is attained. However, it is important to stress that such extremely singular configurations [*do not saturate*]{} the set of possible configurations for which $b_{max}(n,n-2)$ is reached. From the Dehn-Sommerville relations one immediately gets $$\begin{aligned} b(n,n-2)|_{n=3}=6\cdot\frac{N_3}{N_3+N_0},\end{aligned}$$ and $$\begin{aligned} b(n,n-2)|_{n=4}=10\cdot\frac{N_4}{2N_4+2N_0-2\chi(T)},\end{aligned}$$ which, together with the obvious relation $N_1\leq{N_0}(N_0-1)/2$, implies that in order to attain the upper kinematical bounds $b_{max}(n,n-2)|_{n=3}=6$ and $b_{max}(n,n-2)|_{n=4}=5$ it is sufficient that $$\begin{aligned} N_0(T)=O[N_n(T)^{\alpha}],\end{aligned}$$ with $1/2\leq\alpha<1$. Note that $2$-neighborly or almost $2$- neighborly triangulations correspond to $\alpha=1/2$. Singular Stacked Spheres {#sss} ------------------------ It should be stressed that the presence of singular vertices can occur also for $b(n,n-2)=b_{min}(n,n-2)$, [*i.e.*]{}, for stacked spheres. In other words, singular vertices are [*not kinematically*]{} forbidden by the geometry of the triangulations. Their suppression or enhancement in the different phases of simplicial quantum gravity is rather related to the relative abundance, with respect to the totality of possible triangulations, of the number of distinct triangulations with singular vertices as $b(n,n-2)$ varies. In other words, it is an entropic phenomenon as clearly suggested by S. Catterall, G. Thorleifsson, J. Kogut, and R. Renken [@Catterall]. For definiteness, we can describe a concrete construction of a singular stacked sphere. It amounts to gluing a $4$-dimensional ball $B^4$ bounded by a stacked $3$-sphere ${\Bbb S}^3$ with a cone over such an ${\Bbb S}^3$. 0.5 cm Consider a $3$-dimensional stacked sphere ${\Bbb S}^3$. 
According to one of Walkup’s theorems, such an ${\Bbb S}^3 $ is the boundary of a $4$-dimensional ball $B^4$ with a tree-like structure and corresponds to a triangulation with $f$-vector $$f(B^4) = (N_0(S^3), N_1(S^3), N_2(S^3), N_3(S^3)+N_3(\hat{B}^4), N_4(B^4)),$$ where $N_i(S^3)$, $i=0,1,2,3$ is the $f$-vector of the boundary stacked sphere and $N_3(\hat{B}^4)$ is the number of $\sigma^3$ in the interior, $\hat{B}^4$, of $B^4$. Note that if we take the cone, $C(S^3)$, over the boundary stacked sphere, we get another triangulation of the $4$-dimensional ball, $B^4_{sing}$, whose boundary is again isometric to the given ${\Bbb S}^3$, but whose interior contains a (unique) singular vertex. The $f$-vector of such a triangulation is $$f(B^4_{sing})\! =\! (1\!+\! N_0(S^3), N_1(S^3)\!+\!N_0(S^3), N_2(S^3)\!+\!N_1(S^3), N_3(S^3)\!+\!N_2(S^3), N_3(S^3)).$$ Gluing the two triangulated balls $B^4$ and $B^4_{sing}$ through their common boundary ${\Bbb S}^3$, we get a triangulation of the $4$-sphere ${\Bbb S}^4\simeq B^4\cup_{S^3}C(\partial{B^4})$, with $f$-vector $$\begin{aligned}
N_0&= & N_0(S^3)+1\nonumber\\ N_1&= & N_1(S^3)+ N_0(S^3)\nonumber\\ N_2&= & N_2(S^3)+ N_1(S^3)\nonumber\\ N_3&= & N_3(S^3)+ N_3(\hat{B}^4)+N_2(S^3)\nonumber\\ N_4&= & N_4(B^4)+ N_3(S^3).\end{aligned}$$ Since ${\Bbb S}^3$ is a stacked sphere, we have $4N_3(S^3)=3N_1(S^3)-10$, so that for large triangulations $4N_3(S^3)\simeq3N_1(S^3)$; together with $N_2(S^3)=2N_3(S^3)$, this implies $$\begin{aligned}
N_2\simeq\frac{10}{3}N_3(S^3).\end{aligned}$$ From $2N_3=5N_4$ and the Euler relation for the $4$-dimensional ball $B^4$, (with $\chi(B^4)=1$), we immediately get $N_4(B^4)=\frac{1}{3}N_3(S^3)-\frac{2}{3}$, which implies $$\begin{aligned}
N_4=\frac{4}{3}N_3(S^3)-\frac{2}{3}.\end{aligned}$$ Thus, for $N_3(S^3)\to\infty$ we get a voluminous triangulation ($N_4\to\infty$) of ${\Bbb S}^4$ with average incidence $$\begin{aligned}
b(4,2)=10\cdot\frac{N_4}{N_2}\simeq
10\cdot\frac{\frac{4}{3}N_3(S^3)-\frac{2}{3}}{\frac{10}{3}N_3(S^3)}
\to_{N_3\to\infty} 4,\end{aligned}$$ which shows that ${\Bbb S}^4\simeq B^4\cup_{S^3}C(\partial{B^4})$ is a stacked sphere with a singular vertex (the apex of the cone $C(\partial{B^4})$). An even simpler construction suffices to prove an analogous result in the $3$-dimensional case.

As mentioned in the introductory remarks, stacked spheres are relevant in providing the geometrical rationale for the prevalence of branched polymer structures in the weak coupling phase of simplicial quantum gravity. As a matter of fact [@Gabrielli], it is their tree-like structure that accounts for the [*kinematical*]{} possibility of polymerization. However, the existence of stacked spheres with singular vertices shows that the dynamical onset of polymerization is not just a consequence of the geometry of triangulated manifolds as $b(n,n-2)\to{b}_{min}(n,n-2)$. On the kinematical side we may have, in the configuration space, extremal cases such as the $2$-neighborly triangulations occurring for $b(n,n-2)\to{b}_{max}(n,n-2)$ or the singular stacked spheres for $b(n,n-2)\to{b}_{min}(n,n-2)$. Monte Carlo simulations do confirm that such configurations are not generic. Near $b_{max}(n,n-2)$ we generically sample singular triangulations with just a few singular vertices[@Catterall]. Similarly, as $b(n,n-2)\to{b}_{min}(n,n-2)$ the dominant configurations sampled correspond to stacked spheres without singular vertices.
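Both extremal constructions above are easy to realize numerically. The following sketch (our own restatement, with the $2$-neighborly ${\Bbb S}^3$ taken, e.g., as the boundary of a cyclic $4$-polytope with $v$ vertices, and the stacked ball parameterized by its number $p$ of $4$-simplices) produces the corresponding $f$-vectors, checks the closed-manifold relations, and exhibits the two limiting incidences $b(4,2)\to5$ and $b(4,2)\to4$.

\begin{verbatim}
# Two illustrative families of triangulated S^4, built from the gluings
# described above:
#  (a) two cones over a 2-neighborly S^3 with v vertices:  b(4,2) -> 5;
#  (b) a stacked ball of p 4-simplices glued to the cone over its stacked
#      S^3 boundary (a singular stacked sphere):            b(4,2) -> 4.

def f_two_cones_over_neighborly_S3(v):
    # f-vector of a 2-neighborly S^3 with v >= 5 vertices (Dehn-Sommerville)
    f3 = (v, v * (v - 1) // 2, v * (v - 3), v * (v - 3) // 2)
    return (f3[0] + 2, f3[1] + 2 * f3[0], f3[2] + 2 * f3[1],
            f3[3] + 2 * f3[2], 2 * f3[3])

def f_singular_stacked_sphere(p):
    # stacked S^3 bounding a tree-like ball of p 4-simplices, glued to the
    # cone over that S^3
    f3 = (p + 4, 4 * p + 6, 6 * p + 4, 3 * p + 2)
    return (f3[0] + 1, f3[1] + f3[0], f3[2] + f3[1],
            f3[3] + (p - 1) + f3[2], p + f3[3])

def b42(f):
    chi = sum((-1) ** i * n for i, n in enumerate(f))
    assert chi == 2 and 2 * f[3] == 5 * f[4]   # closed 4-manifold relations
    return 10.0 * f[4] / f[2]                  # b(4,2) = 10 N_4 / N_2

if __name__ == "__main__":
    for v in (10, 100, 1000):
        print("almost 2-neighborly:", v, b42(f_two_cones_over_neighborly_S3(v)))
    for p in (10, 100, 1000):
        print("singular stacked   :", p, b42(f_singular_stacked_sphere(p)))
\end{verbatim}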
The mechanism for understanding the dynamical prevalence of such configurations over the other configurations which are kinematically possible is simply related to the fact that, with respect to the counting measure, distinct dynamical triangulations are not equally probable as a function of the average incidence $b(n,n-2)$. In order to discuss this point we need to exploit a few elementary facts related to the geometry of the ergodic moves used in simplicial quantum gravity.

Ergodic moves and the onset of criticality {#ergodic}
------------------------------------------

The $(k,l)$ moves[@Varsted] in $3$ and $4$ dimensions are a well known set of elementary surgery operations (related to the Pachner moves[@Pachner]) which allow one to construct all triangulations of a PL-manifold starting from a given triangulation. Roughly speaking, the generic $(k,l)$ move consists in cutting out a sub-complex made up of $k$ top-dimensional simplices $\sigma^n$ and replacing it with a complex of $l$ top-dimensional simplices $\sigma^n$ with the same boundary. Note that $k+l=n+2$. We are interested in discussing how a finite set of such moves generates the $f$-vector of voluminous triangulations of the $n$-sphere ${\Bbb S}^n$, ($n=3,4$), starting from the standard $f$-vector of the boundary of the simplex, $\partial\sigma^{n+1}\simeq{\Bbb S}^n$. For $n=3$, the relevant moves are the $(1,4)$ move (barycentric subdivision), the $(2,3)$ move (triangle to link exchange) and their inverses. For $n=4$, since the [*flip*]{} move $(3,3)$ does not alter the distribution of the numbers $N_i(T)$ of simplices, the $f$-vector of the sphere is generated by the moves $(1,5)$ (barycentric subdivision) and $(2,4)$ (two-four exchange) and their inverses. Following[@Gabrielli], (with a slight change in notation), we denote by $P_{k,l}(n)$ the number of moves of type $(k,l)$ in dimension $n$, and introduce the balance variables ($n=3$): $x_1\doteq(P_{1,4}-P_{4,1})$, $x_2\doteq(P_{2,3}-P_{3,2})$; and ($n=4$): $y_1\doteq(P_{1,5}-P_{5,1})$, $y_2\doteq(P_{2,4}-P_{4,2})$. In terms of such quantities we can easily characterize the strings of integers $\{N_i\}$, $i=0,\ldots,n$, which are [*possible*]{} $f$-vectors of triangulated ${\Bbb S}^n$. For $n=3$, we get for $f({\Bbb S}^3) = (N_0(S^3),N_1(S^3),N_2(S^3),N_3(S^3))$: $$\begin{aligned}
N_0(S^3)&=& 5+x_1,\nonumber\\ N_1(S^3)&=&10+4x_1+x_2,\nonumber\\
N_2(S^3)&=&10+6x_1+2x_2,\nonumber\\ N_3(S^3)&=&5+3x_1+x_2,
\label{tredim}\end{aligned}$$ whereas, for $n=4$ we have for $f({\Bbb S}^4) = (N_0(S^4),N_1(S^4),N_2(S^4),N_3(S^4),N_4(S^4))$: $$\begin{aligned}
N_0(S^4)&=&6+y_1,\nonumber\\ N_1(S^4)&=&15+5y_1+y_2,\nonumber\\
N_2(S^4)&=&20+10y_1+4y_2,\nonumber\\ N_3(S^4)&=&15+10y_1+5y_2,\nonumber\\
N_4(S^4)&=& 6+4y_1+2y_2.
\label{fourdim}\end{aligned}$$ Note that not all $f({\Bbb S}^n)$ obtained in this way are actual $f$-vectors of triangulated ${\Bbb S}^n$. This is a consequence of the fact that the above relations between the $\{N_i\}$ and the variables $P_{k,l}(n)$ are equivalent to the Dehn-Sommerville constraints. And these latter are known to be necessary but not sufficient conditions in characterizing the possible $f$-vectors of a triangulated manifold; (sufficient conditions have been conjectured by R. Stanley[@Stanley]; see[@Carfora] for a brief discussion of this point).
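As a concrete illustration (ours, not from [@Gabrielli]), the parameterization (\[fourdim\]) can be fed into a few lines of code that verify the Euler and Dehn-Sommerville constraints, impose the obvious bound $N_1\leq N_0(N_0-1)/2$, and display how the average incidence $b(4,2)=10N_4/N_2$ and the mean number of $4$-simplices per vertex respond to the relative abundance of the two moves.

\begin{verbatim}
# Sketch: candidate f-vectors of S^4 from the balance variables
# y1 = P_{1,5}-P_{5,1}, y2 = P_{2,4}-P_{4,2} via eq. (fourdim).
def f_S4(y1, y2):
    return (6 + y1, 15 + 5 * y1 + y2, 20 + 10 * y1 + 4 * y2,
            15 + 10 * y1 + 5 * y2, 6 + 4 * y1 + 2 * y2)

def consistent(f):
    N0, N1, N2, N3, N4 = f
    return (N0 - N1 + N2 - N3 + N4 == 2                       # chi(S^4) = 2
            and 2 * N3 == 5 * N4                              # Dehn-Sommerville, k = 2
            and -2 * N1 + 3 * N2 - 4 * N3 + 5 * N4 == 0       # Dehn-Sommerville, k = 1
            and N1 <= N0 * (N0 - 1) // 2)                     # N_1 bound

if __name__ == "__main__":
    cases = [(10**6, 0),          # (1,5)-dominated: b(4,2) -> 4
             (10**6, 10**5),      # mixed
             (2000, 2001000)]     # (2,4)-dominated (N_1 bound saturated): b(4,2) -> 5
    for y1, y2 in cases:
        f = f_S4(y1, y2)
        assert consistent(f)
        print(f"y1={y1:8d} y2={y2:8d}  b(4,2)={10*f[4]/f[2]:.4f}  "
              f"5*N4/N0={5*f[4]/f[0]:.2f}")
\end{verbatim}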
Walkup’s theorems imply the following kinematical bounds on the variables $x_i$, $y_i$, ($i=1,2$): $$\begin{aligned}
x_1 &\geq & 0,\nonumber\\ y_1 &\geq & 0,\end{aligned}$$ (both from the obvious condition $N_0(S^n)\geq n+2$); $$\begin{aligned}
x_2 &\geq & 0,\nonumber\\ y_2 &\geq & 0,\end{aligned}$$ (the former from $N_1(S^3)\geq4N_0(S^3)-10$; the latter from $N_1(S^4)\geq5N_0(S^4)-\frac{15}{2}\chi(S^4)$, with $\chi(S^4)=2$); $$\begin{aligned}
x_1^2+x_1-2x_2 &\geq & 0,\nonumber\\ y_1^2+y_1-2y_2 &\geq & 0,\end{aligned}$$ (both from $N_1(S^n)\leq{N_0}(S^n)(N_0(S^n)-1)/2$). Finally, one can express the average incidence $b(n,n-2)$ as a function of $x_i$ and $y_i$, so as to obtain $$\begin{aligned}
b(n,n-2)|_{n=3}=6\cdot\frac{5+3x_1+x_2}{10+4x_1+x_2},\end{aligned}$$ and $$\begin{aligned}
b(n,n-2)|_{n=4}=10\cdot\frac{6+4y_1+2y_2}{20+10y_1+4y_2}.\end{aligned}$$

It is also interesting to discuss, in terms of the variables $x_i$ and $y_i$, the average incidence of the top-dimensional simplices $\sigma^n$ on the vertices $\sigma^0$ of the triangulations considered. A straightforward computation provides $$\begin{aligned}
Q(n)\doteq\frac{1}{N_0}\sum_{\{\sigma^0\}}q(\sigma^0)= (n+1)\frac{N_n}{N_0},\end{aligned}$$ yielding $$\begin{aligned}
Q(n)|_{n=3}=4\cdot\frac{5+3x_1+x_2}{5+x_1},\end{aligned}$$ and $$\begin{aligned}
Q(n)|_{n=4}=5\cdot\frac{6+4y_1+2y_2}{6+y_1}.\end{aligned}$$ As expected, $Q(n)$ is not bounded above: when the move $(2,3)$ (for $n=3$), or $(2,4)$ (for $n=4$) dominates, [*i.e.*]{}, near the $b_{max}(n,n-2)$ kinematical boundary, $Q(n)\to\infty$ as $N_n\to\infty$. One may wonder whether this unboundedness is related to the unboundedness of the Einstein-Hilbert action; the answer is most likely no. It is certainly reasonable to put restrictions on $Q(n)$ in the search for a continuum limit of the theory, and this may change the phase structure of the theory. But if it does, it is just an illustration of the fact that this particular part of the phase diagram has no relevance for a genuine continuum limit. There should be a reasonable universality. This is nicely illustrated in $2$-D dynamical triangulation theory, where any restriction (except the strict flatness constraint $q(i)=6$) leads to $2$D-gravity.

The above elementary remarks are a trivial restatement of the well known fact that the moves $(1,4)$ and $(1,5)$ (the barycentric subdivision) drive the system into the elongated phase, whereas the moves $(2,3)$ and $(2,4)$ drive it into the crumpled phase. The crumpling transition occurs as soon as singular vertices are statistically enhanced by the presence of enough $(2,4)$ moves with respect to $(1,5)$, (for $n=3$ this enhancement is generated by the dominance of $(2,3)$ moves with respect to $(1,4)$ moves).

The genesis of singular vertices: ${\Bbb S}^4_{sv}$ {#singvertex}
---------------------------------------------------

In order to characterize the onset of crumpling we describe the $f$-vector of the generic triangulation of ${\Bbb S}^n$ in a way that clearly shows the mechanism of formation of singular vertices. Such a description is obtained by gluing a triangulated ball $B^n$ to the cone over its boundary $\partial{B}^n\simeq{\Bbb S}^{n-1}$. Thus, by referring to the $4$-dimensional case for definiteness, we consider ${\Bbb S}^4_{sv}\simeq B^4\cup_{S^3}C(\partial{B^4})$, ([*sv*]{} for singular vertex).
Note that any triangulation of ${\Bbb S}^4$ can be factorized in this way (since $C(\partial{B^4})$ and $\partial{B^4}$ are the star and the link of a vertex, respectively), and we have $$\begin{aligned}
N_4=N_4(B^4)+N_4(C(\partial{B^4})).\end{aligned}$$ The triangulation is singular as soon as we have $$\begin{aligned}
N_4(B^4)\propto {N}_4(C(\partial{B^4})),\end{aligned}$$ namely when the cone $C(\partial{B^4})$ contains a number of top-dimensional simplices growing linearly with the volume of the whole manifold. It is easily checked that the $f$-vector of ${\Bbb S}^4_{sv}\simeq B^4\cup_{S^3}C(\partial{B^4})$ is given by $$\begin{aligned}
N_0&= & N_0(S^3)+ 1+ N_0(\hat{B}^4)\nonumber\\ N_1&= & N_1(S^3)+
N_0(S^3)+N_1(\hat{B}^4)\nonumber\\ N_2&= & N_2(S^3)+
N_1(S^3)+N_2(\hat{B}^4)\nonumber\\ N_3&= & N_3(S^3)+
N_2(S^3)+N_3(\hat{B}^4)\nonumber\\ N_4&= & N_3(S^3)+ N_4(B^4),
\label{esseffe}\end{aligned}$$ where $N_i(S^3)$ denotes the $f$-vector of the boundary $\partial(B^4)\simeq{\Bbb S}^3$ of the triangulated ball $B^4$, and $N_i(\hat{B}^4)$ is the $f$-vector of the interior of $B^4$. The Dehn-Sommerville relations for ${\Bbb S}^4$ and for ${\Bbb S}^3$ constrain $N_i(\hat{B}^4)$ and $N_k(S^3)$ according to $$\begin{aligned}
& & N_0(\hat{B}^4)-N_1(\hat{B}^4)+N_2(\hat{B}^4)-N_3(\hat{B}^4)
+N_4({B}^4)=1\nonumber\\ & &
2N_1(\hat{B}^4)-3N_2(\hat{B}^4)+4N_3(\hat{B}^4)- 5N_4({B}^4)+
N_0(S^3)=0\nonumber\\ & & 2N_3(\hat{B}^4)+N_3(S^3)= 5N_4(\hat{B}^4).\end{aligned}$$ The average incidence $b(4,2)$ of such a triangulated ${\Bbb S}^4$ can be easily computed in terms of the $f$-vectors $N_i(\hat{B}^4)$ and $N_k(S^3)$ according to $$\begin{aligned}
b(4,2)=10\cdot\frac{4b(3,1)N_3(S^3)+2b(3,1)[N_0(\hat{B}^4)-
N_1(\hat{B}^4)+N_2(\hat{B}^4)]-
2b(3,1)}{[6b(3,1)+18]N_3(S^3)+3b(3,1)N_2(\hat{B}^4)},
\label{deb}\end{aligned}$$ where $b(3,1)\doteq6[N_3(S^3)/N_1(S^3)]$ is the average incidence of $\partial{B^4}\simeq{\Bbb S}^3$. The presence of a singular vertex corresponds to $$\begin{aligned}
\frac{N_4(\hat{B}^4)}{N_3(S^3)}=O(1),\end{aligned}$$ and it is easily verified that under such a condition $b(4,2)$ is an increasing function of $b(3,1)$. [*This remark implies that singular triangulations with the smallest possible $b(4,2)$ are to be found corresponding to $b(3,1)=b(3,1)_{min}=9/2$*]{}. We have already seen an example of such a triangulation in the previous section, one for which the lowest kinematically possible incidence, $b(4,2)=4$, is attained. However, such examples are not generic. They correspond to assuming $y_2=0$, (or more generally, they still occur if one interprets the right hand side as $y_2=O(1)$), and the singular vertex is not stable under $(1,5)$ moves. Eventually, by performing enough barycentric subdivisions, the initial singular vertex is smoothed out. Explicitly, assume that we start our chain of barycentric subdivisions on an ${\Bbb S}^4_{sv}\simeq B^4\cup_{S^3}C(\partial{B^4})$ with a given value of $N_4$, say $N_4(0)$. Denote by ${\Bbb S}^4_{sv}(0)$ this initial triangulation. Note that at this initial step $$\begin{aligned}
N_4(B^4(0))=\frac{1}{3}{N}_4(C(\partial{B^4(0)})),\end{aligned}$$ (see the previous section). If we carry out a $(1,5)$ move on each $4$-simplex of ${\Bbb S}^4_{sv}(0)$, we get a triangulation of ${\Bbb S}^4$ still of the form ${\Bbb S}^4_{sv}\simeq B^4\cup_{S^3}C(\partial{B^4})$, which we denote ${\Bbb S}^4_{sv}(1)$.
For such a triangulation we have $$\begin{aligned}
{N}_4(C(\partial{B^4(1)}))= 4\cdot{N}_4(C(\partial{B^4(0)})),\end{aligned}$$ $$\begin{aligned}
N_4(B^4(1))= 5\cdot{N}_4(B^4(0))+{N}_4(C(\partial{B^4(0)})).\end{aligned}$$ Now proceed by induction, noticing that if at each step we carry out a barycentric subdivision of each $4$-simplex of the ${\Bbb S}^4_{sv}$ generated at the previous step, we still get a $4$-sphere, ${\Bbb S}^4_{sv}(k)$, triangulated according to ${\Bbb S}^4_{sv}\simeq B^4\cup_{S^3}C(\partial{B^4})$, and such that, at the $k$-th step, $$\begin{aligned}
{N}_4(C(\partial{B^4(k)}))= 4^k\cdot{N}_4(C(\partial{B^4(0)})),\end{aligned}$$ $$\begin{aligned}
N_4(B^4(k))= 5^k\cdot{N}_4(B^4(0))+{N}_4(C(\partial{B^4(0)}))\sum_{i=1}^{k}5^{i-1}\cdot4^{k-i}
= 5^k\cdot{N}_4(B^4(0))+\left(5^k-4^k\right){N}_4(C(\partial{B^4(0)})).\end{aligned}$$ Thus, as $k$ grows (corresponding to $y_1\to+\infty$, $y_2=O(1)$), $N_4(B^4(k))$ largely dominates over ${N}_4(C(\partial{B^4(k)}))$: $$\begin{aligned}
N_4(B^4(k))>>_{y_1\to+\infty} {N}_4(C(\partial{B^4(k)})),\end{aligned}$$ and the resulting triangulation of ${\Bbb S}^4$ is no longer singular. In this sense, the dominance of the $(1,5)$ move naturally yields regular stacked spheres, and thus branched polymers (a short numerical iteration of this recursion is sketched below).

Discarding these particular examples of unstable triangulated spheres with a singular vertex, we can easily characterize the smallest $b(4,2)$ corresponding to generic singular triangulations, namely triangulations generated in the large volume limit as $y_1\to\infty$ and $y_2\to\infty$, and whose singular vertices are stable under the action of the $(k,l)$ moves (at a fixed ratio $\frac{y_1}{y_2}$). Let us start by noticing that corresponding to $b(3,1)=9/2$, the expression (\[deb\]) for the average incidence reduces to $$\begin{aligned}
b(4,2)=10\cdot\frac{6N_1(S^3)+4[N_0(\hat{B}^4)-
N_1(\hat{B}^4)+N_2(\hat{B}^4)]}{15N_1(S^3)+6N_2(\hat{B}^4)},
\label{qui}\end{aligned}$$ where, in the numerator, we have discarded terms which are $o(1)$, thus irrelevant in the large volume limit.

Since ${\Bbb S}^3$ is [*stacked*]{}, the integers $N_3(S^3)$ and $N_1(S^3)$ are related by $4N_3(S^3)=3N_1(S^3)-10$, which implies that $3N_1(S^3)\equiv10\;({\rm mod}\;4)$, [*i.e.*]{}, $N_1(S^3)\equiv2\;({\rm mod}\;4)$. Thus, up to an additive constant which is relatively negligible (of order $10/N_1(S^3)$) for large triangulations, $N_1(S^3)$ is an integer multiple of $4$; indeed $N_1(S^3)=10+4x_1$. More explicitly, and referring to the expression of the $f$-vector of ${\Bbb S}^3$ in terms of the balance variables $x_1\in{\Bbb N}$ and $x_2\in{\Bbb N}$ introduced in section \[ergodic\], we get the following components: $$\begin{aligned}
N_0(S^3)&=&5+x_1,\nonumber\\ N_1(S^3)&=&10+4x_1,\nonumber\\
N_2(S^3)&=&10+6x_1,\nonumber\\ N_3(S^3)&=&5+3x_1,\end{aligned}$$ since corresponding to a stacked ${\Bbb S}^3$ we have $x_2=0$, (see (\[tredim\])).
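Before turning to the parameterization of the generic singular configurations, it may help to iterate the subdivision recursion described above numerically; the following sketch (ours) starts from the initial split $N_4(C)=3N_4(B)$ of the previous subsection and shows the singular fraction decaying like $(4/5)^k$.

\begin{verbatim}
# Sketch: iterate N_C -> 4 N_C, N_B -> 5 N_B + N_C (one (1,5) move per
# 4-simplex) and compare with the closed-form solution of the recursion.
def subdivide(NB, NC, steps):
    for _ in range(steps):
        NB, NC = 5 * NB + NC, 4 * NC
    return NB, NC

if __name__ == "__main__":
    NB, NC = 1000, 3000          # illustrative initial split, N_4(C) = 3 N_4(B)
    for k in range(0, 11, 2):
        B, C = subdivide(NB, NC, k)
        assert B == 5**k * NB + (5**k - 4**k) * NC and C == 4**k * NC
        print(f"k={k:2d}  N4(C)/N4 = {C/(B+C):.4f}")
\end{verbatim}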
0.5 cm The congruence properties just established for the $f$-vector of a stacked $3$-sphere suggest to parameterize both $N_2(\hat{B}^4)$ and $N_0(\hat{B}^4)- N_1(\hat{B}^4)+N_2(\hat{B}^4)$, appearing in (\[qui\]), in terms of $N_1(S^3)$ by setting $$\begin{aligned} N_2(\hat{B}^4)=\tilde{\beta}N_1(S^3),\end{aligned}$$ and $$\begin{aligned} N_0(\hat{B}^4)-N_1(\hat{B}^4)+N_2(\hat{B}^4)]=\tilde{\alpha}N_1(S^3).\end{aligned}$$ According to the above remarks, $N_1(S^3)$ is asymptotically an integer multiple of $4$, thus if we are interested to triangulations for which $N_1(S^3)$ can grow arbitrarily large, it follows that the two parameters $\tilde{\alpha}$ and $\tilde{\beta}$ necessarily are rational numbers of the form $\tilde{\beta}=\frac{\beta}{4}$ and $\tilde{\alpha}=\frac{\alpha}{4}$ with $\beta$ and $\alpha$ integers. In other words, the generic triangulations of ${\Bbb S}^4_{sv}\simeq B^4\cup_{S^3}C(\partial{B^4})$ with the joining ${\Bbb S}^3$ stacked ($b(3,1)=9/2$), can be conveniently parameterized by setting $$\begin{aligned} \frac{N_2(\hat{B}^4)}{N_1(S^3)}\doteq\frac{\beta}{4}, \label{paruno}\end{aligned}$$ and $$\begin{aligned} \frac{N_0(\hat{B}^4)- N_1(\hat{B}^4)+N_2(\hat{B}^4)}{N_1(S^3)}\doteq\frac{\alpha}{4}, \label{pardue}\end{aligned}$$ where $\alpha$ and $\beta$ are integers. Note that while $\beta\geq0$, $\alpha$ can possibly take also negative values. However, if we rewrite (\[qui\]) in terms of such parameters $$\begin{aligned} b(4,2)=10\cdot\frac{12+2\alpha}{30+3\beta}, \label{crumpledb}\end{aligned}$$ the kinematical bound $b(4,2)\geq4$ implies $5\alpha\geq3\beta$, and thus $\alpha$ is non-negative as well. 0.5 cm The parameters $\alpha$ and $\beta$ so introduced are completely equivalent to the balance variables $y_1$ and $y_2$ related to the cumulant action of the $(k,l)$ moves. Explicitly, we obtain $$\begin{aligned} 2y_1 &= & (\frac{1}{2}+\frac{1}{4}\beta-\frac{1}{3}\alpha)N_1(S^3)- \frac{20}{3}\nonumber\\ 2y_2 &= & (\frac{5}{6}\alpha-\frac{1}{2}\beta)N_1(S^3)+\frac{20}{3} \label{ipsilon}.\end{aligned}$$ The Dehn-Sommerville relations for the $f$-vector $N_i(\hat{B}^4)$ allow us to express also its components in terms of $\alpha$ and $\beta$ according to $$\begin{aligned} 3N_0(\hat{B}^4) &= & \left[\frac{3\beta- 4\alpha}{8}\right]{N}_1(S^3)+10\nonumber\\ 3N_1(\hat{B}^4) &= & \left[\frac{9\beta- 10\alpha}{8}\right]{N}_1(S^3)+10\nonumber\\ N_2(\hat{B}^4) &= & \frac{1}{4}\beta{N}_1(S^3)\nonumber\\ 3N_3(\hat{B}^4) &= & \left[\frac{3+5\alpha}{4}\right]{N}_1(S^3)- 5\nonumber\\ 3N_4(\hat{B}^4) &= & \left[\frac{3+2\alpha}{4}\right]{N}_1(S^3)-2. \label{dellapalla}\end{aligned}$$ The generic conditions $y_1>0$ and $y_2>0$, (and both approaching $+\infty$), together with $N_0(\hat{B}^4)>0$, imply that the parameters $\alpha$ and $\beta$ are related by $$\begin{aligned} \frac{3}{5}\beta<\alpha<\frac{3}{4}\beta, \label{alfabeta}\end{aligned}$$ with $(\alpha,\beta)\in{\Bbb N}^+\times{\Bbb N}^+$. From these remarks it follows that, as $y_1$ and $y_2$ go to $+\infty$, there are two distinct regimes for the set of triangulations considered: 0.3 cm [*(i)*]{} If we [*constrain*]{} the $f$-vector $N_i(S^3)$ of the connecting $\partial{B}^4$ to be $O(1)$, then according to (\[ipsilon\]), $\alpha$ and $\beta$ go to $\infty$ as $y_1,y_2\to+\infty$. From (\[dellapalla\]) we get that in this regime $$\begin{aligned} N_4(S^4) &\simeq & N_4(B^4)\simeq(\alpha/6)N_1(S^3)\nonumber\\ N_2(S^4) &\simeq & N_2(B^4)=(\beta/4){N_1}(S^3),\end{aligned}$$ where $N_1(S^3)$ is a constant. 
The geometrical bounds (\[alfabeta\]) simply imply that as $\alpha,\beta\to\infty$, the corresponding average incidence $b(4,2)$ varies between the kinematical bounds $4\leq b(4,2)\leq5$, as required.

[*(ii)*]{} Conversely, if we do not constrain $N_i(S^3)$ to be $O(1)$, then according to (\[ipsilon\]), $N_1(S^3)$, (and hence $N_3(S^3)$), is allowed to grow unboundedly large as $y_1,y_2\to+\infty$. This growth, which corresponds to the generation of singular vertices, is possible for any finite value of the parameters $\alpha$ and $\beta$ compatible with (\[alfabeta\]). Note that if kinematically possible, according to (\[alfabeta\]), such singular triangulations [*entropically dominate*]{} over the regular ones since these latter are generated by the constrained configurations forcing $N_i(S^3)$ to be $O(1)$, while the former are unconstrained. More specifically, since the number of distinct triangulations of a $3$-sphere ${\Bbb S^3}$ grows exponentially with $N_3(S^3)$, configurations with $N_3(S^3)$ as large as possible, if kinematically allowed, will dominate over configurations with $N_3(S^3)=O(1)$.

The kinematical bound (\[alfabeta\]) for the occurrence of singular triangulations is not trivial. In order to discuss its implications, let us consider the ratio between the total volume of the triangulated ${\Bbb S}^4_{sv}$ and the volume of the ball around the singular vertex $\sigma^0$, [*viz.*]{} $$\begin{aligned}
\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}=\frac{N_4}{N_3(S^3)}=
\frac{12+2\alpha}{9}.
\label{volsing}\end{aligned}$$ A direct computation of the average incidence $b(4,2)$, (see (\[crumpledb\])), together with (\[alfabeta\]) immediately shows that the [*smallest*]{} $b(4,2)$’s for which we may have singular triangulations occur for $$\begin{aligned}
\alpha &= & 5+3h,\nonumber\\ \beta &= & 8+5h,\end{aligned}$$ with $h=0,1,2,\ldots$. As $h$ varies, the average incidence $b(4,2)_h$ and the volume ratio (\[volsing\]) respectively take the values: $$\begin{aligned}
b_h(4,2)=10\cdot\frac{22+6h}{54+15h},
\label{criticalbs}\end{aligned}$$ $$\begin{aligned}
\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}\bigg|_{h}=\frac{22+6h}{9}.
\label{critvol}\end{aligned}$$ Since the singular triangulations that entropically dominate are those for which $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}\Big|_{h}$ is as low as possible, the smallest $b(4,2)$ for which we may have [*generic*]{} singular triangulations with the largest $Vol_{sing}(\sigma^0)$ is $$\begin{aligned}
b(4,2)_{sing}=\frac{110}{27}\simeq 4.07407\ldots,\end{aligned}$$ (corresponding to $h=0$ and $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}=22/9\simeq2.444\ldots$). It is easily verified that such an average incidence is associated with a relative concentration of $(1,5)$ moves versus $(2,4)$ moves given by $$y_1=5y_2.
\label{concentration}$$

It is clear from (\[criticalbs\]) that singular triangulations may appear also for smaller values of $b(4,2)$. A list of the first possible values of $b_h(4,2)$ is provided by table \[tavola1\]. These triangulations are less singular than the ones associated with $b(4,2)=\frac{110}{27}$ since they correspond to larger values of the ratio $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}$, and for this reason we may be tempted to consider them as entropically [*subdominant*]{}, at least in the large-volume limit. Yet, this is only apparent, since their presence is particularly relevant for locating the critical incidence $b_0$ and for understanding the present status of the Monte Carlo simulations.
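The entries of table \[tavola1\] below follow at once from (\[criticalbs\]) and (\[critvol\]); for convenience we record a short script (ours) that regenerates them.

\begin{verbatim}
# Sketch: reproduce the values listed in table (tavola1) from the closed
# formulas (criticalbs) and (critvol).
from fractions import Fraction

if __name__ == "__main__":
    for h in range(6):
        alpha, beta = 5 + 3 * h, 8 + 5 * h
        b = Fraction(10 * (12 + 2 * alpha), 30 + 3 * beta)
        vol_ratio = Fraction(12 + 2 * alpha, 9)
        print(f"h={h}  beta={beta:2d}  alpha={alpha:2d}  "
              f"b(4,2)={b} ~ {float(b):.5f}  Vol/Vol_sing={float(vol_ratio):.3f}")
\end{verbatim}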
h $\beta$ $\alpha$ b(2,4) $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}|_h$ --- --------- ---------- -------------------------------- -------------------------------------------- 0 8 5 $\frac{110}{27}\simeq4.07407$ $\frac{22}{9}\simeq2.444$ 1 13 8 $\frac{280}{69}\simeq4.0579$ $\frac{28}{9}\simeq3.111$ 2 18 11 $\frac{340}{84}\simeq4.04761$ $\frac{34}{9}\simeq3.777$ 3 23 14 $\frac{400}{99}\simeq4.0404$ $\frac{40}{9}\simeq4.444$ 4 28 17 $\frac{460}{114}\simeq4.03508$ $\frac{46}{9}\simeq5.111$ 5 33 20 $\frac{520}{129}\simeq4.03100$ $\frac{52}{9}\simeq5.777$ : The smallest incidence numbers $b(4,2)|_h$ and the associated singular volume fraction $Vol(S^4)/Vol_{sing}(\sigma^0)$ as a function of the parameters $\alpha$ and $\beta$.[]{data-label="tavola1"} .05 cm Moreover, as we see in the next section, these triangulations have a subtle interplay with the particular singular geometry dominating in the strong coupling phase of $4$-D simplicial gravity: PL-manifolds with a single singular edge connecting two singular vertices. 0.5 cm In order to get the complete geometrical picture, one has to note that for $\alpha=2+8h$ and $\beta=3+13h$, we also get a highly degenerate configuration for which $$b_h(4,2)=\frac{160}{39}\simeq 4.102564$$ is a constant average incidence as $h$ varies, whereas $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}|_{h}=\frac{16+16h}{9}$, $h=0,1,2,\ldots$. In other words, corresponding to such value of $b(4,2)$ we have distinct triangulations with distinct ratios $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}|_{h}$ but with $b(4,2)$ fixed. Even if this set of triangulations contains configurations for which $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}|_{h}\simeq1.777$, such a degeneration makes any particular configuration at fixed $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}|_{h}$ entropically sub- dominating with respect to the generic configurations described by (\[criticalbs\]), at least as $N_4\to\infty$. The development of singular edges: ${\Bbb S}^4_{es}$ {#sedge} ---------------------------------------------------- The explicit construction of the previous section may suggest that the singular triangulations we are explicitly considering are characterized by the dominance of just one singular vertex. Actually, as the parameters $\alpha$ and $\beta$ vary, triangulations of ${\Bbb S}^4$ of the form $B^4\cup_{S^3}C(\partial{B^4})$ are not the only ones possible whose average incidence $b(4,2)$ takes on the value (\[crumpledb\]), [*at least as*]{} $N_4(S^4)\to\infty$. As a matter of fact, in the infinite volume limit, (but not at finite volume), triangulations, with more than one singular vertex and with singular edges are still characterized by the average incidence (\[crumpledb\]). Their dominance in the class of triangulations considered, as $N_4(S^4)\to\infty$, is driven by a rather simple entropic mechanism which we discuss in detail in this section. 0.5 cm Implicitly, the occurrence of more than one singular vertices may still be described by the construction $B^4\cup_{S^3}C(\partial{B^4})$, since one may simply consider the new singular vertices and edges to be located in the ball $B^4$. However, the interplay between dominance of one or more (edge-connected) singular vertices is most easily seen from a simple variant of the construction leading to (\[crumpledb\]). The generic singular triangulation of ${\Bbb S}^4$ is still realized by glueing two $4$- balls along an isometric ${\Bbb S}^3$ boundary which is again assumed to be a stacked $3$-sphere, [*i.e.*]{}, as $B_{es}^4\cup_{S^3}{B^4}$. 
However, one of the two balls, say the one denoted by $B_{es}^4$, ($es$ being an acronym for [*edge-singular*]{}), is no longer taken of the form of a cone $C(\partial{B^4})$ over the ${\Bbb S}^3$ boundary, but more generally is provided by a triangulation with $f$-vector $$\begin{aligned} N_0(B^4_{es})&=&\frac{1}{3}\sum_{j=1}^kN_3(B^3(j))+k\nonumber\\ N_1(B^4_{es})&=&\frac{5}{3}\sum_{j=1}^kN_3(B^3(j))+\frac{1}{2}\sum_{l= 1}^{k-1}N_2(S^2(l))+ 3(k-1)\nonumber\\ N_2(B^4_{es})&=&\frac{10}{3}\sum_{j=1}^kN_3(B^3(j))+2\sum_{l=1}^{k- 1}N_2(S^2(l))+ 2(k-1)\nonumber\\ N_3(B^4_{es})&=&3\sum_{j=1}^kN_3(B^3(j))+\frac{5}{2}\sum_{l=1}^{k- 1}N_2(S^2(l))\nonumber\\ N_4(B^4_{es})&=&\sum_{j=1}^kN_3(B^3(j))+\sum_{l=1}^{k-1}N_2(S^2(l)). \label{peanut}\end{aligned}$$ To grasp the geometrical origin of this $f$-vector imagine $k$ distinct $3$-spherical disks, $B^3(j)$, joined through $(k-1)$ ${\Bbb S}^2$-boundaries, ${\Bbb S}^2(l)$; a sort of $3$-dimensional [*peanut-shell*]{} with $k$-bulges and $k-1$ necks. This gives rise to a $3$-spherical peanut-shell ${\Bbb S}^3$, and we get a $4$-dimensional ball $B^4_{es}$ out of this ${\Bbb S}^3$ by considering $(k-1)$-edges $\{\sigma^1(l)\}_{l=1,\ldots,k-1}$ not belonging to ${\Bbb S}^3$ and connecting $k$ vertices $\{\sigma^0(j)\}_{j=1,\ldots,k}\notin{\Bbb S}^3$. The $4$-ball $B^4_{es}$ is defined by requiring that the generic $2$- spheres ${\Bbb S}^2(l)$ are the links (in $B^4_{es}$) of the corresponding edges $\sigma^1(l)$, $l=1,\ldots,k-1$. Moreover, the complex obtained by $B^4_{es}$ by removing the $(k-1)$ stars (in $B^4_{es}$) of the edges $\sigma^1(l)$, is assumed to be the disjoint union of $k$ cones $C(B^3(j))$ over the $3$-spherical disks $B^3(j)$, with apices in the $k$ vertices $\sigma^0(j)$. This construction can be roughly described as a $3$-dimensional peanut shell containing one rather than $k$ distinct $4$-dimensional nuts. It is easily verified that the $f$- vector $N_i(B^4_{es})$ (\[peanut\]) describes this generalized [*peanut triangulation*]{} and that it represents a triangulated $4$-ball with $N_0(B^4_{es})$ vertices, $k$ of which, $\{\sigma^0(j)\}_{j=1,\ldots,k}$ are interior vertices, ([*i.e.*]{}, $\sigma^0(j)\notin\partial{B^4_{es}}$), with $N_3(B^3(j))+N_2(S^2(j))$ $4$-simplices $\sigma^4$ incident on the $j$-th of them. The $j$-th of the $k-1$ interior links $\sigma^1(l)$, connects the vertex $\sigma^0(j)$ with $\sigma^0(j+1)$, and $N_2(S^2(j))$ $4$-dimensional simplices $\sigma^4$ are incident on it. Thus, if some, say $1\leq{s}\leq{k}$, of the $\{N_3(B^3(j)\}$ and the corresponding $s-1$ of the $\{N_2(S^2(l))\}$ grow with the simplicial volume of the ${\Bbb S}^4\supset{B^4_{es}}$, (not necessarily with the same rate), the triangulation of $B^4_{es}$ just constructed contains $s$ [*singular*]{} vertices connected by $s-1$ [*singular*]{} edges. Note that if we take the boundary of this triangulated $B^4_{es}$ we obtain a stacked $3$-sphere ${\Bbb S}^3$ with $f$-vector $$\begin{aligned} N_0(S^3)&=&\frac{1}{3}\sum_{j=1}^kN_3(B^3(j))\nonumber\\ N_1(S^3)&=&\frac{4}{3}\sum_{j=1}^kN_3(B^3(j))\nonumber\\ N_2(S^3)&=&2\sum_{j=1}^kN_3(B^3(j))\nonumber\\ N_3(S^3)&=&\sum_{j=1}^kN_3(B^3(j)).\end{aligned}$$ This ${\Bbb S}^3$ boundary of the $4$-dimensional ball $B^4_{es}$ may be profitably thought of as resulting from the connected sum, along isometric ${\Bbb S}^2$-boundaries of $k$ distinct stacked $3$-spheres ${\Bbb S}^3_i$, $i=1,\ldots{k}$, to be considered as the links (in an ${\Bbb S}^4$) of a corresponding singular vertex. 
In this way the singular ball $B^4_{es}$, (and the corresponding ${\Bbb S}^4$), can be considered as the kinematical set up for discussing the [*interaction*]{} of $k$ distinct singular vertices (of the type considered in the previous section). This picture allow also to prove an elementary but important result showing that, in the class of triangulations considered, the order of singularity of a singular edge $\sigma^1(l)$ in $B^4_{es}$ is subdominating with respect to the order of singularity of the corresponding vertices $\sigma^0(l-1)$ and $\sigma^0(l)$. In other words, in the large volume limit, the number of $4$-simplices incident on $\sigma^1(l)$ grows slower than the number of $4$-simplices incident on the vertices $\sigma^0(l-1)$ and $\sigma^0(l)$. As usual, the number of incident $4$-simplices can be considered as the (possibly) singular volume associated to the corresponding edge or vertex. Thus, if we denote such simplicial volumes by $Vol(\sigma^1(l))=\#\{\sigma^4\cap\sigma^1(l)\}$, $Vol(\sigma^0(l-1))=\#\{\sigma^4\cap\sigma^0(l-1)\}$, and $Vol(\sigma^0(l))=\#\{\sigma^4\cap\sigma^0(l)\}$, we have the following In the class of triangulations considered for $B^4_{es}$, $$\begin{aligned} \lim_{N_4(B^4_{es})\to\infty}\frac{Vol(\sigma^1(l))}{Vol(\sigma^0(l))} =0.\end{aligned}$$ \[subessedue\] (Obviously the same holds with $Vol(\sigma^0(l))$ replaced by $Vol(\sigma^0(l-1))$). In order to prove this result we may consider, without loss in generality, a $B^4_{es}$ whose ${\Bbb S}^3$-boundary consists of two stacked $3$-spheres ${\Bbb S}_1$ and ${\Bbb S}_2$ joined through an isometric ${\Bbb S}^2$. The more general case can be proved similarly without much effort. Again without loss of generality we may assume that the two isometric copies of ${\Bbb S}^2$ along which the two ${\Bbb S}^3_1$ and ${\Bbb S}^3_2$ are glued are the links of a vertex in the corresponding ${\Bbb S}^3_i$, (as remarked in the previous paragraph, this can be always arranged; also it can be easily shown that there are stacked $3$-spheres with a marked vertex whose $2$- spherical link grows linearly with the simplicial volume of the $3$- sphere-see section \[sss\] for an example in $4$-dimensions. The above lemma states that, in the case of stacked $3$-spheres, this large volume behavior for an ${\Bbb S}^2$ cannot hold if such a $2$-sphere is a joining neck). It immediately follows that the $f$-vector of $\partial{B^4_{es}}={\Bbb S}^3_1\cup_{S^2}{\Bbb S}^3_2$ can be written in terms of the $f$-vector of ${\Bbb S}^3_i$, $i=1,2$, and of ${\Bbb S}^2$ as $$\begin{aligned} N_0(S^3)&=&\frac{1}{3}[N_3(S^3_1)+N_3(S^3_2)]-N_2(S^2)+N_0(S^2)- 6\nonumber\\ N_1(S^3)&=&\frac{4}{3}[N_3(S^3_1)+N_3(S^3_2)]-4N_2(S^2)+N_1(S^2)- 4\nonumber\\ N_2(S^3)&=&2[N_3(S^3_1)+N_3(S^3_2)]-4N_2(S^2)\nonumber\\ N_3(S^3)&=&N_3(S^3_1)+N_3(S^3_2)-2N_2(S^2). \label{essetre}\end{aligned}$$ Since $N_1(S^2)=(3/2)N_2(S^2)$ we immediately get $$\begin{aligned} \frac{N_3(S^3)}{N_1(S^3)}=\frac{3}{4}\cdot\frac{N_3(S^3_1)+N_3(S^3_2)- 2N_2(S^2)}{N_3(S^3_1)+N_3(S^3_2)-\frac{15}{8}N_2(S^2)-\frac{12}{4}},\end{aligned}$$ which implies that $\frac{N_3(S^3)}{N_1(S^3)}>\frac{3}{4}$ as long as $\frac{N_2(S^2)}{N_3(S^3_i)}=O(1)$ in the large volume limit ([*i.e.*]{}, as $N_3(S^3)+N_2(S^2)\to\infty$). Thus $\partial{B^4_{es}}={\Bbb S}^3_1\cup_{S^2}{\Bbb S}^3_2$ can be a stacked $3$-sphere if and only if $$\begin{aligned} \lim_{N_3(S^3)\to\infty}\frac{N_2(S^2)}{N_3(S^3_i)}=0. 
\label{subdom}\end{aligned}$$ According to the above remarks $N_2(S^2)=Vol(\sigma^1(i))$ and $N_3(S^3_i)+N_2(S^2)=Vol(\sigma^0(i))$, and we can write (\[subdom\]) as $$\begin{aligned} \lim_{N_4(B^4_{es})\to\infty}\frac{Vol(\sigma^1(i))}{Vol(\sigma^0(i))- Vol(\sigma^1(i))}=0,\end{aligned}$$ from which the lemma follows. 0.5 cm This latter result only implies that the singular volume of the edge cannot grow [*linearly*]{} with the total volume of the ball $B^4_{es}$ (and of the resulting ${\Bbb S}^4$-see below). As we have seen in the previous section, a linear growth is instead typical for the singular volume associated to the vertices. It should be stressed that a subdominant rate of growth, (say with some fractional power of the total volume of $B^4_{es}$), is well in agreement with (\[subdom\]). As a matter of fact subdominant powers for the volume growth associated with a singular edge are the ones typically experienced in numerical simulations [@Catterall]. 0.5 cm Note that triangulations with $Vol(\sigma^1(i))$ as large as kinematically possible, thus growing with $[N_4(B^4_{es})]^{\delta}$ for some $0<\delta<1$, entropically dominate over triangulations of $B^4_{es}$ with $Vol(\sigma^1(i))=O(1)$. This remark follows as a direct consequence of the fact that triangulating $B^4_{es}$ under the hypothesis $N_2(S^2)=O(1)$, while sufficient to assure the validity of (\[subdom\]), it is not a necessary condition. It generates a subclass of constrained configurations in the class of triangulations of $B^4_{es}$ considered. Conversely, triangulations with $N_2(S^2)\propto[N_4(B^4_{es})]^{\delta}$ are, according to (\[subdom\]), unconstrained, and as such much more numerous at least in the large volume limit. 0.5 cm As in the previous section, we obtain a $4$-sphere ${\Bbb S}^4_{es}$, ([*es*]{} again for [*edge-singular*]{}), by glueing a generic triangulated ball $B^4$ with stacked ${\Bbb S}^3$ boundary to the singular $B^4_{es}$ defined by (\[peanut\]), [*viz.*]{}, ${\Bbb S}^4_{es}\simeq{B^4}\cup_{S^3}B^4_{es}$. It is easily checked that the $f$-vector of such an ${\Bbb S}^4_{es}$ is given by $$\begin{aligned} N_0(S^4_{es})&= & N_0(S^3)+ N_0(\hat{B}^4)+k\nonumber\\ N_1(S^4_{es})&= & N_1(S^3)+ N_0(S^3)+N_1(\hat{B}^4)+ \frac{1}{2}\sum_{l=1}^{k-1}N_2(S^2(l))+3(k-1)\nonumber\\ N_2(S^4_{es})&= & N_2(S^3)+ N_1(S^3)+N_2(\hat{B}^4)+ 2\sum_{l=1}^{k-1}N_2(S^2(l))+2(k-1)\nonumber\\ N_3(S^4_{es})&= & N_3(S^3)+ N_2(S^3)+N_3(\hat{B}^4)+ \frac{5}{2}\sum_{l=1}^{k-1}N_2(S^2(l))\nonumber\\ N_4(S^4_{es})&= & N_3(S^3)+ N_4(B^4)+\sum_{l=1}^{k-1}N_2(S^2(l)),\end{aligned}$$ where $N_i(S^3)$ denotes the $f$-vector of the joining stacked $3$- sphere $\partial(B^4_{es})\simeq{\Bbb S}^3$, and $N_i(\hat{B}^4)$ is the $f$-vector of the interior of $B^4$. According to (\[subdom\]), $N_2(S^2)/N_3(S^3)$ is asymptotically $o(1)$, thus, in the large volume limit, the average incidence $b(4,2)$ of such a triangulated ${\Bbb S}^4_{es}$ is still provided by the expression (\[crumpledb\]) introduced in the previous section, [*viz.*]{}, $$\begin{aligned} \lim_{N_4(S^4)\to\infty}b(4,2)|_{S^4_{es}}=10\cdot\frac{12+2\alpha}{30 +3\beta}, \label{crumpledb2}\end{aligned}$$ where the two parameters $\beta$ and $\alpha$ are again defined by (\[paruno\]) and (\[pardue\]), respectively. 
Before we proceed any further, we should emphasize that (\[crumpledb2\]) strictly speaking only holds in the limit $N_4(S^4)\to\infty$, and that [*at finite*]{} (but large) volume $N_4(S^4_{es})$, we have $b(4,2)|_{S^4_{es}}>b(4,2)$ with $$\begin{aligned} b(4,2)|_{S^4_{es}}=10\cdot\frac{12+2\alpha}{30+3\beta}+\eta\frac{\sum_ {l=1}^{k-1}N_2(S^2(l))}{N_3(S^3)}, \label{crumpledb3}\end{aligned}$$ for a suitable $\alpha$- and $\beta$-dependent constant $\eta>0$ which can be easily worked out. For instance, for the relevant case $k=2$, ([*i.e.*]{}, two singular vertex connected by a sub-singular edge), we get to leading order in $N_2(S^2)/N_3(S^3)$, $$\begin{aligned} b(4,2)|_{S^4_{es}}=10\cdot\frac{12+2\alpha}{30+3\beta}+ 10\cdot\frac{6+3\beta-4\alpha}{100+\beta^2+20\beta} \left[\frac{N_2(S^2)}{N_3(S^3)}\right]. \label{bexample}\end{aligned}$$ Since according to lemma \[subessedue\] the ratio $\frac{\sum_{l=1}^{k- 1}N_2(S^2(l))}{N_3(S^3)}$ can go to zero, in the large volume limit, as slowly as $N_3(S^3)^{\delta-1}$ for some $0<\delta<1$, we get the At finite volume $N_4(S^4)$, the singular-vertex triangulations ${\Bbb S}^4_{sv}\simeq{B^4}\cup_{S^3}C(\partial{B^4})$, considered in section \[singvertex\], are closer to the kinematical boundary $b(4,2)=4$ than the edge-singular triangulations ${\Bbb S}^4_{es}\simeq{B^4}\cup_{S^3}B^4_{es}$. \[closer\] We stress that this result does not imply that the singular-vertex triangulations ${\Bbb S}^4_{sv}\simeq{B^4}\cup_{S^3}C(\partial{B^4})$ entropically dominate in the large volume limit. For, according to (\[crumpledb2\]), the edge-singular triangulations become more and more important as the volume increases, and eventually in the infinite volume limit the triangulated spheres ${\Bbb S}^4_{es}$ enter in full entropic competition with the triangulated ${\Bbb S}^4_{sv}$ considered in the previous section. Actually this entropic competition comes into play quite rapidly as the volume increases. For instance, from (\[bexample\]), one gets that, for the dominating configurations at $h=0$, $$\begin{aligned} b(4,2)|_{S^4_{es}}\simeq \frac{110}{27}+\frac{100}{324}\cdot \left[\frac{N_2(S^2)}{N_3(S^3)}\right]. \label{edgexample}\end{aligned}$$ Numerical simulations at $N_4(S^4)=32000$, (see [*e.g.*]{}, [@Catterall] and [@singedge]) show evidence that $N_2(S^2)/N_3(S^3)<1/10$, thus the average incidence $b(4,2)|_{S^4_{es}}$ of ${\Bbb S}^4_{es}$ differs (at $h=0$) from the average incidence $b(4,2)|_{S^4_{vs}}$ of ${\Bbb S}^4_{vs}$ by less than $3/100$. Therefore it is important to understand how, as $N_4(S^4)$ increases, the $k$ distinct singular vertices (and the corresponding $k-1$ subsingular connecting edges) interact among them, and which configuration actually dominates in the large volume limit. 0.5 cm As we have seen in section \[singvertex\], the various singular triangulations of the $4$-sphere considered there are parameterized by the ratio between the total simplicial volume of the given ${\Bbb S}^4_{sv}$ and the simplicial volume of its singular part, (see (\[volsing\]) and (\[critvol\])). If we consider a similar ratio also for ${\Bbb S}^4_{es}$, [*i.e.*]{}, $$\begin{aligned} \frac{Vol(S^4_{es})}{Vol(sing)}= \frac{N_4(S^4_{es})}{N_4(B^4_{es})}, \label{sing3}\end{aligned}$$ then, as is easily verified, this ratio is still provided, in the large volume limit, by (\[volsing\]). 
It follows that the entropic comparison between the single singular vertex triangulations ${\Bbb S}^4_{sv}$ and the multiple singular vertices triangulations ${\Bbb S}^4_{es}$ should be carried out at a fixed value of the ratio $Vol(S^4_{es})/Vol(sing)=const.=Vol(S^4_{sv})/Vol_{sing}(\sigma^0)$. In our case $$\begin{aligned} N_4(B^4_{es})&=&\sum_{j=1}^kN_3(B^3(j))+\sum_{l=1}^{k- 1}N_2(S^2(l))\nonumber\\ &=&N_3(S^3)+\sum_{l=1}^{k-1}N_2(S^2(l)). \label{ennesse}\end{aligned}$$ According to the remarks following lemma \[subessedue\], unconstrained triangulations of ${\Bbb S}^4_{es}$ generally have $\sum_{l=1}^{k-1}N_2(S^2(l))=O(N_3(S^3)^{\delta})$ for some $0<\delta<1$. Thus $$\begin{aligned} \lim_{N_4(S^4)\to\infty}N_4(B^4_{es})/N_3(S^3)=1,\end{aligned}$$ and working at constant ratio (\[sing3\]), (in the infinite volume limit $N_4(S^4_{es})\to\infty$), implies that we have to consider triangulations of $B^4_{es}$ with $$\begin{aligned} N_3(S^3)=A_1\cdot N_4(S^4)\end{aligned}$$ and $$\begin{aligned} A_2\leq\sum_{l=1}^{k-1}N_2(S^2(l))\leq A_3\cdot N_3(S^3)^{\delta},\end{aligned}$$ for some positive constants $A_1$, $A_2$, and $A_3$. 0.5 cm Guided by these considerations we can easily get a set of entropic rules for determining which configuration dominates in the set of singular triangulations of ${\Bbb S}^4$. We start by an obvious adaptation of an argument in [@Catterall], according to which the number of distinct triangulations associated with a singular vertex, (the [*local entropy of the vertex*]{}), is provided by the number of distinct triangulations of the link of the given vertex. The link, $link(\sigma^0(j))$, around the $j$-th singular vertex $\sigma^0(j)\in{B^4_{es}}$, is a $3$-sphere ${\Bbb S}^3(j)$, and any two such links, ${\Bbb S}^3(j)$ and ${\Bbb S}^3(j+1)$, associated with two singular vertex connected by a singular edge $\sigma^1(j)$, have a non-empty intersection ${\Bbb S}^2(j)$, (the link of the connecting edge $\sigma^1(j)$). Thus, the inclusion-exclusion principle implies that the number, $Card[B^4_{es}(S^3(1),\ldots,S^3(k);S^2(1),\ldots,S^2(k-1))]$, of distinct triangulations of $B^4_{es}$ with given singular vertices $\{S^3(j)\}_{j=1,\ldots,k}$ and given singular edges $\{S^2(l)\}_{l=1,\ldots,k-1}$ is provided by $$\begin{aligned} Card[B^4_{es}(S^3(1),\ldots,S^3(k);S^2(1),\ldots,S^2(k- 1)))]=\frac{\prod_{j=1}^kCard[S^3(j)]}{\prod_{l=1}^{k-1}Card[S^2(l)]},\end{aligned}$$ where $Card[S^3(j)]$ and $Card[S^2(l)]$ respectively denote the number of distinct triangulations of the $3$-spherical links of the $j$-th singular vertex and of the $2$-spherical singular link of the $l$-th singular edge. Since each $S^3(j)$ is a stacked $3$-sphere, (hence with an average incidence $b(3,1)=\frac{9}{2}$), the microcanonical partition function (\[asintotica\]) immediately provides the leading order asymptotics both for $Card[S^3(j)]$ and $Card[S^2(l)]$, [*viz.*]{}, $$\begin{aligned} Card[S^3(j)]_{N_3(S^3(j))>>1}\simeq \left [ \frac{(b(3,1)-\hat{q}+1)^{b(3,1)-\hat{q}+1}}{(b(3,1)-\hat{q})^{b(3,1)- \hat{q}}} \right ] ^{N_1(S^3(j))},\end{aligned}$$ where $b(3,1)=\frac{9}{2}$, $\hat{q}=3$. 
Since $N_1(S^3(j))=\frac{4}{3}N_3(S^3(j))$, we get $$\begin{aligned} Card[S^3(j)]_{N_3(S^3(j))>>1}\simeq \left [ \frac{(\frac{5}{2})^{5/2}}{(\frac{3}{2})^{3/2}} \right ] ^{\frac{4}{3}N_3(S^3(j))}.\end{aligned}$$ Similarly, by setting $b(2,1)=6$, $\hat{q}=3$, and $N_0(S^2(l))=2+\frac{1}{2}N_2(S^2(l))$, (\[asintotica\]) provides $$\begin{aligned} Card[S^2(l)]_{N_2(S^2(l))>>1}\simeq \left [ \frac{(b(2,1)-\hat{q}+1)^{b(2,1)-\hat{q}+1}}{(b(2,1)-\hat{q})^{b(2,1)- \hat{q}}} \right ] ^{N_0(S^2(l))}= \left [ \frac{4^4}{3^3} \right ] ^{\frac{1}{2}N_2(S^2(l))}.\end{aligned}$$ Thus, by setting $C(2)\doteq[4^4/3^3]^{1/2}$ and $C(3)\doteq[(5/2)^{5/2}/(3/2)^{3/2}]^{4/3}$, we eventually get $$\begin{aligned} &&Card[B^4_{es}(S^3(1),\ldots,S^3(k);S^2(1),\ldots,S^2(k- 1))]\simeq\nonumber\\ &&\simeq\exp\left[\left (\sum_{j=1}^kN_3(S^3(j))\right )\ln{C(3)}- \left (\sum_{l=1}^{k-1}N_2(S^2(l))\right )\ln{C(2)}\right]. \label{cardo}\end{aligned}$$ Since $$\begin{aligned} \sum_{j=1}^kN_3(S^3(j))=N_3(S^3)+2\sum_{l=1}^{k-1}N_2(S^2(l)),\end{aligned}$$ where $S^3=\partial{B^4_{es}}$ is the stacked boundary of $B^4_{es}$, we can rewrite (\[cardo\]) as $$\begin{aligned} Card[B^4_{es}(S^3(1),\ldots;S^2(1),\ldots,)]\simeq C(3)^{N_3(S^3)}\cdot\left[ \frac{C(3)^2}{C(2)} \right]^{ \sum_{l=1}^{k-1}N_2(S^2(l))}, \label{Bentropy}\end{aligned}$$ (by exploiting (\[ennesse\]) this expression can be also rewritten in terms of $N_4(B^4_{es})$). Since $C(3)/C(2)>1$, we have that triangulations of $B^4_{es}$ with large $\sum_{l=1}^{k-1}N_2(S^2(l))$ are dominant in the infinite volume limit. This implies that the simplicial volume of the $k-1$ edges connecting the $k$ vertices is as large as possible. Note that (\[Bentropy\]) does not depend on the particular $S^3(j)$ or $S^2(l)$ but only on the fixed quantities $N_3(S^3)$ and $\sum_{l=1}^{k-1}N_2(S^2(l))$ determining the ratio between $N_4(S^4)$ and the volume of the singular part $B^4_{es}$ of ${\Bbb S}^4$, (see (\[sing3\]) and (\[ennesse\])). Thus, among all possible triangulations with $k$ distinct singular vertices connected by $k-1$ distinct edges, those entropically favored, as $k$ varies, are the less constrained ones, namely triangulations with just one singular edge connecting two singular vertices: the triangulations of $B^4_{es}$ with $k=2$. For such triangulations the $S^3$ links of the singular vertices and the $S^2$ link of the connecting edge are as large as kinematically possible. Note that for the triangulated $B^4=C(\partial{B^4})$ considered in section \[singvertex\] we have $$\begin{aligned} Card[C(\partial{B^4})]\simeq C(3)^{N_3(S^3)},\end{aligned}$$ and in the large volume limit $$\begin{aligned} &&Card[B^4_{es}(S^3(1),\ldots;S^2(1),\ldots,)]\simeq\nonumber\\ &&\simeq C(3)^{N_3(S^3)}\cdot\left[ \frac{C(3)^2}{C(2)} \right]^{ \sum_{l=1}^{k-1}N_2(S^2(l))} >Card[C(\partial{B^4})]\simeq C(3)^{N_3(S^3)}. 
\label{edgesingo}\end{aligned}$$ Since, as $N_4(S^4)$ increases, the triangulations ${\Bbb S}^4_{es}$ enter more and more in entropic competition with the single singular vertex triangulations ${\Bbb S}^4_{sv}$, (\[edgesingo\]) directly implies the following basic result For a given ratio $$\begin{aligned} \frac{Vol(S^4_{es})}{Vol(sing)}= \frac{N_4(S^4_{es})}{N_4(B^4_{es})}=\frac{22+6h}{9},\end{aligned}$$ with $h=0,1,2,\ldots,$, the singular triangulations of ${\Bbb S}^4$ which are closer to the kinematical boundary $b(4,2)=4$, and which entropically dominate in the large volume limit $N_4(S^4)\to\infty$, are realized by triangulations ${\Bbb S}^4_{es}$ with one sub-singular edge connecting two singular vertices, and are characterized by the average incidence $$\begin{aligned} b_h(4,2)=10\cdot\frac{22+6h}{54+15h}.\end{aligned}$$ \[edgelemma\] The last part of this lemma, concerning the $h$-parameterization of the singular triangulations, is an immediate consequence of the expressions (\[crumpledb2\]) and (\[crumpledb3\]) for the average incidence of ${\Bbb S}^4_{es}$ and of the results of section \[singvertex\]. Results which characterize the sets of value of $\alpha$ and $\beta$ giving the closest approach of $b(4,2)=10\cdot\frac{12+2\alpha}{30+3\beta}$ to the kinematical boundary $b(4,2)=4$ as the ratio $\frac{Vol(S^4_{es})}{Vol(sing)}$ varies. 0.5 cm The geometrical analysis just discussed and the Lemma \[edgelemma\] appear in good qualitative agreement with the picture which emerges from recent Monte Carlo simulations [@singedge] concerning the study of singular structures in $4$D simplicial gravity. According to such a numerical analysis there are, [*at finite volume*]{}, two pseudo-critical couplings (and hence corresponding pseudo-critical incidences $b(4,2)$) separately associated with the creation of singular edges and singular vertices. This behavior seem to correspond to the different entropic relevance of the single singular vertex triangulations ${\Bbb S}^4_{sv}$ and of the singular edge triangulations ${\Bbb S}^4_{es}$ discussed above. In the simulations the two pseudo-critical couplings lock into a single critical point in the large volume limit. This merging appears to be related to the full entropic competition between ${\Bbb S}^4_{sv}$ and ${\Bbb S}^4_{es}$ which dominates our geometrical picture in the infinite volume limit. Explicitly, the average incidence $b(4,2)|_{S^4_{es}}$, (see \[crumpledb3\]), is slightly larger (at finite volume) than $b(4,2)|_{S^4_{sv}}$. Thus, if we apply formula (\[solution\]) relating the average incidence $b(4,2)$ to a value of the coupling $k_2$, we find that the set of $k_2(S^4_{es})$’s corresponding to $b(4,2)|_{S^4_{es}}$, (as $h$ varies), is slightly smaller than the corresponding set of $k_2(S^4_{vs})$’s associated with $b(4,2)|_{S^4_{vs}}$. Anticipating the analysis of section \[secquattro\], this remark implies that there are indeed two pseudo-critical points respectively associated with edge- singular ${\Bbb S}^4_{es}$ and vertex-singular ${\Bbb S}^4_{vs}$ triangulations, say $k_2^{crit}(S^4_{es};N_4)$ and $k_2^{crit}(S^4_{vs};N_4)$, with $$\begin{aligned} k_2^{crit}(S^4_{es};N_4)\leq k_2^{crit}(S^4_{vs};N_4),\end{aligned}$$ and coalescing in just one critical point as $N_4$ gets larger and larger. 
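Both the entropic dominance argument and the incidence formula in the lemma above are easy to check numerically. The following sketch (Python, added only as an illustration and not part of the original argument) evaluates the constants $C(2)$ and $C(3)$ entering (\[Bentropy\]), confirming $C(3)/C(2)>1$ and $C(3)^2/C(2)>1$, and tabulates $b_h(4,2)=10\,(22+6h)/(54+15h)$ together with the volume ratio $(22+6h)/9$ for the first few values of $h$; these are the first columns of table \[tavola2\] below.

```python
from fractions import Fraction

# Entropic constants from the local-entropy estimates above
C2 = (4**4 / 3**3) ** 0.5                      # C(2) ~ 3.08
C3 = ((5/2)**(5/2) / (3/2)**(3/2)) ** (4/3)    # C(3) ~ 9.42
print(C3 / C2 > 1, C3**2 / C2)                 # True, ~28.8: a large singular-edge volume is favored

# Average incidence and singular-volume ratio of the edge-singular triangulations (lemma above)
for h in range(6):
    b_h = Fraction(10 * (22 + 6*h), 54 + 15*h)
    ratio = Fraction(22 + 6*h, 9)
    print(h, b_h, float(b_h), float(ratio))
```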
Obviously, what one actually sees at a given finite volume mostly depends on the rate $N_2(S^2)/N_3(S^3)$, (see \[crumpledb3\]), which controls how fast the two average incidences $b(4,2)|_{S^4_{es}}$ and $b(4,2)|_{S^4_{vs}}$ approach each other. On this rate we are not yet able to say anything substantial. As recalled, (see (\[edgexample\])), computer simulations indicates that at relatively large volumes, (tipycally $N_4=32000$), the term $N_2(S^2)/N_3(S^3)$ is already so small that $b(4,2)|_{S^4_{es}}\simeq{b(4,2)}|_{S^4_{vs}}$ up to a few percent, and edge-singular triangulations are to all effects as close to the kinematical boundary $b(4,2)=4$ as the ${\Bbb S}^4_{vs}$ are. Thus they do entropically dominate. The characterization of the critical incidence ---------------------------------------------- Since in the infinite volume limit both singular configurations ${\Bbb S}^4_{sv}$ and ${\Bbb S}^4_{es}$ are characterized by the same average incidence (\[crumpledb\]), we can use indifferently both for characterizing the critical incidence $b_0(4)$ signaling the closest approach of generic singular triangulations to the kinematical boundary $b(4,2)=4$. The single singular vertex configurations ${\Bbb S}^4_{sv}$ are somehow easier to handle than ${\Bbb S}^4_{es}$, thus for definiteness we describe the characterization of the critical incidence (and the corresponding critical gravitational coupling) by referring explicitly to ${\Bbb S}^4_{sv}\simeq{B^4}\cup_{S^3}C(\partial{B^4})$. In any case, one should keep in mind that the extension of the analysis to ${\Bbb S}^4_{es}$ can be carried out without difficulty along the same lines. 0.5 cm How can we characterize the critical incidence $b_0(4)$? A glance at table \[tavola1\] clearly shows that, as $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}$ increases, the values of $b(4,2)|_h$ are very close to each other. This remark implies that triangulations with $b(4,2)|_{h=0}=110/27$, even if entropically dominating in ${\Bbb S}^4_{sv}\simeq{B^4}\cup_{S^3}C(\partial{B^4})$, cannot be taken as the mark of the real critical incidence. As a matter of fact. for values of $h$ close to the leading configuration at $h=0$, there can be statistical competition between such singular triangulations, at least as $N_4\to\infty$. The critical incidence $b_0$ is actually obtained by averaging the distinct $b(4,2)|_h$’s over the set of corresponding singular triangulations. 0.5 cm To characterize such average we exploit the fact that the singular triangulations we are considering have their singular part constructed as a cone over a stacked $3$-sphere ${\Bbb S}^3$. If we join, through the identification of a marked $\sigma^3\in{\Bbb S}^3$, two stacked $3$-spheres, ${\Bbb S}^3(1)$ and ${\Bbb S}^3(2)$ we get another stacked $3$-sphere ${\Bbb S}^3(3)={\Bbb S}^3(1)\#{\Bbb S}^3(2)$, and all (voluminous) stacked spheres can be obtained in this way. Thus, if we construct the cone over this connected sum of stacked $3$-spheres we can sweep all possible voluminous ([*i.e.*]{}, large $N_4$) singular triangulations of the type we are considering. Explicitly, let us denote the singular triangulations of ${\Bbb S}^4$, obtained from the stacked $3$-spheres ${\Bbb S}^3(1)$ and ${\Bbb S}^3(2)$, by ${\Bbb S}^4(1)\doteq B^4(1)\cup_{S^3(1)}C({\Bbb S}^3(1))$ and ${\Bbb S}^4(2)\doteq B^4(2)\cup_{S^3(2)}C({\Bbb S}^3(2))$, respectively. 
If ${\Bbb S}^3(3)={\Bbb S}^3(1)\#_f{\Bbb S}^3(2)$, where $f$ is an homeomorphism between two marked $\sigma^3(1)\in{\Bbb S}^3(1)$ and $\sigma^3(2)\in{\Bbb S}^3(2)$, then $$\begin{aligned} {\Bbb S}^4(3)= {\Bbb S}^4(1)\#_{f*}{\Bbb S}^4(2)= (B^4(1)\cup_f B^4(2))\cup_{S^3(3)}C({\Bbb S}^3(3)), \label{compo}\end{aligned}$$ where $f*$ is the extension of $f$ to the cone over the marked $\sigma^3$, and every singular triangulations of ${\Bbb S}^4$ over a stacked $3$-sphere can be obtained in this way. 0.5 cm The analytical counterpart of (\[compo\]) follows directly from the last of relations (\[dellapalla\]) characterizing the $f$-vector of the ball $B^4$ as the parameters $\alpha$ and $\beta$, (thus $h$), vary. From it we get $$\begin{aligned} N_4[B^4(1)\cup_f{B^4(2)}] &=& \left[\frac{13+6h}{9}\right]{N}_1(S^3(1)) +\left[\frac{13+6h}{9}\right]{N}_1(S^3(2))\nonumber\\ &=&N_4[B^4(1)]+N_4[B^4(2)],\end{aligned}$$ where we have discarded costant terms which are $o(1)$ in the large $N_4$ limit. To exploit this information let $$\begin{aligned} {\cal T}_h[Vol(B^4)=N]\doteq{Card}\left\{{\Bbb S}^4\colon\frac{Vol_{norm}(S^4)}{Vol_{sing}(\sigma^0)}=\frac{6h+22}{9} \ ;\; Vol(B^4)=N \right\},\end{aligned}$$ be the cardinality of the set of distinct singular triangulations of the ball $B^4$, constructed over a stacked ${\Bbb S}^3$, with given ratio $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}$ and $N_4(B^4)=N$. According to the behavior of this set of triangulations under the connected sum we have $$\begin{aligned} {\cal T}_h\left[Vol(B^4)=N(1)+N(2)\right]={\cal T}_h\left[Vol(B^4)=N(1)\right]\cdot {\cal T}_h\left[Vol(B^4)=N(2)\right].\end{aligned}$$ It is easily verified that this relation implies that the leading asymptotics of ${\cal T}_h(Vol(B^4))$ is provided by $$\begin{aligned} {\cal T}_h(Vol(B^4))=c(B^4;h)^{N_4(B^4)}, \label{calpalla}\end{aligned}$$ where $\ln{c(B^4;h)}$ is the specific entropy for the generic $\sigma^4\in{\Bbb S}^4_{vs}$. Since there is a unique cone $C({\Bbb S}^3)$ over the stacked sphere boundary $\partial{B^4}\simeq{\Bbb S}^3$, (\[calpalla\]) provides also the leading exponential asymptotics to the number of distinct triangulations of ${\Bbb S}^4_{vs}$ with given $N_4$ and given $h$, [*viz.*]{}, $$\begin{aligned} Card\{{\Bbb S}^4_{vs}\}\propto c(B^4;h)^{N_4(B^4)}.\end{aligned}$$ Actually, when $h>>1$ and $N_3(S^3)=O(1)$, for each triangulation of ${\Bbb S}^3$, there can be a worth of $Aut(S^3)$ inequivalents cones, $Aut(S^3)$ denoting the automorphisms group of the given triangulation, for simplicity we disregard here these correction factors. Note also that the above construction applies to the edge-singular spheres ${\Bbb S}^4_{es}$ with minor modifications. According to (\[esseffe\]), $$\begin{aligned} N_4(B^4)=N_4(S^4)-N_3(S^3)=N_4(S^4)\frac{13+6h}{22+6h},\end{aligned}$$ thus we get that to leading order $$\begin{aligned} Card\{{\Bbb S}^4_{vs}\}=c(B^4;h)^{N_4(S^4)-N_3(S^3)}\doteq s(h)^{N_4(S^4)}, \label{ballentropy}\end{aligned}$$ where we have introduced the specific entropy, $\ln{s(h)}$, of a $\sigma^4\in{\Bbb S^4}_{sv}$ according to $$\begin{aligned} \ln{s(h)}&\doteq&\lim_{N_4(S^4)\to\infty} \frac{\ln{Card\{{\Bbb S}^4_{vs}\}}}{N_4(S^4)}\nonumber\\ &=&\frac{13+6h}{22+6h}\ln{c(B^4;h)}.\end{aligned}$$ In order to characterize $\ln{s(h)}$, note that triangulations of the form ${\Bbb S}^4_{vs}$ describe, for $h=0$, the generic singular triangulations of ${\Bbb S}^4$ realizing the closest approach to the kinematical boundary $b(4,2)=4$. 
Conversely, and as already stressed, the triangulations ${\Bbb S}^4_{vs}$ reduce, as $h\to\infty$, to the generic (branched polymer) triangulations of ${\Bbb S}^4$, (with a rooted $\sigma^4$). These remarks imply that corresponding to $h=0$ and $h=h_{max}$ we must have $$\begin{aligned} \ln{s(h=0)}&=&\ln{c(S^4;h=0)}\nonumber\\ \ln{s(h=h_{max})} &=&\ln{c(S^4;h=h_{max})},\end{aligned}$$ where $h_{max}$ is characterized by the value of the ratio (\[critvol\]) evaluated for the smallest possible $N_3(S^3)=5$, [*i.e.*]{}, $h_{max}=\frac{3}{10}N_4-\frac{11}{3}$, and where $\ln{c(S^4;h)}$ is the specific entropy associated with the microcanonical partition function (\[notation\]) , [*i.e.*]{}, $$\begin{aligned} c(S^4;h)\simeq \left [ \frac{(b(4,2)-2)^{b(4,2)-2}}{(b(4,2)-3)^{b(4,2)-3}} \right ] ^{10/b(4,2)}, \label{specific}\end{aligned}$$ with $b(4,2)=10\cdot(22+6h)/(54+15h)$, (the actual specific entropy contains a constant factor which is of no relevance for the present considerations-see (\[notation\])). Since $c(S^4;h)$ is a slowly varying function of $h$, the specific entropy $\ln{s(h)}$ can be characterized as the convex combination of $\ln{s(h=0)}$ and $\ln{s(h=h_{max})}$ over the interval $0\leq{h}\leq{h}_{max}$, [*viz.*]{}, $$\begin{aligned} \ln{s(h)}=\frac{h}{h_{max}}\ln{s(h=h_{max})}+\left(1- \frac{h}{h_{max}}\right)\ln{s(h=0)}.\end{aligned}$$ In other words, we are considering $\ln{s(h)}$ as the convex combination of the extreme pure phases ($h=0$: crumpling, and $h\to\infty$: branched polymer). A straightforward computation provides $$\begin{aligned} s(h)=c(S^4;h=0)\cdot\left[\frac{c(S^4;h=0)}{c(S^4;h=h_{max})}\right]^{ -\frac{10}{3N_4}h}. \label{probo}\end{aligned}$$ Since in the large $N_4(S^4)$ limit, $\ln[c(S^4;h=0)/c(S^4;h=h_{max})]\simeq0.06$ we eventually get for the leading asymptotics $$\begin{aligned} Card\{{\Bbb S}^4_{vs}\}=c(S^4;h=0)^{N_4}e^{-\frac{h}{5}}. \label{probability}\end{aligned}$$ It is worth stressing that a completely analogous result holds for $Card\{{\Bbb S}^4_{es}\}$, since, as $N_4\to\infty$, the set of edge-singular triangulations, (with one edge connecting two singular vertices), ${\Bbb S}^4_{es}|_{k=2}$, is as close to the kinematical boundary $b(4,2)=4$ as the triangulations ${\Bbb S}_{vs}$. The two class ${\Bbb S}^4_{es}$ and ${\Bbb S}^4_{vs}$ only differ in the subleading asymptotics. 0.5 cm According to (\[probability\]), the average value of $b(4,2)|_h$ over the set of singular triangulations considered is given, in the large $N_4$ limit, by $$\begin{aligned} {\langle}b(4,2)_{sing}{\rangle}|_{h_{max}} =\frac{\sum_{h=0}^{h_{max}}b(4,2)|_h\exp[- \frac{h}{5}]}{\sum_{h=0}^{h_{max}}\exp[-\frac{h}{5}]}. \label{singaverage}\end{aligned}$$ By approximating the numerator with an integral, we get $$\begin{aligned} {\langle}b(4,2)_{sing}{\rangle}|_{h_{max}} =4+\frac{4}{15}\cdot\frac{e^{\frac{18}{25}}[E_1(\frac{18}{25})- E_1(\frac{h_{max}}{5}+\frac{18}{25})]}{5(1-e^{-\frac{h_{max}}{5}})}, \label{accamax}\end{aligned}$$ where $E_1(x)$ is the exponential integral function. In the large volume limit $h_{max}\to\infty$, and the above expression reduces to $$\begin{aligned} {\langle}b(4,2)_{sing}{\rangle}=4+\frac{4}{75}e^{\frac{18}{25}}E_1(\frac{18}{25}) \simeq 4.0394361235. \label{critincidence}\end{aligned}$$ As stressed, a similar analysis carried out for the class of singular triangulations ${\Bbb S}^4_{es}$ would provide the same ${\langle}b(4,2)_{sing}{\rangle}$. 
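The closed-form value (\[critincidence\]) is straightforward to reproduce numerically; a minimal cross-check (Python with SciPy's exponential integral `exp1`, used here only to verify the arithmetic):

```python
import numpy as np
from scipy.special import exp1   # E_1(x)

b_sing = 4 + (4/75) * np.exp(18/25) * exp1(18/25)
print(b_sing)    # ~4.03944, in agreement with the quoted 4.0394361235
```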
It follows that, as $N_4(S^4)\to\infty$, (\[critincidence\]) is the value of the incidence $b(4,2)$ statistically dominating in both sets ${\Bbb S}^4_{sv}$ and ${\Bbb S}^4_{es}$. As argued in the previous sections, these triangulations are the ones characterizing the smallest possible $b(4,2)$ marking the onset of the dominance of singular geometries. Thus, we can identify ${\langle}b(4,2)_{sing} {\rangle}$ with the [*critical*]{} incidence $b_0$ (see section \[critica\]) characterizing the transition between the weak and the strong coupling phase of the theory, [*i.e.*]{}, $$\begin{aligned} b_0(4)\doteq {\langle}b(4,2)_{sing}{\rangle}\simeq 4.0394361235.\end{aligned}$$ 0.5 cm Together with the critical incidence ${\langle}b(4,2) {\rangle}$ it is worthwhile to compute the infinite volume average, over the set of singular triangulations ${\Bbb S}^4_{sv}$ or ${\Bbb S}^4_{es}$, of the local volume of the singular part of the triangulation, $Vol(sing)$. Note that for the class of triangulations ${\Bbb S}^4_{sv}$, $Vol(sing)=Vol(\sigma^0)$, whereas for the triangulations of ${\Bbb S}^4_{es}$ dominating in the infinite volume limit, we have $$\begin{aligned} Vol(sing)\simeq 2Vol(\sigma^0), \label{edgev}\end{aligned}$$ since according to the remarks of section \[sedge\] and lemma \[edgelemma\], in such a limit, triangulations with just two singular vertices (connected by a sub- singular edge) dominate. For both class of triangulations $Vol(S^4)/Vol(sing)=(22+6h)/9$, and the required average is provided by $$\begin{aligned} {\langle}\frac{Vol(sing)}{Vol(S^4)}{\rangle}|_{h_{max}} =\frac{\sum_{h=0}^{h_{max}}\exp[- \frac{h}{5}]\frac{9}{6h+22}}{\sum_{h=0}^{h_{max}}\exp[-\frac{h}{5}]}. \label{volaverage}\end{aligned}$$ (Strictly speaking, this ensemble average explicitly refers to the single singular vertex triangulations ${\Bbb S}^4_{sv}$, however, as stressed before, this ensemble average differs from the ${\Bbb S}^4_{es}$ ensemble average by corrections which vanish as $N_4(S^4)\to\infty$). By approximating as usual the summatories with an integral we get $$\begin{aligned} {\langle}\frac{Vol(sing)}{Vol(S^4)}{\rangle}|_{h_{max}} =\frac{3e^{11/15}}{10(1- e^{-h_{max}/5})}\cdot [E_1(\frac{11}{15})-E_1(\frac{h_{max}}{5}+\frac{11}{15})].\end{aligned}$$ According to (\[edgev\]), we get for the average local volume of the (two) most singular vertices, the explicit expression $$\begin{aligned} {\langle}Vol(\sigma^0){\rangle}|_{h_{max}} = \frac{3e^{11/15}}{20(1-e^{-h_{max}/5})}\cdot [E_1(\frac{11}{15})-E_1(\frac{h_{max}}{5}+\frac{11}{15})]\cdot{N_4}, \label{vollo}\end{aligned}$$ which, in the infinite volume limit, reduces to $$\begin{aligned} {\langle}Vol(\sigma^0){\rangle}=\frac{3e^{11/15}}{20} E_1(\frac{11}{15})\cdot N_4. \label{avertex}\end{aligned}$$ Note that the value of the critical average incidence ${\langle}b(4,2) {\rangle}\simeq4.03943\ldots$ shows that the leading configurations contributing to the singular geometry of ${\Bbb S}^4_{es}$ are, loosely speaking, those for which $h\leq 6$, (see table \[tavola2\]). 
Thus, a rough indicator of what is the average singular volume for $b(4,2)$ sufficiently smaller than ${\langle}b(4,2) {\rangle}\simeq4.03943\ldots$, ([*viz.*]{}, when in the polymeric phase), can be obtained by considering the average $$\begin{aligned} {\langle}\frac{Vol(sing)}{Vol(S^4)}{\rangle}|_{poly} =\frac{\sum_{h\geq6}^{h_{max}}\exp[- \frac{h}{5}]\frac{9}{6h+22}}{\sum_{h\geq6}^{h_{max}}\exp[- \frac{h}{5}]}.\end{aligned}$$ Explicitly we get $$\begin{aligned} {\langle}Vol(\sigma^0){\rangle}_{poly} =\frac{3e^{29/15}}{20} E_1(\frac{29}{15})\cdot N_4, \label{branchaverage}\end{aligned}$$ which can be interpreted as the contribution to ${\langle}Vol(\sigma^0){\rangle}$ coming from the non-singular geometries in ${\Bbb S}^4_{es}$. The critical coupling $k_2^{crit}$ {#secquattro} ================================== The kinematical picture which emerges from the above analysis is immediately connected to the thermodynamical behavior of $4D$- dynamical triangulations by recalling the results of section \[canone\] according to which, as $k_2$ varies the distribution of triangulated manifolds is strongly peaked around triangulations with an average incidence given by $3[A(k_2)/(A(k_2)-1)]$, (see (\[solution\])). Thus by solving for $k_2$ the equation $${\langle}b(4,2)_{sing}{\rangle}=3(\frac{A(k_{2})}{A(k_{2})-1}), \label{cappa}$$ we get an estimate of the value of $k_2$ corresponding to which singular triangulations start dominating the canonical partition function (\[can\]) in the [*infinite volume limit*]{}. Recall that singular triangulations are those characterizing the sub-exponential sub-leading asymptotics (see Th.5.2.1, pp.106-118 of [@Carfora]) $$\begin{aligned} \lefteqn{W[N_{2},b(4,2)]\simeq \label{asintotica4}}\\ &&e^{(\alpha_4b(4,2))N_2}\cdot{\left [ \frac{(b-\hat{q}+1)^{b-\hat{q}+1}}{(b-\hat{q})^{b-\hat{q}}} \right ] }^{N_{2}} e^{[-m(b(4,2))N^{1/n_H}_4]} {N_{2}}^{-\frac{11}{2}}, \nonumber\end{aligned}$$ with $m(b(4,2)>0$, (see (\[asintotica\]) for the general expression; the above expression can be obtained from (\[asintotica\]) by setting $n=4$, $\alpha_2=0$, $\tau(b)=0$, and $D=0$ since we are considering ${\Bbb S}^4$, we have also dropped a few inessential constant terms). Thus we can identify the $k_2$ solution of equation (\[cappa\]) with the [*critical value*]{}, $k_2^{crit}$, of the inverse gravitational coupling marking the transition between the strong and weak coupling in $4D$-simplicial quantum gravity. 0.5 cm Introducing in (\[cappa\]) the values ${\langle}b(4,2)_{sing}{\rangle}\simeq4.0394361235$ obtained above for the kinematical bound controlling the occurrence of generic singular triangulations, we get for the critical coupling the explicit value $$\begin{aligned} k_2^{crit}\simeq 1.3093. \label{cappacrit}\end{aligned}$$ A model for pseudo-criticality at finite $N_4(S^4)$ --------------------------------------------------- It is very interesting to compare the value for $k_2^{crit}$, already in very good agreement with what is found by means of Monte Carlo simulations, with the other $k_2^{h}$’s obtained by solving equation (\[cappa\]) with the left member ${\langle}b_{sing}(4,2){\rangle}$ replaced by the values $b_h(4,2)$ provided by (\[criticalbs\]). In this way we get table \[tavola2\]. 
  h    $b_h(4,2)$                       $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}|_h$   $k_2^h$
  --- -------------------------------- -------------------------------------------- -----------------
  0    $\frac{110}{27}\simeq4.07407$    2.444                                        $\simeq1.24465$
  1    $\frac{280}{69}\simeq4.0579$     3.111                                        $\simeq1.2744$
  2    $\frac{340}{84}\simeq4.04761$    3.777                                        $\simeq1.2938$
  3    $\frac{400}{99}\simeq4.0404$     4.444                                        $\simeq1.30746$
  4    $\frac{460}{114}\simeq4.03508$   5.111                                        $\simeq1.31762$
  5    $\frac{520}{129}\simeq4.03100$   5.777                                        $\simeq1.32545$

  : Some of the values of $k_2^{h}$ obtained by solving equation (\[cappa\]) for $b_h(4,2)$ as $h$ varies. Such values appear strikingly near to the values of the pseudo-critical points found in Monte Carlo simulations as the size of the triangulations considered is increased.[]{data-label="tavola2"}

According to the remarks in the previous paragraph, $k_2^{h}$, $h=1,2,\ldots$, can be interpreted as the values of the inverse gravitational coupling corresponding to which the sub-leading singular configurations come into play. In other words, corresponding to such values of $k_2$ there are [*distinct peaks*]{} in the distribution of singular triangulations of ${\Bbb S}^4_{es}$. The [*leading peak*]{} is at $k_2=k_2^{h=0}\simeq1.24465$; this corresponds to the dominance of singular triangulations for which $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}|_{h}=22/9$; the [*first sub-leading peak*]{} occurs at $k_2=k_2^{h=1}\simeq1.2744$, corresponding to the sub-dominance of singular triangulations for which $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}|_{h}=28/9$; the [*second sub-leading peak*]{} occurs at $k_2=k_2^{h=2}\simeq1.2938$ and is associated with the sub-dominance of singular triangulations for which $\frac{Vol(S^4)}{Vol_{sing}(\sigma^0)}|_{h}=34/9$, and so on. In the large $N_4$ limit there is enough [*phase space*]{} for having all such peaks contributing to the partition function of the theory, and the presence of the sub-dominating peaks lowers the critical incidence from its [*bare*]{} value $b(4,2)|_{h=0}$ to ${\langle}b(4,2)_{sing}{\rangle}$, and shifts the critical $k_2^{crit}$ from the [*bare*]{} value $1.24465$ to its effective value $k_2^{crit}\simeq1.3093$. Using a field-theoretic image, one may say that in the large volume limit the fluctuations associated with the various sub-dominating peaks in the distribution of singular triangulations dress the bare critical incidence. Conversely, at a finite value of $N_4$ one expects that the resulting average ${\langle}b(4,2)_{sing}{\rangle}(N_4)$, computed from (\[singaverage\]) with $h\leq\bar{h}(N_4)\leq{h}_{max}$, for some $\bar{h}(N_4)$, is larger than the limiting value ${\langle}b(4,2)_{sing}{\rangle}$. Corresponding to this ${\langle}b(4,2)_{sing}{\rangle}(N_4)$ one gets an $N_4$-dependent pseudo-critical point $k_2^{crit}(N_4)$ smaller than the actual $k_2^{crit}$. Roughly speaking, at finite volume, there is no phase space available for having all sub-dominating peaks competing with each other according to their relative entropic relevance. Moreover, at finite volume we should distinguish which kind of singular geometry we are dealing with. According to lemma \[closer\] and (\[crumpledb3\]), the average incidence is larger for the edge-singular triangulations ${\Bbb S}^4_{es}$ than for the single singular vertex triangulations ${\Bbb S}^4$. Thus, corresponding to ${\Bbb S}^4$ or ${\Bbb S}^4_{es}$ we should get a slightly different sequence of pseudo-critical points, (according to (\[cappa\]), $k_2^{crit}(N_4)|(S^4_{es})\leq k_2^{crit}(N_4)|(S^4)$).
This difference fades away as the volume increases.

In order to make contact with numerical simulations it is worthwhile to develop an analytical model taking care of these [*finite size*]{} effects. Again for simplicity, let us limit our analysis to the vertex singular triangulations ${\Bbb S}^4_{vs}$, with the understanding that what we say can be easily extended to the edge-singular triangulations ${\Bbb S}^4_{es}$ with minor modifications. The starting point of our analysis is the entropic formula (\[probability\]) expressing, as $h$ varies, the entropy of the triangulations ${\Bbb S}^4_{vs}$ as a convex combination of its two extreme [*pure phases*]{} associated with crumpling ($h=0$) and polymerization ($h=h_{max}\to\infty$). Rather than using (\[probability\]) directly, we refer to the conditional entropy $$\begin{aligned} \frac{Card{\Bbb S}^4_{vs}}{Card{\Bbb S}^4}\end{aligned}$$ which provides the contribution of the triangulations ${\Bbb S}^4_{vs}$ to the set of all possible triangulations of ${\Bbb S}^4$, at fixed volume. From (\[probability\]) and (\[asintotica\]) we get, to leading order in the large $N_4(S^4)$ limit, $$\begin{aligned} \frac{Card{\Bbb S}^4_{vs}}{Card{\Bbb S}^4}\simeq \Omega^{N_4(S^4)}e^{-\frac{h}{5}}, \label{condprob}\end{aligned}$$ where $\Omega$ is the $h$-dependent constant $$\begin{aligned} \Omega\doteq\frac{c(S^4;h=0)}{c(S^4;h)}\simeq33.97082\cdot \left [ \frac{(b(4,2)-2)^{b(4,2)-2}}{(b(4,2)-3)^{b(4,2)-3}} \right ] ^{-10/b(4,2)}, \label{omecostant}\end{aligned}$$ with $b(4,2)=10\cdot(22+6h)/(54+15h)$. The expression (\[condprob\]) for the conditional entropy holds at finite, sufficiently large $N_4(S^4)$, and, since $\frac{Card{\Bbb S}^4_{vs}}{Card{\Bbb S}^4}\leq1$, it implies that, at [*finite volume*]{}, triangulations ${\Bbb S}^4$ with $h>>N_4(S^4)\ln\Omega$ are entropically suppressed. This remark implies that the configurations ${\Bbb S}^4_{vs}$, which actually contribute to characterizing the critical incidence, have an entropic cut at some value of $h$, say $\bar{h}(N_4)=O(N_4(S^4)\ln\Omega)$. The specific entropy $\ln{c}(S^4;h)$ of $\{{\Bbb S}^4\}$ changes very slowly with $h$, thus at finite $N_4(S^4)\doteq{N}$, we may tentatively write $$\begin{aligned} \left(\frac{Card{\Bbb S}^4_{vs}}{Card{\Bbb S}^4}\right)_N=\Omega_0^{N_4}e^{-\frac{h}{5}}, \label{fsize}\end{aligned}$$ for $0\leq{h}\leq\bar{h}(N)$, whereas $$\begin{aligned} \left(\frac{Card{\Bbb S}^4_{vs}}{Card{\Bbb S}^4}\right)_N =\Omega_{h=h_{max}}^{N_4}, \label{polysize}\end{aligned}$$ for $\bar{h}(N)<{h}\leq{h}_{max}$, and where $\Omega_0=1+\epsilon$, $\epsilon>0$, is a suitable constant not differing much from $1$, (according to (\[omecostant\]), $\Omega|(h=1)\simeq 1.01234$, and $\Omega|(h=10^5)\simeq1.0615$). In other words, we are assuming that for $0\leq{h}\leq\bar{h}(N)$ the system may exist as a mixture of its two extreme pure phases, whereas for $h>\bar{h}(N)$ it collapses into its branched polymer phase. It is worthwhile stressing that, more realistically, one may consider, in place of (\[fsize\]), a convex combination of the extreme phase $h=0$ and the (non-extreme) phase corresponding to $h=\bar{h}(N)$. By exploiting (\[probo\]), this prescription can be worked out without difficulty; however, it gives rise to a rather complex scaling behavior of the resulting entropy. Moreover, the fact that $c(S^4;h)$ is a slowly varying function of $h$ makes, as we shall see, the simpler (\[fsize\]) quite accurate and much easier to handle.
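Since the size of $\Omega$ drives the finite-size model, it is worth evaluating (\[omecostant\]) directly. A small sketch (Python; an illustration rather than part of the argument) computes $c(S^4;h)$ from (\[specific\]) and the resulting $\Omega$, reproducing the values just quoted:

```python
import numpy as np

def b42(h):
    # average incidence of the dominating singular triangulations
    return 10 * (22 + 6*h) / (54 + 15*h)

def c_specific(h):
    # specific entropy factor c(S^4; h), eq. (specific), up to an irrelevant constant
    b = b42(h)
    return ((b - 2)**(b - 2) / (b - 3)**(b - 3)) ** (10 / b)

Omega = lambda h: c_specific(0) / c_specific(h)
print(c_specific(0))            # ~33.97
print(Omega(1), Omega(1e5))     # ~1.012 and ~1.06, consistent with the estimates above
```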
A qualitative characterization of $\bar{h}(N)$ as $N$ varies can be easily obtained by the obvious scaling properties of (\[fsize\]). If we consider triangulations ${\Bbb S}^4_{vs}$ with two distinct volumes, say $N_4(S^4)=N(1)$ and $N_4(S^4)=N(2)$, then $$\begin{aligned} \left(\frac{Card{\Bbb S}^4_{vs}}{Card{\Bbb S}^4}\right)_{N_4=N(1)} =\left(\frac{Card{\Bbb S}^4_{vs}}{Card{\Bbb S}^4}\right)_{N_4=N(2)},\end{aligned}$$ provided that $\bar{h}(N)$ scales with $N$ according to $$\begin{aligned} \bar{h}(N(2))=\bar{h}(N(1))+5[N(2)-N(1)]\ln\Omega_0. \label{hscaling}\end{aligned}$$ This scaling relation implies that $\bar{h}(N)$ has a linear dependence on $N_4(S^4)$ according to $$\begin{aligned} \bar{h}(N_4)=5{N_4}\ln\Omega_0+\xi, \label{accaeffetto}\end{aligned}$$ where $\xi$ is a suitable constant. This rather simple argument does not yet provide the actual value of the constants $\Omega_0$ and $\xi$, however confrontation with numerical data at $N_4(S^4)=32000$ indicates as reliable candidates the values $$\begin{aligned} 5\ln\Omega_0&=&\frac{1}{16000}\nonumber\\ \xi &=& -1.\end{aligned}$$ Note that the above condition for $\Omega_0$ implies $\Omega_0\simeq1.0000125$, a value which is perfectly consistent with the above characterization of $\Omega$, (see (\[omecostant\]). It also indicates that the triangulations ${\Bbb S}^4_{vs}$, (actually the entropically dominating ${\Bbb S}^4_{es}$), do saturate the possible set of triangulations of ${\Bbb S}^4$ in the strong coupling phase. The $N_4$-dependent pseudo-critical incidence ${\langle}b(4,2)_{sing}{\rangle}(N_4)$ and the associated pseudo-critical point $k_2^{crit}(N_4)$ can be easily obtained from (\[accamax\]) by replacing $h_{max}$ with $\bar{h}(N)$, [*viz.*]{}, $$\begin{aligned} {\langle}b(4,2)_{sing}{\rangle}(N_4) =4+\frac{4}{15}\cdot\frac{e^{\frac{18}{25}}[E_1(\frac{18}{25})- E_1(\frac{\bar{h}(N)}{5}+\frac{18}{25})]}{5(1-e^{- \frac{\bar{h}(N)}{5}})}, \label{Naccamax}\end{aligned}$$ and by solving for $k_2$ the equation (\[cappa\]) with ${\langle}b(4,2)_{sing}{\rangle}(N_4)$ in place of ${\langle}b(4,2)_{sing}{\rangle}$. 0.5 cm By exploiting these results we get an overall analytic picture of the large volume behavior of $4$-dimensional simplicial quantum gravity which is in a surprising agreement with the Monte Carlo simulations of the real system [@singedge]. Comparison with Numerical Work ============================== At this stage it is indeed useful to discuss the status of our geometrical results in the light of the most recent numerical work. This comparison is particularly important since , as recalled in the introductory remarks, the current perspective on $4$-dimensional simplicial quantum gravity has undergone a rather drastic change. As a matter of fact, recent Monte Carlo simulations seem to accumulate more and more evidence for a first order nature of the transition separating the strong and the weak coupling regime of the theory. Taken at face value this result suggests that dynamical triangulations is not likely to be a viable model of quantum gravity unless one adds additional terms to the action. It is perhaps fair to say that the geometrical analysis of the previous paragraphs bears relevance to such an issue. The characterization of the critical coupling $k_2^{crit}$ and the existence of [*entropically sub-dominating peaks*]{} in the distribution of singular triangulations strongly indicates that this geometrical picture may be responsible for the phenomenology we see in numerical work. 
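To make the finite-size model concrete before turning to the numerical comparison, the linear ansatz (\[accaeffetto\]) can be combined with (\[Naccamax\]). The sketch below (Python; the constants $5\ln\Omega_0=1/16000$ and $\xi=-1$ are those quoted above, and converting the resulting incidences into the pseudo-critical couplings discussed next would additionally require the function $A(k_2)$ of (\[solution\]), defined in an earlier section and not reproduced here) evaluates $\bar{h}(N_4)$ and the pseudo-critical incidence ${\langle}b(4,2)_{sing}{\rangle}(N_4)$ at the volumes used in the simulations:

```python
import numpy as np
from scipy.special import exp1   # E_1(x)

def h_bar(N4):
    # entropic cut-off, eq. (accaeffetto) with 5*ln(Omega_0) = 1/16000 and xi = -1
    return N4 / 16000 - 1

def b_sing(N4):
    # N_4-dependent pseudo-critical incidence, eq. (Naccamax)
    h = h_bar(N4)
    num = np.exp(18/25) * (exp1(18/25) - exp1(h/5 + 18/25))
    return 4 + (4/15) * num / (5 * (1 - np.exp(-h/5)))

for N4 in (32000, 48000, 64000):
    print(N4, h_bar(N4), b_sing(N4))
```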
Let us start by noticing that in numerical work it is difficult to resolve the various contributions to the distribution of singular triangulations coming from the various peaks geometrically found by our analysis. The resolving power depends, among other parameters, on the size of the triangulations, and as a rough indicator, the larger the size, the bigger the set of sub-dominating singular triangulations which come into play. Obviously the first sub-dominant terms are the most relevant ones, and as suggested in the previous section, an interesting value to look at for comparison with Monte Carlo data is the value of the inverse gravitational coupling corresponding to the pseudo-critical average incidence ${\langle}b(4,2)_{sing}{\rangle}(N_4)$. As recalled there, by solving for $k_2$ the equation $${\langle}b(4,2)_{sing}{\rangle}(N_4)=3(\frac{A(k_{2})}{A(k_{2})-1}), \label{avcappa}$$ we obtain the value of $k_2^{crit}(N_4)$ corresponding to which we expect to see a clear signature of the dominance of singular geometries in the set of triangulated $4$-spheres of volume $N_4$. This is actually a pseudo-critical point, the location of which depends on $N_4$. Numerically one finds that as the [*volume*]{} $N_4$ of the triangulation increases, the corresponding pseudo-critical point $k_2^{crit}(N_4)$ increases too, (see [*e.g.*]{} [@Krzywicki]). Simulations and extrapolation to triangulations with size $N_4=48000$ and $N_4=64000$ locate the corresponding $k_2^{crit}(N_4)$ at $1.267$ and $1.273$, respectively.

According to (\[accaeffetto\]) the actual dependence of the number of dominating peaks, $\bar{h}(N_4)$, as a function of the volume $N_4$ of the triangulation, is linear according to $$\bar{h}(N_4)=\frac{N_4}{16000}-1,$$ for $N_4(S^4)\geq32000$, where the actual value, ($5\ln\Omega_0=1/16000$ and $\xi=-1$), of the constants comes from comparison with the numerical data provided at $N_4(S^4)=32000$ by [@Krzywicki]. With this expression of $\bar{h}(N_4)$ we obtain, from (\[avcappa\]) and (\[Naccamax\]), table \[finitesize\].

  $N_4$    h    Analytical $k_2^{crit}(N_4)$   Monte Carlo $k_2^{crit}(N_4)$
  ------- ---- ------------------------------ -------------------------------
  32000    1    1.25795                        1.258
  48000    2    1.26752                        1.267
  64000    3    1.27466                        1.273

  : The value of the analytical pseudo-critical points $k_2^{crit}(N_4)$ versus their Monte Carlo counterparts. These values are computed under the hypothesis that the linear dependence of $\bar{h}(N_4)$ on $N_4(S^4)$ is given by $\bar{h}(N_4)=N_4/16000-1$.[]{data-label="finitesize"}

The agreement between the analytical pseudo-critical points and the Monte Carlo values appears surprisingly good, and suggests that the identification of our $k_2^{crit}(N_4)$ with the pseudo-critical $k_2^{crit}(N_4)$ found in Monte Carlo simulations is not a mere coincidence. An important implication of this identification, if correct, is that the growth with $N_4$ of $k_2^{crit}(N_4)$ is due to the increasing contribution of the sub-dominating singular triangulations. This result provides a nice explanation of the fact that Monte Carlo data seem to indicate that the major part of the finite size effects comes from the crumpled phase [@Jurke]. By extrapolating the actual measurements, the Monte Carlo simulations locate the critical point around $k_2^*\simeq1.327$ or around $k_2^*\simeq1.293$, (depending on whether the data fit used is modeled after a second order or a first order transition, respectively) [@Krzywicki].
Again, our analytical result $k_2^{crit}\simeq1.3093$ appears in quite a good agreement with the numerical data, (curiously enough our $k_2^{crit}$ is, with a good approximation, the average of the above two numerical data), and moreover its analytical characterization provides a natural entropic explanation to the structure and location of the associated finite-size pseudo-critical points. 0.5 cm Another distinct feature of recent numerical works concerns the bimodality in the distribution of singular vertices seen during Monte Carlo simulations exactly around $k_2^{crit}(N_4=32000)\simeq 1.258$, [@Krzywicki]. In this connection, particularly interesting are the papers [@singedge] and [@Bialas], where long run histories (at $N_4(S^4)=32000$) provides a reliable measurement of the average maximum vertex order near the critical point. In these simulations the system wanders between two states characterized by two quite distinct values of the average maximum vertex order. In one case, this maximum is close to $3000$, while for the other the figure is close to $1000$. A correlation analysis shows that this metastability corresponds to tunneling back and forth from a branched polymer state (average vertex order $\simeq1000$) containing no singular vertex and a crumpled state (average vertex order $\simeq3000$) with one or two singular vertices. According to our analysis, this behavior is the one exactly coded into the entropy formulae (\[fsize\]) and (\[polysize\]) which exactly describe a finite size tunnelling between a crumpled state (described by (\[fsize\])) and a branched polymer state (described by (\[polysize\]). A good indication of the average vertex order, as we approach the transition point for increasing $k_2$, is provided by (\[avertex\]) At $N_4(S^4)=32000$ this analytic formula yields $$\begin{aligned} {\langle}Vol(\sigma^0){\rangle}_{N_4=32000} =\frac{3e^{11/15}}{20} E_1(\frac{11}{15})\cdot (N_4=32000)\simeq3400. \label{numvertex}\end{aligned}$$ Conversely, if we approach the transition point by lowering $k_2$, then a reliable indication is provided by (\[branchaverage\]). Explicitly we get $$\begin{aligned} {\langle}Vol(\sigma^0){\rangle}_{poly} =\frac{3e^{29/15}}{20} E_1(\frac{29}{15})\cdot (N_4=32000)\simeq1770. \label{numtex}\end{aligned}$$ Such results appear quite in reasonable agreement with the values of ${\langle}Vol(\sigma^0){\rangle}_{N_4=32000}$ obtained during the simulations and mentioned before. Such data suggests that the bimodality seen in the numerical simulation has its origin in the presence of sub-dominating singular triangulations. In particular, due to finite size effects the set of subdominating singular triangulations ${\Bbb S}^4_{es}$ for $h=0,1,\ldots,6$ seems to provide a metastable cluster of configurations that entropically dominate the crumpled state. 0.5 cm Taken at face value, this set of results seem to indicate, at least to the indulgent reader, a variety of viewpoints on the actual status of a theoretical interpretation of the numerical simulations: [*(I)*]{} The bimodality as well as the implied first order interpretation of the transition between weak and strong coupling is a finite size effect related to: [*(i)*]{} The saturation of the triangulations of $\{{\Bbb S}^4\}$ with ${\Bbb S}^4_{es}$ in the strong coupling phase; [*(ii)*]{} The slow dependence of the specific entropy, $\ln{c}(S^4;h)$, of $\{{\Bbb S}^4\}$ from the parameter $h$ controlling the volume of the singular part of the triangulation. 
This slow variation may be responsible for the fact that the tunnelling does not disappear as the volume of the triangulations increases. Obviously, this latter remark can be easily turned inside out to favour a less optimistic point of view: [*(II)*]{} The slow $h$-variation in $\ln{c}(S^4;h)$ may well be such as to maintain the bimodality for larger and larger volumes: we have a genuine first order transition.

It is rather clear that our analysis, being based on a sort of mean field approximation, cannot distinguish clearly between these two scenarios: we need sharper entropic estimates. Even if our analysis falls shamefully short of providing answers to the headlines that numerical simulations score, we wish to conclude with a final example pointing to a constructive way of using our analytical entropy estimates. This final point concerns the $k_2$ dependence of the two normalized cumulants of the distribution of the number of vertices of the triangulation, $c_1(N_4;k_2)$ and $c_2(N_4;k_2)$, whose analytic expressions are explicitly provided by (\[Rumulant1\]) and (\[Rumulant2\]). Strictly speaking, these expressions are accurate only near the actual critical average incidence $b_0$; however, we can use them quite safely in a rather larger range of variation of $k_2$, (due to the slow variation of $b(4,2)$ as a function of $k_2$). Accurate Monte Carlo measurements of such cumulants have been reported in [@Krzywicki], and by referring to these data for $N_4=32000$, the comparison between the Monte Carlo data and our analytic results for $c_1(N_4;k_2)$ and $c_2(N_4;k_2)$ is shown in table \[compare\].

  $k_2$   $c_1(N_4;k_2)$   $c_1$ (Monte Carlo)   $c_2(N_4;k_2)$   $c_2$ (Monte Carlo)
  ------- ---------------- --------------------- ---------------- ---------------------
  1.240   0.1935053        0.18970(12)           0.109062         0.141(7)
  1.246   0.1945674        0.19150(11)           0.1194586        0.144(8)
  1.252   0.1956271        0.19399(32)           0.1465348        0.254(35)
  1.258   0.1966846        0.19712(20)           0.3996907        0.316(8)
  1.264   0.1977398        0.20052(21)           0.1844987        0.118(20)
  1.270   0.1987927        0.20085(27)           0.1274851        0.118(20)

  : A comparison between the analytical values and the available Monte Carlo data for the first two cumulants of the distribution of the number of vertices of the triangulation.[]{data-label="compare"}

The agreement between the analytical cumulant $c_1(k_2;N_4)$ and its Monte Carlo counterpart is particularly good; (note that for a better comparison with the numerical data we have actually used in (\[Rumulant2\]) an average between $b(4,2)|_{h=0}$ and $b(4,2)|_{h=1}$ so as to shift from $k_2^{crit}\simeq1.3093$ to a pseudo-critical $k_2^{crit}(N_4)\simeq1.258$). Slightly less impressive is the agreement between the second cumulants, but this is to be expected since near the pseudo-critical point $k_2^{crit}(N_4)$, the second cumulant $c_2(N_4;k_2)$ fluctuates quite wildly. We wish to stress that such an agreement rests both on the rigorous asymptotics (\[cumulant1\]), (\[cumulant2\]) and on the scaling [*hypotheses*]{} $$m(k_2)=\frac{1}{\nu}|\frac{1}{b(k_2)}-\frac{1}{b_0}|^{\nu}, \label{assumption2}$$ and $$\lim_{\matrix{{\scriptstyle N_4\to\infty}\cr {\scriptstyle k_2\to k_2^{crit}}}} |\frac{1}{b(k_2)}-\frac{1}{b_0}|^{\nu-1}\cdot{N_4}^{\frac{1}{n_H}-1}= \mbox{const.} \label{scaling2}$$ The best agreement, used in table \[compare\], is obtained by choosing $\nu \approx 0.94$. Eq.
(\[assumption2\]) is nothing but a natural consequence of the vanishing of the parameter $m(b)$ for $b(2,4)\to{b}_0$, whereas the second condition (\[scaling2\]) rests on a less firm ground and must be considered as a working hypothesis to be better substantiated.

Some of the results discussed above show that the numerical evidence pointing toward a first order nature of the transition can be explained in a natural geometrical framework. The bimodality, which has been underlined as a strong indication that the transition is of first order, is well explained by the presence of entropically sub-dominant peaks in the distribution of singular triangulations. Similarly to what has been argued by Catterall et al. [@Catterall], the system tunnels among such distinct sub-dominant configurations, with some of these configurations being meta-stable for finite $N_4$ (especially those with $h\simeq0,1,\ldots$ which dominate the crumpled phase, and those with $h\gg1$ characterizing the branched polymer phase). Of course the analytical arguments provided by us are all based on a kind of mean-field approximation, since we consider only a restricted class of triangulations. Mean-field analysis is in general not very reliable when it comes to predicting the [*order*]{} of a phase transition. However, in this case we have seen that, combined with an additional scaling assumption, it gives reasonable agreement with the Monte Carlo data for $k_2^{c}(N_4)$, $c_1(N_4)$ and $c_2(N_4)$. This might indicate a validity beyond that usually provided by a mean-field approximation. A good test of the reliability of the geometric truncation used in the present work is to apply it to the more complicated system of 4d simplicial quantum gravity coupled to Abelian gauge fields. In that system one seemingly observes a new interesting phase structure [@new], different from the branched polymer and crumpled phases originally reported in [@original].

References {#references .unnumbered}
==========

J. Ambjørn, M. Carfora, A. Marzuoli, The Geometry of Dynamical Triangulations, Lecture Notes in Phys. m50 (Springer, 1997).

J. Ambjørn, Quantization of Geometry, lectures given at the Les Houches NATO A.S.I.: Fluctuating Geometries in Statistical Mechanics and Field Theory, Session LXII, 1994; J. Ambjørn, B. Durhuus, T. Jónsson, Quantization of Geometry (Cambridge Monographs in Math. Phys., 1997).

T. Regge, Nuovo Cim. 19 (1961) 558.

J. Fröhlich, Regge calculus and discretized gravitational functional integrals, preprint IHES (1981); reprinted in: Non-perturbative quantum field theory: mathematical aspects and applications, Selected Papers of J. Fröhlich (World Scientific, Singapore, 1992).

S. Catterall, G. Thorleifsson, J. Kogut, R. Renken, Nucl. Phys. B 468 (1996) 263.

P. Bialas, Z. Burda, A. Krzywicki, B. Petersson, Nucl. Phys. B 472 (1996) 293. See also V. B. de Bakker, Further evidence that the transition of 4D dynamical triangulation is 1st order, hep-lat/9603024.

S. Catterall, R. Renken, J. Kogut, Phys. Lett. B 416 (1998) 274.

D. Gabrielli, Polymeric phase of simplicial quantum gravity, to appear in Phys. Lett. B, 1998.

M. Gromov, Structures métriques pour les variétés Riemanniennes (Conception Edition Diffusion Information Communication Nathan, Paris, 1981).

W. Thurston, Shapes of polyhedra and triangulations of the sphere, math.GT/9801088, 1998.

C. Itzykson, J.-M. Drouffe, Statistical Field Theory, Vol. 2 (Cambridge University Press, Cambridge, 1989).

D. Walkup, Acta Math. 125 (1970) 75.

S. Bilke, Z. Burda, B. Petersson, Topology in 4D simplicial quantum gravity, Nucl. Phys. B (Proc. Suppl.) 53 (1997) 743.

W. Kühnel, in: Advances in Differential Geometry and Topology, eds. I.S.I.-F. Tricerri (World Scientific, Singapore, 1990).

M. Gross and D. Varsted, Nucl. Phys. B 378 (1992) 367.

U. Pachner, Europ. J. Combinatorics 12 (1991) 129.

R. Stanley, Advances in Math. 35 (1980) 236; J. Amer. Math. Soc. 5 (1992) 805; Discrete Geometry and Convexity, pp. 212-223, Ann. NY Acad. Sci., New York, 1985.

J. Ambjørn and J. Jurkiewicz, Nucl. Phys. B 451 (1995) 643.

S. Bilke, Z. Burda, A. Krzywicki, B. Petersson, J. Tabaczek and G. Thorleifsson, Phys. Lett. B 418 (1998) 266.

J. Ambjørn and J. Jurkiewicz, Phys. Lett. B 278 (1992) 42.

[^1]: email ambjorn@nbi.dk. Supported by a MaPhySto grant.

[^2]: email carfora@pv.infn.it; carfora@sissa.it

[^3]: email gabri@sissa.it

[^4]: email marzuoli@pv.infn.it
{ "pile_set_name": "ArXiv" }
[SLAC–PUB–7413\ LBNL–40054\ February 1997\ ]{} [ **Bremsstrahlung Suppression due to the LPM and Dielectric Effects in a Variety of Materials[^1]**]{} P.L. Anthony,$^1$ R. Becker-Szendy,$^1$ P. E. Bosted,$^2$ M. Cavalli-Sforza,$^{3,\#}$ L. P. Keller,$^1$\ L. A. Kelley,$^3$ S. R. Klein,$^{3,4}$ G. Niemi,$^1$ M. L. Perl,$^1$ L. S. Rochester,$^1$ J. L. White$^{1,2}$ [$^{1}$Stanford Linear Accelerator Center, Stanford, CA 94309]{}\ [$^{2}$The American University, Washington, D.C. 20016]{}\ [$^{3}$Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064]{}\ [$^{4}$Lawrence Berkeley Laboratory, Berkeley, CA 94720]{}\ [ **Abstract** ]{} The cross section for bremsstrahlung from highly relativistic particles is suppressed due to interference caused by multiple scattering in dense media, and due to photon interactions with the electrons in all materials. We present here a detailed study of bremsstrahlung production of 200 keV to 500 MeV photons from 8 and 25 GeV electrons traversing a variety of target materials. For most targets, we observe the expected suppressions to a good accuracy. We observe that finite thickness effects are important for thin targets. Submitted to [*Phys. Rev. D*]{} PACS Numbers: 13.40.-f,12.20.-m,41.90.+e,42.50.Ct Introduction ============ When an ultra-relativistic electron emits a low energy photon via bremsstrahlung, the longitudinal momentum transfer between the electron and the target nucleus can be very small. Because of the uncertainty principle, this means that the momentum transfer must take place over a long distance, known as the formation length. One way to think of this is as the distance required for the electron and photon to separate enough to be considered separate particles. If anything happens to the electron or photon while travelling this distance, the emission can be disrupted. We have previously presented letters demonstrating suppression due to multiple scattering[@prl] and dielectric suppression[@dprl]. We present here additional data further exploring these suppression mechanisms in a variety of materials. These data explore bremsstrahlung production of 200 keV to 500 MeV photons from 8 and 25 GeV electrons. Special attention will be given to the effects of finite target thickness. LPM Suppression --------------- LPM suppression is due to multiple scattering, first discussed by Landau and Pomeranchuk[@landau] and slightly later by Migdal[@migdal]. If an electron multiple scatters while traversing the formation zone, the bremsstrahlung amplitude from before and after the scattering can interfere, reducing the amplitude for bremsstrahlung photon emission. A similar suppression occurs for pair production. The LPM effect is relevant in many areas of physics. It will cause the elongation of high energy electromagnetic showers, making them appear more like hadronic showers. At the next generation of colliders, LHC and NLC, this may reduce the electron-pion separation achievable for a given detector configuration, especially where early shower development is monitored with a pre-shower detector. The effects of LPM suppression on cosmic ray air showers have been discussed by many authors[@cosmics]. In exceedingly high energy (above $10^{18}$ eV) photon induced air showers, the LPM effect increases the graininess of the shower, and changes the relationship between shower density and calculated energy. 
LPM suppression can also affect showers produced by ultra-high energy $\nu_e$ interactions in water or ice, as might be observed by underwater or under-ice detectors[@DUMAND]. The electronic LPM effect has analogs in nuclear physics involving quarks and gluons moving through matter, and calculations have used LPM-like formalisms to put limits on color dE/dx[@Brod1]. However, the strong-coupling nature of QCD makes comparison with data less than straightforward. An LPM-type suppression also appears in stellar interiors. Because the density is very high, the nucleon collision rate, $\Gamma_{coll}$, far exceeds the oscillation frequency of neutrino or axion radiation[@Raffelt], so production of these exotic particles is suppressed.

Several previous experiments have studied the LPM effect, mostly with cosmic rays. Most of the cosmic ray experiments date to the 1950s[@Fowler], with a few more recent results[@Strausz]. Most examined the depth of pair conversion of high energy photons in emulsion. They qualitatively confirmed the LPM effect, but with very limited statistics. A 1975 experiment at Serpukhov measured the photon spectrum from 40 GeV electrons[@Serp]. They were troubled by limited statistics and large systematic errors and backgrounds, but observed a qualitative agreement with the LPM theory. Experiment CERN NA-43 measured photon emission from electrons and positrons in a silicon crystal[@bak]. They observed suppression due to a number of effects; they attribute part of the total to the LPM effect.

Dielectric Suppression
----------------------

A second suppression mechanism involves the photons. Produced photons can interact with the electrons in the medium by Compton scattering. For forward scattering, this interaction can be coherent, causing a phase shift in the photon wave function. If this phase shift, taken over the formation length, is large enough, then it can cause a loss of coherence, reducing photon emission. As the photon energy approaches zero, this effect completely suppresses bremsstrahlung, removing the infrared divergence of the original Bethe-Heitler cross section. This is the QED analog of color screening in QCD[@screen]. Little previous data exist on this suppression mechanism[@Armenia].

Theory
======

The length scale for suppression is determined by the longitudinal momentum transfer from the nucleus to the electron: $$q_\parallel = p_e - p_e' - k = \sqrt{E^2 - m^2} -\sqrt{(E-k)^2 - m^2} - k,$$ where $p_e$ and $E$ are the electron momentum and energy before the interaction, $p_e'$ is the electron momentum afterward, $m$ is the electron mass, and $k$ is the photon energy. For $E\gg m$ and $k\ll E$, this simplifies to $$q_\parallel \sim {m^2 k \over 2 E (E - k) } \sim {k \over 2 \gamma^2},$$ where $\gamma=E/m$. This momentum can be very small, for example, 0.02 eV/c for a 25 GeV electron emitting a 100 MeV photon. Therefore, the uncertainty principle requires that the emission take place over a long distance, called the formation length: $l_f~=~2\hbar c\gamma^2/k$. For 25 and 8 GeV electrons, $l_f\,(\mathrm{m}) = 864~\mathrm{eV}/k$ and $88.2~\mathrm{eV}/k$, respectively. This is the same formation length that occurs in transition radiation[@Jackson].

LPM Suppression
---------------

The LPM effect comes into play when one considers that the electron must be undisturbed while it traverses the formation length. One factor that can disturb the electron, and suppress the bremsstrahlung, is multiple Coulomb scattering.
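Before quantifying the multiple-scattering condition, it may help to make the formation-length scales above concrete. The following minimal sketch is our illustration, not code from the experiment; the electron mass and $\hbar c$ are standard constants, and the 25 GeV electron emitting a 100 MeV photon is the example quoted in the Theory section. The printed $q_\parallel$ reproduces the 0.02 eV/c figure quoted above.

```python
import math

M_E   = 0.511e6     # electron mass [eV]
HBARC = 1.9733e-7   # hbar*c [eV*m]

def q_parallel(E, k):
    """Exact longitudinal momentum transfer q_par = p_e - p_e' - k, in eV/c."""
    p_before = math.sqrt(E**2 - M_E**2)
    p_after  = math.sqrt((E - k)**2 - M_E**2)
    return p_before - p_after - k

def q_parallel_approx(E, k):
    """Small-q approximation m^2 k / (2 E (E - k)) ~ k / (2 gamma^2)."""
    return M_E**2 * k / (2.0 * E * (E - k))

def formation_length(E, k):
    """Formation length l_f = hbar*c / q_par (i.e. ~2*hbar*c*gamma^2/k for k << E), in m."""
    return HBARC / q_parallel_approx(E, k)

if __name__ == "__main__":
    E, k = 25e9, 100e6        # 25 GeV electron emitting a 100 MeV photon [eV]
    print(f"q_par (exact)  = {q_parallel(E, k):.3f} eV/c")
    print(f"q_par (approx) = {q_parallel_approx(E, k):.3f} eV/c")
    print(f"l_f            = {formation_length(E, k)*1e6:.1f} um")
```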
If the electron multiple scatters by an angle $\theta_{MS}$ greater than the typical emission angle of bremsstrahlung photons, $\theta_B \sim m/E = 1/\gamma$, then the bremsstrahlung is suppressed. In the Gaussian approximation, a particle traversing a thickness $l_f$ of material with radiation length $X_0$ scatters by an average angle of[@Rossi] $$\overline\theta_{MS}^2 = ( {E_s \over E} )^2 {l_f \over X_0},$$ where $E_s=\sqrt{4\pi/\alpha}\cdot m = 21$ MeV and $\alpha$ is the fine structure constant, $\sim 1/137$. The LPM effect becomes important when $\theta_{MS}$ is larger than $\theta_B$. This occurs for $ E_s/E \sqrt{l_f/X_0} > m/E$. For a given electron energy, suppression becomes significant for photon energies below a certain value, given by $$y = {k \over E} < {E \over E_{LPM}},$$ where $E_{LPM}$(eV) $= m^4 X_0 / (2\hbar c E_s^2) = 3.8\times10^{12}\, X_0$(cm), about 1.3 TeV in uranium and 2.1 TeV in lead; values for the targets used in this experiment are given in Table I. For a specific beam energy, 25 GeV for example, it is possible to define a maximum photon energy for which the LPM effect is significant, $k_{LPM} = E^2/E_{LPM}$; $k_{LPM25}$ is 470 MeV for uranium, and 8.5 MeV for carbon. Table I gives values for our targets for 8 and 25 GeV beams. The multiple scattering adds to $q_\parallel$ by changing the electron's direction, thereby reducing its longitudinal momentum. The formation zone can be found by replacing $p$ and $p'$ with their forward components, assuming that the multiple scattering is spread throughout the formation zone. Then, $$q_\parallel = ({k\over 2\gamma^2})(1 + {E_s^2 l_f \over 2 E^2 X_0}).$$ Since the formation zone length is given by $l_f=\hbar/q_{\parallel}$, this produces a quadratic equation for $l_f$ and, hence, for the suppression: $$S = \sqrt{k E_{LPM} \over E^2}.$$ Migdal did a detailed calculation, describing the multiple scattering angles classically with a Gaussian distribution, and solving the transport equation to find an ensemble of trajectories[@migdal]. Then, with appropriate weighting, he used these trajectories to calculate the photon emission probability. He found $${d\sigma_{\text{LPM}} \over dk} = {4\alpha r_e^2\xi(s) \over 3k} \{y^2 G(s) + 2 [1 +(1-y)^2 ] \phi (s) \} Z^2 \ln\bigg( {184 \over Z^{1/3}} \bigg),$$ where $$s = {1\over 2} \bigg( {y \over 1-y} \bigg)^{1/2} \bigg( {mc \over \hbar} {mc^2 \over E} {\alpha X_0 \over 8\pi \xi(s)} \bigg)^{1/2}.$$ Here $Z$ is the atomic number, $r_e$ the classical electron radius, and $\xi(s)$, $G(s)$ and $\phi(s)$ are complicated functions with $1\le\xi(s)\le2$, $0\le G(s) \le 1$ and $0\le\phi(s)\le1$. When $y\ll 1$, $s\sim\sqrt{(k E_{\text{LPM}}/E^2)}$. In the absence of suppression $s\rightarrow\infty$, $G(s)\rightarrow 1$, and $\phi(s) \rightarrow1$; strong suppression corresponds to $s\rightarrow0$, $G(s)\rightarrow 0$, and $\phi(s) \rightarrow0$. Migdal’s calculation gives results within about 10% of Eqn. 6. Migdal was forced to make a number of simplifying assumptions. First, he only included elastic scattering from the nuclei themselves. More recent calculations have considered both electron-nucleus and electron-electron interactions, using form factors[@perl][@Tsai]: $${d\sigma_{\text{BH}} \over dk} = {4\alpha r_e^2 \over 3k}\bigg[ \{ y^2 + 2 [1 + (1-y)^2] \} (Z^2 F_{el} + Z F_{inel})+ (1 - y) {( Z^2 + Z )\over 3}\bigg].$$ Here $F_{el}\approx \ln( 184 / Z^{1/3})$ and $F_{inel}\approx \ln( 1194 / Z^{2/3})$ are the elastic and inelastic atomic form factors[@Tsai]. In Eqn.
7, $d\sigma_{\text{LPM}} / dk$ includes the elastic form factor, but not the inelastic form factor or the last $(1-y)(Z^2+Z)/3$ term. Because the elastic and inelastic form factors have the same $y$ dependence, it is easy to include the inelastic form factor by normalizing $d\sigma_{\text{LPM}}/dk$ to the radiation length as defined by Tsai[@Tsai]. Because of the small momentum transfer, the recoil of the struck electron can be neglected, and so electron-electron bremsstrahlung should manifest the same LPM suppression as nuclear bremsstrahlung. The $(1-y)(Z^2 + Z)/3$ term is omitted from both our cross sections and the traditional definition of the radiation length[@Tsai]; this is roughly a 2% correction. In addition, Migdal was forced to assume that the multiple scattering angle followed a Gaussian distribution; this is known to underestimate the number of large angle scatters. This can affect his results. For example, the occasional large angle scatter can lead to some suppression at photon energies above which Migdal predicted suppression would disappear. Blankenbecler and Drell developed a new calculational approach to this suppression, based on the formalism they developed for beamstrahlung, treating the multiple scattering quantum mechanically[@Drell]. The results of their calculation cannot be given as a simple equation, but their results are similar to those of Migdal for thick targets. One big advantage of their calculation is that it implicitly handles targets of finite thickness, dividing the electron path into 3 sections: before the target, inside the target, and after the target, with interference between the different regions (including before and after). Because of this treatment, they calculate the total emission over the slab, and do not localize the point of photon emission. More recently, Zakharov has presented a calculation[@Zakharov]. Although it has a different basis from Blankenbecler and Drell, it appears to give similar results. Unfortunately, it also suffers from the same limitations regarding multiple emission and dielectric suppression. Dielectric Suppression ---------------------- The magnitude of dielectric suppression, due to the photon-electron gas interactions, can be calculated by finding the photon phase shift due to the dielectric constant of the medium, using classical electromagnetic theory[@mikaelian]. The phase shift is $(1-\sqrt{\epsilon})kcl_f$ where $\epsilon$ is the dielectric constant of the medium, given by $$\epsilon(k)=1-(\hbar\omega_p)^2/k^2,$$ where $\omega_p=\sqrt{4\pi N Z e^2/m}$; N is the number of atoms per unit volume, Z the atomic charge, and $e$ the electric charge. If the phase shift gets large, coherence is lost. This limits the effective formation length to the distance which has a phase shift of 1: $$l_f={2\hbar c k \gamma^2 \over k^2+ k_p^2},$$ where $k_p=\gamma\hbar\omega_p$ is the maximum photon energy for which dielectric suppression is large. It is also the maximum energy at which transition radiation is large. The suppression is simply given by the ratio of in-material to vacuum formation lengths: $$S = { k^2 \over k^2 + k_p^2}.$$ The suppression becomes large for $k < k_p$; below this energy, the photon spectrum changes from $1/k$ to $k$. 
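As a rough illustration of where each mechanism matters, the sketch below evaluates the limiting LPM factor $S=\sqrt{kE_{LPM}/E^2}$ and the dielectric factor $S=k^2/(k^2+k_p^2)$ for two assumed sets of target parameters. The radiation lengths and plasma energies are approximate stand-ins chosen by us, not the Table I values, and the rule of thumb $E_{LPM}\simeq 3.8\times10^{12}~\mathrm{eV}\times X_0$(cm) is taken from the previous section.

```python
import math

M_E    = 0.511e6   # electron mass [eV]
E_BEAM = 25e9      # beam energy [eV]

# Approximate material parameters used only for illustration
# (X0 in cm, plasma energy hbar*omega_p in eV); not the Table I values.
TARGETS = {
    "carbon":   (19.0, 30.0),
    "tungsten": (0.35, 80.0),
}

def e_lpm(x0_cm):
    """E_LPM [eV] from E_LPM ~ 3.8e12 eV * X0(cm)."""
    return 3.8e12 * x0_cm

def s_lpm(k, E, x0_cm):
    """Limiting LPM suppression sqrt(k*E_LPM/E^2), capped at 1 for k > k_LPM."""
    return min(1.0, math.sqrt(k * e_lpm(x0_cm) / E**2))

def s_dielectric(k, E, hw_p):
    """Dielectric suppression k^2/(k^2 + k_p^2) with k_p = gamma*hbar*omega_p."""
    k_p = (E / M_E) * hw_p
    return k**2 / (k**2 + k_p**2)

for name, (x0, hwp) in TARGETS.items():
    k_lpm = E_BEAM**2 / e_lpm(x0)
    k_p   = (E_BEAM / M_E) * hwp
    print(f"{name}: k_LPM = {k_lpm/1e6:.0f} MeV, k_p = {k_p/1e6:.1f} MeV")
    for k in (1e6, 10e6, 100e6):      # sample photon energies [eV]
        print(f"  k = {k/1e6:5.1f} MeV  S_LPM = {s_lpm(k, E_BEAM, x0):.2f}"
              f"  S_diel = {s_dielectric(k, E_BEAM, hwp):.2f}")
```

For the carbon-like parameters this places $k_{LPM}$ near 10 MeV and $k_p$ near 1.5 MeV, consistent with the ordering of the two mechanisms described in the Total Suppression subsection that follows.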
Numerically, the plasma frequencies for most solids are in the 20–60 eV range, so the suppression becomes important for $k < r E$, with $r=\hbar\omega_p/m= \hbar\sqrt{4\pi N Z e^2/m^3}$, about $5.5\times10^{-5}$ in carbon or $1.4\times10^{-4}$ in tungsten; values for other targets are given in Table I. For small $k$, dielectric suppression is much more important than LPM suppression.

Total Suppression
-----------------

Because LPM and dielectric suppression both reduce the effective formation length, the suppressions do not simply multiply. Where both mechanisms appear, the total suppression can be found by summing the contributions to $q_\parallel$ and hence to $l_f=\hbar/q_\parallel$; the suppression is simply the ratio of $l_f$ to its vacuum value[@galitsky]. Migdal included dielectric suppression in his formalism by scaling $\phi$ appropriately[@migdal]. Unfortunately, the Blankenbecler and Drell approach is not easily amenable to inclusion of dielectric suppression[@dick]. For 25 GeV beams hitting the targets used here, the LPM effect is more important for photon energies above 5 MeV; at significantly lower energies, dielectric suppression dominates. With 8 GeV beams, LPM suppression is reduced by a factor of $(8/25)^2$, so dielectric suppression is usually the dominant effect. These spectral shapes for the different photon energies (and hence mechanisms) are schematically summarized in Fig. 1.

Thin Targets and Surface Effects
--------------------------------

When an electron interacts near the surface of a target, part of the formation zone may extend outside of the target. Then, there will be less multiple scattering or Compton scattering, so the suppression should be reduced. There is also a transition as the electromagnetic fields of the electron readjust themselves to allow for the electron multiple scattering and the effect of the medium. A very simplistic approximation for the surface effects would be to allow for a single formation length of unsuppressed Bethe-Heitler radiation near the target surfaces, with the rest of the radiation from the interior fully suppressed. This implies that the surface effects are important where LPM suppression is large, at small $k$, since $l_f$ scales as $1/\sqrt{k}$. However, where dielectric suppression dominates, $l_f$ scales as $k$, giving short formation zones and small surface effects. Unfortunately, this model is conceptually inadequate because, in addition to the reduced suppression, there can also be edge radiation. For dielectric suppression, this is just conventional transition radiation[@Jackson], given by $${dN \over dk }= {\alpha \over \pi k} \bigg[(1+{2k^2 \over k_p^2}) \ln{(1+{k_p^2 \over k^2})} -2 \bigg].$$ Where LPM suppression is large, Gol’dman[@goldman] has pointed out that there is an additional transition radiation caused by the multiple scattering. When the target is thinner than the formation zone, the problem simplifies. For extremely thin targets, where the target thickness $t< X_0 (m/E_s)^2$, there isn’t enough multiple scattering to cause suppression, and the Bethe-Heitler spectrum is retained. For slightly thicker targets, but still with $t< l_f$, Shul’ga and Fomin showed[@Shulga] that the entire target can be treated as a single radiator, and the Bethe-Heitler spectrum is recovered[@ternovskii], albeit at a reduced intensity.
The radiation spectrum is given by $${dN_{SF}\over dk} = {2\alpha\over\pi} \int_0^\infty d^2\theta f(\theta) \bigg( {2\zeta^2+1 \over \zeta\sqrt{\zeta^2+1}} \ln{(\zeta+\sqrt{\zeta^2+1})}-1\bigg),$$ where $\zeta=\gamma\theta/2$, $\theta$ being the scattering angle. The integrals are taken over the two independent scattering planes, and $$f(\theta)={1\over \pi\theta_0^2} \exp(-\theta^2/\theta_0^2).$$ Because the targets are very thin[@PDG], $$\theta_0= {E_s \over E} \sqrt{t\over X_0}\ [1+0.038 \ln{t\over X_0}].$$ These formulae are numerically evaluated. It is worth pointing out that, in the limiting case, the radiation becomes proportional to $\ln(t)$. Then, the radiation depends only on $t/X_0$, and is independent of $E$. This spectrum applies for photon energies $k$ where the reduced formation length (taking into account the reduction due to the LPM effect) is larger than the target thickness. This occurs when $$l_f = S\, l_{f0} = {dN_{SF}/dk \over dN_{BH}/dk } {2\hbar c\gamma^2\over k} > t,$$ where $dN_{BH}/dk$ is the Bethe-Heitler predicted radiation from the entire sample. This equation is valid as long as dielectric suppression and transition radiation are not large. For thicker targets, Ternovskii[@ternovskii] calculated the spectrum of this radiation at an interface. Like Blankenbecler and Drell, Ternovskii divided the electron path into 3 regions, and allowed for interference between the regions. For sufficiently thick targets, he parameterized his results into a bulk emission, matching Migdal, plus two edge terms. For $k\ll E$ and $s\gg 1$, the edge term is conventional transition radiation. For $s<1$ and $sk_p^2/k^2\ll 1$, LPM suppression dominates, and Ternovskii finds, for $k\ll E$, $${dN \over dk}= {2 \alpha \over \pi k} \ln{{\chi\over \sqrt{s}}},$$ where $\chi\sim 1$, similar to the logarithmic uncertainty found by Migdal. For $s>1$, the region of no LPM suppression, this equation is negative; common sense seems to indicate that the function should be cut off. For comparison with data, a more serious problem is that Eqns. 13 and 18 do not match up in the region $sk_p^2/k^2\sim 1$. Garibyan[@garibyan] likewise calculated the transition radiation spectrum, using Gol’dman as a base, but for a single edge. His results were similar, but not identical, to Ternovskii's, with the same negative region. In 1965, Pafomov [@pafomov] stated that the formulations of Gol’dman, Ternovskii and Garibyan were flawed because they improperly separated the total radiation into bremsstrahlung and transition radiation, causing the negative regions. In his calculations, Pafomov found that there is transition radiation even for $s>1$, with a $1/k^2$ spectrum. Pafomov predicted that, for $k_{LPM} > k_p$, the transition radiation term is, per edge: $$\begin{aligned} {dN \over dk} = {\alpha\over \pi k } \log{(k_p/k)^2}, &\hskip .4 in k < k_p^{4/3}/k_{LPM}^{1/3} \\ {dN \over dk} = {\alpha\over \pi k} \log{{2 \over 3} \sqrt{{k_{LPM}\over k}}}, & \hskip .4 in k_p^{4/3}/k_{LPM}^{1/3}< k \ll k_{LPM} \\ {dN \over dk} = {8\alpha\over 21\pi k} ({k_{LPM} \over k})^2. &\hskip .4 in k \gg k_{LPM}\end{aligned}$$ The first equation is similar to, but larger than, conventional transition radiation, with the difference probably due to calculational technique. Unfortunately, Eqns. 19 and 20 do not quite match when $k= k_p^{4/3}/k_{LPM}^{1/3}$, causing a noticeable step in our simulations. There is also a discontinuity between Eqns. 20 and 21 at $k\sim k_{LPM}$.
Pafomov gives a numerical approximation that covers the entire region $k>k_p^{4/3}/k_{LPM}^{1/3}$ and avoids the discontinuity; we use it in our calculations. For bulk emission, Pafomov accepted Migdal’s results. Because of the logarithmic uncertainties, transition regions, and discontinuities, it is difficult to confidently apply any of these edge effect formulae; we will show a few selected comparisons with our data. Even in the absence of an acceptable theory, it is possible to remove the edge effects by comparing data from targets of similar composition but different thickness. By subtracting the two spectra, it is possible to find an ‘internal’ spectrum and a ‘surface effect’ spectrum, accurate as long as there is no interference between the two edge regions. For thin targets, dielectric suppression should be reduced, at least in classical calculations. When the photon wave phase shift, integrated over the target thickness, is small, then the suppression should disappear.

Experiment
==========

This experiment[@prl][@dprl][@klein][@becker] was conducted in End Station A at the Stanford Linear Accelerator Center. As Fig. 2 shows, electrons entered End Station A and passed through targets mounted in a seven-position target holder. During data taking, we rotated through the targets, taking $\sim$2 hours of data on each target. We took a total of 8 hours of data on most target/beam energy/calorimeter gain setting combinations. The target materials and thicknesses are given in Table II; a selection of high- and low-$Z$ targets was used, usually with two target thicknesses per material. Rotations included one position on the target ladder which was left empty for no-target running to monitor beam-related background. A 1 cm square silicon photodiode was mounted in another position. By measuring the ratio of lead glass hits to Si photodiode hits, we could check for changes in the beam size; the beam position and shape proved stable with time. After passing through the targets, the electrons entered an 18D72 dipole magnet, which was run at 3.25 (1.04) T-m of bending for 25 (8) GeV electrons. This field bent full-energy electrons downward by 39 mrad; lower energy electrons were bent more. One especially useful feature of the magnet was its large fringe field. Because of this fringe field, the electron bending started slowly, so synchrotron photons produced during the initial bending had low momenta; this reduced the synchrotron radiation background observed in the calorimeter significantly. Synchrotron radiation emitted by an electron pointing at the bottom edge of the calorimeter had a 280 keV (9 keV) critical energy at 25 (8) GeV. The average energy deposition in the calorimeter was 40 keV and 400 eV, respectively. After bending, the electrons exited the vacuum chamber, travelled 15 meters through a helium bag, and entered 6 planes of proportional wire chambers[@wires] with a 20 cm separation, arranged Y U Y V Y U, where the Y plane wires were horizontal and the U and V planes were at a 30 degree angle from horizontal to provide left-right information. The Y (U/V) planes had a 2 (4) mm wire pitch. Due to an unfortunate choice of angle, the wire chambers had a momentum resolution only slightly better than that of a single plane, giving a resolution of roughly 90 MeV at 25 GeV. The electrons were absorbed in a stack of three 10 cm by 10 cm lead glass blocks, arranged so that full-energy electrons hit the middle of the top block. This enabled us to accurately count electrons calorimetrically.
Electrons with energies below 17.4 (5.8) GeV for 25 (8) GeV beams missed the blocks and were not counted. The fraction not counted was estimated with the Monte Carlo, and was typically about 1% per 1% of $X_0$ target thickness. Photons produced in the target travelled 50 meters downstream through vacuum into a BGO calorimeter. The calorimeter consisted of 45 (a 7 by 7 array with the corners missing) BGO crystals, each measuring 2 cm square by 20 cm (18 $X_0$) deep[@calorimeter]. Each crystal was read out by a Hamamatsu R1213 1/2” photomultiplier tube (PMT) with a linear base. The PMTs detected about 1 photoelectron per 30 keV of energy deposition in the BGO. During much of the running, one crystal in the outermost row was not functional. The calorimeter was built and extensively characterized in 1984 as a prototype, and was reconditioned for this experiment. In 1984, the nonlinearity in the 100 MeV range was estimated at 2%; Monte Carlo simulations of leakage indicate that this does not change significantly at 500 MeV. The calorimeter was read out by a LeCroy 2282 12 bit ADC. The ADC gate was set to 900 nsec, several times the BGO light decay time of 300 nsec. One advantage of this gate width was that sensitivity variations due to the 50 nsec time structure of the electron beam were negligible. Because the ADC pedestals were known to drift slowly, frequent pedestal runs were performed. Calorimeter ADC overflows were detected by histogramming the ADC output on a channel-by-channel, run-by-run basis; the maximum ADC count was typically $\sim$3950 counts and was easily determined by inspection. Events with an ADC overflow were flagged. The experiment studied a very wide range of photon energies, from 200 keV to 500 MeV. This is a considerably wider range than can be handled by a single PMT gain and ADC, so data were taken at two different calorimeter gain settings, with the gain adjusted by varying the PMT high voltage. The first data set corresponded to 100 keV per ADC count, and the second to 13 keV per ADC count. These will be referred to as ‘low gain’ and ‘high gain’ running, respectively. Initially, a 1/2” thick scintillator slab was placed in front of the calorimeter, as a charged particle veto. When the charged particle background was found to be small, it was removed. The only other material between the target and the calorimeter was a 0.64 mm (0.7% $X_0$) aluminum window immediately in front of the calorimeter. This minimized the number of produced photons that were lost before hitting the calorimeter. Scintillator paddles were located above and below the calorimeter. Their logical AND provided a cosmic ray muon trigger, used to calibrate the calorimeter. The paddles could initiate a trigger in the interval between beam pulses. Most of the electronics were housed in a single CAMAC crate. Besides the calorimeter ADC, lead glass block ADCs and wire chamber hit patterns, we read out a number of additional scintillator paddles on each beam pulse, irrespective of what happened on that pulse. Monitoring data, such as the BGO temperature and spectrometer magnet settings, were read out periodically. We used the acquisition framework developed by SLAC–E–142/3. The beams for this experiment were produced parasitically during Stanford Linear Collider (SLC) operations. Off-axis electrons and positrons in the SLAC linac struck collimators near the end of the accelerator[@cavalli].
A useful flux of high energy bremsstrahlung photons emerged from the edges of these collimators and travelled down the beampipe, past the bending magnets, and into a target in the beam switchyard. This target converted the photons into $e^+e^-$ pairs, and those electrons within the A-line acceptance angle were transported to End Station A. For most of the running, we ran at an average intensity of one electron per pulse, with the short-term averages between 0.8 and 1.5 electrons per pulse as SLC conditions varied. The average intensity was changed by adjusting the momentum-defining collimators; the typical momentum acceptance was $\Delta P/P\sim 0.2\%$. The beam optics were set up so that there was a virtual focus at the calorimeter. The typical beam spot vertical and horizontal half widths were 2.5 mm at 25 GeV and somewhat larger at 8 GeV.

Calibration
===========

Since the calorimeter calibration is crucial to experimental accuracy, several methods were used to calibrate the calorimeter: 400 and 500 MeV electron beams, bremsstrahlung events, and cosmic ray muons. The calibrations were divided into two classes: relative calibrations, which were used to measure the relative gain between BGO crystals, and absolute calibrations, which set the overall energy scale. The most careful calibration was done with the ‘low gain’ calorimeter PMT HV setting; the ‘high gain’ data were calibrated by comparison with the ‘low gain’ running. This analysis used the ‘low gain’ data over the range of 5 to 500 MeV. The ‘high gain’ data were used from 200 keV to 40 MeV. Between 5 and 40 MeV, the data were combined using a weighted mean. In this region, the data agree well; this gives us confidence in our relative calibrations. One key factor in the calibration was the BGO temperature, which is known to affect both the light output and decay time. We therefore measured the way that changing temperatures affected the BGO response to cosmic ray muons, and corrected the data. The BGO temperature was monitored by a thermistor throughout the experiment. The BGO light output decreased by 2%/$^\circ$C, a bit more than other measurements[@Zucc]. This correction factor was applied to our data. The BGO channel gains were controlled by adjusting the PMT high voltage. Relative high voltages were set with potentiometric dividers, and the absolute scale was set by two supplies in our counting house. The relative gains were roughly equalized before the experiment by normalizing the calorimeter crystal response to 662 keV gamma rays from a $^{137}$Cs source. The change from ‘high gain’ to ‘low gain’ was done by adjusting the voltage on the two supplies. Since not every phototube had identical gain vs. voltage characteristics, this changed the relative gains somewhat. Because of this, the relative channel-to-channel calibrations were done separately for high and low gain running. Better measurements of the relative gain came from the cosmic ray data gathered throughout the run. The cosmic ray trigger consisted of a coincidence between the two scintillator paddles bracketing the calorimeter. They were placed so that triggers occurred for muons traversing the center of the BGO. The calorimeter absolute energy scale was largely determined with 400 and 500 MeV electron beams. The electrons were produced parasitically, as during normal E-146 running. Because of the low energy, special precautions were required.
All of the beam line magnets were degaussed, and the usual power supplies were temporarily replaced with lower current supplies that could regulate reliably at the required power levels. The magnetic fields were monitored with a flip coil in a magnet that was subjected to identical treatment to the beam line magnets. The estimated error on the overall energy scale calibration is 5%. Since the low energy beam had a relatively wide angular distribution, these data also provided a check on the crystal-to-crystal intercalibration. By examining histograms of reconstructed energy vs. the location where the electron hit the calorimeter, we estimate that the crystal-to-crystal calibration varied by less than 2%. Since most of the bremsstrahlung photons hit the central crystal, this had a negligible effect on our overall resolution. For each event, the electron momentum, measured in the wire chambers, and the photon energy should sum to the beam energy. Since the wire chamber energy resolution is determined by geometry, it can provide an additional check on the calorimeter calibration. Unfortunately, because of the steeply falling photon spectrum and the quantization introduced by the wire spacing, this analysis is quite tricky. However, this analysis confirmed that the calorimeter energy calibration is good to within 10%. The ‘high gain’ data were calibrated by comparison with the low gain data, mostly using the cosmic rays. This calibration is accurate to about 10%. It is worth noting that the calorimeter behavior is significantly different for the high and low gain data. At higher energies, the impinging photons create electromagnetic showers, while at lower energies, most photons interact via single or multiple Compton scattering. Besides the loss in resolution due to the photoelectron statistics, it is necessary to account for resolution deterioration because photons can be Compton scattered out the front face of the calorimeter; the probability of this increases at low energies. Also, because of the possibility of a photon Compton scattering twice, in two widely separated crystals in the calorimeter, the photon cluster finder loses efficiency; these problems are accounted for in our systematic errors, which are larger for small photon energies.

Data Analysis
=============

Because bremsstrahlung is the dominant cross section, event selection is simple. Events containing a single electron in the lead glass were selected. The calorimeter ADC counts were converted to energy. For ‘low gain’ running, the total energy observed in the calorimeter was used directly. For ‘high gain’ running, clustering was required to remove spurious pedestal fluctuations: we started with the highest energy crystal in the event, and added in the energies of all neighboring crystals that were above the ADC pedestal. Because the angular acceptance of the central crystal, 0.2 mrad, was larger than the typical bremsstrahlung angle, $1/\gamma\sim0.02$ mrad, even after allowing for the beam divergence, the majority of the bremsstrahlung photon flux hit the center of the calorimeter, so we did not have to correct for calorimeter leakage on an event-by-event basis. Events with a calorimeter energy between 200 keV and 500 MeV were histogrammed by photon energy, with the bins having a logarithmic width. The photon intensity, $(1/X_0) (dN/d(\log k))=(1/kX_0) (dN/dk)$, is plotted vs. $k$, with $k$ on a logarithmic scale, necessary to cover the 3 1/2 decades of energy range.
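A minimal sketch of this binning and normalization is given below. It is our illustration rather than the analysis code: the 25-bins-per-decade choice and the fixed smoothing weights are quoted in the next paragraph, and the energy-dependent taper of the smoothing described there is omitted here.

```python
import numpy as np

BINS_PER_DECADE = 25
K_MIN, K_MAX = 200e3, 500e6                 # 200 keV to 500 MeV [eV]

n_bins = int(round(BINS_PER_DECADE * np.log10(K_MAX / K_MIN)))
edges  = np.logspace(np.log10(K_MIN), np.log10(K_MAX), n_bins + 1)

def intensity_histogram(photon_energies_ev, n_electrons, x0_fraction):
    """(1/X0) dN/d(log10 k) per incident electron.

    x0_fraction is the target thickness in radiation lengths (e.g. 0.02 for 2% X0).
    """
    counts, _ = np.histogram(photon_energies_ev, bins=edges)
    dlog = np.diff(np.log10(edges))          # constant logarithmic bin width (~0.04)
    return counts / (n_electrons * x0_fraction * dlog)

def smooth_low_k(y):
    """Fixed 0.25 : 0.5 : 0.25 three-point smoothing (applied below 500 keV in the paper)."""
    s = y.astype(float).copy()
    s[1:-1] = 0.25 * y[:-2] + 0.5 * y[1:-1] + 0.25 * y[2:]
    return s
```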
The $y$ axis is chosen so that the classical Bethe-Heitler $1/k$ spectrum will appear as a flat line. There are 25 bins per decade of photon energy, giving each bin a width $\Delta k/k\sim 0.09$. Although the Bethe-Heitler cross section is flat for a logarithmic energy binning, the corresponding data would not be flat because of multiphoton pileup. This is because a single electron traversing the target may interact twice, emitting two photons. Because the photon energies add, this depletes the low energy end of the measured spectrum and tilts the spectrum. The logarithmic energy scale and the mismatch between ADC counts and histogram bin boundaries can create a problem for low $k$. The uneven mapping can create a dithering in the histograms, with different numbers of ADC counts contributing to adjacent bins, creating an up-down-up pattern, as can be seen in Fig. 2 of Ref. 2. To avoid this, the data below 500 keV were smoothed with a 3-point average with weights 0.25 : 0.5 : 0.25. Above 500 keV, the weights of the two side points were reduced logarithmically with the energy, reaching zero at 5 MeV. We have previously shown that both LPM and dielectric suppression are necessary to explain the data; this paper presents a more detailed examination of the data for a variety of targets. In most cases, only a single, combined LPM plus dielectric suppression curve is shown. To produce histograms covering almost 3 1/2 decades of photon energy, it was necessary to combine data from the high and low gain running. Above 5 MeV, low gain data were used, while below 40 MeV high gain data were used. Between 5 and 40 MeV, weighted averages of both data sets were used. Because the agreement between the two data sets was considerably better than the estimated systematic errors, the actual combination technique was unimportant. One run of 0.7% $X_0$ Au 8 GeV high gain data was removed from the analysis because it was significantly above both the other high gain data and also the low gain data. And, as discussed below, the 0.1% $X_0$ gold data were not always consistent. In all other cases, the data from individual runs were consistent.

Monte Carlo
-----------

A computer code using Monte Carlo integration techniques based on a set of look-up tables was written to make predictions for the photon intensity spectra. This technique was necessary in order to combine the effects of multiple photon emission from one electron with predictions for LPM and dielectric suppression and transition radiation. Tables of photon production cross sections are generated, starting with 10 keV photons, with each step in photon energy increasing exponentially in multiples of 1.02. The Migdal cross sections are generated using the simplified calculational methods developed by Stanev and collaborators[@stanev]. Their parameterizations agree well with Migdal’s calculations, without dielectric suppression. Our calculations include an additional term for the longitudinal density effect, in the manner prescribed by Migdal. A separate table is generated for transition radiation. This table is normally filled with conventional transition radiation (Eqn. 13); the Gol’dman or Pafomov combined formula can also be used. The photons from the entry radiation can, of course, interact in the target. For ease of extrapolation, these tables are then converted to integral and total cross sections. The Monte Carlo then begins generating events. Each electron enters the target, and entry radiation may be generated.
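To illustrate the table-driven generation just outlined (the per-step tracking is detailed in the next paragraph), here is a minimal sketch. The function `dsigma_dk` is a hypothetical stand-in for the tabulated, suppressed cross section per unit path length; none of the numerical choices below are taken from the experiment's actual code, and path lengths are assumed to be expressed in a single consistent unit (e.g. radiation lengths).

```python
import numpy as np

rng = np.random.default_rng(0)
K_GRID = 10e3 * 1.02 ** np.arange(600)       # 10 keV upward in 2% steps [eV]

def build_tables(dsigma_dk, e_electron):
    """Tabulate the photon yield per unit path length on the geometric grid and
    return the grid, the total yield, and the normalized integral (cumulative) table."""
    k = K_GRID[K_GRID < e_electron]
    dndk = np.array([dsigma_dk(ki, e_electron) for ki in k]) * np.gradient(k)
    total = dndk.sum()
    cdf = np.cumsum(dndk) / total
    return k, total, cdf

def track_electron(dsigma_dk, e0, thickness, max_step_prob=0.01):
    """Step one electron through the target, allowing at most one photon per step."""
    photons, x, e = [], 0.0, e0
    k, total, cdf = build_tables(dsigma_dk, e)
    while x < thickness:
        step = min(max_step_prob / total, thickness - x)     # keep P(emit) <= 1% per step
        if rng.random() < total * step:
            kgamma = float(np.interp(rng.random(), cdf, k))  # inverse-CDF sampling
            photons.append(kgamma)
            e -= kgamma                                      # source of multiphoton pileup
            if e <= K_GRID[1]:
                break
            k, total, cdf = build_tables(dsigma_dk, e)       # retabulate at the new energy
        x += step
    return photons
```

Summing the sampled photon energies for each electron before histogramming, and adding photon absorption and entry/exit radiation via further look-up tables, would give the pileup-tilted spectra discussed in the text.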
The electron is tracked through the target in small steps. The step size is limited so that the probability of emission at each step is less than 1%; at most one photon can be produced per step. If the electron radiates, the photon energy is chosen using the integral cross section table. The photon energy is subtracted from the electron energy, and the tracking continues until the electron leaves the target, producing another opportunity for transition radiation. The possibility of produced photons interacting in the target by pair production or Compton scattering is included using another look-up table[@Overbo]; any photon that interacts is considered lost. When one electron emits multiple photons, the photon energies are summed before histogramming. The photon energies are then smeared to match the measured calorimeter resolution. In the Monte Carlo curves, at $1.1 < k/k_{LPM} < 1.3$, the LPM curve rises slightly above the Bethe-Heitler curve. This rise comes from Migdal’s original equations, because the product $\xi(s)\phi(s)$ can rise slightly above 1. The Blankenbecler and Drell theory, as described in Section II.A., does not allow for the possibility of multiple interactions, and, because the photon emission point is not localized, it is not easy to include their calculations in a Monte Carlo and, consequently, to allow for experimental effects such as photon absorption in the target. Because of these problems, in particular the multiple interaction possibility, we have not implemented their cross sections in our Monte Carlo. Instead, we will directly compare their cross sections with our data, but only for the thinnest targets, where multiple photon emission is small, and at energies above those where dielectric suppression occurs.

Backgrounds
-----------

Because the calorimeter subtended such a small solid angle, backgrounds due to photonuclear interactions were small; only photons produced with very small $p_\perp$ would hit the calorimeter. As previously mentioned, the maximum critical energy for synchrotron radiation from the spectrometer magnet incident on any part of the calorimeter was 280 keV (40 keV) at 25 (8) GeV; for synchrotron radiation hitting the central crystal, the critical energies were much lower. Because the synchrotron radiation was painted in a band downward from the central crystal, it was easy to identify in the calorimeter. For the 25 GeV ‘high gain’ data, synchrotron radiation could be a significant background. Backgrounds were reduced with the cut diagrammed in Fig. 3. Photon clusters in the lower 25% of the calorimeter, below the diagonal lines, were removed. Photons reconstructed exactly on the border were kept, but with an appropriate weighting, 50% if they were on the border lines, and 75% at the center of the center crystal. The data were adjusted upward to compensate for this 25% loss of signal. Because of uncertainty in the source of after-cut backgrounds, no further corrections are applied. The backgrounds were measured with periodic no-target runs. The no-target data, both with and without the synchrotron radiation cut, are shown in Fig. 4 for both 8 and 25 GeV running. Note that this figure is normalized as photons per 1000 electrons, whereas Figs. 5–13 are normalized as photons per electron per radiation length of the target. The backgrounds in Fig. 4 can therefore be scaled to the data in Figs. 5–13 by dividing the Fig. 4 values by 1000 and then by the target thickness in percent of a radiation length.
At 25 GeV, the majority of background is synchrotron radiation, which is largely removed by the cut. At 8 GeV, the cut has little effect; acceptance corrections occasionally make the post-cut background larger than the pre-cut. Except for the region where synchrotron radiation was expected, backgrounds were always small. After the cut, backgrounds at 25 GeV were less than one 200 keV–500 MeV photon per 1000 electrons. At 8 GeV, the background was about a factor of 3 lower, with or without the cut.

Discussion of Data
------------------

Figures 5–13 present our data for a variety of target materials, arranged in order of increasing suppression. For each material, there is one figure, with four or six panels, showing two target thicknesses in 8 and 25 GeV beams, plus edge-effect subtracted data (discussed in Section VI). The 25 GeV ‘high gain’ data have had the synchrotron radiation removal cut applied. For lead, there is only one target thickness. Occasionally, there are data at only one energy for a target. The high gain and low gain calorimeter data have been combined as previously described; where there are no high or low gain data, the histogram is cut off at the appropriate energy. For each target, we compare the data with different Monte Carlo curves. Our standard curve, shown by a solid line in all the plots, is a Monte Carlo including LPM and dielectric suppression, with conventional transition radiation. For the thinner targets, we make comparisons with a number of transition radiation theories. For these plots, the Monte Carlo curves have been normalized to match the data, as discussed in Section VIII. Figure 5 shows data from the carbon targets. In addition to the standard Monte Carlo (solid line), LPM suppression only (dotted line) and a Bethe-Heitler curve (dashed line) are shown for comparison. To give an idea of the effect of transition radiation, we also show in Fig. 5a a Bethe-Heitler only curve and in Fig. 5d the suppression curve, both with no transition radiation, as dot-dashed lines. The upturn below about 500 keV for the 25 GeV electron Monte Carlos is transition radiation (Eqn. 12). The additional upturn in the data is consistent with the remaining background. The combined Monte Carlo does the best job of representing the data. At 8 GeV, the suppression is dominated by dielectric suppression; at 25 GeV, the two effects have a similar magnitude. At 25 GeV, the suppression appears to turn on at higher energies and more gradually than predicted by the Monte Carlo. Figure 6 shows data from the aluminum targets, with the same three Monte Carlo curves as in Fig. 5. The data are slightly below the Monte Carlo over most of the plot. Here, the upturn below 500 keV in the 25 GeV data is consistent with transition radiation plus remnant synchrotron radiation. Since the $Z$ of aluminum is twice that of carbon, the LPM effect is much larger. Because the densities are similar, dielectric suppression is very similar. As with carbon, the LPM effect appears to turn on slightly more gradually than the Monte Carlo predicts. Figure 7 shows data from the iron targets, compared with just the standard Monte Carlo. The data and Monte Carlo are close, but the data may have a longer, but more gradual, slope than the Monte Carlo predicts. Data from the 2% lead target are shown in Fig. 8, again with the standard Monte Carlos. Figure 9 shows data from the tungsten targets. The fit is quite good at 8 GeV. At 25 GeV, for $k< 10$ MeV, the data for the 2% $X_0$ target are above the Monte Carlo.
At 7 MeV, the target thickness is comparable to the unsuppressed formation length. Eqn. 17 shows that the suppressed length becomes comparable to $t$ below 3.0 MeV. Below 1.7 MeV, dielectric suppression reduces $l_f$ below $t$. Between 1.7 MeV and 3.0 MeV, the target should interact as a single radiator; the straight line on the figure is from Eqn. 14; the height is significantly above the data. Figure 10 shows data from the 3% and 5% uranium targets. In both cases, the 25 GeV data rise above the Monte Carlo at low $k$. The prediction of Eqn. 14 is shown by the straight line in the 25 GeV 3% data. For the 5% target and the 8 GeV 3% data, $t > l_f$ everywhere, so it is appropriate to treat the edge effects in terms of independent transition radiation. The transition radiation predicted by Ternovskii (dotted line) and Pafomov (dashed line) is shown on these plots, on top of the LPM + dielectric suppression base. The Ternovskii curve has a jump around 500 keV in the 25 GeV data. This corresponds to $sk_p^2/k^2=1$, below which transition radiation from Eqn. 13 applies; the corresponding $k$ is below 200 keV for 8 GeV electrons. Below this energy, Ternovskii matches conventional transition radiation. Above this energy, Ternovskii predicts a rather large transition radiation, which does not match the data. The match could be improved by adjusting $\chi$. However, a rather large adjustment would be required. Pafomov’s prediction jumps at about 800 keV (400 keV), corresponding to $k=k_p^{4/3}/k_{LPM}^{1/3}$. Below this, his predictions are considerably above both conventional transition radiation and the data. Above the break, the shape looks reasonable, but the amplitude appears to be a factor of 2 to 3 too large. Figure 11 shows data from the 6% and 0.7% gold targets. For the 0.7% target, the excess flat region extends from about 1 MeV up to 30 MeV. The downturn for the 0.7% $X_0$ data above $k=100$ MeV is due to the natural decrease of the Bethe-Heitler spectrum. Because the 0.7% target is thin enough that multi-photon emission is small, we can compare it directly with predictions that are not amenable to Monte Carlo simulation. We do this in Fig. 12, which shows an enlarged view of the data in Fig. 11. Here, the dashed line is the result of a calculation by Blankenbecler and Drell[@dick], normalized to our Bethe-Heitler Monte Carlo. Because Blankenbecler and Drell do not include dielectric suppression or transition radiation in their calculations, the calculations are suspect below 5 MeV (1.5 MeV) at 25 (8) GeV. At 25 GeV, Blankenbecler and Drell are an excellent fit to the data, with a $\chi^2/DOF$ of 1.15 above 2 MeV. At 8 GeV, the agreement is not as good, with $\chi^2/DOF$=2.3. Because of the more gradual onset of suppression in the Blankenbecler and Drell calculation, the downturn in the 8 GeV spectrum occurs above $k=500$ MeV and is not visible. At 25 GeV, the prediction of Shul’ga and Fomin is shown as a straight dot-dashed line. At 8 GeV, the target is thin enough that their formulae do not apply. Zakharov[@Zakharov] has compared his calculation with our 0.7% 25 GeV data for $k>5$ MeV, and finds excellent agreement. Figure 13 shows data from the 0.1% gold target, with Bethe-Heitler (dashed line) and dielectric suppression only (solid line) Monte Carlos. The target is thin enough that the total multiple scattering is less than $1/\gamma$. One might expect that there is then no LPM suppression.
However, Blankenbecler and Drell found[@dick] a slight suppression at 25 GeV, about 8% at $k=500$ MeV, rising to 13% at $k=100$ MeV. At 8 GeV, the suppression is a few percent. Because of the small signal and relatively large uncertainties, we are not able to confirm or reject this slope. Little transition radiation is visible. Because $t<l_f$ over the entire relevant $k$ range, transition radiation is reduced by $\sin^2{(t/l_f)}$[@Artru] compared to a thick target ($t>l_f$). Dielectric suppression is expected to be similarly reduced, because the total phase shift in the entire target thickness is much less than one. However, at 8 GeV, a considerable downturn is observed, with the data falling between the dielectric-suppression-only and Bethe-Heitler predictions. Unfortunately, there are a number of experimental uncertainties associated with this target. Because the target is so thin, background contamination is relatively more significant than it is for other targets. The actual target thickness is not well known, and visual inspection suggests that it is not uniform; we have not been able to measure this. We have observed considerable variation in overall bremsstrahlung amplitude from run to run; this could be caused by the beam spot hitting different locations on the target.

Target Subtraction
==================

The data presented above show that the suppressed curves are a much better fit to the data than the Bethe-Heitler curves. However, in many cases, the Monte Carlo does not fit the data well, especially when the target thickness is a significant fraction of $l_f$, and surface effects are large. One way to remove the surface effects is to compare targets of the same material, but differing thicknesses. We do this by performing a bin-by-bin subtraction of the histograms of the same material but differing thicknesses, for example 6% $X_0$ Au $-$ 0.7% $X_0$ Au, giving the ‘middle’ 5.3% $X_0$ of the target. Because this subtraction increases the slope change due to multi-photon pileup (multiple interactions in the target), it is necessary to compare the result with Monte Carlo data which have been subjected to the same procedure. The subtractions are shown in Figs. 5–11. This subtraction suffers from a few drawbacks. It assumes that the target is thicker than a formation length, so that there is no interference between the transition radiation from the two edges. The subtraction increases the effect of multi-photon emission and photon absorption in the targets. Because of this, when the procedure is applied to Monte Carlo data, the result is negative below about 1 MeV (500 keV) at 25 (8) GeV beam energy, depending on the target material. These effects are included in the Monte Carlo, but the subtractions do increase the relative systematic errors. However, edge effects change the multi-photon pileup slightly. Because this is not in the Monte Carlo, it also adds to the systematic errors. The systematic errors due to the Monte Carlo in Table IV should be doubled. Nevertheless, subtraction appears to be an effective process for separating edge effects from bulk LPM suppression, so we present the subtracted data here. After subtraction, the LPM Monte Carlo is a much better match to the data. To quantify the agreement, we have performed a $\chi^2$ fit of the Monte Carlo to the data; the results of the fit are given in Table III. The only free parameters in the fit are the previously mentioned normalization constants; see Section VIII for a discussion of the normalization.
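The subtraction and the one-parameter fit can be summarized in a short sketch. This is our illustration: in particular, weighting the per-radiation-length histograms by target thickness before subtracting is our assumption about the normalization convention, and the closed-form normalization is simply the minimizer of the quoted $\chi^2$.

```python
import numpy as np

def subtract_thicknesses(y_thick, y_thin, t_thick, t_thin):
    """Bin-by-bin 'middle of the target' spectrum from two targets of the same material.

    y_* are per-electron, per-radiation-length intensities; t_* are the target
    thicknesses in radiation lengths (e.g. 0.06 and 0.007 for the gold pair).
    """
    return (t_thick * y_thick - t_thin * y_thin) / (t_thick - t_thin)

def fit_normalization(data, sigma, mc):
    """One-parameter chi^2 fit of the Monte Carlo shape to the data.

    Minimizes sum((data - a*mc)^2 / sigma^2) over the normalization a,
    which has the closed form a = sum(w*data*mc) / sum(w*mc^2), w = 1/sigma^2.
    """
    w = 1.0 / sigma**2
    a = np.sum(w * data * mc) / np.sum(w * mc**2)
    chi2 = np.sum(w * (data - a * mc)**2)
    return a, chi2 / (data.size - 1)
```

As the text emphasizes, the Monte Carlo histograms must be passed through the same subtraction before fitting for the comparison to be meaningful.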
For most of the materials, the fit quality is good, with $\chi^2$/DOF$\sim 1$. For the targets where the $\chi^2/DOF$ is significantly greater than 1, indicating a poor fit, the disagreement appears to be within the systematic errors; we have not attempted to include the systematic errors in the fit or $\chi^2$. Figures 5c and 5f show the carbon data, above 450 keV (200 keV) for 25 (8) GeV. The fit quality is reasonable, although, because of the good statistics, the 25 GeV $\chi^2/DOF$ of 2.74 is high. At 8 GeV the fit quality is much better, with $\chi^2/DOF=1.17$. At 25 GeV, much of the $\chi^2$ comes from the region of small $k$, where the data are below the Monte Carlo. The fact that the subtracted data and MC agree much better than their unsubtracted counterparts indicates that the mismatch between the data and LPM + dielectric suppression MC is related to the target edges. This is a bit puzzling, since it is difficult to see how surface terms could [*increase*]{} the suppression; an unreasonably large contamination by a higher $Z$ material would be required to explain the spectrum. Figures 6d, 7d and 9d show the 25 GeV subtracted aluminum, iron and tungsten data, above 500 keV. The aluminum and tungsten simulations are an excellent fit to the data, with $\chi^2/DOF$=0.84 and 0.99, respectively. The iron fit is rather poor with $\chi^2/DOF= 2.32$, although it agrees much better than the unsubtracted data. Figures 10c and 10f show the uranium data, above 1000 keV (300 keV) for 25 (8) GeV. The fit quality is quite good, with $\chi^2/DOF$s of 0.89 and 1.56. Figures 11c and 11f show the gold data, above 5 MeV (350 keV) for 25 (8) GeV. The fit quality is excellent at 25 GeV, with $\chi^2/DOF$=0.85. The 8 GeV data have a $\chi^2/DOF$ of 2.68, because the data are below the MC prediction below 1 MeV. This may be partly because the 0.7% target is so thin that coherent interactions between the two edges are significant. However, in that case we would expect better agreement at 8 GeV, where $l_f$ is much smaller. One side benefit of the subtraction procedure is that the break in the spectrum between LPM suppression and dielectric suppression becomes much clearer. From these results, it is clear that the Migdal formula does an excellent job of describing suppression in bulk media. The suppression scales as expected with beam energy, photon energy, target $Z$ and $X_0$. It would be possible to modify the subtraction procedure to isolate the emission due to a single edge. However, because of the large errors and uncertainties inherent in the process, the results would have limited significance. For the carbon and iron targets, the ‘edge’ term would be negative over a fair fraction of the spectrum.

Systematic Errors
=================

Our systematic errors are divided into two classes: those that affect the absolute normalization only (discussed in the next section) and those that can affect the shape of the spectrum. The major systematic errors are due to energy calibration, photon (cluster) finding, calorimeter nonlinearity, uncertainty in the target density, and multiphoton pileup, as summarized in Table IV. The systematic errors that can affect the spectral shape are quite different for the high and low calorimeter gain data, because several things change. As was previously discussed, in one case energy loss is primarily by showering, and in the other by Compton scattering, so the clustering works differently. Also, for the high gain data, backgrounds are much larger.
For these reasons, the systematic errors are much larger for $k<5$ MeV than for $k>5$ MeV. Surprisingly, except for the synchrotron radiation removal cut, the systematic errors are independent of electron beam energy. For $k>5$ MeV, the major errors are calorimeter energy calibration (1.5%), photon cluster finding (2%), calorimeter nonlinearity (3%), backgrounds (1%), target density (2%), electron flux (0.5%), and Monte Carlo uncertainties (1%), for a total systematic uncertainty of 4.6%. The 5% uncertainty in the calorimeter energy calibration is equivalent to shifting the histogrammed data by just over half a bin. The magnitude of the consequent error in cross section depends on the slope of the curve, and consequently on the target thickness. In the worst case, the 6% $X_0$ gold target, a 5% energy scale shift produces a 1.5% change in the measured cross section. The photon cluster finding introduces a 2% uncertainty in the cross section. Likewise, leakage out the back and sides of the calorimeter, together with PMT saturation effects, introduces a 3% uncertainty. Most of the target materials had a well-defined density. However, the carbon targets were graphite, which has a density that can vary, in part because it can absorb water. During data taking, they were in vacuum, so that wasn’t a problem. Their density was determined by measuring and weighing them, the latter after they were dried in an oven. We measured a density 4$\pm$2% below the standard value[@PDG], and used this density in calculations of the radiation length and $E_{LPM}$. For the $k<5$ MeV data, many systematic errors are larger. The photon cluster finder is less effective because of the possibility of non-contiguous energy deposition (7%), and the calorimeter energy calibration is worse due to the need to use the higher energy data as an intermediate calibration (3%). Also, at these energies, backgrounds are larger, a 4% uncertainty, and the Monte Carlo is probably less accurate for low energy photons (1.5%). This gives an overall 9% systematic error. For the data where the synchrotron radiation rejection cut was used, ‘high gain’ 25 GeV running, there is an additional systematic error. This is because the cut efficiency is sensitive to how well the electron beam is centered on the calorimeter. During our running, the average deviation from the calorimeter center was less than 5 mm. This introduces an additional 15% systematic error.

Normalization
=============

We have compared our measured absolute cross sections with the Migdal predictions by calculating the adjustment required to normalize the data to the Migdal plus dielectric suppression Monte Carlo. To avoid regions where edge effects and backgrounds are important, the 25 GeV data are normalized over the range 20 MeV to 500 MeV, and the 8 GeV data are normalized from 2 MeV to 500 MeV. For the 0.7% $X_0$ data, a narrower range, 30 MeV (10 MeV) to 500 MeV, was used at 25 (8) GeV, to avoid surface effects. This is a much wider fitting range than was used previously[@prl]. For each data set, Table II gives the normalization corrections, the percentage by which it is necessary to adjust the Monte Carlo prediction to best match the data. The errors given are statistical only; the systematic errors are summarized in Table IV. The electron flux was measured using the lead glass blocks. The blocks are large enough so that there was almost no leakage out the side or top of the block stack.
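Returning briefly to the spectral-shape systematics listed above: the quoted totals are consistent with combining the individual contributions in quadrature, although the quadrature rule itself is our assumption (the paper does not state how the terms are combined). A short check for the $k>5$ MeV list:

```python
import math

# Individual k > 5 MeV systematic uncertainties, in percent, as listed above:
# calibration, clustering, nonlinearity, backgrounds, density, flux, Monte Carlo.
SYSTEMATICS = [1.5, 2.0, 3.0, 1.0, 2.0, 0.5, 1.0]

total = math.sqrt(sum(e * e for e in SYSTEMATICS))
print(f"total systematic uncertainty: {total:.1f}%")   # -> 4.6%
```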
The major source of missed electrons was high energy bremsstrahlung where the electron lost enough energy to be bent below the lead glass blocks. Electrons with energies below 17.4 (5.8) GeV for 25 (8) GeV beams missed the blocks. The fraction of electrons missing the blocks depended on the target thickness, and was determined by the Monte Carlo; the miss probability ranged from 2% to 7%. This miss probability was folded into a matrix to estimate the number of single electron events. Because missed electron events produce high energy photons, the events will also cause overflows in the calorimeter, and thus they do not affect the histograms. In this unfolding, a fortuitous cancellation limits the systematic errors to 0.5%. Most of our running was at an average of one electron per pulse. At this level, the probability of a single electron being missed was very close to the probability of a two electron event appearing as a single electron in the lead glass blocks. Thus, the probability of losing an electron almost completely cancels out of the luminosity, and it is not necessary to know this number well. Many uncertainties that affect the relative measurement are reduced for the normalization, because of the more limited photon energy range. Above 20 MeV, photon-finding is much more robust, and the calorimeter nonlinearities are less significant. The target thickness measurement was more complicated than originally expected. The target thicknesses were measured with calipers. The thinner targets were weighed, and their sizes measured, to find the thickness in gm/cm$^2$. The uncertainty in thickness contributed a 2% systematic error. Because of the previously mentioned uncertainties about the 0.1% $X_0$ gold target, it is not considered here. The normalization constant depends only slightly on the normalization procedure. Changing the lower energy limit only produces small changes, of order 0.5%. To account for these fitting uncertainties, we include a 1% systematic error. On the average, the normalizations show that the data are slightly below the Migdal prediction. The weighted averages are $-4.7\pm2.0$% ($-3.1\pm 5.6$%) at 25 (8) GeV, with a 3.5% systematic error. If the outlying 6% $X_0$ gold target is excluded from the 8 GeV data, the average becomes $-4.8\pm2.5$%. However, the 2.5% contribution to the cross section from the $(1-y)(Z^2+Z)/3$ term discussed in section II.A increases the disagreement. Including systematic errors, we find roughly a $2\sigma$ discrepancy. This is difficult to explain by experimental effects alone. There are some attractive theoretical explanations, stemming from limitations in Migdal’s calculations. Migdal used a Gaussian approximation for multiple scattering. This underestimates the probability of large angle scatters. These occasional large angle scatters would produce some suppression for $k>k_{LPM}$, where Migdal predicts no suppression and where we determine the normalization. Fig. 12 shows that, compared to Migdal, the suppression predicted by Blankenbecler and Drell turns on much more slowly; hence, if the Blankenbecler and Drell calculation were used in the normalization, the discrepancy would be lessened or eliminated. Zakharov’s[@Zakharov] calculation would also appear to lessen or eliminate this discrepancy.

Discussion
==========

As the data presented above show, the LPM and dielectric effects suppress bremsstrahlung as expected for most of our target materials and thicknesses.
The suppression scales as expected with electron energy, photon energy, and radiation length. Materials with similar radiation lengths, but different densities and atomic numbers (tungsten and uranium), display similar LPM suppression. For low photon energies, the formation length can become longer than the target thickness. When that happens, we observe that the target behaves as a single scatterer, and the spectrum again becomes flat, like the Bethe-Heitler result, but at a lower intensity. For thicker targets, there is edge effect radiation, which can be removed by subtraction. Unfortunately, we have not found a single calculation that matches the data and includes both LPM and dielectric suppression for finite target thicknesses. However, we have removed the finite target thickness effects by subtraction. Although the data clearly demonstrate LPM suppression to good accuracy, for low $Z$ targets the simulations do not match the data as well as expected. The fact that the discrepancy is greatly reduced by the subtraction procedure indicates that some sort of a surface effect is involved. However, it is difficult to imagine how a surface effect can reduce the emission. It is also difficult to imagine instrumental effects that would affect only carbon and iron; a 20% adjustment to the energy scale would improve the agreement for these materials, but it would produce a large disagreement for the other materials. A discrepancy in the bulk material (subtracted plots) might be explainable by material effects. The carbon targets were made of pyrolytic graphite, which has internal structure on a scale much larger than crystalline structure. If the target varied in density on a scale large with respect to the formation zone length, then the average suppression and edge effects would increase and additional transition radiation would be generated, consistent with the data at 25 GeV beam energy. The iron targets should be mechanically homogeneous, but magnetically inhomogeneous. Individual magnetic domains are magnetized to saturation ($B\sim 2$ T), but in different directions. The typical domain size is of order 1 $\mu$m. Magnetic bending of the electrons can also suppress bremsstrahlung; a detailed model of the phenomenon is lacking[@klein]. Two Tesla is enough to bend the electrons by $1/\gamma$ in a distance $l_f$; in combination with multiple scattering, this could alter the spectrum. It is perhaps significant that at 8 GeV beam energy, where the formation zone is a factor of 10 shorter and edge effects consequently are greatly reduced, the data show much better agreement than at 25 GeV beam energy.

Conclusions
===========

The LPM and dielectric effects suppress bremsstrahlung as expected for a variety of target materials and thicknesses and two beam energies. For carbon and iron, somewhat more suppression than expected is observed. However, the excess suppression appears to be a surface or magnetic effect, and perhaps can be explained by the properties of these targets. For most of our targets, the agreement is within 5% of the theory. Thin targets, where the formation length is longer than the target thickness, behave as single radiators. Calculations by Blankenbecler and Drell reproduce the shape of the photon spectra where dielectric suppression is unimportant. The overall bremsstrahlung cross section for low energy photons is measured to be about 5% (2$\sigma$) lower than the prediction based on Migdal’s work.
Alternate calculations, by Blankenbecler and Drell, or by Zakharov, might agree better with the data.

Acknowledgements
================

We would like to thank the SLAC Experimental Facilities group for their assistance in setting up the experiment, the SLAC Accelerator Operations group for their efficient beam delivery, and the SLAC Computing Services group for providing the data analysis facilities. We also acknowledge useful conversations and cross section calculations from Sid Drell and Richard Blankenbecler. N. Shul’ga and S. Fomin explained the details of their calculations to us. Don Coyne provided much direct and indirect support. This work was supported by Department of Energy contracts DE-AC03-76SF00515 (SLAC), DE-AC03-76SF00098 (LBNL), and National Science Foundation grants NSF-PHY-9113428 (UCSC) and NSF-PHY-9114958 (American U.). Present address: Institut de Fisica d’Altes Energies, Universitat Autònoma de Barcelona, 08193 Bellaterra (Barcelona), Spain.

P. L. Anthony [*et al.*]{}, Phys. Rev. Lett. [**75**]{}, 1949 (1995).

P. L. Anthony [*et al.*]{}, Phys. Rev. Lett. [**76**]{}, 3550 (1996).

L. D. Landau and I. J. Pomeranchuk, Dokl. Akad. Nauk. SSSR [**92**]{}, 535 (1953); [**92**]{}, 735 (1953). These two papers are available in English in L. Landau, [*The Collected Papers of L. D. Landau*]{}, Pergamon Press, 1965.

A. B. Migdal, Phys. Rev. [**103**]{}, 1811 (1956).

A. Misaki, Nucl. Phys. B (Proc. Suppl.) [**33A,B**]{}, 192 (1993); Capdevielle and K. Atallah, Nucl. Phys. [**28B**]{} (Proc. Suppl.), 90 (1992).

J. Learned, Phil. Trans. R. Soc. Lond. [**A346**]{}, 99 (1994); A. Misaki, Fortschr. Phys. [**38**]{}, 413 (1990).

X. N. Wang, M. Gyulassy and M. Plumer, Phys. Rev. [**D51**]{}, 3436 (1995); S. Brodsky and P. Hoyer, Phys. Lett. [**B298**]{}, 165 (1993).

G. Raffelt and D. Seckel, Phys. Rev. Lett. [**67**]{}, 2605 (1991).

P. H. Fowler, D. H. Perkins and K. Pinkau, Phil. Mag. [**4**]{}, 1030 (1959); E. Lohrmann, Phys. Rev. [**122**]{}, 1908 (1961).

K. Kasahara, Phys. Rev. [**D31**]{}, 2737 (1985); S. C. Strausz [*et al.*]{}, in [*Proc. 22nd Intl. Cosmic Ray Conf.*]{}, Dublin, Ireland, Aug 11–23, 1991, [**4**]{}, p. 233.

A. Varfolomeev [*et al.*]{}, Sov. Phys. JETP [**42**]{}, 218 (1976).

J. F. Bak [*et al.*]{}, Nucl. Phys. [**B302**]{}, 525 (1988).

J. Dolesji, J. Hüfner and B. Z. Kopeliovich, Phys. Lett. [**B312**]{}, 235 (1993).

F. R. Arutyunyan, A. A. Nazaryan and A. A. Frangyan, Sov. Phys. JETP [**35**]{}, 1067 (1972).

J. D. Jackson, [*Classical Electrodynamics*]{}, 2nd ed., John Wiley & Sons, 1975, p. 687.

B. Rossi, [*High Energy Particles*]{}, Prentice Hall, Inc., 1952, p. 68.

M. L. Perl, in [*Proc. 1994 Les Rencontres de Physique de la Vallee D’Aoste*]{} (Editions Frontieres, Gif-sur-Yvette, France, 1994), Ed. M. Greco, p. 567.

Y.-S. Tsai, Rev. Mod. Phys. [**46**]{}, 815 (1974).

R. Blankenbecler and S. D. Drell, Phys. Rev. [**D53**]{}, 6265 (1996).

B. G. Zakharov, Pis’ma v. ZhETF [**64**]{}, 737 (1996).

B. G. Zakharov, Pis’ma v. ZhETF [**63**]{}, 906 (1996).

M. L. Ter-Mikaelian, Dokl. Akad. Nauk. SSSR [**94**]{}, 1033 (1954). For a discussion in English, see M. L. Ter-Mikaelian, [*High Energy Electromagnetic Processes in Condensed Media*]{}, John Wiley & Sons, 1972.

V. M. Galitsky and I. I. Gurevich, Il Nuovo Cimento [**32**]{}, 396 (1964).

Richard Blankenbecler, private communication.

I. I. Gol’dman, Sov. Phys. JETP [**11**]{}, 1341 (1960).

N. F. Shul’ga and S. P. Fomin, JETP Lett. [**63**]{}, 873 (1996).

F. F. Ternovskii, Sov. Phys. JETP [**12**]{}, 123 (1960).
Particle Data Group, R. M. Barnett [*et al.*]{}, Phys. Rev. [**D54**]{}, 1 (1996).

G. M. Garibyan, Sov. Phys. JETP [**12**]{}, 237 (1961).

V. E. Pafomov, Sov. Phys. JETP [**20**]{}, 253 (1965).

S. R. Klein [*et al.*]{}, in [*Proc. XVI Int. Symp. Lepton and Photon Interactions at High Energies*]{} (Ithaca, 1993), Eds. P. Drell and D. Rubin, p. 172.

R. Becker-Szendy [*et al.*]{}, in [*Proc. 21st SLAC Summer Institute on Particle Physics*]{} (Palo Alto, 1994), p. 519.

P. Bosted and A. Rahbar, SLAC-NPAS-TN-85-1, February, 1985 (unpublished).

I. Kirkbride, in the SLAC Users Bulletin No. 97, Jan–May, 1984, pp. 10–11.

M. Cavalli-Sforza [*et al.*]{}, IEEE Trans. Nucl. Sci. [**41**]{}, 1374 (1994).

A. Zucchiatti [*et al.*]{}, Nucl. Instrum. & Meth. [**A281**]{}, 341 (1989).

T. Stanev [*et al.*]{}, Phys. Rev. [**D25**]{}, 1291 (1982).

J. Hubbell, H. Gimm and I. Overbo, J. Phys. Chem. Ref. Data [**9**]{}, 1023 (1980).

X. Artru, G. B. Yodh and G. Mennessier, Phys. Rev. [**D12**]{}, 1289 (1975).

  Target     Z    $X_0$ (cm)   $E_{LPM}$ (TeV)   $k_{LPM25}$ (MeV)   $k_{LPM8}$ (MeV)   $r$
  ---------- ---- ------------ ----------------- ------------------- ------------------ --------------------
  Carbon     6    19.6         74                8.5                 0.87               $5.5\times10^{-5}$
  Aluminum   13   8.9          36                15.7                1.6                $6.0\times10^{-5}$
  Iron       26   1.76         6.6               95                  9.7                $1.0\times10^{-4}$
  Lead       82   0.56         2.1               295                 30.1               $1.1\times10^{-4}$
  Tungsten   74   0.35         1.32              472                 48.3               $1.5\times10^{-4}$
  Uranium    92   0.35         1.32              472                 48.3               $1.4\times10^{-4}$
  Gold       79   0.33         1.25              500                 51.2               $1.5\times10^{-4}$
  ---------- ---- ------------ ----------------- ------------------- ------------------ --------------------

  : $E_{LPM}$, $k_{LPM25}$, $k_{LPM8}$ and $r$ for the target materials used here.

  --------- ---------- ----------------- --------------- --------------------- --------------------
  Target    $t$ (mm)   $t$ (g/cm$^2$)    $t$ (% $X_0$)   Norm. (%) at 25 GeV   Norm. (%) at 8 GeV
  2% C      4.10       0.894             2.1             -3.0$\pm$0.3          -6.0$\pm$0.4
  6% C      11.7       2.55              6.0             -2.9$\pm$0.2          -4.6$\pm$0.5
  3% Al     3.12       0.842             3.5             -2.7$\pm$0.4          -3.0$\pm$0.4
  6% Al     5.3        1.4               6.0             -2.8$\pm$0.3
  3% Fe     0.49       0.39              2.8             -5.4$\pm$0.2          -1.4$\pm$0.4
  6% Fe     1.08       0.85              6.1             -7.5$\pm$0.2
  2% Pb     0.15       0.17              2.7             -4.5$\pm$0.2          -0.7$\pm$0.4
  2% W      0.088      0.17              2.7             -8.3$\pm$0.3          -8.6$\pm$0.3
  6% W      0.21       0.41              6.4             -4.7$\pm$0.3
  3% U      0.079      0.15              2.2             -5.6$\pm$0.3          -6.3$\pm$0.3
  5% U      0.147      0.279             4.2             -7.0$\pm$0.3          -7.5$\pm$0.4
  0.1% Au   0.0038     0.0073            0.11
  0.7% Au   0.023      0.044             0.70            -1.3$\pm$0.4          12.2$\pm$0.7
  6% Au     0.20       0.39              6.0             -5.5$\pm$0.2          -5.0$\pm$0.3
  --------- ---------- ----------------- --------------- --------------------- --------------------

  : List of target thicknesses and overall normalization constants. The target thicknesses $t$ are given in mm, gm/cm$^2$, and % of $X_0$. The last two columns give the normalization adjustments used to match the simulations with the data (statistical errors only).

  Material   25 GeV   8 GeV
  ---------- -------- -------
  Carbon     2.74     1.17
  Aluminum   0.84
  Iron       2.32     1.41
  Tungsten   0.99
  Uranium    1.56     0.79
  Gold       0.85     2.68

  : $\chi^2$ per degree of freedom of the fits to the subtracted data. The only free parameters were the absolute normalizations of the two individual targets. Typically, there were about 60 degrees of freedom. Statistical errors only were included in the fit.
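For readers who want to translate the $\chi^2/DOF$ values in Table III into goodness-of-fit probabilities, here is a minimal sketch (ours, not part of the original analysis) using the quoted $\approx 60$ degrees of freedom; the function name is an illustrative choice.

```python
from scipy.stats import chi2

def fit_probability(chi2_per_dof, dof=60):
    """P(chi^2 >= observed) for the quoted ~60 degrees of freedom
    (statistical errors only, as in Table III)."""
    return chi2.sf(chi2_per_dof * dof, dof)

# Example: the 25 GeV carbon fit, fit_probability(2.74), is vanishingly small,
# while the 8 GeV carbon fit, fit_probability(1.17), is an acceptable ~0.17.
```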
  ------------------------------- ---------- ----------------------- -----------------------
  Source                          Absolute   Relative ($k>5$ MeV)    Relative ($k<5$ MeV)
  Energy Calibration              1%         1.5%                    3%
  Photon Cluster Finding                     2%                      7%
  Calorimeter Nonlinearity        2%         3%                      3%
  Backgrounds                                1%                      4%
  Target thickness                2%
  Target Density                             2%                      2%
  Electron Flux                   0.5%       0.5%                    0.5%
  Monte Carlo                     1.5%       1%                      1.5%
  Normalization Technique         1%
  8 GeV Beam Total                3.5%       4.6%                    9%
  Synchrotron Radiation Removal                                      15%
  25 GeV Beam Total               3.5%       4.6%                    17%
  ------------------------------- ---------- ----------------------- -----------------------

  : Table of Systematic Errors. The absolute column refers to the cross section for $k=500$ MeV for both 8 and 25 GeV beams. The relative errors for $k<5$ MeV and $k>5$ MeV also apply to both 8 and 25 GeV beams, except for the synchrotron radiation removal cut, which is added in separately. Uncertainties in the theoretical calculation are not included.

[^1]: Work supported by Department of Energy contract DE–AC03–76SF00515.
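As a cross-check of the totals quoted in Table IV (our arithmetic, under the assumption that the individual contributions are combined in quadrature, which is consistent with the quoted totals):
$$\sqrt{1.5^2+2^2+3^2+1^2+2^2+0.5^2+1^2}\,\% \simeq 4.6\% \quad (k>5\ {\rm MeV}), \qquad
\sqrt{1^2+2^2+2^2+0.5^2+1.5^2+1^2}\,\% \simeq 3.5\% \quad ({\rm absolute}),$$
and adding the synchrotron-removal term for the 25 GeV $k<5$ MeV total gives $\sqrt{9^2+15^2}\,\% \simeq 17\%$.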
---
abstract: |
    We solve the problem of expressing the Weyl scalars $\psi$ that describe gravitational perturbations of a Kerr black hole in terms of Cauchy data. To do so we use geometrical identities (like the Gauss-Codazzi relations) as well as the Einstein equations. We are able to explicitly express $\psi$ and $\partial_t\psi$ as functions only of the extrinsic curvature and the three-metric (and geometrical objects built out of it) of a generic spacelike slice of the spacetime. These results provide the link between initial data and $\psi$ to be evolved by the Teukolsky equation, and can be used to compute the gravitational radiation generated by two [*orbiting*]{} black holes in the close limit approximation. They can also be used to extract waveforms from spacetimes completely generated by numerical methods.
address:
- |
    1. Instituto de Astronomía y Física del Espacio,\
    Casilla de Correo 67, Sucursal 28\
    (1428) Buenos Aires, Argentina
- |
    2. Center for Gravitational Physics and Geometry, Department of Physics, The Pennsylvania State University,\
    104 Davey Lab, University Park, PA 16802
author:
- 'Manuela Campanelli$^1$, Carlos O. Lousto$^1$, John Baker$^2$, Gaurav Khanna$^2$, and Jorge Pullin$^2$'
title: 'The imposition of Cauchy data to the Teukolsky equation III: The rotating case'
---

Introduction
============

In Ref. [@CL97] the question was raised of how to impose initial data to the Teukolsky equation (that describes perturbations around a rotating black hole). We noted that the expressions of Chrzanowski [@C75] for the Weyl scalars $\psi _4$ and $\psi _0$ in terms of metric perturbations were written as second order operators on the four-metric and appeared inconvenient for building up the initial values needed to start the integration of the Teukolsky equation. The work of reference [@CL97] showed how to solve the problem for a nonrotating background, i.e. perturbations around a Schwarzschild hole, by relating the Weyl scalars $\psi$ to the Moncrief waveforms $\phi _M$, an alternative description of metric perturbations explicitly built up out of the three-metric $\stackrel{\_}{g}_{ij}$ and the extrinsic curvature $K_{ij}$ of the hypersurface $t=$ constant. In Ref. [@CKL98] the $\psi$–$\phi _M$ relations were successfully tested with a program for integration of the Teukolsky equation. It is not obvious how to extend the above techniques to the rotating case. Thus, in the present paper we turned to a more geometrical approach that led us to the desired relations for [*rotating*]{} holes. In Sec. II we collect the results of the 3+1 decomposition reviewed in Ref. [@AABY96] relevant for our derivation. This has the advantage of making $\psi$ automatically independent of the shift, so our task is reduced to proving that the terms depending on the first perturbative order lapse vanish. This is done in Sec. III, where we also build up $\partial _t\psi$ in terms only of $\stackrel{\_}{g}_{ij}$ and $K_{ij}$. These results allow one to compare, given the initial data, evolution through integration of the full Einstein equations with evolution through the Teukolsky equation (linearization around a Kerr hole), and to test, for instance, the close limit approximation for orbiting holes. Notation: We use the conventions of Ref. [@MTW73]. An overbar on geometric quantities means that they are three-dimensional quantities, i.e. defined on the $t=$ constant hypersurfaces $\Sigma _t$ (an exception to this rule is the complex conjugation of the vector $m^\alpha ,$ i.e.
$\stackrel{\_\_}{m}% ^\alpha $). $(\alpha ,\beta )$ and $[\alpha ,\beta ]$ on indices $\alpha ,\beta $ represent the usual symmetric and antisymmetric parts respectively. Greek letters indices run from 0 to 3 while latin letters indices run from 1 to 3. Subindexes (0) and (1) mean pieces of exclusively zeroth and first order respectively. Geometric structure and gravitation =================================== Following Ref. [@AABY96] we write the metric as $$ds^2=-N^2(\theta ^0)^2+g_{ij}\theta ^i\theta ^j,$$ with $\theta ^0=dt$ and $\theta ^i=dx^i+N^idt$, where $N^i$ is the shift vector and $N$ the lapse. The cobasis $\theta ^\alpha $ satisfies $$d\theta ^\alpha =-{\frac 12}C_{\beta \gamma }^\alpha \theta ^\beta \wedge \theta ^\gamma$$ with $C_{0j}^i=-C_{j0}^i=\partial _jN^i$ and all other structure coefficients zero. Note that $\bar g_{ij}=g_{ij}$ and $\bar g^{ij}=g^{ij}$. The spacetime connection one-forms are defined by $$\omega _{\beta \gamma }^\alpha =\Gamma _{\beta \gamma }^\alpha +g^{\alpha \delta }C_{\delta (\beta }^\epsilon g_{\gamma )\epsilon }-{\frac 12}C_{\beta \gamma }^\alpha =\omega _{(\beta \gamma )}^\alpha +\omega _{[\beta \gamma ]}^\alpha ,$$ where $\Gamma _{\beta \gamma }^\alpha $ denotes the Christoffel symbol. These connection forms are written out explicitly in [@ABY97]. In particular, ${\omega }^i{}_{jk}={\Gamma }^i{}_{jk}=\bar {{\Gamma }}^i{}_{jk}$ , and the extrinsic curvature is given by $$K_{ij}=-N\omega _{ij}^0\equiv -{\frac 12}N^{-1}\widehat{\partial }_0g_{ij}, \label{curvextr}$$ where we define the operator $$\widehat{\partial }_0={\frac \partial {\partial t}}-{\cal L}_{{\bf N}},$$ with ${\cal L}_{{\bf N}}$ the Lie derivative on the hypersurface $\Sigma _t$ with respect to the vector $N^i$. Note that $\widehat{\partial }_0$ and $% \partial _i$ commute. The Riemann curvature tensor is given by[@ABY97] $$R^\alpha {}_{{\beta }\rho {\sigma }}={\partial }_\rho {\omega }_{{\beta }{% \sigma }}^\alpha -{\partial }_\sigma {\omega }^{{\alpha }}{}_{{\beta }\rho }+% {\omega }^\alpha {}_{{\lambda }\rho }{\omega }^\lambda {}_{{\beta }{\sigma }% }-{\omega }^\alpha {}_{{\lambda }{\sigma }}{\omega }^\lambda {}_{{\beta }% \rho }-{\omega }^\alpha {}_{{\beta }{\lambda }}C^{{\lambda }}{}_{\rho {% \sigma }} \label{Riemann}$$ For rewriting in the next section the Weyl scalars in terms of hypersurface quantities only, we relate the spacetime Riemann tensor components to the 3-dimensional Riemann and the extrinsic curvature tensors $$\begin{aligned} R_{ijkl} &=&\bar R_{ijkl}+2K_{i[k}K_{l]j}, \label{rijkl} \\ R_{0ijk} &=&2N\bar \nabla _{[j}K_{k]i}, \label{r0jkl} \\ R_{0i0j} &=&N(\widehat{\partial }_0K_{ij}+NK_{ip}K^p{}_j+\bar \nabla _i\bar \nabla _jN). \label{r0j0l}\end{aligned}$$ Another important relation in three dimensions is $$\bar R_{ijkl}=2g_{i[k}\bar R_{l]j}+2g_{j[l}\bar R_{k]i}+\bar Rg_{i[l}g_{k]j}. \label{r3ijkl}$$ The Ricci tensor $R_{\alpha \beta }=R^\sigma {}_{\alpha \sigma \beta }$ is given by $$\begin{aligned} R_{ij} &=&\bar R_{ij}-N^{-1}\widehat{\partial }% _0K_{ij}+KK_{ij}-2K_{ip}K^p{}_j-N^{-1}\bar \nabla _i\bar \nabla _jN, \label{rij} \\ R_{0i} &=&N\bar \nabla ^j(Kg_{ij}-K_{ij}), \label{r0i} \\ R_{00} &=&N\stackrel{\_}{\nabla }^2N-N^2K_{pq}K^{pq}+N\widehat{\partial }_0K. \label{r00}\end{aligned}$$ In order to incorporate the source terms we consider the Einstein equations as $R_{\alpha \beta }=T_{\alpha \beta }-{\frac 12}g_{\alpha \beta }T$. For instance, the “Energy constraint” is defined by $$G^0{}_0={\frac 12}(K_{mk}K^{mk}-K^2-\bar R)=T^0{}_0. 
\label{G00}$$ Finally, from its definition, $$\widehat{\partial }_0\bar R_{ij}=\bar \nabla _k(\widehat{\partial }_0\bar \Gamma _{ij}^k)-\bar \nabla _j(\widehat{\partial }_0\bar \Gamma _{ik}^k), \label{Rijpunto}$$ where $$\widehat{\partial }_0\bar \Gamma _{ij}^k=-2\bar \nabla _{(i}(NK_{j)}{}^k)+\bar \nabla ^k(NK_{ij}). \label{gamapunto}$$ Note that writing equations in terms of $\widehat{\partial }_0$ instead of $\partial _t$ allowed us to get rid of the shift dependence. This is because $\widehat{\partial }_0$ is orthogonal to the spacelike hypersurface $\Sigma _t.$

Weyl scalars for Kerr perturbations
===================================

For the computation of gravitational radiation from astrophysical sources it is convenient to work with the Weyl scalar $$\psi _4=-C_{\alpha \beta \gamma \delta }n^\alpha \overline{m}^\beta n^\gamma \overline{m}^\delta ,$$ since it is directly related to the outgoing gravitational waves. For perturbations around a Kerr hole we have $$-\psi _4=R_{ijkl}n^i\overline{m}^jn^k\overline{m}^l+4R_{0jkl}n^{[0}\overline{m}^{j]}n^k\overline{m}^l+4R_{0j0l}n^{[0}\overline{m}^{j]}n^{[0}\overline{m}^{l]}.$$ Eqs. (\[rijkl\]) and (\[r0jkl\]) directly give us the first two terms in the above sum in terms of hypersurface geometrical objects $(g_{ij},$ $K_{ij})$. In the last term we have to make use of the Einstein equation (\[rij\]) to eliminate $\widehat{\partial }_0K_{ij}.$ If one now considers first order perturbations around a Kerr hole, one would have to consider in $\psi _4$ two types of terms: terms that involve first order perturbative Riemann tensors contracted with the background tetrads and terms that involve the Riemann tensor of the background contracted with three background and one perturbative tetrad. It is not difficult to see that the latter terms vanish for the Kerr background. For the Kerr geometry the only non-vanishing Weyl scalar is $\psi _2=R_{\alpha \beta \gamma \delta }l^\alpha m^\beta n^\gamma \overline{m}^\delta $ and one can quickly see that the above contributions, even with one of the tetrads being a perturbative one, still vanish. For instance, consider the term $R_{ijkl\,(0)}n_{(1)}^i\overline{m}^jn^k\overline{m}^l$. This term vanishes because it is contracted with two $\overline{m}$ vectors, and any contraction with a repeated tetrad vector of the Riemann tensor vanishes for the Kerr spacetime. Similar arguments apply to the other terms. Let us turn our attention to the terms that involve the first order Riemann tensors contracted with the background tetrads. Taking a look at equations (\[rijkl\])-(\[r0j0l\]) we see that if one considers first order perturbations, we will have expressions involving the first order extrinsic curvature, metric, and lapse. We do not want our final expression to depend on the perturbative lapse. It is easy to see that it actually does not depend on it. For $R_{0ijk}$ we see that the lapse appears as an overall factor. So the expression evaluated for the perturbative lapse is proportional to the expression evaluated in the background, which vanishes. For $R_{0i0j}$, if we rewrite it using the Einstein equation (\[rij\]), the lapse again appears as an overall factor, and the same argument as for $R_{0ijk}$ applies. As a separate check, we have verified the independence from the perturbative lapse and shift using computer algebra.
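To spell out the lapse argument above in one line (our small expansion of the reasoning, using only Eq. (\[r0jkl\])): the piece of $R_{0ijk}$ that is first order purely because of the perturbative lapse is
$$\left[R_{0ijk}\right]_{N_{(1)}} = 2N_{(1)}\bar \nabla _{[j}K^{(0)}_{k]i} = \frac{N_{(1)}}{N_{(0)}}\,R^{(0)}_{0ijk},$$
so its contraction with the background tetrad legs is proportional to the corresponding background contraction, which vanishes for Kerr; the same factorization holds for $R_{0i0j}$ once Eq. (\[rij\]) is used to eliminate $\widehat{\partial }_0K_{ij}$.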
The final result for the first order expansion of the Weyl scalar $\psi _4$ therefore is, $$\begin{aligned} -\psi _4 &=&\left[ \stackrel{\_}{R}_{ijkl}+2K_{i[k}K_{l]j}\right] _{(1)}n^i% \overline{m}^jn^k\overline{m}^l-4N_{(0)}\left[ K_{j[k,l]}+\stackrel{\_}{% \Gamma }_{j[k}^pK_{l]p}\right] _{(1)}n^{[0}\overline{m}^{j]}n^k\overline{m}^l \label{psi} \\ &&\ +4N_{(0)}^2\left[ \stackrel{\_}{R}_{jl}-K_{jp}K_l^p+KK_{jl}-T_{jl}+\frac 12Tg_{jl}\right] _{(1)}n^{[0}\overline{m}^{j]}n^{[0}\overline{m}^{l]} \nonumber\end{aligned}$$ where $N_{(0)}=(g_{\text{kerr}}^{tt})^{-1/2}$ is the zeroth order lapse, $% n^i,\overline{m}^j$ are two of the null vectors of the (zeroth order) tetrad (see Ref. [@T73]), latin indices run from 1 to 3, and the brackets are computed to only first order (zeroth order excluded). To obtain $\partial _t$$\psi _4,$ the other relevant quantity in order to start the integration of the Teukolsky equation, we can operate with $% \widehat{\partial }_0$ on $\psi _4$ given by Eq. (\[psi\]) to find $$\begin{aligned} \partial _t\psi _4 &=&N_{(0)}^\phi \partial _\phi \left( \psi _4\right) -n^i% \overline{m}^jn^k\overline{m}^l\left[ \widehat{\partial }_0R_{ijkl}\right] _{(1)} \label{psipunto} \\ &&+4N_{(0)}n^{[0}\overline{m}^{j]}n^k\overline{m}^l\left[ \widehat{\partial }% _0K_{j[k,l]}+\widehat{\partial }_0\Gamma _{j[k}^pK_{l]p}+\stackrel{\_}{% \Gamma }_{j[k}^p\widehat{\partial }_0K_{l]p}\right] _{(1)} \nonumber \\ &&-4N_{(0)}^2n^{[0}\overline{m}^{j]}n^{[0}\overline{m}^{l]}\left[ \widehat{% \partial }_0\stackrel{\_}{R}_{jl}-2K_{(l}^p\widehat{\partial }% _0K_{j)p}-2N_{(0)}K_{jp}K_q^pK_l^q\right. \nonumber \\ &&\left. +K_{jl}\widehat{\partial }_0K+K\widehat{\partial }_0K_{jl}-\widehat{% \partial }_0T_{jl}+\frac 12g_{jl}T-N_{(0)}TK_{jl}\right] _{(1)} \nonumber\end{aligned}$$ where we made use of the equality $$g_{ip}\widehat{\partial }_0g^{pj}=2NK_i^j.$$ The derivatives appearing in Eq. (\[psipunto\]) can be obtained from Eq. (\[r00\]) $$\widehat{\partial }_0K=N_{(0)}K_{pq}K^{pq}-\stackrel{\_}{\nabla }% ^2N_{(0)}-N_{(0)}^{-1}T_{00}, \label{Kpunto}$$ from Eq. (\[G00\]) $$\widehat{\partial }_0\stackrel{\_}{R}=2K^{pq}\widehat{\partial }% _0K_{pq}+4N_{(0)}K_{pq}K_s^pK^{sq}-2K\widehat{\partial }_0K-2\widehat{% \partial }_0T_0^0, \label{Rpunto}$$ and from Eqs. (\[r3ijkl\]) and (\[curvextr\]) $$\begin{aligned} \widehat{\partial }_0R_{ijkl} &=&-4N_{(0)}\left\{ K_{i[k}\stackrel{\_}{R}% _{l]j}-K_{j[k}\stackrel{\_}{R}_{l]i}-\frac 12\stackrel{\_}{R}\left( K_{i[k}g_{l]j}-K_{j[k}g_{l]i}\right) \right\} \label{Rijklpunto} \\ &&\ +2g_{i[k}\widehat{\partial }_0\stackrel{\_}{R}_{l]j}-2g_{j[k}\widehat{% \partial }_0\stackrel{\_}{R}_{l]i}-g_{i[k}g_{l]j}\widehat{\partial }_0% \stackrel{\_}{R}+2K_{i[k}\widehat{\partial }_0K_{l]j}-2K_{j[k}\widehat{% \partial }_0K_{l]i}. \nonumber\end{aligned}$$ Note that in the last three equations we have taken explicitly the lapse to the zeroth perturbative order. This is so because in building up $\partial _t $$\psi _4$ explicitly all dependence on $N_{(1)}$ cancels out. To prove this one can do the explicit calculation for the Kerr background using computer algebra. An alternative is to notice that $\partial_0 \psi_4= {\cal % L}_t \psi_4$ where $t^a$ is a vector that includes the background and first order perturbations of the lapse and shift. If one now expands out this expression one gets $\partial_0 \psi_4= {\cal L}_{t_{(0)}}\psi_{4_{(0)}}+% {\cal L}_{t_{(0)}}\psi_{4_{(1)}} +{\cal L}_{t_{(1)}}\psi_{4_{(0)}}$. 
Now, since $\psi_{4_{(0)}}$ vanishes identically for all time, the only contribution one has is $\partial_0 \psi_4 = {\cal L}_{t_{(0)}}\psi_{4_{(1)}} $. Therefore the time derivative of $\psi_4$ does not depend on the perturbative lapse and shift, since neither ${\cal L}_{t_{(0)}}$ (by construction) nor $\psi_{4_{(1)}}$ (due to the proof we gave above) does. The other pieces needed to build up $\partial _t\psi _4$ only out of hypersurface data are $\widehat{\partial }_0K_{ij}$, $\widehat{\partial }_0\Gamma _{ij}^k$, and $\widehat{\partial }_0\stackrel{\_}{R}_{ij}$, which are given by Eqs. (\[rij\]), (\[gamapunto\]) and (\[Rijpunto\]), respectively. As before, we have to consider the zeroth order lapse only, for instance $$\widehat{\partial }_0K_{ij}=N_{(0)}\left[ \bar R_{ij}+KK_{ij}-2K_{ip}K^p{}_j-N_{(0)}^{-1}\bar \nabla _i\bar \nabla _jN_{(0)}-T_{ij}+\frac 12Tg_{ij}\right] _{(1)}. \label{Kijpunto}$$ This completes our proof. A check of the relations (\[psi\]) and (\[psipunto\]) can be made in the Schwarzschild background for close limit initial data, where [@CKL98] at $t=0$ we have $\partial _t\psi =-\frac{2M}{r^2}\psi$.

Discussion
==========

The issue of expressing $\psi$ explicitly in terms of hypersurface data only may appear to be of a purely technical character, but it is of great practical use, especially when one thinks of the important role played by first order perturbations as testbeds for comparison with full numerical integration of the Einstein equations. Note that since Eqs. (\[psi\]) and (\[psipunto\]) hold on any $t=$ constant slice of the spacetime, they can not only be used to build up initial values for $\psi$ and $\partial _t\psi$, but also, at a later time, to extract fully numerically generated waveforms. The above equations provide the desired link between initial data (consisting of $\stackrel{\_}{g}_{ij}$ and $K_{ij}$) and the Weyl scalar $\psi _4$. Geometrical objects like $\stackrel{\_}{\Gamma }_{ij}^k,$ $\bar R_{ij}$ and $\bar R_{ijkl}$ involve first and second derivatives of the metric. Since astrophysical initial data for Kerr perturbations are numerically generated [@BP98], this fact has to be taken into account. Expression (\[psi\]) also includes a source term that allows one to incorporate perturbations generated by particles or accretion disks around Kerr holes. If one chooses to work in the Teukolsky equation with $\psi _0=-C_{\alpha \beta \gamma \delta }l^\alpha m^\beta l^\gamma m^\delta $, which gives a better representation of ingoing gravitational waves, a completely analogous procedure applies to connect it to hypersurface data upon replacement of the double contractions with the corresponding null vectors $l^\alpha $ and $m^\beta $ instead of $n^\alpha $ and $\stackrel{\_\_}{m}^\beta $. Finally, we have been able to write $\psi _4$ and $\psi _0$ on the hypersurface $\Sigma _t,$ but we did not say why. In fact, it is not guaranteed that one can do this with any object defined on the spacetime. Is this because they are first order gauge invariant objects? This cannot be enough, since we checked that for $\psi _3$ (and the same for $\psi _1$) we do not succeed in writing them in terms only of objects on the slice $t=$ constant. The key point here seems to be that $\psi _4$ and $\psi _0$ are also invariant under tetrad rotations and thus directly connected to physical quantities, while $\psi _3$ and $\psi _1$ are not.

The authors thank A. Ashtekar and W. Krivan for useful discussions. C.O.L.
is a member of the Carrera del Investigador Científico of CONICET, Argentina, and thanks FUNDACIÓN ANTORCHAS for partial financial support. This work was supported by Grant NSF-PHY-9423950, by funds of the Pennsylvania State University and its office for Minority Faculty Development, and the Eberly Family Research Fund at Penn State. JP also acknowledges support from the Alfred P. Sloan Foundation.

Alternative equations
=====================

We can put all this together to yield the following expression of the first order perturbation in $\psi_4$ in terms of perturbations in the 3-metric $\delta g_{ij}$, perturbations in the extrinsic curvature $\delta K_{ij}$, and several quantities from the background (Kerr) geometry, the spatial metric $\ ^{(3)} {g^{(0)}}_{ij}$, the extrinsic curvature $K^{(0)}_{ij}$, the lapse $N^{(0)}$ and the shift $N^{(0)}_i$. We have already argued that first order perturbations of the principal null vectors $n^\mu$ and $\stackrel{\_\_}{m}^\mu$ will not contribute to $\delta \psi_4$, so we have $$\delta{\psi_4 }= {\delta A_{ijkl}}{n^i}{{\bar m}^j}{n^k}{{\bar m}^l} +2 {\delta B_{ijk}} {n^j}{{\bar m}^k} [{n^0}{{\bar m}^i}-{n^i}{{\bar m}^0}] + {\delta C_{ij}}[{n^0}{{\bar m}^i}{n^0}{{\bar m}^j} +{n^i}{{\bar m}^0}{n^j}{{\bar m}^0} -{n^0}{{\bar m}^i}{n^j}{{\bar m}^0} -{n^0}{{\bar m}^j}{n^i}{{\bar m}^0}]$$ where $$\begin{aligned} {\delta A_{ijkl}}&=& \delta{\ ^{(3)} R_{ijkl}}+ [ {K^{(0)}_{jl}}\delta {K_{ik}}+{K^{(0)}_{ik}}{\delta {K_{jl}}} -(k\leftrightarrow l)]\\ {\delta B_{ijk}}&=&{N^{(0)}}[D_j\,\delta{K_{ik}} -{1\over2}[D_k\,\delta {\ ^{(3)} g_{mi}}+ D_i\,\delta{\ ^{(3)} g_{mk}}-D_m\,\delta{\ ^{(3)} g_{ik}}] {\ ^{(3)} {g^{(0)}}^{lm}}{K^{(0)}_{lj}}- (k\leftrightarrow j)]\\ & & + {N^{(0)l}} {\delta A_{lijk}} +A^{(0)}_{lijk}{\delta^{(3)} g^{lm}}{N^{(0)}}_m\\ \delta {C_{ij}}&=&{N^{(0)2}} {A^{(0)}_{iljm}}\delta{\ ^{(3)} g^{lm}} +{N^{(0)2}}{\delta A_{iljm}}{\ ^{(3)} {g^{(0)}}^{lm}} - [\delta{B_{ijl}}{N^{(0)l}}+{B^{(0)}_{ijl}}\delta^{(3)} g^{lm} {N^{(0)},}_{m}+{A^{(0)}_{jil}}\delta {\ ^{(3)} g}^{lm} {N^{(0)}}_{m}\\ & &+\delta{A_{jil}}{N^{(0)l}}+\delta{A_{iljm}}{N^{(0)l}}{N^{(0)m}} +{A^{(0)}}_{iljm}{N^{(0)},}_{k}\delta^{(3)} g^{kl}{N^{(0)}}^{m} +{A^{(0)}}_{iljm}{N^{(0)},}^{l}\delta^{(3)} g^{km}{N^{(0)}}_{k}]\end{aligned}$$ and $$\delta{\ ^{(3)} R^{i}_{jkl}}= {1\over2}D_k[{\ ^{(3)}{g^{(0)}}^{im}}({D_l\,\delta^{(3)} g_{mj}} +{D_j\,\delta^{(3)} g_{ml}}-{D_m \delta^{(3)} g_{jl}})] - (k\leftrightarrow l)$$ To calculate $\partial_t {\psi_4}$ we use the above expression for $\delta\psi_4$ and plug in $\partial_t\delta^{(3)}{g}_{ij}$ and $\delta\partial_t K_{ij}$ for $\delta^{(3)} g_{ij}$ and $\delta K_{ij}$ in the above, respectively.
Where, $\partial_t\delta^{(3)}{ g}_{ij}$ and $\delta\partial_t K_{ij}$ can be obtained from Einstein’s equations as follows: $$\begin{aligned} \partial_t\delta^{(3)}{ g}_{ij}&=& -2{N^{(0)}} \delta K_{ij} + {N^{(0)}}^{k} \delta^{(3)} g_{ij,k}+{N^{(0)}}_{l} \delta^{(3)} g^{lk} {\ ^{(3)}{g^{(0)}}}_{ij,k} + \delta^{(3)} g_{ik} {N^{(0)k}}_{,j}\\ & & +{\ ^{(3)} {g^{(0)}}}_{il}[\delta^{(3)} g^{kl}{N^{(0)}}_{k}]_{,j} +{\ ^{(3)} {g^{(0)}}}_{lj}[\delta^{(3)} g^{kl}{N^{(0)}}_{k}]_{,i} +\delta^{(3)} g_{kj} {N^{(0)k}}_{,i}\\ \delta\partial_t K_{ij}&=&{1\over2} [D_j\,\delta^{(3)} g_{mi}+D_i\,\delta^{(3)}{g_{mj}} -D_m\,\delta^{(3)}{g_{ij}}]{\ ^{(3)} {g^{(0)}}}^{mk}{N^{(0)}}_{,k}\\ & & +{N^{(0)}}[\delta^{(3)} R_{ij} - 2 {K^{(0)k}}_{j}\delta K_{ik} -2\delta{K^k}_{j}{K^{(0)}}_{ik}+{K^{(0)}}_{ij} \delta K +{K^{(0)}} \delta K_{ij}]\\ & & + {N^{(0)}}^{k} \delta K_{ij,k} + \delta K_{ik} {N^{(0)k}}_{,j} + \delta K_{kj} {N^{(0)k}}_{,i} + {K^{(0)}}_{il}[\delta^{(3)} g^{kl}{N^{(0)}}_{k}]_{,j}\\ & & +{K^{(0)}}_{lj}[\delta^{(3)} g^{kl}{N^{(0)}}_{k}]_{,i} +{N^{(0)}}_{l} \delta^{(3)} g^{lk} {K^{(0)}}_{ij,k}\end{aligned}$$ where $\delta K= {\ ^{(3)} {g^{(0)}}}^{ij}\delta K_{ij}+ {K^{(0)}_{ij}}\delta{\ ^{(3)}{g}}^{ij}$ and $\delta {K^{i}}_{j}=\delta K_{jk}{\ ^{(3)}{g^{(0)}}}^{ki}+ {K^{(0)}}_{jk}\delta{\ ^{(3)}{g}}^ {ki}$. M. Campanelli and C.O. Lousto, gr-qc/9711008. P.L. Chrzanowski, Phys. Rev. D [**11**]{}, 2042 (1975). M.Campanelli, W.Krivan and C.O.Lousto, gr-qc/9801067. A.Abrahams, A.Anderson, Y.Choquet-Bruhat and J. York Jr., Class. Q. Grav., [**14**]{}, A9-A22 (1997). C. W. Misner, K. S. Thorne and J. A. Wheeler, [*Gravitation*]{}, Freeman, San Francisco (1973). A.Anderson, Y.Choquet-Bruhat and J. York Jr., gr-qc/9710041. S.A. Teukolsky, Astrophys. J. [**185**]{}, 635 (1973). J. Baker and R. Puzio, gr-qc/9802006. W. Kinnersley , J. Math. Phys., [**10**]{}, 1195 (1969).
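As a purely illustrative companion to the ‘Alternative equations’ appendix above (our sketch, not part of the original paper): given numerical arrays for $\delta{\ ^{(3)}R_{ijkl}}$, $K^{(0)}_{ij}$, $\delta K_{ij}$ and the spatial legs of the background tetrad at a grid point, the $\delta A$ contribution to $\delta\psi_4$ can be assembled directly. Only the first (purely spatial) term is shown; all array names are ours and the remaining $\delta B$, $\delta C$ terms would be built analogously.

```python
import numpy as np

def delta_A(dR3, K0, dK):
    """delta A_{ijkl} = delta(^3R_{ijkl})
                        + [ K0_{jl} dK_{ik} + K0_{ik} dK_{jl} - (k <-> l) ],
    with dR3 a (3,3,3,3) array and K0, dK (3,3) arrays at one grid point."""
    sym = np.einsum('jl,ik->ijkl', K0, dK) + np.einsum('ik,jl->ijkl', K0, dK)
    return dR3 + sym - np.swapaxes(sym, 2, 3)

def dpsi4_A_term(dA, n, mbar):
    """First contribution to delta psi_4: dA_{ijkl} n^i mbar^j n^k mbar^l,
    using only the spatial tetrad legs (mbar is complex)."""
    return np.einsum('ijkl,i,j,k,l->', dA, n, mbar, n, mbar)
```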
--- abstract: 'We propose [Sketched Online Newton]{}([SON]{}), an online second order learning algorithm that enjoys substantially improved regret guarantees for ill-conditioned data. [SON]{}is an enhanced version of the Online Newton Step, which, via sketching techniques enjoys a running time linear in the dimension and sketch size. We further develop sparse forms of the sketching methods (such as Oja’s rule), making the computation linear in the sparsity of features. Together, the algorithm eliminates all computational obstacles in previous second order online learning approaches.' author: - | Haipeng Luo\ Princeton University, Princeton, NJ USA\ \ Alekh Agarwal\ Microsoft Research, New York, NY USA\ \ Nicolò Cesa-Bianchi\ Università degli Studi di Milano, Italy\ \ John Langford\ Microsoft Research, New York, NY USA\ \ title: Efficient Second Order Online Learning by Sketching --- Introduction {#sec:intro} ============ Online learning methods are highly successful at rapidly reducing the test error on large, high-dimensional datasets. First order methods are particularly attractive in such problems as they typically enjoy computational complexity linear in the input size. However, the convergence of these methods crucially depends on the geometry of the data; for instance, running the same algorithm on a rotated set of examples can return vastly inferior results. See Fig. \[fig:synthetic\] for an illustration. Second order algorithms such as Online Newton Step [@HazanAgKa07] have the attractive property of being invariant to linear transformations of the data, but typically require space and update time quadratic in the number of dimensions. Furthermore, the dependence on dimension is not improved even if the examples are sparse. These issues lead to the key question in our work: *Can we develop (approximately) second order online learning algorithms with efficient updates?* We show that the answer is “yes” by developing efficient sketched second order methods with regret guarantees. Specifically, the three main contributions of this work are: [R]{}[0.5]{} ![image](synthetic.pdf){width=".35\textwidth"} #### 1. Invariant learning setting and optimal algorithms (Section \[sec:setup\]). The typical online regret minimization setting evaluates against a benchmark that is bounded in some fixed norm (such as the $\ell_2$-norm), implicitly putting the problem in a nice geometry. However, if all the features are scaled down, it is desirable to compare with accordingly larger weights, which is precluded by an apriori fixed norm bound. We study an invariant learning setting similar to the paper [@RossMiLa13] which compares the learner to a benchmark only constrained to generate bounded predictions on the sequence of examples. We show that a variant of the Online Newton Step [@HazanAgKa07], while quadratic in computation, stays regret-optimal with a nearly matching lower bound in this more general setting. #### 2. Improved efficiency via sketching (Section \[sec:sketch\]). To overcome the quadratic running time, we next develop sketched variants of the Newton update, approximating the second order information using a small number of carefully chosen directions, called a *sketch*. While the idea of data sketching is widely studied [@Woodruff14], as far as we know our work is the first one to apply it to a general adversarial online learning setting and provide rigorous regret guarantees. 
Two different sketching methods are considered: Frequent Directions [@GhashamiLiPhWo15; @Liberty13] and Oja’s algorithm [@Oja82; @OjaKa85], both of which allow linear running time per round. For the first method, we prove regret bounds similar to the full second order update whenever the sketch-size is large enough. Our analysis makes it easy to plug in other sketching and online PCA methods (e.g. [@garber2015online]). #### 3. Sparse updates (Section \[sec:sparse\]). For practical implementation, we further develop sparse versions of these updates with a running time linear in the sparsity of the examples. The main challenge here is that even if examples are sparse, the sketch matrix still quickly becomes dense. These are the first known sparse implementations of the Frequent Directions[^1] and Oja’s algorithm, and require new sparse eigen computation routines that may be of independent interest. Empirically, we evaluate our algorithm using the sparse Oja sketch (called [Oja-SON]{}) against first order methods such as diagonalized <span style="font-variant:small-caps;">AdaGrad</span> [@DuchiHaSi2011; @McMahanSt2010] on both ill-conditioned synthetic and a suite of real-world datasets. As Fig. \[fig:synthetic\] shows for a synthetic problem, we observe substantial performance gains as data conditioning worsens. On the real-world datasets, we find improvements in some instances, while observing no substantial second-order signal in the others. #### Related work Our online learning setting is closest to the one proposed in [@RossMiLa13], which studies scale-invariant algorithms, a special case of the invariance property considered here (see also [@orabona2015 Section 5]). Computational efficiency, a main concern in this work, is not a problem there since each coordinate is scaled independently. @OrabonaPa15 study unrelated notions of invariance. @GaoJiZhZh13 study a specific randomized sketching method for a special online learning setting. The L-BFGS algorithm [@LiuNo89] has recently been studied in the stochastic setting[^2] [@ByrdHaNoSi14; @MokhtariRi14; @MoritzNiJo15; @SchraudolphYuGu07; @SohldicksteinPoGa14], but has strong assumptions with pessimistic rates in theory and reliance on the use of large mini-batches empirically. Recent works [@ErdogduMo15; @GonenOrSh16; @GonenSh15; @PilanciWa15] employ sketching in stochastic optimization, but do not provide sparse implementations or extend in an obvious manner to the online setting. The Frank-Wolfe algorithm [@FrankWo56; @Jaggi13] is also invariant to linear transformations, but with worse regret bounds [@HazanKa12] without further assumptions and modifications [@GarberHa13]. #### Notation Vectors are represented by bold letters (e.g., ${\boldsymbol{x}}$, ${\boldsymbol{w}}$, …) and matrices by capital letters (e.g., $M$, $A$, …). $M_{i,j}$ denotes the $(i,j)$ entry of matrix $M$. ${ \boldsymbol{I}_{d} }$ represents the $d \times d$ identity matrix, ${\boldsymbol{0}}_{m\times d}$ represents the $m \times d$ matrix of zeroes, and ${\mathrm{diag}\!\left\{{{\boldsymbol{x}}}\right\}}$ represents a diagonal matrix with ${\boldsymbol{x}}$ on the diagonal. $\lambda_i(A)$ denotes the $i$-th largest eigenvalue of $A$, ${\left\|{{\boldsymbol{w}}}\right\|}_A$ denotes $\sqrt{{\boldsymbol{w}}^\top A {\boldsymbol{w}}}$, $|A|$ is the determinant of $A$, ${\textsc{tr}({A})}$ is the trace of $A$, ${ \left\langle {A, B} \right\rangle }$ denotes $\sum_{i,j}A_{i,j}B_{i,j}$, and $A \preceq B$ means that $B - A$ is positive semidefinite. 
The sign function ${\mbox{\sc sgn}}(a)$ is $1$ if $a\geq 0$ and $-1$ otherwise.

Setup and an Optimal Algorithm {#sec:setup}
==============================

We consider the following setting. On each round $t = 1,2,\ldots, T$: **(1)** the adversary first presents an example ${\boldsymbol{x}}_t \in {{\mathbb{R}}}^d$, **(2)** the learner chooses ${\boldsymbol{w}}_t \in {{\mathbb{R}}}^d$ and predicts ${\boldsymbol{w}}_t^\top {\boldsymbol{x}}_t$, **(3)** the adversary reveals a loss function $f_t({\boldsymbol{w}}) = {\ell}_t({\boldsymbol{w}}^\top {\boldsymbol{x}}_t)$ for some convex, differentiable ${\ell}_t: {{\mathbb{R}}}\rightarrow {{\mathbb{R}}}_+$, and **(4)** the learner suffers loss $f_t({\boldsymbol{w}}_t)$ for this round. The learner’s regret to a comparator ${\boldsymbol{w}}$ is defined as $R_T({\boldsymbol{w}}) = \sum_{t=1}^T f_t({\boldsymbol{w}}_t) - \sum_{t=1}^T f_t({\boldsymbol{w}})$. Typical results study $R_T({\boldsymbol{w}})$ against all ${\boldsymbol{w}}$ with a bounded norm in some geometry. For an invariant update, we relax this requirement and only put bounds on the predictions ${\boldsymbol{w}}^\top {\boldsymbol{x}}_t$. Specifically, for some pre-chosen constant $C$ we define $ {\mathcal{K}}_t {\stackrel{\rm def}{=}}{ \left\{ {{\boldsymbol{w}}} \,:\, {|{\boldsymbol{w}}^\top{\boldsymbol{x}}_t| \leq C} \right\} }. $ We seek to minimize regret to all comparators that generate bounded predictions on every data point, that is: $$R_T = \sup_{{\boldsymbol{w}}\in {\mathcal{K}}} R_T({\boldsymbol{w}})~~\mbox{ where}~~ {\mathcal{K}}{\stackrel{\rm def}{=}}\bigcap_{t=1}^T {\mathcal{K}}_t = { \left\{ {{\boldsymbol{w}}} \,:\, {\forall t=1,2,\ldots T,~~|{\boldsymbol{w}}^\top{\boldsymbol{x}}_t| \leq C} \right\} }~.$$ Under this setup, if the data are transformed to $M{\boldsymbol{x}}_t$ for all $t$ and some invertible matrix $M \in {{\mathbb{R}}}^{d\times d}$, the optimal ${\boldsymbol{w}}^*$ simply moves to $(M^{-1})^\top {\boldsymbol{w}}^*$, which still has bounded predictions but might have significantly larger norm. This relaxation is similar to the comparator set considered in [@RossMiLa13]. We make two structural assumptions on the loss functions. (Scalar Lipschitz)\[ass:Lipschitz\] The loss function ${\ell}_t$ satisfies $|{\ell}_t^{'}(z)| \leq L$ whenever $|z| \leq C$. \[ass:loss\] (Curvature)\[ass:curvature\] There exists $\sigma_t \geq 0$ such that for all ${\boldsymbol{u}}, {\boldsymbol{w}}\in {\mathcal{K}}$, $f_t({\boldsymbol{w}})$ is lower bounded by $ f_t({\boldsymbol{u}}) + \nabla f_t({\boldsymbol{u}})^\top({\boldsymbol{w}}- {\boldsymbol{u}}) + \frac{\sigma_t}{2} \left( \nabla f_t({\boldsymbol{u}})^\top({\boldsymbol{u}}- {\boldsymbol{w}})\right)^2. $ \[ass:curve\] Note that when $\sigma_t = 0$, Assumption \[ass:curve\] merely imposes convexity. More generally, it is satisfied by the squared loss $f_t({\boldsymbol{w}}) = ({\boldsymbol{w}}^\top{\boldsymbol{x}}_t - y_t)^2$ with $\sigma_t = \frac{1}{8C^2}$ whenever $|{\boldsymbol{w}}^\top{\boldsymbol{x}}_t|$ and $|y_t|$ are bounded by $C$, as well as for all exp-concave functions (see [@HazanAgKa07 Lemma 3]). Enlarging the comparator set might result in worse regret. We next show matching upper and lower bounds qualitatively similar to the standard setting, but with an extra unavoidable $\sqrt{d}$ factor. [^3] \[thm:lower\_bound\] For any online algorithm generating ${\boldsymbol{w}}_t \in {{\mathbb{R}}}^d$ and all $T \geq d$, there exists a sequence of $T$ examples ${\boldsymbol{x}}_t \in {{\mathbb{R}}}^d$ and loss functions $\ell_t$ satisfying Assumptions \[ass:loss\] and \[ass:curve\] (with $\sigma_t = 0$) such that the regret $R_T$ is at least $CL\sqrt{dT/2}$.
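As an aside, a quick verification (ours) of the squared-loss claim above. Write $z = {\boldsymbol{u}}^\top{\boldsymbol{x}}_t$ and $\Delta = ({\boldsymbol{w}}-{\boldsymbol{u}})^\top{\boldsymbol{x}}_t$. Then
$$f_t({\boldsymbol{w}}) - f_t({\boldsymbol{u}}) - \nabla f_t({\boldsymbol{u}})^\top({\boldsymbol{w}}-{\boldsymbol{u}}) = \Delta^2, \qquad \nabla f_t({\boldsymbol{u}})^\top({\boldsymbol{u}}-{\boldsymbol{w}}) = -2(z-y_t)\Delta,$$
so Assumption \[ass:curve\] requires $\Delta^2 \geq \frac{\sigma_t}{2}\,4(z-y_t)^2\Delta^2$, which holds with $\sigma_t = \frac{1}{8C^2}$ because $(z-y_t)^2 \leq 4C^2$ when $|z|, |y_t| \leq C$.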
We now give an algorithm that matches the lower bound up to logarithmic constants in the worst case but enjoys much smaller regret when $\sigma_t \neq 0$. At round $t+1$ with some invertible matrix $A_t$ specified later and gradient ${\boldsymbol{g}}_t = \nabla f_t({\boldsymbol{w}}_t)$, the algorithm performs the following update *before* making the prediction on the example ${\boldsymbol{x}}_{t+1}$: $$\label{eq:AON} {\boldsymbol{u}}_{t+1} = {\boldsymbol{w}}_t - A_t^{-1}{\boldsymbol{g}}_t, \quad \mbox{and} \quad {\boldsymbol{w}}_{t+1} = \operatorname*{argmin}_{{\boldsymbol{w}}\in {\mathcal{K}}_{t+1}} {\left\|{{\boldsymbol{w}}-{\boldsymbol{u}}_{t+1}}\right\|}_{A_{t}}. $$ The projection onto the set ${\mathcal{K}}_{t+1}$ differs from typical norm-based projections as it only enforces boundedness on ${\boldsymbol{x}}_{t+1}$ at round $t+1$. Moreover, this projection step can be performed in closed form. For any ${\boldsymbol{x}}\neq {\boldsymbol{0}}, {\boldsymbol{u}}\in {{\mathbb{R}}}^{d}$ and positive definite matrix $A \in {{\mathbb{R}}}^{d\times d}$, we have $$\operatorname*{argmin}_{{\boldsymbol{w}}\,:\, |{\boldsymbol{w}}^\top{\boldsymbol{x}}| \leq C} {\left\|{{\boldsymbol{w}}-{\boldsymbol{u}}}\right\|}_{A} = {\boldsymbol{u}}- \frac{\tau_C({\boldsymbol{u}}^\top{\boldsymbol{x}})}{{\boldsymbol{x}}^\top A^{-1} {\boldsymbol{x}}} A^{-1}{\boldsymbol{x}}, ~~\mbox{where $\tau_C(y) = {\mbox{\sc sgn}}(y)\max\{|y| - C, 0\}$.}$$ \[lemma:projection\] If $A_t$ is a diagonal matrix, updates similar to those of @RossMiLa13 are recovered. We study a choice of $A_t$ that is similar to the Online Newton Step (ONS) [@HazanAgKa07] (though with different projections): $$A_t = \alpha { \boldsymbol{I}_{d} } + \sum_{s=1}^t (\sigma_s + \eta_s){\boldsymbol{g}}_s {\boldsymbol{g}}_s^\top \label{eqn:ons-mat}$$ for some parameters $\alpha > 0$ and $\eta_t \geq 0$. The regret guarantee of this algorithm is shown below: \[thm:AON\] Under Assumptions \[ass:loss\] and \[ass:curvature\], suppose that $\sigma_t \geq \sigma \geq 0$ for all $t$, and $\eta_t$ is non-increasing. Then using the matrices  in the updates  yields for all ${\boldsymbol{w}}\in{\mathcal{K}}$, $$\begin{aligned} R_T({\boldsymbol{w}}) \le \frac{\alpha}{2}{\left\|{{\boldsymbol{w}}}\right\|}_{2}^2 + 2(CL)^2 \sum_{t=1}^T\eta_t + \frac{d}{2(\sigma + \eta_T)} \ln\left(1 + \frac{(\sigma+\eta_T) \sum_{t=1}^T{\left\|{{\boldsymbol{g}}_t}\right\|}_2^2}{d\alpha}\right)~.\end{aligned}$$ The dependence on ${\left\|{{\boldsymbol{w}}}\right\|}_2^2$ implies that the method is not completely invariant to transformations of the data. This is due to the part $\alpha{ \boldsymbol{I}_{d} }$ in $A_t$. However, this is not critical since $\alpha$ is fixed and small while the other part of the bound grows to eventually become the dominating term. Moreover, we can even set $\alpha = 0$ and replace the inverse with the Moore-Penrose pseudoinverse to obtain a truly invariant algorithm, as discussed in Appendix \[app:pseudoinverse\]. We use $\alpha > 0$ in the remainder for simplicity. 
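For concreteness, here is a minimal dense-matrix sketch (ours) of update (\[eq:AON\]) with the choice (\[eqn:ons-mat\]) and the closed-form projection of Lemma \[lemma:projection\]; variable names are ours, and the ${\mathcal{O}}(d^2)$–${\mathcal{O}}(d^3)$ cost per round is exactly what the sketched version of the next section removes.

```python
import numpy as np

def project(u, x, A_inv, C):
    """argmin_{|w^T x| <= C} ||w - u||_A via the closed form of the Lemma:
    w = u - tau_C(u^T x) / (x^T A^{-1} x) * A^{-1} x."""
    y = float(u @ x)
    tau = np.sign(y) * max(abs(y) - C, 0.0)
    Ainv_x = A_inv @ x
    return u - (tau / float(x @ Ainv_x)) * Ainv_x

def newton_step(w, g, A, x_next, C, sigma, eta):
    """One round of the update (eq:AON): accumulate A_t as in (eqn:ons-mat),
    take u = w - A^{-1} g, then project onto K_{t+1} = {w : |w^T x_{t+1}| <= C}."""
    A = A + (sigma + eta) * np.outer(g, g)   # A_t = alpha*I + sum_s (sigma_s + eta_s) g_s g_s^T
    A_inv = np.linalg.inv(A)                 # dense inverse; a rank-one update would be cheaper
    u = w - A_inv @ g
    return project(u, x_next, A_inv, C), A
```

The sketched algorithm of the next section keeps the same structure but replaces the explicit `A_inv` with the low-rank factored form maintained by the sketch.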
The implication of this regret bound is the following: in the worst case where $\sigma = 0$, we set $\eta_t = \sqrt{d/C^2L^2t}$ and the bound simplifies to $$\begin{aligned} R_T({\boldsymbol{w}}) \le \frac{\alpha}{2}{\left\|{{\boldsymbol{w}}}\right\|}_{2}^2 + \frac{CL}{2}\sqrt{Td} \ln\left(1 + \frac{\sum_{t=1}^T{\left\|{{\boldsymbol{g}}_t}\right\|}_2^2}{\alpha CL\sqrt{Td}}\right) \nonumber + 4CL\sqrt{Td}~, \label{eq:aons-convex}\end{aligned}$$ essentially only losing a logarithmic factor compared to the lower bound in Theorem \[thm:lower\_bound\]. On the other hand, if $\sigma_t \geq \sigma > 0$ for all $t$, then we set $\eta_t = 0$ and the regret simplifies to $$R_T({\boldsymbol{w}}) \le \frac{\alpha}{2}{\left\|{{\boldsymbol{w}}}\right\|}_{2}^2 + \frac{d}{2\sigma}\ln\left(1 + \frac{\sigma \sum_{t=1}^T{\left\|{{\boldsymbol{g}}_t}\right\|}_2^2}{d\alpha}\right)~, \label{eq:aons-sc}$$ extending the ${\mathcal{O}}(d\ln T)$ results in [@HazanAgKa07] to the weaker Assumption \[ass:curve\] and a larger comparator set ${\mathcal{K}}$. Efficiency via Sketching {#sec:sketch} ======================== Our algorithm so far requires $\Omega(d^2)$ time and space just as ONS. In this section we show how to achieve regret guarantees nearly as good as the above bounds, while keeping computation within a constant factor of first order methods. Let $G_t \in {{\mathbb{R}}}^{t\times d}$ be a matrix such that the $t$-th row is ${{\widehat}{\boldsymbol{g}}}_t^\top$ where we define ${{\widehat}{\boldsymbol{g}}}_t = \sqrt{\sigma_t + \eta_t}{\boldsymbol{g}}_t$ to be the *to-sketch vector*. Our previous choice of $A_t$ (Eq. ) can be written as $\alpha{ \boldsymbol{I}_{d} } + G_t^\top G_t$. The idea of sketching is to maintain an approximation of $G_t$, denoted by $S_t \in {{\mathbb{R}}}^{m\times d}$ where $m \ll d$ is a small constant called the sketch size. If $m$ is chosen so that $S_t^\top S_t$ approximates $G_t^\top G_t$ well, we can redefine $A_t$ as $\alpha{ \boldsymbol{I}_{d} } + S_t^\top S_t$ for the algorithm. Parameters $C$, $\alpha$ and $m$. Initialize ${\boldsymbol{u}}_1 = {\boldsymbol{0}}_{d \times 1}$. Initialize sketch $(S, H) \leftarrow \textbf{SketchInit}(\alpha, m)$. Receive example ${\boldsymbol{x}}_t$. **Projection step:** compute ${{\widehat}{{\boldsymbol{x}}}}= S{\boldsymbol{x}}_t, \; {\ensuremath{\gamma}}= \frac{\tau_C({\boldsymbol{u}}_t^\top{\boldsymbol{x}}_t)}{{\boldsymbol{x}}_{t}^\top {\boldsymbol{x}}_{t} - {{{\widehat}{{\boldsymbol{x}}}}}^\top H {{\widehat}{{\boldsymbol{x}}}}}$ and set ${\boldsymbol{w}}_t = {\boldsymbol{u}}_t - {\ensuremath{\gamma}}({\boldsymbol{x}}_t - S^\top H {{\widehat}{{\boldsymbol{x}}}})$. Predict label $y_t = {\boldsymbol{w}}_t^\top {\boldsymbol{x}}_t$ and suffer loss $\ell_t(y_t)$. Compute gradient ${\boldsymbol{g}}_t = \ell'_t(y_t){\boldsymbol{x}}_t$ and the *to-sketch vector* ${{\widehat}{\boldsymbol{g}}}= \sqrt{\sigma_t + \eta_t}{\boldsymbol{g}}_t$. $(S, H) \leftarrow$ **SketchUpdate**(${{\widehat}{\boldsymbol{g}}}$). **Update weight:** ${\boldsymbol{u}}_{t+1} = {\boldsymbol{w}}_t - \frac{1}{\alpha}({\boldsymbol{g}}_t - S^\top H S{\boldsymbol{g}}_t)$. To see why this admits an efficient algorithm, notice that by the Woodbury formula one has $ A_t^{-1} = \frac{1}{\alpha}\bigl({ \boldsymbol{I}_{d} } - S_t^\top (\alpha{ \boldsymbol{I}_{m} } + S_t S_t^\top)^{-1} S_t \bigr). 
$ With the notation $H_t = (\alpha{ \boldsymbol{I}_{m} } + S_t S_t^\top)^{-1} \in {{\mathbb{R}}}^{m\times m}$ and ${\ensuremath{\gamma}}_t = \tau_C({\boldsymbol{u}}_{t+1}^\top{\boldsymbol{x}}_{t+1})/({\boldsymbol{x}}_{t+1}^\top{\boldsymbol{x}}_{t+1} - {\boldsymbol{x}}_{t+1}^\top S_t^\top H_t S_t {\boldsymbol{x}}_{t+1})$, update (\[eq:AON\]) becomes: $$\begin{aligned} {\boldsymbol{u}}_{t+1} = {\boldsymbol{w}}_t - \tfrac{1}{\alpha}\bigl({\boldsymbol{g}}_t - S_t^\top H_t S_t {\boldsymbol{g}}_t\bigr), \quad \mbox{and} \quad {\boldsymbol{w}}_{t+1} &= {\boldsymbol{u}}_{t+1} - {\ensuremath{\gamma}}_t \bigl({\boldsymbol{x}}_{t+1} - S_t^\top H_t S_t {\boldsymbol{x}}_{t+1}\bigr)~. \end{aligned}$$ The operations involving $S_t{\boldsymbol{g}}_t$ or $S_t{\boldsymbol{x}}_{t+1}$ require only ${\ensuremath{\mathcal{O}}}(md)$ time, while matrix vector products with $H_t$ require only ${\ensuremath{\mathcal{O}}}(m^2)$. Altogether, these updates are at most $m$ times more expensive than first order algorithms as long as $S_t$ and $H_t$ can be maintained efficiently. We call this algorithm [Sketched Online Newton]{} ([SON]{}) and summarize it in Algorithm \[alg:SAON\].

We now discuss two sketching techniques to maintain the matrices $S_t$ and $H_t$ efficiently, each requiring ${\mathcal{O}}(md)$ storage and time linear in $d$. The two sketching routines are summarized next.

**FD-Sketch** (Algorithm \[alg:FD\]) maintains $S$ and $H$. **SketchInit**($\alpha$, $m$): set $S = {\boldsymbol{0}}_{m\times d}$ and $H = \tfrac{1}{\alpha} { \boldsymbol{I}_{m} }$; return $(S, H)$. **SketchUpdate**(${{\widehat}{\boldsymbol{g}}}$): insert ${{\widehat}{\boldsymbol{g}}}$ into the last row of $S$; compute the eigendecomposition $V^\top \Sigma V = S^\top S$ and set $S = (\Sigma - \Sigma_{m,m}{ \boldsymbol{I}_{m} })^{\frac{1}{2}} V$; set $H = {\mathrm{diag}\!\left\{{\frac{1}{\alpha + \Sigma_{1,1} - \Sigma_{m,m}}, \cdots, \frac{1}{\alpha}}\right\}}$; return $(S, H)$.

**Oja-Sketch** (Algorithm \[alg:Oja\]) maintains $t$, $\Lambda$, $V$ and $H$. **SketchInit**($\alpha$, $m$): set $t = 0$, $\Lambda = {\boldsymbol{0}}_{m \times m}$, $H = \tfrac{1}{\alpha} { \boldsymbol{I}_{m} }$ and $V$ to any $m \times d$ matrix with orthonormal rows; return $({\boldsymbol{0}}_{m \times d}, H)$. **SketchUpdate**(${{\widehat}{\boldsymbol{g}}}$): update $t \leftarrow t + 1$, and $\Lambda$ and $V$ as in Eqn. \[eqn:oja-eigs\]; set $S = (t\Lambda)^{\frac{1}{2}} V$; set $H = {\mathrm{diag}\!\left\{{\frac{1}{\alpha + t\Lambda_{1,1}}, \cdots, \frac{1}{\alpha + t\Lambda_{m,m}}}\right\}}$; return $(S, H)$.
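As a bridge between the generic update above and the two sketching routines, the following minimal NumPy sketch (illustrative only; names are ours) performs one [SON]{} round given whatever pair $(S, H)$ the sketch currently maintains, using the Woodbury form $A_t^{-1} = \tfrac{1}{\alpha}({ \boldsymbol{I}_{d} } - S_t^\top H_t S_t)$. Here $H$ is stored as an $m \times m$ array (it is diagonal for both sketches below), so the cost is dominated by the ${\mathcal{O}}(md)$ products with $S$.

```python
import numpy as np

def son_round(w, S, H, g, x_next, alpha, C):
    """One SON round given the current sketch pair (S, H), where
    A_t^{-1} = (1/alpha) (I - S^T H S) by the Woodbury formula."""
    def tau(y):
        # tau_C(y) = sgn(y) * max(|y| - C, 0)
        return np.sign(y) * max(abs(y) - C, 0.0)

    # u_{t+1} = w_t - (1/alpha) (g_t - S^T H S g_t)
    u = w - (g - S.T @ (H @ (S @ g))) / alpha

    # gamma_t = tau_C(u^T x) / (x^T x - x^T S^T H S x)
    Sx = S @ x_next
    gamma = tau(u @ x_next) / (x_next @ x_next - Sx @ (H @ Sx))

    # w_{t+1} = u_{t+1} - gamma_t (x_{t+1} - S^T H S x_{t+1})
    return u - gamma * (x_next - S.T @ (H @ Sx))
```

Either **FD-Sketch** or **Oja-Sketch** can supply the pair $(S, H)$ consumed by this step.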
#### Frequent Directions (FD).

Frequent Directions sketch [@GhashamiLiPhWo15; @Liberty13] is a deterministic sketching method. It maintains the invariant that the last row of $S_t$ is always ${\boldsymbol{0}}$. On each round, the vector ${{\widehat}{\boldsymbol{g}}}_t^\top$ is inserted into the last row of $S_{t-1}$, then the covariance of the resulting matrix is eigendecomposed into $V_t^\top \Sigma_t V_t$ and $S_t$ is set to $(\Sigma_t - \rho_{t}{ \boldsymbol{I}_{m} })^{\frac{1}{2}} V_t$ where $\rho_t$ is the smallest eigenvalue. Since the rows of $S_t$ are orthogonal to each other, $H_t$ is a diagonal matrix and can be maintained efficiently (see Algorithm \[alg:FD\]). The sketch update works in ${\mathcal{O}}(md)$ time (see [@GhashamiLiPhWo15] and Appendix \[app:sparse\]) so the total running time is ${\mathcal{O}}(md)$ per round. We call this combination [FD-SON]{} and prove the following regret bound with notation $\Omega_k = \sum_{i=k+1}^d \lambda_i(G_T^\top G_T)$ for any $k = 0,\dots,m-1$.

\[thm:FD\] Under Assumptions \[ass:loss\] and \[ass:curvature\], suppose that $\sigma_t \geq \sigma \geq 0$ for all $t$ and $\eta_t$ is non-increasing. [FD-SON]{} ensures that for any ${\boldsymbol{w}}\in {\mathcal{K}}$ and $k = 0,\ldots,m-1$, we have $$\begin{aligned} R_T({\boldsymbol{w}}) \le \frac{\alpha}{2}{\left\|{{\boldsymbol{w}}}\right\|}_{2}^2 + 2(CL)^2 \sum_{t=1}^T\eta_t + \frac{m}{2(\sigma + \eta_T)} \ln\left(1 + \frac{{\textsc{tr}({S_T^\top S_T})}}{m\alpha}\right) + \frac{m\Omega_k}{2(m-k)(\sigma+\eta_T)\alpha}~.\end{aligned}$$

The bound depends on the spectral decay $\Omega_k$, which essentially is the only extra term compared to the bound in Theorem \[thm:AON\]. Similarly to the previous discussion, if $\sigma_t \geq \sigma$, we get the bound $ \frac{\alpha}{2}{\left\|{w}\right\|}_{2}^2 + \frac{m}{2\sigma}\ln\left(1+ \frac{{\textsc{tr}({S_T^\top S_T})}}{m\alpha}\right) + \frac{m\Omega_k}{2(m-k)\sigma\alpha}~. $ With $\alpha$ tuned well, we pay logarithmic regret for the top $m$ eigenvectors, but a square root regret ${\mathcal{O}}(\sqrt{\Omega_k})$ for remaining directions not controlled by our sketch. This is expected for deterministic sketching which focuses on the dominant part of the spectrum. When $\alpha$ is not tuned we still get sublinear regret as long as $\Omega_k$ is sublinear.

#### Oja’s Algorithm.

Oja’s algorithm [@Oja82; @OjaKa85] is not usually considered as a sketching algorithm but seems very natural here. This algorithm uses online gradient descent to find eigenvectors and eigenvalues of data in a streaming fashion, with the to-sketch vectors ${{\widehat}{\boldsymbol{g}}}_t$ as the input. Specifically, let $V_t \in {{\mathbb{R}}}^{m \times d}$ denote the estimated eigenvectors and the diagonal matrix $\Lambda_t \in {{\mathbb{R}}}^{m \times m}$ contain the estimated eigenvalues at the end of round $t$.
Oja’s algorithm updates as: $$\begin{aligned} \Lambda_{t} = ({ \boldsymbol{I}_{m} } - \Gamma_t) \Lambda_{t-1} + \Gamma_t \;{\mathrm{diag}\!\left\{{V_{t-1} {{\widehat}{\boldsymbol{g}}}_t}\right\}}^2, \quad\quad V_t \xleftarrow{\text{orth}} V_{t-1} + \Gamma_t V_{t-1}{{\widehat}{\boldsymbol{g}}}_t {{\widehat}{\boldsymbol{g}}}_t^\top \label{eqn:oja-eigs}\end{aligned}$$ where $\Gamma_t \in {{\mathbb{R}}}^{m \times m}$ is a diagonal matrix with (possibly different) learning rates of order $\Theta(1/t)$ on the diagonal, and the “$\xleftarrow{\text{orth}}$” operator represents an orthonormalizing step.[^4] The sketch is then $S_t = (t \Lambda_t)^{\frac{1}{2}} V_t$. The rows of $S_t$ are orthogonal and thus $H_t$ is an efficiently maintainable diagonal matrix (see Algorithm \[alg:Oja\]). We call this combination [Oja-SON]{}. The time complexity of Oja’s algorithm is ${\mathcal{O}}(m^2d)$ per round due to the orthonormalizing step. To improve the running time to ${\mathcal{O}}(md)$, one can only update the sketch every $m$ rounds (similar to the block power method [@HardtPr14; @LiLiLu15]). The regret guarantee of this algorithm is unclear since existing analysis for Oja’s algorithm is only for the stochastic setting (see e.g. [@BalsubramaniDaFr13; @LiLiLu15]). However, [Oja-SON]{}provides good performance experimentally. Sparse Implementation {#sec:sparse} ===================== In many applications, examples (and hence gradients) are sparse in the sense that ${\left\|{{\boldsymbol{x}}_t}\right\|}_{0} \leq s$ for all $t$ and some small constant $s \ll d$. Most online first order methods enjoy a per-example running time depending on $s$ instead of $d$ in such settings. Achieving the same for second order methods is more difficult since $A_t^{-1}{\boldsymbol{g}}_t$ (or sketched versions) are typically dense even if ${\boldsymbol{g}}_t$ is sparse. We show how to implement our algorithms in sparsity-dependent time, specifically, in ${\mathcal{O}}(m^2 + ms)$ for [FD-SON]{}and in ${\mathcal{O}}(m^3 + m s)$ for [Oja-SON]{}. We emphasize that since the sketch would still quickly become a dense matrix even if the examples are sparse, achieving purely sparsity-dependent time is highly non-trivial and may be of independent interest. Due to space limit, below we only briefly mention how to do it for [Oja-SON]{}. Similar discussion for the FD sketch can be found in Appendix \[app:sparse\]. Note that mathematically these updates are equivalent to the non-sparse counterparts and regret guarantees are thus unchanged. There are two ingredients to doing this for [Oja-SON]{}: (1) The eigenvectors $V_t$ are represented as $V_t = F_t Z_t$, where $Z_t \in {{\mathbb{R}}}^{m\times d}$ is a sparsely updatable direction (Step 3 in Algorithm \[alg:SOja\]) and $F_t \in {{\mathbb{R}}}^{m\times m}$ is a matrix such that $F_t Z_t$ is orthonormal. (2) The weights ${\boldsymbol{w}}_t$ are split as ${\bar{\boldsymbol{w}}}_t + Z_{t-1}^\top {\boldsymbol{b}}_t$, where ${\boldsymbol{b}}_t \in {{\mathbb{R}}}^m$ maintains the weights on the subspace captured by $V_{t-1}$ (same as $Z_{t-1}$), and ${\bar{\boldsymbol{w}}}_t$ captures the weights on the complementary subspace which are again updated sparsely. We describe the sparse updates for ${\bar{\boldsymbol{w}}}_t$ and ${\boldsymbol{b}}_t$ below with the details for $F_t$ and $Z_t$ deferred to Appendix \[app:sparse-oja\]. 
Since $S_t = (t \Lambda_t)^{\frac{1}{2}} V_t = (t \Lambda_t)^{\frac{1}{2}} F_tZ_t$ and ${\boldsymbol{w}}_t = {\bar{\boldsymbol{w}}}_t + Z_{t-1}^\top{\boldsymbol{b}}_t$, we know ${\boldsymbol{u}}_{t+1}$ is $$\begin{aligned} {\boldsymbol{w}}_t - \big({ \boldsymbol{I}_{d} } - S_t^\top H_t S_t\big)\tfrac{{\boldsymbol{g}}_t}{\alpha} = \underbrace{{\bar{\boldsymbol{w}}}_t - \tfrac{{\boldsymbol{g}}_t}{\alpha} - (Z_t - Z_{t-1})^\top {\boldsymbol{b}}_t}_{{\stackrel{\rm def}{=}}{\bar{\boldsymbol{u}}}_{t+1}} + Z_t^\top (\underbrace{{\boldsymbol{b}}_t + \tfrac{1}{\alpha} F_t^\top (t\Lambda_t H_t) F_t Z_t {\boldsymbol{g}}_t}_{{\stackrel{\rm def}{=}}{\boldsymbol{b}}_{t+1}'})~. \label{eqn:u-oja}\end{aligned}$$ Since $Z_t - Z_{t-1}$ is sparse by construction and the matrix operations defining ${\boldsymbol{b}}_{t+1}'$ scale with $m$, overall the update can be done in ${\mathcal{O}}(m^2 + ms)$. Using the update for ${\boldsymbol{w}}_{t+1}$ in terms of ${\boldsymbol{u}}_{t+1}$, ${\boldsymbol{w}}_{t+1}$ is equal to $$\begin{aligned} {\boldsymbol{u}}_{t+1} - {\ensuremath{\gamma}}_t ({ \boldsymbol{I}_{d} } - S_t^\top H_t S_t ) {\boldsymbol{x}}_{t+1} = \underbrace{{\bar{\boldsymbol{u}}}_{t+1} - {\ensuremath{\gamma}}_{t}{\boldsymbol{x}}_{t+1}}_{{\stackrel{\rm def}{=}}{\bar{\boldsymbol{w}}}_{t+1}} + Z_t^\top (\underbrace{{\boldsymbol{b}}_{t+1}' + {\ensuremath{\gamma}}_{t} F_t^\top (t\Lambda_t H_t) F_t Z_t{\boldsymbol{x}}_{t+1}}_{{\stackrel{\rm def}{=}}{\boldsymbol{b}}_{t+1}})~. \label{eqn:w-oja}\end{aligned}$$ Again, it is clear that all the computations scale with $s$ and not $d$, so both ${\bar{\boldsymbol{w}}}_{t+1}$ and ${\boldsymbol{b}}_{t+1}$ require only $O(m^2+ms)$ time to maintain. Furthermore, the prediction ${\boldsymbol{w}}_t^\top {\boldsymbol{x}}_t = {\bar{\boldsymbol{w}}}_t^\top {\boldsymbol{x}}_t + {\boldsymbol{b}}_t^\top Z_{t-1} {\boldsymbol{x}}_t$ can also be computed in ${\mathcal{O}}(ms)$ time. The ${\mathcal{O}}(m^3)$ in the overall complexity comes from a Gram-Schmidt step in maintaining $F_t$ (details in Appendix \[app:sparse-oja\]). The pseudocode is presented in Algorithms \[alg:SON\] and \[alg:SOja\] with some details deferred to Appendix \[app:sparse-oja\]. This is the first sparse implementation of online eigenvector computation to the best of our knowledge. Parameters $C$, $\alpha$ and $m$. Initialize ${\bar{\boldsymbol{u}}}= {\boldsymbol{0}}_{d \times 1}$ and ${\boldsymbol{b}}= {\boldsymbol{0}}_{m \times 1}$. ($\Lambda, F, Z, H) \leftarrow \textbf{SketchInit}(\alpha, m)$ (Algorithm \[alg:SOja\]). Receive example ${\boldsymbol{x}}_t$. **Projection step:** compute ${{\widehat}{{\boldsymbol{x}}}}= FZ{\boldsymbol{x}}_t$ and ${\ensuremath{\gamma}}= \frac{\tau_C({\bar{\boldsymbol{u}}}^\top{\boldsymbol{x}}_t + {\boldsymbol{b}}^\top Z{\boldsymbol{x}}_t)}{{\boldsymbol{x}}_{t}^\top {\boldsymbol{x}}_{t} - (t-1){{{\widehat}{{\boldsymbol{x}}}}}^\top \Lambda H {{\widehat}{{\boldsymbol{x}}}}}$. Obtain ${\bar{\boldsymbol{w}}}= {\bar{\boldsymbol{u}}}- {\ensuremath{\gamma}}{\boldsymbol{x}}_t$ and ${\boldsymbol{b}}\leftarrow {\boldsymbol{b}}+ {\ensuremath{\gamma}}(t-1)F^\top \Lambda H {{\widehat}{{\boldsymbol{x}}}}$ (Equation \[eqn:w-oja\]). Predict label $y_t = {\bar{\boldsymbol{w}}}^\top {\boldsymbol{x}}_t + {\boldsymbol{b}}^\top Z{\boldsymbol{x}}_t$ and suffer loss $\ell_t(y_t)$. Compute gradient ${\boldsymbol{g}}_t = \ell'_t(y_t){\boldsymbol{x}}_t$ and the *to-sketch vector* ${{\widehat}{\boldsymbol{g}}}= \sqrt{\sigma_t + \eta_t}{\boldsymbol{g}}_t$. 
($\Lambda$, $F$, $Z$, $H$, ${\boldsymbol{\delta}}$) $\leftarrow$ **SketchUpdate**(${{\widehat}{\boldsymbol{g}}}$) (Algorithm \[alg:SOja\]). **Update weight:** ${\bar{\boldsymbol{u}}}= {\bar{\boldsymbol{w}}}- \tfrac{1}{\alpha} {\boldsymbol{g}}_t - ({\boldsymbol{\delta}}^\top{\boldsymbol{b}}) {{\widehat}{\boldsymbol{g}}}$ and ${\boldsymbol{b}}\leftarrow {\boldsymbol{b}}+ \tfrac{1}{\alpha}tF^\top \Lambda HF Z{\boldsymbol{g}}_t$ (Equation \[eqn:u-oja\]). $t$, $\Lambda$, $F$, $Z$, $H$ and $K$. Set $t = 0, \Lambda = {\boldsymbol{0}}_{m \times m}, F = K = \alpha H = { \boldsymbol{I}_{m} }$ and $Z$ to any $m \times d$ matrix with orthonormal rows. Return ($\Lambda$, $F$, $Z$, $H$). Update $t \leftarrow t + 1$. Pick a diagonal stepsize matrix $\Gamma_t$ to update $\Lambda \leftarrow ({ \boldsymbol{I}_{} } - \Gamma_t) \Lambda + \Gamma_t \;{\mathrm{diag}\!\left\{{FZ {{\widehat}{\boldsymbol{g}}}}\right\}}^2$. Set ${\boldsymbol{\delta}}= A^{-1}\Gamma_t FZ {{\widehat}{\boldsymbol{g}}}$ and update $K \leftarrow K + {\boldsymbol{\delta}}{{\widehat}{\boldsymbol{g}}}^\top Z^\top + Z {{\widehat}{\boldsymbol{g}}}{\boldsymbol{\delta}}^\top + ({{\widehat}{\boldsymbol{g}}}^\top {{\widehat}{\boldsymbol{g}}}) {\boldsymbol{\delta}}{\boldsymbol{\delta}}^\top $. Update $Z \leftarrow Z + {\boldsymbol{\delta}}{{\widehat}{\boldsymbol{g}}}^\top$. $(L, Q) \leftarrow \text{Decompose}(F, K)$ (Algorithm \[alg:Gram-Schmidt\]), so that $LQZ = FZ$ and $QZ$ is orthogonal. Set $F = Q$. Set $H \leftarrow {\mathrm{diag}\!\left\{{\frac{1}{\alpha + t \Lambda_{1,1}}, \cdots, \frac{1}{\alpha + t \Lambda_{m,m}}}\right\}}$. Return ($\Lambda$, $F$, $Z$, $H$, ${\boldsymbol{\delta}}$). Experiments =========== Preliminary experiments revealed that out of our two sketching options, Oja’s sketch generally has better performance (see Appendix \[app:experiment\]). For more thorough evaluation, we implemented the sparse version of [Oja-SON]{}in Vowpal Wabbit.[^5] We compare it with <span style="font-variant:small-caps;">AdaGrad</span> [@DuchiHaSi2011; @McMahanSt2010] on both synthetic and real-world datasets. Each algorithm takes a stepsize parameter: $\tfrac{1}{\alpha}$ serves as a stepsize for [Oja-SON]{}and a scaling constant on the gradient matrix for [<span style="font-variant:small-caps;">AdaGrad</span>]{}. We try both methods with the parameter set to $2^j$ for $j = -3, -2, \ldots, 6$ and report the best results. We keep the stepsize matrix in [Oja-SON]{}fixed as $\Gamma_t = \frac{1}{t}{ \boldsymbol{I}_{m} }$ throughout. All methods make one online pass over data minimizing square loss. Synthetic Datasets ------------------ ![(a) Comparison of two sketch sizes on real data, and (b) Comparison against [<span style="font-variant:small-caps;">AdaGrad</span>]{}on real data.](oja.pdf){width="100.00000%"} To investigate [Oja-SON]{}’s performance in the setting it is really designed for, we generated a range of synthetic ill-conditioned datasets as follows. We picked a random Gaussian matrix $Z\sim {{\mathbb{R}}}^{T\times d}$ ($T = 10,\!000$ and $d = 100$) and a random orthonormal basis $V \in {{\mathbb{R}}}^{d\times d}$. We chose a specific spectrum ${\boldsymbol{\lambda}}\in {{\mathbb{R}}}^d$ where the first $d-10$ coordinates are 1 and the rest increase linearly to some fixed *condition number parameter $\kappa$*. 
We let $X = Z{\mathrm{diag}\!\left\{{{\boldsymbol{\lambda}}}\right\}}^{\frac{1}{2}} V^\top$ be our example matrix, and created a binary classification problem with labels $y = {\ensuremath{\mbox{sign}}}({\boldsymbol{\theta}}^\top{\boldsymbol{x}})$, where ${\boldsymbol{\theta}}\in {{\mathbb{R}}}^d$ is a random vector. We generated 20 such datasets with the same $Z, V$ and labels $y$ but different values of $\kappa \in \{10, 20, \ldots, 200\}$. Note that if the algorithm is truly invariant, it would have the same behavior on these 20 datasets.

Fig. \[fig:synthetic\] (in Section \[sec:intro\]) shows the final progressive error (i.e. the fraction of misclassified examples after one pass over the data) for <span style="font-variant:small-caps;">AdaGrad</span> and [Oja-SON]{} (with sketch sizes $m=0,5,10$) as the condition number increases. As expected, the plot confirms that the performance of first order methods such as <span style="font-variant:small-caps;">AdaGrad</span> degrades when the data is ill-conditioned. The plot also shows that as the sketch size increases, [Oja-SON]{} becomes more accurate: when $m=0$ (no sketch at all), [Oja-SON]{} is vanilla gradient descent and is worse than <span style="font-variant:small-caps;">AdaGrad</span> as expected; when $m=5$, the accuracy greatly improves; and finally when $m=10$, the accuracy of [Oja-SON]{} is substantially better and hardly worsens with $\kappa$. To further explain the effectiveness of Oja’s algorithm in identifying top eigenvalues and eigenvectors, the plot in Fig. \[fig:eigs\] shows the largest relative difference between the true and estimated top 10 eigenvalues as Oja’s algorithm sees more data. This gap drops quickly after seeing just 500 examples.

Real-world Datasets {#subsec:real_data}
-------------------

Next we evaluated [Oja-SON]{} on 23 benchmark datasets from the UCI and LIBSVM repositories (see Appendix \[app:experiment\] for a description of these datasets). Note that some datasets are very high dimensional but very sparse (e.g. for [*20news*]{}, $d \approx 102,000$ and $s \approx 94$), and consequently methods with running time quadratic (such as ONS) or even linear in the dimension rather than the sparsity are prohibitive. In Fig. \[fig:nodiag\], we show the effect of using sketched second order information, by comparing sketch sizes $m=0$ and $m=10$ for [Oja-SON]{} (concrete error rates in Appendix \[app:experiment\]). We observe significant improvements in 5 datasets ([*acoustic, census, heart, ionosphere, letter*]{}), demonstrating the advantage of using second order information. However, we found that [Oja-SON]{} was outperformed by <span style="font-variant:small-caps;">AdaGrad</span> on most datasets, mostly because the diagonal adaptation of <span style="font-variant:small-caps;">AdaGrad</span> greatly reduces the condition number on these datasets. Moreover, one disadvantage of [SON]{} is that for the directions not in the sketch, it is essentially doing vanilla gradient descent. We expect better results using diagonal adaptation as in <span style="font-variant:small-caps;">AdaGrad</span> in off-sketch directions.
To incorporate this high level idea, we performed a simple modification to [Oja-SON]{}: upon seeing example ${\boldsymbol{x}}_t$, we feed $D_t^{-\frac{1}{2}} {\boldsymbol{x}}_t$ to our algorithm instead of ${\boldsymbol{x}}_t$, where $D_t \in {{\mathbb{R}}}^{d\times d}$ is the diagonal part of the matrix $\sum_{\tau=1}^{t-1} {\boldsymbol{g}}_\tau{\boldsymbol{g}}_\tau^\top$.[^6] The intuition is that this diagonal rescaling first homogenizes the scales of all dimensions. Any remaining ill-conditioning is further addressed by the sketching to some degree, while the complementary subspace is no worse-off than with <span style="font-variant:small-caps;">AdaGrad</span>. We believe this flexibility in picking the right vectors to sketch is an attractive aspect of our sketching-based approach. With this modification, [Oja-SON]{}outperforms <span style="font-variant:small-caps;">AdaGrad</span> on most of the datasets even for $m = 0$, as shown in Fig. \[fig:diag\] (concrete error rates in Appendix \[app:experiment\]). The improvement on <span style="font-variant:small-caps;">AdaGrad</span> at $m=0$ is surprising but not impossible as the updates are not identical–our update is scale invariant like @RossMiLa13. However, the diagonal adaptation already greatly reduces the condition number on all datasets except [*splice*]{} (see Fig. \[fig:splice\] in Appendix \[app:experiment\] for detailed results on this dataset), so little improvement is seen for sketch size $m=10$ over $m=0$. For several datasets, we verified the accuracy of Oja’s method in computing the top-few eigenvalues (Appendix \[app:experiment\]), so the lack of difference between sketch sizes is due to the lack of second order information after the diagonal correction. The average running time of our algorithm when $m=10$ is about 11 times slower than <span style="font-variant:small-caps;">AdaGrad</span>, matching expectations. Overall, [SON]{}can significantly outperform baselines on ill-conditioned data, while maintaining a practical computational complexity. #### Acknowledgements This work was done when Haipeng Luo and Nicolò Cesa-Bianchi were at Microsoft Research, New York. We thank Lijun Zhang for pointing out our mistake in the regret proof of another sketching method that appeared in an earlier version. **Supplementary material for\ “Efficient Second Order Online Learning by Sketching”** Proof of Theorem \[thm:lower\_bound\] {#app:lower_bound} ===================================== Assuming $T$ is a multiple of $d$ without loss of generality, we pick ${\boldsymbol{x}}_t$ from the basis vectors $\{{\boldsymbol{e}}_1, \ldots, {\boldsymbol{e}}_d\}$ so that each ${\boldsymbol{e}}_i$ appears $T/d$ times (in an arbitrary order). Note that now ${\mathcal{K}}$ is just a hypercube: $${\mathcal{K}}= { \left\{ {{\boldsymbol{w}}} \,:\, {|{\boldsymbol{w}}^\top{\boldsymbol{x}}_t| \leq C, \;\;\forall t} \right\} } = { \left\{ {{\boldsymbol{w}}} \,:\, {{\left\|{{\boldsymbol{w}}}\right\|}_\infty \leq C} \right\} }.$$ Let $\xi_1,\dots,\xi_T$ be independent Rademacher random variables such that $\Pr(\xi_t = +1) = \Pr(\xi_t = -1) = \tfrac{1}{2}$. For a scalar $\theta$, we define loss function[^7] $\ell_t(\theta) = (\xi_t L)\theta$, so that Assumptions \[ass:loss\] and \[ass:curve\] are clearly satisfied with $\sigma_t = 0$. 
We show that, for any online algorithm, $${{\mathbb{E}}}[ R_T] = {{\mathbb{E}}}\left[\sum_{t=1}^T \ell_t\bigl({\boldsymbol{w}}_t^{\top}{\boldsymbol{x}}_t\bigr) - \inf_{{\boldsymbol{w}}\in{\mathcal{K}}} \sum_{t=1}^T \ell_t\bigl({\boldsymbol{w}}^{\top}{\boldsymbol{x}}_t\bigr) \right] \ge CL\sqrt{\frac{dT}{2}}$$ which implies the statement of the theorem. First of all, note that ${{\mathbb{E}}}\Bigl[{\ell}_t\bigl({\boldsymbol{w}}_t^{\top}{\boldsymbol{x}}_t\bigr) \,\Big|\, \xi_1,\dots,\xi_{t-1} \Bigr] = 0$ for any ${\boldsymbol{w}}_t$. Hence we have $$\begin{aligned} {{\mathbb{E}}}\left[\sum_{t=1}^T {\ell}_t\bigl({\boldsymbol{w}}_t^{\top}{\boldsymbol{x}}_t\bigr) - \inf_{{\boldsymbol{w}}\in{\mathcal{K}}} \sum_{t=1}^T {\ell}_t\bigl({\boldsymbol{w}}^{\top}{\boldsymbol{x}}_t\bigr) \right] &= {{\mathbb{E}}}\left[\sup_{{\boldsymbol{w}}\in{\mathcal{K}}} \sum_{t=1}^T -{\ell}_t\bigl({\boldsymbol{w}}^{\top}{\boldsymbol{x}}_t\bigr) \right] = L\,{{\mathbb{E}}}\left[\sup_{{\boldsymbol{w}}\in{\mathcal{K}}} {\boldsymbol{w}}^{\top}\sum_{t=1}^T \xi_t {\boldsymbol{x}}_t \right],\end{aligned}$$ which, by the construction of ${\boldsymbol{x}}_t$, is $$CL\,{{\mathbb{E}}}\left[ {\left\|{\sum_{t=1}^T \xi_t {\boldsymbol{x}}_t}\right\|}_1 \right] = CLd\,{{\mathbb{E}}}\left[ \left|\sum_{t=1}^{T/d} \xi_t \right| \right] \geq CLd \sqrt{\frac{T}{2d}} = CL\sqrt{\frac{dT}{2}},$$ where the final bound is due to the Khintchine inequality (see e.g. Lemma 8.2 in [@CesabianchiLu06]). This concludes the proof. Projection {#app:projection} ========== We prove a more general version of Lemma \[lemma:projection\] which does not require invertibility of the matrix $A$ here. For any ${\boldsymbol{x}}\neq {\boldsymbol{0}}, {\boldsymbol{u}}\in {{\mathbb{R}}}^{d\times 1}$ and positive semidefinite matrix $A \in {{\mathbb{R}}}^{d\times d}$, we have $${\boldsymbol{w}}^* = \operatorname*{argmin}_{{\boldsymbol{w}}: |{\boldsymbol{w}}^\top{\boldsymbol{x}}| \leq C} {\left\|{{\boldsymbol{w}}-{\boldsymbol{u}}}\right\|}_{A} = \left\{ \begin{array}{cl} {\boldsymbol{u}}- \frac{\tau_C({\boldsymbol{u}}^\top{\boldsymbol{x}})}{{\boldsymbol{x}}^\top A^{\dagger} {\boldsymbol{x}}} A^{\dagger}{\boldsymbol{x}}& \text{if ${\boldsymbol{x}}\in \operatorname*{range}(A)$} \\ \\ {\boldsymbol{u}}- \frac{\tau_C({\boldsymbol{u}}^\top{\boldsymbol{x}})}{{\boldsymbol{x}}^\top ({ \boldsymbol{I}_{} } - A^{\dagger}A) {\boldsymbol{x}}} ({ \boldsymbol{I}_{} } - A^{\dagger}A) {\boldsymbol{x}}& \text{if ${\boldsymbol{x}}\notin \operatorname*{range}(A)$} \end{array} \right.$$ where $\tau_C(y) = {\mbox{\sc sgn}}(y)\max\{|y| - C, 0\}$ and $A^{\dagger}$ is the Moore-Penrose pseudoinverse of $A$. (Note that when $A$ is rank deficient, this is one of the many possible solutions.) First consider the case when ${\boldsymbol{x}}\in \operatorname*{range}(A)$. If $|{\boldsymbol{u}}^\top{\boldsymbol{x}}| \leq C$, then it is trivial that ${\boldsymbol{w}}^* = {\boldsymbol{u}}$. We thus assume ${\boldsymbol{u}}^\top{\boldsymbol{x}}\geq C$ below (the last case ${\boldsymbol{u}}^\top{\boldsymbol{x}}\leq -C$ is similar). The Lagrangian of the problem is $$L({\boldsymbol{w}}, \lambda_1, \lambda_2) = \frac{1}{2}({\boldsymbol{w}}-{\boldsymbol{u}})^\top A({\boldsymbol{w}}-{\boldsymbol{u}}) + \lambda_1({\boldsymbol{w}}^\top{\boldsymbol{x}}- C) + \lambda_2({\boldsymbol{w}}^\top{\boldsymbol{x}}+ C)$$ where $\lambda_1 \geq 0$ and $\lambda_2 \leq 0$ are Lagrangian multipliers. 
Since ${\boldsymbol{w}}^\top{\boldsymbol{x}}$ cannot be $C$ and $-C$ at the same time, The complementary slackness condition implies that either $\lambda_1 = 0$ or $\lambda_2 = 0$. Suppose the latter case is true, then setting the derivative with respect to ${\boldsymbol{w}}$ to $0$, we get ${\boldsymbol{w}}^* = {\boldsymbol{u}}- \lambda_1 A^{\dagger}{\boldsymbol{x}}+ ({ \boldsymbol{I}_{} } - A^\dagger A){\boldsymbol{z}}$ where ${\boldsymbol{z}}\in R^{d \times 1}$ can be arbitrary. However, since $A ({ \boldsymbol{I}_{} } - A^\dagger A) = 0$, this part does not affect the objective value at all and we can simply pick $z = 0$ so that $w^*$ has a consistent form regardless of whether $A$ is full rank or not. Now plugging ${\boldsymbol{w}}^*$ back, we have $$L({\boldsymbol{w}}^*, \lambda_1, 0) = -\frac{{\lambda_1}^2}{2}{\boldsymbol{x}}^\top A^{\dagger} {\boldsymbol{x}}+ \lambda_1 ({\boldsymbol{u}}^\top{\boldsymbol{x}}- C)$$ which is maximized when $\lambda_1 = \frac{{\boldsymbol{u}}^\top{\boldsymbol{x}}- C}{{\boldsymbol{x}}^\top A^{\dagger} {\boldsymbol{x}}} \geq 0$. Plugging this optimal $\lambda_1$ into ${\boldsymbol{w}}^*$ gives the stated solution. On the other hand, if $\lambda_1 = 0$ instead, we can proceed similarly and verify that it gives a smaller dual value ($0$ in fact), proving the previous solution is indeed optimal. We now move on to the case when ${\boldsymbol{x}}\notin \operatorname*{range}(A)$. First of all the stated solution is well defined since ${\boldsymbol{x}}^\top ({ \boldsymbol{I}_{} } - A^{\dagger}A) {\boldsymbol{x}}$ is nonzero in this case. Moreover, direct calculation shows that ${\boldsymbol{w}}^*$ is in the valid space: $|{{\boldsymbol{w}}^*}^\top{\boldsymbol{x}}| = |{\boldsymbol{u}}^\top{\boldsymbol{x}}- \tau_C({\boldsymbol{u}}^\top{\boldsymbol{x}})| \leq C$, and also it gives the minimal possible distance value ${\left\|{{\boldsymbol{w}}^*-{\boldsymbol{u}}}\right\|}_{A} = 0$, proving the lemma. Proof of Proposition \[prop:hindsight\] {#app:minimax} ======================================= Note that ${\boldsymbol{w}}\in {\mathcal{K}}'$ is equivalent to ${\left\|{G^{1/2}{\boldsymbol{w}}}\right\|}_{2} \leq \sqrt{T}LC$, where $G = \sum_{t=1}^T {\boldsymbol{g}}_t{\boldsymbol{g}}_t^\top$. So by the change of variable ${\widetilde}{{\boldsymbol{w}}} = \tfrac{G^{1/2}{\boldsymbol{w}}}{\sqrt{T}LC}$, we have $$\sup_{{\boldsymbol{w}}\in {\mathcal{K}}'} {\left\|{{\boldsymbol{w}}}\right\|}_{A}^2 = TL^2C^2 \sup_{{\left\|{{\widetilde}{{\boldsymbol{w}}}}\right\|}_{2} \leq 1} {\widetilde}{{\boldsymbol{w}}}^\top (G^{-1/2}AG^{-1/2}) {\widetilde}{{\boldsymbol{w}}} = TL^2C^2 \lambda_1(B)$$ where $B = G^{-1/2}AG^{-1/2}$ and recall that $\lambda_i(\cdot)$ represents the $i$-th largest eigenvalue of the argument. 
So we have $$\begin{aligned} \sup_{{\boldsymbol{w}}\in {\mathcal{K}}^{'}} \|{\boldsymbol{w}}\|_A^2 + \sum_{t=1}^T {\boldsymbol{g}}_t^\top A^{-1}{\boldsymbol{g}}_t &= TL^2C^2 \lambda_1(B) + \sum_{t=1}^T {\boldsymbol{g}}_t^\top (G^{1/2}BG^{1/2})^{-1} {\boldsymbol{g}}_t \\ &= TL^2C^2 \lambda_1(B) + { \left\langle {(G^{1/2}BG^{1/2})^{-1}, \; \sum_{t=1}^T {\boldsymbol{g}}_t{\boldsymbol{g}}_t^\top } \right\rangle } \\ &= TL^2C^2 \lambda_1(B) + {\textsc{tr}({G^{-1/2}B^{-1}G^{-1/2}G })} \\ &= TL^2C^2 \lambda_1(B) + {\textsc{tr}({B^{-1}})} \\ &= TL^2C^2 \lambda_1(B) + \sum_{i=1}^d \frac{1}{\lambda_i(B)}~.\end{aligned}$$ If $\lambda_i(B) < \lambda_1(B)$ for some $i \neq 1$, then by increasing $\lambda_i(B)$ so that $\lambda_i(B) = \lambda_1(B)$, we can always make the upper bound smaller, which means that the optimal $B$ should have $d$ identical eigenvalues, that is, $B = \lambda{ \boldsymbol{I}_{d} }$ for some $\lambda > 0$. Plugging this form of $B$ leads to $TL^2C^2\lambda + \frac{d}{\lambda} = 2CL\sqrt{Td} $ for $\lambda^* = \sqrt{\frac{d}{TC^2L^2}}$. The best $A$ is thus $G^{1/2} (\lambda^*{ \boldsymbol{I}_{d} }) G^{1/2} = \frac{1}{LC}\sqrt{\frac{d}{T}}G$. Loss functions {#app:loss} ============== The square loss $f_t({\boldsymbol{w}}) = \frac{1}{2} ({\boldsymbol{w}}^\top{\boldsymbol{x}}_t - y_t)^2$ satisfies Assumption  with $\sigma_t = \frac{1}{4C^2}$ if $|y_t| \leq C$. By definition, we have $$\begin{aligned} f_t({\boldsymbol{u}}) - f_t({\boldsymbol{w}}) &= y_t ({\boldsymbol{w}}- {\boldsymbol{u}})^\top {\boldsymbol{x}}_t + \tfrac{1}{2} ({\boldsymbol{u}}^\top{\boldsymbol{x}}_t{\boldsymbol{x}}_t^\top{\boldsymbol{u}}- {\boldsymbol{w}}^\top{\boldsymbol{x}}_t{\boldsymbol{x}}_t^\top{\boldsymbol{w}}) \\ &= \nabla f_t({\boldsymbol{u}})^\top({\boldsymbol{u}}- {\boldsymbol{w}}) - \tfrac{1}{2}({\boldsymbol{x}}_t^\top({\boldsymbol{u}}- {\boldsymbol{w}}))^2 \\ &\leq \nabla f_t({\boldsymbol{u}})^\top({\boldsymbol{u}}- {\boldsymbol{w}}) - \tfrac{1}{8C^2}(\nabla f_t({\boldsymbol{u}})^\top({\boldsymbol{u}}- {\boldsymbol{w}}))^2,\end{aligned}$$ where the last step holds since $({\boldsymbol{u}}^\top{\boldsymbol{x}}_t - y_t)^2 \leq 4C^2$. If $\exp(-\gamma f_t({\boldsymbol{w}}))$ is concave, then Assumption  holds with $\sigma_t = \frac{1}{2}\min\{\frac{1}{8CL}, \gamma\}$ (recall $L = \max_t\max_{|y| \leq C} {\ell}_t'(y)$). The proof is similar to the proof of Lemma 3 of @HazanAgKa07. Proof of Theorem \[thm:AON\] {#app:AON} ============================ We first prove a general regret bound that holds for any choice of $A_t$ in update \[eq:AON\]: $$\begin{split} {\boldsymbol{u}}_{t+1} &= {\boldsymbol{w}}_t - A_t^{-1}{\boldsymbol{g}}_t \\ {\boldsymbol{w}}_{t+1} &= \operatorname*{argmin}_{{\boldsymbol{w}}\in {\mathcal{K}}_{t+1}} {\left\|{{\boldsymbol{w}}-{\boldsymbol{u}}_{t+1}}\right\|}_{A_{t}}~. \end{split}$$ This bound will also be useful in proving regret guarantees for the sketched versions. 
\[prop:meta\] For any sequence of positive definite matrices $A_t$ and sequence of losses satisfying Assumptions \[ass:loss\] and \[ass:curvature\], the regret of updates  against any comparator ${\boldsymbol{w}}\in {\mathcal{K}}$ satisfies $$2R_T({\boldsymbol{w}}) \le \|{\boldsymbol{w}}\|_{A_0}^2 + \underbrace{\sum_{t=1}^T {\boldsymbol{g}}_t^TA_t^{-1}{\boldsymbol{g}}_t}_{\text{``Gradient Bound'' $R_G$}} \nonumber + \underbrace{\sum_{t=1}^T ({\boldsymbol{w}}_t - {\boldsymbol{w}})^\top(A_t - A_{t-1} - \sigma_t {\boldsymbol{g}}_t {\boldsymbol{g}}_t^\top)({\boldsymbol{w}}_t - {\boldsymbol{w}})}_{\text{``Diameter Bound'' $R_D$}} \label{eq:regret-meta}$$ Since ${\boldsymbol{w}}_{t+1}$ is the projection of ${\boldsymbol{u}}_{t+1}$ onto ${\mathcal{K}}_{t+1}$, by the property of projections (see for example [@HazanKa12 Lemma 8]), the algorithm ensures $${\left\|{{\boldsymbol{w}}_{t+1}-{\boldsymbol{w}}}\right\|}_{A_t}^2 \leq {\left\|{{\boldsymbol{u}}_{t+1}-{\boldsymbol{w}}}\right\|}_{A_t}^2 = {\left\|{{\boldsymbol{w}}_{t}-{\boldsymbol{w}}}\right\|}_{A_t}^2 + {\boldsymbol{g}}_t^\top A_t^{-1} {\boldsymbol{g}}_t - 2{\boldsymbol{g}}_t^\top({\boldsymbol{w}}_t - {\boldsymbol{w}})$$ for all ${\boldsymbol{w}}\in{\mathcal{K}}{\subseteq}{\mathcal{K}}_{t+1}$. By the curvature property in Assumption \[ass:curvature\], we then have that $$\begin{aligned} 2R_T({\boldsymbol{w}}) \;&\leq\; \sum_{t=1}^T 2{\boldsymbol{g}}_t^\top({\boldsymbol{w}}_t - {\boldsymbol{w}}) - \sigma_t\bigl({\boldsymbol{g}}_t^\top({\boldsymbol{w}}_t - {\boldsymbol{w}})\bigr)^2 \\ \;&\leq\; \sum_{t=1}^T {\boldsymbol{g}}_t^\top A_t^{-1} {\boldsymbol{g}}_t + {\left\|{{\boldsymbol{w}}_{t}-{\boldsymbol{w}}}\right\|}_{A_t}^2 - {\left\|{{\boldsymbol{w}}_{t+1}-{\boldsymbol{w}}}\right\|}_{A_t}^2 - \sigma_t\bigl({\boldsymbol{g}}_t^\top({\boldsymbol{w}}_t - {\boldsymbol{w}})\bigr)^2 \\ \;&\leq\; {\left\|{{\boldsymbol{w}}}\right\|}_{A_0}^2 + \sum_{t=1}^T {\boldsymbol{g}}_t^\top A_t^{-1} {\boldsymbol{g}}_t + ({\boldsymbol{w}}_t - {\boldsymbol{w}})^\top(A_t - A_{t-1} - \sigma_t {\boldsymbol{g}}_t{\boldsymbol{g}}_t^\top)({\boldsymbol{w}}_t - {\boldsymbol{w}}),\end{aligned}$$ which completes the proof. We apply Proposition \[prop:meta\] with the choice: $A_0 = \alpha{ \boldsymbol{I}_{d} }$ and $A_t = A_{t-1} + (\sigma_t + \eta_t){\boldsymbol{g}}_t{\boldsymbol{g}}_t^T$, which gives ${\left\|{{\boldsymbol{w}}}\right\|}_{A_0}^2 = \alpha{\left\|{{\boldsymbol{w}}}\right\|}_{2}^2$ and $$R_D = \sum_{t=1}^T \eta_t ({\boldsymbol{w}}_t - {\boldsymbol{w}})^\top {\boldsymbol{g}}_t{\boldsymbol{g}}_t^\top ({\boldsymbol{w}}_t - {\boldsymbol{w}}) \leq 4(CL)^2 \sum_{t=1}^T \eta_t~,$$ where the last equality uses the Lipschitz property in Assumption \[ass:loss\] and the boundedness of ${\boldsymbol{w}}_t^\top {\boldsymbol{x}}_t$ and ${\boldsymbol{w}}^\top {\boldsymbol{x}}_t$. For the term $R_G$, define ${{\widehat}{A}}_t = \frac{\alpha}{\sigma+\eta_T}{ \boldsymbol{I}_{d} } + \sum_{s=1}^t {\boldsymbol{g}}_s {\boldsymbol{g}}_s^\top$. 
Since $\sigma_t \geq \sigma$ and $\eta_t$ is non-increasing, we have $ {{\widehat}{A}}_t \preceq \frac{1}{\sigma+\eta_T} A_t$, and therefore: $$\begin{aligned} R_G &\leq \frac{1}{\sigma+\eta_T} \sum_{t=1}^T {\boldsymbol{g}}_t^\top {{\widehat}{A}}_t^{-1} {\boldsymbol{g}}_t = \frac{1}{\sigma+\eta_T} \sum_{t=1}^T { \left\langle {{{\widehat}{A}}_t - {{\widehat}{A}}_{t-1},\; {{\widehat}{A}}_t^{-1} } \right\rangle } \\ &\leq \frac{1}{\sigma+\eta_T} \sum_{t=1}^T \ln\frac{|{{\widehat}{A}}_t|}{|{{\widehat}{A}}_{t-1}|} = \frac{1}{\sigma+\eta_T} \ln\frac{|{{\widehat}{A}}_T|}{|{{\widehat}{A}}_{0}|} \\ &= \frac{1}{\sigma+\eta_T} \sum_{i=1}^d \ln\left(1 + \frac{(\sigma+\eta_T)\lambda_i\Bigl(\sum_{t=1}^T {\boldsymbol{g}}_t {\boldsymbol{g}}_t^\top\Bigr)}{\alpha} \right) \\ &\leq \frac{d}{\sigma+\eta_T} \ln\left(1 + \frac{(\sigma+\eta_T) \sum_{i=1}^d \lambda_i\Bigl(\sum_{t=1}^T {\boldsymbol{g}}_t {\boldsymbol{g}}_t^\top\Bigr)}{d\alpha} \right) \\ &= \frac{d}{\sigma + \eta_T} \ln\left(1 + \frac{(\sigma+\eta_T) \sum_{t=1}^T{\left\|{{\boldsymbol{g}}_t}\right\|}_2^2}{d\alpha}\right) \end{aligned}$$ where the second inequality is by the concavity of the function $\ln|X|$ (see [@HazanAgKa07 Lemma 12] for an alternative proof), and the last one is by Jensen’s inequality. This concludes the proof. A Truly Invariant Algorithm {#app:pseudoinverse} =========================== In this section we discuss how to make our adaptive online Newton algorithm truly invariant to invertible linear transformations. To achieve this, we set $\alpha = 0$ and replace $A_t^{-1}$ with the Moore-Penrose pseudoinverse $A_t^{\dagger}$: [^8] $$\label{eq:pseudo_AON} \begin{split} {\boldsymbol{u}}_{t+1} &= {\boldsymbol{w}}_t - A_t^{\dagger}{\boldsymbol{g}}_t, \\ {\boldsymbol{w}}_{t+1} &= \operatorname*{argmin}_{{\boldsymbol{w}}\in {\mathcal{K}}_{t+1}} {\left\|{{\boldsymbol{w}}-{\boldsymbol{u}}_{t+1}}\right\|}_{A_t}~. \end{split}$$ When written in this form, it is not immediately clear that the algorithm has the invariant property. However, one can rewrite the algorithm in a mirror descent form: $$\begin{aligned} {\boldsymbol{w}}_{t+1} &= \operatorname*{argmin}_{{\boldsymbol{w}}\in {\mathcal{K}}_{t+1}} {\left\|{{\boldsymbol{w}}-{\boldsymbol{w}}_t + A_t^{\dagger}{\boldsymbol{g}}_t}\right\|}_{A_t}^2 \\ &= \operatorname*{argmin}_{{\boldsymbol{w}}\in {\mathcal{K}}_{t+1}} {\left\|{{\boldsymbol{w}}-{\boldsymbol{w}}_t}\right\|}_{A_t}^2 + 2({\boldsymbol{w}}-{\boldsymbol{w}}_t)^\top A_t A_t^{\dagger} {\boldsymbol{g}}_t \\ &= \operatorname*{argmin}_{{\boldsymbol{w}}\in {\mathcal{K}}_{t+1}} {\left\|{{\boldsymbol{w}}-{\boldsymbol{w}}_t}\right\|}_{A_t}^2 + 2{\boldsymbol{w}}^\top {\boldsymbol{g}}_t\end{aligned}$$ where we use the fact that ${\boldsymbol{g}}_t$ is in the range of $A_t$ in the last step. Now suppose all the data ${\boldsymbol{x}}_t$ are transformed to $M{\boldsymbol{x}}_t$ for some unknown and invertible matrix $M$, then one can verify that all the weights will be transformed to $M^{-T}{\boldsymbol{w}}_t$ accordingly, ensuring the prediction to remain the same. Moreover, the regret bound of this algorithm can be bounded as below. First notice that even when $A_t$ is rank deficient, the projection step still ensures the following: ${\left\|{{\boldsymbol{w}}_{t+1}-{\boldsymbol{w}}}\right\|}_{A_t}^2 \leq {\left\|{{\boldsymbol{u}}_{t+1}-{\boldsymbol{w}}}\right\|}_{A_t}^2 $, which is proven in [@HazanAgKa07 Lemma 8]. 
Therefore, the entire proof of Theorem \[thm:AON\] still holds after replacing $A_t^{-1}$ with $A_t^{\dagger}$, giving the regret bound: $$\label{eq:pseudoinverse_regret} \frac{1}{2}\sum_{t=1}^T {\boldsymbol{g}}_t^\top A_t^{\dagger} \ {\boldsymbol{g}}_t + 2(CL)^2 \eta_t~.$$ The key now is to bound the term $\sum_{t=1}^T {\boldsymbol{g}}_t^\top {{\widehat}{A}}_t^{\dagger} \ {\boldsymbol{g}}_t$ where we define ${{\widehat}{A}}_t = \sum_{s=1}^t {\boldsymbol{g}}_s {\boldsymbol{g}}_s^\top$. In order to do this, we proceed similarly to the proof of [@CesabianchiCoGe05 Theorem 4.2] to show that this term is of order ${\mathcal{O}}(d^2\ln T)$ in the worst case. \[thm:pseudoinverse\] Let $\lambda^*$ be the minimum among the smallest nonzero eigenvalues of ${{\widehat}{A}}_t \; (t = 1, \ldots, T)$ and $r$ be the rank of ${{\widehat}{A}}_T$. We have $$\sum_{t=1}^T {\boldsymbol{g}}_t^\top {{\widehat}{A}}_t^{\dagger} \ {\boldsymbol{g}}_t \leq r + \frac{(1+r)r}{2} \ln \left(1 + \frac{2 \sum_{t = 1}^T {\left\|{{\boldsymbol{g}}_t}\right\|}^2_2}{(1+r)r\lambda^*} \right)~.$$ First by @CesabianchiCoGe05 [Lemma D.1], we have $${\boldsymbol{g}}_t^\top {{\widehat}{A}}_t^{\dagger} \ {\boldsymbol{g}}_t = \left\{ \begin{array}{cl} 1 & \text{if ${\boldsymbol{g}}_t \notin \operatorname*{range}({{\widehat}{A}}_{t-1})$} \\ 1 - \frac{\operatorname*{det_{+}}({{\widehat}{A}}_{t-1})}{\operatorname*{det_{+}}({{\widehat}{A}}_{t})} < 1 & \text{if ${\boldsymbol{g}}_t \in \operatorname*{range}({{\widehat}{A}}_{t-1})$} \end{array} \right.$$ where $\operatorname*{det_{+}}(M)$ denotes the product of the nonzero eigenvalues of matrix $M$. We thus separate the steps $t$ such that ${\boldsymbol{g}}_t \in \operatorname*{range}({{\widehat}{A}}_{t-1})$ from those where ${\boldsymbol{g}}_t \notin \operatorname*{range}({{\widehat}{A}}_{t-1})$. For each $k=1,\dots,r$ let $T_k$ be the first time step $t$ in which the rank of $A_t$ is $k$ (so that $T_1=1$). Also let $T_{r+1} = T+1$ for convenience. With this notation, we have $$\begin{aligned} \sum_{t=1}^T {\boldsymbol{g}}_{t}^{\top} {{\widehat}{A}}_t^{\dagger} \ {\boldsymbol{g}}_{t} &= \sum_{k=1}^r \left( {\boldsymbol{g}}_{T_k}^{\top} {{\widehat}{A}}_{T_k}^{\dagger} {\boldsymbol{g}}_{T_k} + \sum_{t = T_k+1}^{T_{k+1}-1} {\boldsymbol{g}}_{t}^{\top} {{\widehat}{A}}_t^{\dagger} \ {\boldsymbol{g}}_{t}\right) \\ &= \sum_{k = 1}^r \left(1 + \sum_{t = T_k+1}^{T_{k+1}-1} \left(1-\frac{\operatorname*{det_{+}}({{\widehat}{A}}_{t-1})}{\operatorname*{det_{+}}({{\widehat}{A}}_t)}\right) \right) \\&= r + \sum_{k = 1}^r \sum_{t = T_k+1}^{T_{k+1}-1} \left(1-\frac{\operatorname*{det_{+}}({{\widehat}{A}}_{t-1})}{\operatorname*{det_{+}}({{\widehat}{A}}_t)}\right) \\ &\le r + \sum_{k = 1}^r \sum_{t = T_k+1}^{T_{k+1}-1} \ln \frac{\operatorname*{det_{+}}({{\widehat}{A}}_t)}{\operatorname*{det_{+}}({{\widehat}{A}}_{t-1})} \\&= r + \sum_{k = 1}^r \ln \frac{\operatorname*{det_{+}}({{\widehat}{A}}_{T_{k+1}-1})}{\operatorname*{det_{+}}({{\widehat}{A}}_{T_k})}~.\end{aligned}$$ Fix any $k$ and let $\lambda_{k,1},\dots,\lambda_{k,k}$ be the nonzero eigenvalues of ${{\widehat}{A}}_{T_k}$ and $\lambda_{k,1}+\mu_{k,1},\dots,\lambda_{k,k}+\mu_{k,k}$ be the nonzero eigenvalues of ${{\widehat}{A}}_{T_{k+1}-1}$. 
Then $$\begin{aligned} \ln \frac{\operatorname*{det_{+}}({{\widehat}{A}}_{T_{k+1}-1})}{\operatorname*{det_{+}}({{\widehat}{A}}_{T_k})} = \ln \prod_{i=1}^k \frac{\lambda_{k,i}+\mu_{k,i}}{\lambda_{k,i}} = \sum_{i=1}^k \ln \left(1+\frac{\mu_{k,i}}{\lambda_{k,i}}\right)~.\end{aligned}$$ Hence, we arrive at $$\begin{aligned} \sum_{t=1}^T {\boldsymbol{g}}_{t}^{\top} {{\widehat}{A}}_t^{+}{\boldsymbol{g}}_{t} \le r + \sum_{k = 1}^r \sum_{i=1}^k \ln \left(1+\frac{\mu_{k,i}}{\lambda_{k,i}}\right)~.\end{aligned}$$ To further bound the latter quantity, we use $\lambda^* \leq \lambda_{k,i}$ and Jensen’s inequality : $$\begin{aligned} \sum_{k = 1}^r \sum_{i=1}^k \ln \left(1+\frac{\mu_{k,i}}{\lambda_{k,i}}\right) &\leq \sum_{k = 1}^r \sum_{i=1}^k \ln \left(1+\frac{\mu_{k,i}}{\lambda^*}\right) \\&\leq \frac{(1+r)r}{2} \ln \left(1 + \frac{2 \sum_{k = 1}^r \sum_{i=1}^k \mu_{k,i}}{(1+r)r\lambda^*} \right)~.\end{aligned}$$ Finally noticing that $$\sum_{i=1}^k \mu_{k,i} = {\textsc{tr}({{{\widehat}{A}}_{T_{k+1}-1}})} - {\textsc{tr}({{{\widehat}{A}}_{T_k}})} = \sum_{t = T_k + 1}^{T_{k+1}-1} {\textsc{tr}({{\boldsymbol{g}}_t {\boldsymbol{g}}_t^\top})} = \sum_{t = T_k + 1}^{T_{k+1}-1} {\left\|{{\boldsymbol{g}}_t}\right\|}^2_2$$ completes the proof. Taken together, Eq.  and Theorem \[thm:pseudoinverse\] lead to the following regret bounds (recall the definitions of $\lambda^*$ and $r$ from Theorem \[thm:pseudoinverse\]). If $\sigma_t = 0$ for all $t$ and $\eta_t$ is set to be $\frac{1}{CL}\sqrt{\frac{d}{t}}$, then the regret of the algorithm defined by Eq.  is at most $$\frac{CL}{2}\sqrt{\frac{T}{d}} \left(r + \frac{(1+r)r}{2} \ln \left(1 + \frac{2 \sum_{t = 1}^T {\left\|{{\boldsymbol{g}}_t}\right\|}^2_2}{(1+r)r\lambda^*} \right)\right) + 4CL\sqrt{Td} .$$ On the other hand, if $\sigma_t \geq \sigma > 0$ for all $t$ and $\eta_t$ is set to be $0$, then the regret is at most $$\frac{1}{2\sigma} \left(r + \frac{(1+r)r}{2} \ln \left(1 + \frac{2 \sum_{t = 1}^T {\left\|{{\boldsymbol{g}}_t}\right\|}^2_2}{(1+r)r\lambda^*} \right)\right)~.$$ Proof of Theorem \[thm:FD\] {#app:FD} =========================== We again first apply Proposition \[prop:meta\] (recall the notation $R_G$ and $R_D$ stated in the proposition). By the construction of the sketch, we have $$A_t - A_{t-1} = S_t^\top S_t - S_{t-1}^\top S_{t-1} = {{\widehat}{\boldsymbol{g}}}_t{{\widehat}{\boldsymbol{g}}}_t^\top - \rho_t V_t^\top V_t \preceq {{\widehat}{\boldsymbol{g}}}_t{{\widehat}{\boldsymbol{g}}}_t^\top.$$ It follows immediately that $R_D$ is again at most $4(CL)^2 \sum_{t=1}^T \eta_t$. For the term $R_G$, we will apply the following guarantee of Frequent Directions (see the proof of Theorem 1.1 of [@GhashamiLiPhWo15]): $ \sum_{t=1}^T \rho_t \leq \frac{\Omega_k}{m - k}. $ Specifically, since ${\textsc{tr}({V_t A_t^{-1} V_t^\top})} \leq \frac{1}{\alpha}{\textsc{tr}({V_t V_t^\top})} = \frac{m}{\alpha}$ we have $$\begin{aligned} R_G &= \sum_{t=1}^T \frac{1}{\sigma_t + \eta_t} { \left\langle {A_t^{-1}, A_t - A_{t-1} + \rho_t V_t^\top V_t} \right\rangle } \\ &\leq \frac{1}{\sigma + \eta_T} \sum_{t=1}^T \left( { \left\langle {A_t^{-1}, A_t - A_{t-1} + \rho_t V_t^\top V_t} \right\rangle } \right) \\ &= \frac{1}{\sigma + \eta_T} \sum_{t=1}^T \left( { \left\langle {A_t^{-1}, A_t - A_{t-1}} \right\rangle } + \rho_t {\textsc{tr}({V_t A_t^{-1} V_t^\top})} \right) \\ &\leq \frac{1}{(\sigma + \eta_T)} \sum_{t=1}^T { \left\langle {A_t^{-1}, A_t - A_{t-1}} \right\rangle } + \frac{m\Omega_k}{(m-k)(\sigma+\eta_T)\alpha}~. 
\end{aligned}$$ Finally for the term $\sum_{t=1}^T { \left\langle {A_t^{-1}, A_t - A_{t-1}} \right\rangle }$, we proceed similarly to the proof of Theorem \[thm:AON\]: $$\begin{aligned} \sum_{t=1}^T { \left\langle {A_t^{-1}, A_t - A_{t-1}} \right\rangle } &\leq \sum_{t=1}^T \ln \frac{|A_t|}{|A_{t-1}|} = \ln \frac{|A_T|}{|A_{0}|} = \sum_{i=1}^d \ln \left(1 + \frac{\lambda_i(S_T^\top S_T)}{\alpha} \right) \\ &= \sum_{i=1}^m \ln \left(1 + \frac{\lambda_i(S_T^\top S_T)}{\alpha} \right) \leq m \ln\left(1 + \frac{{\textsc{tr}({S_T^\top S_T})}}{m\alpha}\right) \end{aligned}$$ where the first inequality is by the concavity of the function $\ln|X|$, the second one is by Jensen’s inequality, and the last equality is by the fact that $S_T^\top S_T$ is of rank $m$ and thus $\lambda_i(S_T^\top S_T) = 0$ for any $i > m$. This concludes the proof. Sparse updates for FD sketch {#app:sparse} ============================ The sparse version of our algorithm with the Frequent Directions option is much more involved. We begin by taking a detour and introducing a fast and epoch-based variant of the Frequent Directions algorithm proposed in [@GhashamiLiPhWo15]. The idea is the following: instead of doing an eigendecomposition immediately after inserting a new ${{\widehat}{\boldsymbol{g}}}$ every round, we double the size of the sketch (to $2m$), keep up to $m$ recent ${{\widehat}{\boldsymbol{g}}}$’s, do the decomposition only at the end of every $m$ rounds and finally keep the top $m$ eigenvectors with shrunk eigenvalues. The advantage of this variant is that it can be implemented straightforwardly in ${\mathcal{O}}(md)$ time on average without doing a complicated rank-one SVD update, while still ensuring the exact same guarantee with the only price of doubling the sketch size. Algorithm \[alg:FD\_epoch\] shows the details of this variant and how we maintain $H$. The sketch $S$ is always represented by two parts: the top part ($DV$) comes from the last eigendecomposition, and the bottom part ($G$) collects the recent to-sketch vector ${{\widehat}{\boldsymbol{g}}}$’s. Note that within each epoch, the update of $H^{-1}$ is a rank-two update and thus $H$ can be updated efficiently using Woodbury formula (Lines \[line:update\_H\_1\] and \[line:update\_H\_2\] of Algorithm \[alg:FD\_epoch\]). $\tau, D, V, G$ and $H$. Set $\tau = 1, D = {\boldsymbol{0}}_{m\times m}, G = {\boldsymbol{0}}_{m\times d}, H = \tfrac{1}{\alpha} { \boldsymbol{I}_{2m} }$ and let $V$ be any $m \times d$ matrix whose rows are orthonormal. Return $({\boldsymbol{0}}_{2m \times d}, H)$. Insert ${{\widehat}{\boldsymbol{g}}}$ into the $\tau$-th row of $G$. Let ${\boldsymbol{e}}$ be the $2m \times 1$ basis vector whose $(m + \tau)$-th entry is 1 and ${\boldsymbol{q}}= S{{\widehat}{\boldsymbol{g}}}- \tfrac{{{\widehat}{\boldsymbol{g}}}^\top{{\widehat}{\boldsymbol{g}}}}{2}{\boldsymbol{e}}$. \[line:update\_H\_1\] Update $H \leftarrow H - \frac{H {\boldsymbol{q}}{\boldsymbol{e}}^\top H}{1 + {\boldsymbol{e}}^\top H {\boldsymbol{q}}}$ and $H \leftarrow H - \frac{H {\boldsymbol{e}}{\boldsymbol{q}}^\top H}{1 + {\boldsymbol{q}}^\top H{\boldsymbol{e}}}$. \[line:update\_H\_2\] Update $\tau \leftarrow \tau + 1$. $(V, \Sigma) \leftarrow \textbf{ComputeEigenSystem}\left(\left( \begin{array}{c} DV \\ G \end{array} \right)\right)$ (Algorithm \[alg:eigen\]). \[alg:FD:eigen\] Set $D$ to be a diagonal matrix with $D_{i,i} = \sqrt{\Sigma_{i,i} - \Sigma_{m, m}}, \; \forall i \in [m]$. 
Set $H \leftarrow {\mathrm{diag}\!\left\{{\frac{1}{\alpha + D_{1,1}^2}, \cdots, \frac{1}{\alpha + D_{m,m}^2}, \frac{1}{\alpha}, \ldots, \frac{1}{\alpha}}\right\}} $. Set $ G = {\boldsymbol{0}}_{m \times d}$. Set $\tau = 1$. Return $\left(\left( \begin{array}{c} DV \\ G \end{array} \right), H\right) $ . Although we can use any available algorithm that runs in ${\mathcal{O}}(m^2 d)$ time to do the eigendecomposition (Line \[alg:FD:eigen\] in Algorithm \[alg:FD\_epoch\]), we explicitly write down the procedure of reducing this problem to eigendecomposing a small square matrix in Algorithm \[alg:eigen\], which will be important for deriving the sparse version of the algorithm. Lemma \[lem:eigen\] proves that Algorithm \[alg:eigen\] works correctly for finding the top $m$ eigenvector and eigenvalues. $S = \left( \begin{array}{c} DV \\ G \end{array} \right)$. $V' \in {{\mathbb{R}}}^{m \times d}$ and diagonal matrix $\Sigma \in {{\mathbb{R}}}^{m \times m}$ such that the $i$-th row of $V'$ and the $i$-th entry of the diagonal of $\Sigma$ are the $i$-th eigenvector and eigenvalue of $S^\top S$ respectively. Compute $M = GV^\top$. Decompose $G - MV$ into the form $LQ$ where $L \in {{\mathbb{R}}}^{m \times r}$, $Q$ is a $r \times d$ matrix whose rows are orthonormal and $r$ is the rank of $G - MV$ (e.g. by a Gram-Schmidt process). \[alg:eigen:LQ\] Compute the top $m$ eigenvectors ($U \in {{\mathbb{R}}}^{m \times (m + r)}$) and eigenvalues ($\Sigma \in {{\mathbb{R}}}^{m \times m }$) of the matrix $\left( \begin{array}{cc} D^2 & {\boldsymbol{0}}_{m \times r} \\ {\boldsymbol{0}}_{r \times m} & {\boldsymbol{0}}_{r \times r} \end{array} \right) + \left( \begin{array}{c} M^\top \\ L^\top \end{array} \right) \left( \begin{array}{cc} M & L \end{array} \right) $. Return $(V', \Sigma)$ where $V' = U\left( \begin{array}{c} V \\ Q \end{array} \right)$. \[lem:eigen\] The outputs of Algorithm \[alg:eigen\] are such that the $i$-th row of $V'$ and the $i$-th entry of the diagonal of $\Sigma$ are the $i$-th eigenvector and eigenvalue of $S^\top S$ respectively. Let $W^\top \in {{\mathbb{R}}}^{d \times (d - m - r) }$ be an orthonormal basis of the null space of $\left(\begin{array}{c} V \\ Q \end{array}\right)$. By Line \[alg:eigen:LQ\], we know that $GW^\top = {\boldsymbol{0}}$ and $E = (V^\top \; Q^\top \; W^\top)$ forms an orthonormal basis of ${{\mathbb{R}}}^d$. Therefore, we have $$\begin{aligned} S^\top S &= V^\top D^2 V + G^\top G \\ &= E \left(\begin{array}{ccc} D^2 & {\boldsymbol{0}}& {\boldsymbol{0}}\\ {\boldsymbol{0}}& {\boldsymbol{0}}& {\boldsymbol{0}}\\ {\boldsymbol{0}}& {\boldsymbol{0}}& {\boldsymbol{0}}\end{array}\right) E^\top + E E^\top G^\top G E E^\top \\ &= E \left( \left(\begin{array}{ccc} D^2 & {\boldsymbol{0}}& {\boldsymbol{0}}\\ {\boldsymbol{0}}& {\boldsymbol{0}}& {\boldsymbol{0}}\\ {\boldsymbol{0}}& {\boldsymbol{0}}& {\boldsymbol{0}}\end{array}\right) + \left(\begin{array}{c} VG^\top \\ QG^\top \\ WG^\top \end{array}\right) (GV^\top \; G Q^\top \; G W^\top) \right) E^\top \\ &= (V^\top \; Q^\top) \underbrace{\left( \left( \begin{array}{cc} D^2 & {\boldsymbol{0}}\\ {\boldsymbol{0}}& {\boldsymbol{0}}\end{array} \right) + \left( \begin{array}{c} M^\top \\ L^\top \end{array} \right) \left( \begin{array}{cc} M & L \end{array} \right)\right) }_{= C} \left(\begin{array}{c} V \\ Q \end{array}\right)\end{aligned}$$ where in the last step we use the fact $GQ^\top = (MV + LQ)Q^\top = L$. 
Now it is clear that the eigenvalue of $C$ will be the eigenvalue of $S^\top S$ and the eigenvector of $C$ will be the eigenvector of $S^\top S$ after left multiplied by matrix $(V^\top \; Q^\top)$, completing the proof. We are now ready to present the sparse version of [SON]{}with Frequent Direction sketch (Algorithm \[alg:SFDN\]). The key point is that we represent $V_t$ as $F_t Z_t$ for some $F_t \in {{\mathbb{R}}}^{m \times m}$ and $Z_t \in {{\mathbb{R}}}^{m \times d}$, and the weight vector ${\boldsymbol{w}}_t$ as ${\bar{\boldsymbol{w}}}_t + Z_{t-1}^\top {\boldsymbol{b}}_t$ and ensure that the update of $Z_t$ and ${\bar{\boldsymbol{w}}}_t$ will always be sparse. To see this, denote the sketch $S_t$ by $\left( \begin{array}{c} D_t F_t Z_t \\ G_t \end{array} \right) $ and let $H_{t,1}$ and $H_{t,2}$ be the top and bottom half of $H_t$. Now the update rule of ${\boldsymbol{u}}_{t+1}$ can be rewritten as $$\begin{aligned} {\boldsymbol{u}}_{t+1} &= {\boldsymbol{w}}_t - \big({ \boldsymbol{I}_{d} } - S_t^\top H_t S_t\big)\tfrac{{\boldsymbol{g}}_t}{\alpha} \\ &= {\bar{\boldsymbol{w}}}_t + Z_{t-1}^\top {\boldsymbol{b}}_t - \frac{1}{\alpha} {\boldsymbol{g}}_t + \frac{1}{\alpha} (Z_t^\top F_t^\top D_t, G_t^\top) \left( \begin{array}{c} H_{t,1} S_t {\boldsymbol{g}}_t \\ H_{t,2} S_t {\boldsymbol{g}}_t \end{array} \right) \\ &= \underbrace{{\bar{\boldsymbol{w}}}_t + \frac{1}{\alpha} (G_t^\top H_{t,2} S_t {\boldsymbol{g}}_t - {\boldsymbol{g}}_t) - (Z_t - Z_{t-1})^\top {\boldsymbol{b}}_t}_{{\bar{\boldsymbol{u}}}_{t+1}} + Z_t^\top\underbrace{ ({\boldsymbol{b}}_t + \frac{1}{\alpha}F_t^\top D_t H_{t,1}S_t {\boldsymbol{g}}_t ) }_{{\boldsymbol{b}}'_{t+1}}\end{aligned}$$ We will show that $Z_t - Z_{t-1} = \Delta_t G_t$ for some $\Delta_t \in {{\mathbb{R}}}^{m \times m}$ shortly, and thus the above update is efficient due to the fact that the rows of $G_t$ are collections of previous sparse vectors ${{\widehat}{\boldsymbol{g}}}$. Similarly, the update of ${\boldsymbol{w}}_{t+1}$ can be written as $$\begin{aligned} {\boldsymbol{w}}_{t+1} &= {\boldsymbol{u}}_{t+1} - {\ensuremath{\gamma}}_t ({\boldsymbol{x}}_{t+1} - S_t^\top H_t S_t {\boldsymbol{x}}_{t+1}) \\ &= {\bar{\boldsymbol{u}}}_{t+1} + Z_t^\top {\boldsymbol{b}}'_{t+1} - {\ensuremath{\gamma}}_t {\boldsymbol{x}}_{t+1} + {\ensuremath{\gamma}}_t (Z_t^\top F_t^\top D_t, G_t^\top) \left( \begin{array}{c} H_{t,1} S_t {\boldsymbol{x}}_{t+1} \\ H_{t,2} S_t {\boldsymbol{x}}_{t+1} \end{array} \right) \\ &= \underbrace{{\bar{\boldsymbol{u}}}_{t+1} + {\ensuremath{\gamma}}_t (G_t^\top H_{t,2}S_t {\boldsymbol{x}}_{t+1} - {\boldsymbol{x}}_{t+1})}_{{\bar{\boldsymbol{w}}}_{t+1}} + Z_t^\top \underbrace{({\boldsymbol{b}}'_{t+1} + {\ensuremath{\gamma}}_t F_t^\top D_t H_{t,1} S_t {\boldsymbol{x}}_{t+1})}_{{\boldsymbol{b}}_{t+1}} .\end{aligned}$$ It is clear that ${\ensuremath{\gamma}}_t$ can be computed efficiently, and thus the update of ${\boldsymbol{w}}_{t+1}$ is also efficient. These updates correspond to Line \[alg:SFDN:projection\] and \[alg:SFDN:weight\_update\] of Algorithm \[alg:SFDN\]. It remains to perform the sketch update efficiently. Algorithm \[alg:SFD\] is the sparse version of Algorithm \[alg:FD\_epoch\]. The challenging part is to compute eigenvectors and eigenvalues efficiently. Fortunately, in light of Algorithm \[alg:eigen\], using the new representation $V = FZ$ one can directly translate the process to Algorithm \[alg:sparse\_eigen\] and find that the eigenvectors can be expressed in the form $N_1 Z + N_2 G$. 
To see this, first note that Line 1 of both algorithms compute the same matrix $M = GV^\top = GZ^\top F^\top$. Then Line \[alg:eigen:decompose\] decomposes the matrix $$G - MV = G - MFZ = \left(\begin{array} {cc} -MF & { \boldsymbol{I}_{m} } \end{array} \right) \left( \begin{array}{c} Z \\ G \end{array} \right) {\stackrel{\rm def}{=}}PR$$ using Gram-Schmidt into the form $LQR$ such that the rows of $QR$ are orthonormal (that is, $QR$ corresponds to $Q$ in Algorithm \[alg:eigen\]). While directly applying Gram-Schmidt to $PR$ would take ${\mathcal{O}}(m^2 d)$ time, this step can in fact be efficiently implemented by performing Gram-Schmidt to $P$ (instead of $PR$) in a Banach space where inner product is defined as $\langle {\boldsymbol{a}}, {\boldsymbol{b}}\rangle = {\boldsymbol{a}}^\top K {\boldsymbol{b}}$ with $$K = RR^\top = \left(\begin{array} {cc} ZZ^\top & ZG^\top \\ GZ^\top & GG^\top \end{array}\right)$$ being the Gram matrix of $R$. Since we can efficiently maintain the Gram matrix of $Z$ (see Line \[alg:SFDN:Gram\] of Algorithm \[alg:SFD\]) and $GZ^\top$ and $GG^\top$ can be computed sparsely, this decomposing step can be done efficiently too. This modified Gram-Schmidt algorithm is presented in Algorithm \[alg:Gram-Schmidt\] (which will also be used in sparse Oja’s sketch), where Line \[alg:Gram-Schmidt:inner\] is the key difference compared to standard Gram-Schmidt (see Lemma \[lem:Gram-Schmidt\] below for a formal proof of correctness). Line 3 of Algorithms \[alg:eigen\] and \[alg:sparse\_eigen\] are exactly the same. Finally the eigenvectors $U\left( \begin{array}{c} V \\ Q \end{array} \right)$ in Algorithm \[alg:eigen\] now becomes (with $U_1, U_2, Q_1, Q_2, N_1, N_2$ defined in Line 4 of Algorithm \[alg:sparse\_eigen\]) $$\begin{aligned} U\left( \begin{array}{c} FZ \\ QR \end{array} \right) &= (U_1, U_2) \left( \begin{array}{c} FZ \\ QR \end{array} \right) = U_1 FZ + U_2 (Q_1, Q_2) \left( \begin{array}{c} Z \\ G \end{array} \right) \\ &= (U_1 FZ + U_2 Q_1) Z + U_2 Q_2 G = N_1 Z + N_2 G.\end{aligned}$$ Therefore, having the eigenvectors in the form $N_1 Z + N_2 G$, we can simply update $F$ as $N_1$ and $Z$ as $Z + N_1^{-1} N_2 G$ so that the invariant $V = FZ$ still holds (see Line \[alg:SFDN:Z\_update\] of Algorithm \[alg:SFD\]). The update of $Z$ is sparse since $G$ is sparse. We finally summarize the results of this section in the following theorem. The average running time of Algorithm \[alg:SFDN\] is ${\mathcal{O}}\bigl(m^2 + ms\bigr)$ per round, and the regret bound is exactly the same as the one stated in Theorem \[thm:FD\]. Parameters $C$, $\alpha$ and $m$. Initialize ${\bar{\boldsymbol{u}}}= {\boldsymbol{0}}_{d \times 1}$, ${\boldsymbol{b}}= {\boldsymbol{0}}_{m \times 1}$ and $(D, F, Z, G, H) \leftarrow \textbf{SketchInit}(\alpha, m)$ (Algorithm \[alg:SFD\]). Let $S$ denote the matrix $\left( \begin{array}{c} DFZ \\ G \end{array} \right)$ throughout the algorithm (without actually computing it). Let $H_1$ and $H_2$ denote the upper and lower half of $H$, i.e. $H = \left( \begin{array}{c} H_1 \\ H_2 \end{array} \right)$. Receive example ${\boldsymbol{x}}_{t}$. Projection step: compute ${{\widehat}{{\boldsymbol{x}}}}= S{\boldsymbol{x}}_t$ and ${\ensuremath{\gamma}}= \frac{\tau_C({\bar{\boldsymbol{u}}}^\top {\boldsymbol{x}}_{t} + {\boldsymbol{b}}^\top Z{\boldsymbol{x}}_t)}{{\boldsymbol{x}}_{t}^\top {\boldsymbol{x}}_{t} - {{\widehat}{{\boldsymbol{x}}}}^\top H {{\widehat}{{\boldsymbol{x}}}}}$. 
Obtain ${\bar{\boldsymbol{w}}}= {\bar{\boldsymbol{u}}}+ {\ensuremath{\gamma}}(G^\top H_2 {{\widehat}{{\boldsymbol{x}}}}- {\boldsymbol{x}}_t )$ and ${\boldsymbol{b}}\leftarrow {\boldsymbol{b}}+ {\ensuremath{\gamma}}F^\top DH_1 {{\widehat}{{\boldsymbol{x}}}}$. \[alg:SFDN:projection\] Predict label $y_t = {\bar{\boldsymbol{w}}}^\top {\boldsymbol{x}}_t + {\boldsymbol{b}}^\top Z{\boldsymbol{x}}_t$ and suffer loss $\ell_t(y_t)$. Compute gradient ${\boldsymbol{g}}_t = \ell_t'(y_t) {\boldsymbol{x}}_t$ and the to-sketch vector ${{\widehat}{\boldsymbol{g}}}= \sqrt{\sigma_t + \eta_t}{\boldsymbol{g}}_t$. $(D, F, Z, G, H, \Delta) \leftarrow \textbf{SketchUpdate}({{\widehat}{\boldsymbol{g}}})$ (Algorithm \[alg:SFD\]). Update ${\bar{\boldsymbol{u}}}= {\bar{\boldsymbol{w}}}+ \frac{1}{\alpha}(G^\top H_2 S {\boldsymbol{g}}- {\boldsymbol{g}}) - G^\top\Delta^\top {\boldsymbol{b}}$ and ${\boldsymbol{b}}\leftarrow {\boldsymbol{b}}+ \frac{1}{\alpha} F^\top DH_1 S{\boldsymbol{g}}$. \[alg:SFDN:weight\_update\] $\tau, D, F, Z, G, H$ and $K$. Set $\tau = 1, D = {\boldsymbol{0}}_{m\times m}, F = K = { \boldsymbol{I}_{m} }, H = \tfrac{1}{\alpha} { \boldsymbol{I}_{2m} }, G = {\boldsymbol{0}}_{m\times d}$, and let $Z$ be any $m \times d$ matrix whose rows are orthonormal. Return $(D, F, Z, G, H)$. Insert ${{\widehat}{\boldsymbol{g}}}$ into the $\tau$-th row of $G$. Let ${\boldsymbol{e}}$ be the $2m \times 1$ basic vector whose $(m + \tau)$-th entry is 1 and compute ${\boldsymbol{q}}= S{{\widehat}{\boldsymbol{g}}}- \tfrac{{{\widehat}{\boldsymbol{g}}}^\top{{\widehat}{\boldsymbol{g}}}}{2}{\boldsymbol{e}}$. Update $H \leftarrow H - \frac{H {\boldsymbol{q}}{\boldsymbol{e}}^\top H}{1 + {\boldsymbol{e}}^\top H {\boldsymbol{q}}}$ and $H \leftarrow H - \frac{H {\boldsymbol{e}}{\boldsymbol{q}}^\top H}{1 + {\boldsymbol{q}}^\top H{\boldsymbol{e}}}$. Set $\Delta = {\boldsymbol{0}}_{m \times m}$. Set $\tau \leftarrow \tau + 1$. $(N_1, N_2, \Sigma) \leftarrow \textbf{ComputeSparseEigenSystem} \left(\left( \begin{array}{c} DFZ \\ G \end{array} \right), K\right)$ (Algorithm \[alg:sparse\_eigen\]). Compute $\Delta = N_1^{-1} N_2$. Update Gram matrix $K \leftarrow K + \Delta G Z^\top + ZG^\top \Delta^\top + \Delta G G^\top \Delta^\top $. \[alg:SFDN:Gram\] Update $F = N_1, Z \leftarrow Z + \Delta G$, and let $D$ be such that $D_{i,i} = \sqrt{\Sigma_{i,i} - \Sigma_{m, m}}, \; \forall i \in [m]$. \[alg:SFDN:Z\_update\] Set $H \leftarrow {\mathrm{diag}\!\left\{{\frac{1}{\alpha + D_{1,1}^2}, \cdots, \frac{1}{\alpha + D_{m,m}^2}, \frac{1}{\alpha}, \ldots, \frac{1}{\alpha}}\right\}} $. Set $ G = {\boldsymbol{0}}_{m \times d}$. Set $\tau = 1$. Return $(D, F, Z, G, H, \Delta)$. $S = \left( \begin{array}{c} DFZ \\ G \end{array} \right)$ and Gram matrix $K = ZZ^\top$. $N_1, N_2 \in {{\mathbb{R}}}^{m \times m}$ and diagonal matrix $\Sigma \in {{\mathbb{R}}}^{m \times m}$ such that the $i$-th row of $N_1 Z + N_2 G$ and the $i$-th entry of the diagonal of $\Sigma$ are the $i$-th eigenvector and eigenvalue of the matrix $S^\top S$. Compute $M = GZ^\top F^\top$. $(L, Q) \leftarrow \text{Decompose}\left(\left(\begin{array} {cc} -MF & { \boldsymbol{I}_{m} } \end{array} \right), \left(\begin{array} {cc} K & ZG^\top \\ GZ^\top & GG^\top \end{array}\right) \right)$ (Algorithm \[alg:Gram-Schmidt\]). \[alg:eigen:decompose\] Let $r$ be the number of columns of $L$. 
Compute the top $m$ eigenvectors ($U \in {{\mathbb{R}}}^{m \times (m + r)}$) and eigenvalues ($\Sigma \in {{\mathbb{R}}}^{m \times m }$) of the matrix $\left( \begin{array}{cc} D^2 & {\boldsymbol{0}}_{m \times r} \\ {\boldsymbol{0}}_{r \times m} & {\boldsymbol{0}}_{r \times r} \end{array} \right) + \left( \begin{array}{c} M^\top \\ L^\top \end{array} \right) \left( \begin{array}{cc} M & L \end{array} \right) $. Set $N_1 = U_1 F + U_2 Q_1$ and $N_2 = U_2 Q_2$ where $U_1$ and $U_2$ are the first $m$ and last $r$ columns of $U$ respectively, and $Q_1$ and $Q_2$ are the left and right half of $Q$ respectively. Return $(N_1, N_2, \Sigma)$. \[lem:Gram-Schmidt\] The output of Algorithm \[alg:Gram-Schmidt\] ensures that $LQR = PR$ and the rows of $QR$ are orthonormal. It suffices to prove that Algorithm \[alg:Gram-Schmidt\] is exactly the same as using the standard Gram-Schmidt to decompose the matrix $PR$ into $L$ and an orthonormal matrix which can be written as $QR$. First note that when $K = { \boldsymbol{I}_{n} }$, Algorithm \[alg:Gram-Schmidt\] is simply the standard Gram-Schmidt algorithm applied to $P$. We will thus go through Line 1-10 of Algorithm \[alg:Gram-Schmidt\] with $P$ replaced by $PR$ and $K$ by ${ \boldsymbol{I}_{n} }$ and show that it leads to the exact same calculations as running Algorithm \[alg:Gram-Schmidt\] directly. For clarity, we add “$\;\tilde{}\;$” to symbols to distinguish the two cases (so $\tilde{P} = PR$ and $\tilde{K} = { \boldsymbol{I}_{n} }$). We will inductively prove the invariance $\tilde{Q} = QR$ and $\tilde{L} = L$. The base case $\tilde{Q} = QR = {\boldsymbol{0}}$ and $\tilde{L} = L = {\boldsymbol{0}}$ is trivial. Now assume it holds for iteration $i - 1$ and consider iteration $i$. We have $$\tilde{{\boldsymbol{\alpha}}} = \tilde{Q}\tilde{K}\tilde{{\boldsymbol{p}}} = QRR^\top{\boldsymbol{p}}= QK{\boldsymbol{p}}= {\boldsymbol{\alpha}},$$ $$\tilde{{\boldsymbol{\beta}}} = \tilde{{\boldsymbol{p}}} - \tilde{Q}^\top \tilde{{\boldsymbol{\alpha}}} = R^\top{\boldsymbol{p}}- (QR)^\top{\boldsymbol{\alpha}}= R^\top ({\boldsymbol{p}}- Q^\top{\boldsymbol{\alpha}}) = R^\top {\boldsymbol{\beta}},$$ $$\tilde{c} = \sqrt{\tilde{{\boldsymbol{\beta}}}^\top \tilde{K} \tilde{{\boldsymbol{\beta}}}} = \sqrt{(R^\top {\boldsymbol{\beta}})^\top (R^\top {\boldsymbol{\beta}})} = \sqrt{{\boldsymbol{\beta}}^\top K {\boldsymbol{\beta}}} = c,$$ which clearly implies that after execution of Line 5-9, we again have $\tilde{Q} = QR$ and $\tilde{L} = L$, finishing the induction. Details for sparse Oja’s algorithm {#app:sparse-oja} ================================== We finally provide the missing details for the sparse version of the Oja’s algorithm. Since we already discussed the updates for ${\bar{\boldsymbol{w}}}_t$ and ${\boldsymbol{b}}_t$ in Section \[sec:sparse\], we just need to describe how the updates for $F_t$ and $Z_t$ work. Recall that the dense Oja’s updates can be written in terms of $F$ and $Z$ as $$\label{eq:Oja} \begin{split} \Lambda_{t} &= ({ \boldsymbol{I}_{m} } - \Gamma_t) \Lambda_{t-1} + \Gamma_t \;{\mathrm{diag}\!\left\{{F_{t-1}Z_{t-1} {{\widehat}{\boldsymbol{g}}}_t}\right\}}^2 \\ F_t Z_t &\xleftarrow{\text{orth}} F_{t-1}Z_{t-1} + \Gamma_t F_{t-1}Z_{t-1} {{\widehat}{\boldsymbol{g}}}_t {{\widehat}{\boldsymbol{g}}}_t^\top = F_{t-1} (Z_{t-1} + F_{t-1}^{-1}\Gamma_t F_{t-1}Z_{t-1} {{\widehat}{\boldsymbol{g}}}_t {{\widehat}{\boldsymbol{g}}}_t^\top)~. \end{split}$$ Here, the update for the eigenvalues is straightforward. 
For the update of eigenvectors, first we let $Z_t = Z_{t-1} + {\boldsymbol{\delta}}_t {{\widehat}{\boldsymbol{g}}}_t^\top$ where ${\boldsymbol{\delta}}_t = F_{t-1}^{-1}\Gamma_t F_{t-1}Z_{t-1} {{\widehat}{\boldsymbol{g}}}_t$ (note that under the assumption of Footnote \[fn:full\_rank\], $F_t$ is always invertible). Now it is clear that $Z_t - Z_{t-1}$ is a sparse rank-one matrix and the update of ${\bar{\boldsymbol{u}}}_{t+1}$ is efficient. Finally it remains to update $F_t$ so that $F_t Z_t$ is the same as orthonormalizing $F_{t-1} Z_t$, which can in fact be achieved by applying the Gram-Schmidt algorithm to $F_{t-1}$ in a Banach space where inner product is defined as $\langle {\boldsymbol{a}}, {\boldsymbol{b}}\rangle = {\boldsymbol{a}}^\top K_t {\boldsymbol{b}}$ where $K_t$ is the Gram matrix $Z_t Z_t^\top$ (see Algorithm \[alg:Gram-Schmidt\]). Since we can maintain $K_t$ efficiently based on the update of $Z_t$: $$K_t = K_{t-1} + {\boldsymbol{\delta}}_t {{\widehat}{\boldsymbol{g}}}_t^\top Z_{t-1}^\top + Z_{t-1} {{\widehat}{\boldsymbol{g}}}_t {\boldsymbol{\delta}}_t^\top + ({{\widehat}{\boldsymbol{g}}}_t^\top {{\widehat}{\boldsymbol{g}}}_t) {\boldsymbol{\delta}}_t {\boldsymbol{\delta}}_t^\top,$$ the update of $F_t$ can therefore be implemented in ${\mathcal{O}}(m^3)$ time. $P \in {{\mathbb{R}}}^{m \times n}$, $K \in {{\mathbb{R}}}^{m\times m}$ such that $K$ is the Gram matrix $K = RR^\top$ for some matrix $R \in {{\mathbb{R}}}^{n \times d}$ where $n \geq m, d \geq m$, $L \in {{\mathbb{R}}}^{m \times r}$ and $Q \in {{\mathbb{R}}}^{r \times n}$ such that $LQR = PR$ where $r$ is the rank of $PR$ and the rows of $QR$ are orthonormal. Initialize $L = {\boldsymbol{0}}_{m \times m}$ and $Q = {\boldsymbol{0}}_{m \times n}$. Let ${\boldsymbol{p}}^\top$ be the $i$-th row of $P$. Compute ${\boldsymbol{\alpha}}= Q K {\boldsymbol{p}}, {\boldsymbol{\beta}}= {\boldsymbol{p}}- Q^\top {\boldsymbol{\alpha}}$ and $c = \sqrt{{\boldsymbol{\beta}}^\top K {\boldsymbol{\beta}}}$. \[alg:Gram-Schmidt:inner\] Insert $\frac{1}{c}{\boldsymbol{\beta}}^\top$ to the $i$-th row of $Q$. Set the $i$-th entry of ${\boldsymbol{\alpha}}$ to be $c$ and insert ${\boldsymbol{\alpha}}$ to the $i$-th row of $L$. Delete the all-zero columns of $L$ and all-zero rows of $Q$. Return $(L, Q)$. Experiment Details {#app:experiment} ================== This section reports some detailed experimental results omitted from Section \[subsec:real\_data\]. Table \[tab:datasets\] includes the description of benchmark datasets; Table \[tab:error2\] reports error rates on relatively small datasets to show that [Oja-SON]{}generally has better performance; Table \[tab:error\] reports concrete error rates for the experiments described in Section \[subsec:real\_data\]; finally Table \[tab:eigen\] shows that Oja’s algorithm estimates the eigenvalues accurately. As mentioned in Section \[subsec:real\_data\], we see substantial improvement for the [*splice*]{} dataset when using Oja’s sketch even after the diagonal adaptation. We verify that the condition number for this dataset before and after the diagonal adaptation are very close (682 and 668 respectively), explaining why a large improvement is seen using Oja’s sketch. Fig. \[fig:splice\] shows the decrease of error rates as [Oja-SON]{}with different sketch sizes sees more examples. One can see that even with $m=1$ [Oja-SON]{}already performs very well. This also matches our expectation since there is a huge gap between the top and second eigenvalues of this dataset ($50.7$ and $0.4$ respectively). 
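As a concrete aside, the $K$-weighted Gram-Schmidt routine (Algorithm \[alg:Gram-Schmidt\]) is easy to prototype. The NumPy sketch below is one possible rendering and is only meant to illustrate the idea; it is not the implementation used in the experiments, the explicit tolerance for numerically dependent rows is our addition, and the final check simply replays Lemma \[lem:Gram-Schmidt\] on random data (here $K = RR^\top$ is the Gram matrix of the rows of $R$).

```python
import numpy as np

def k_gram_schmidt(P, K, tol=1e-12):
    """Gram-Schmidt on the rows of P in the inner product <a, b> = a^T K b.

    With K = R @ R.T this returns (L, Q) such that L @ Q @ R equals P @ R and
    the rows of Q @ R are orthonormal, without ever forming P @ R explicitly.
    """
    m, n = P.shape
    L = np.zeros((m, m))
    Q = np.zeros((m, n))
    kept = []                         # indices of rows that turned out independent
    for i, p in enumerate(P):
        alpha = Q @ (K @ p)           # coefficients on the previously kept rows
        beta = p - Q.T @ alpha        # K-orthogonal remainder of p
        c = np.sqrt(beta @ K @ beta)
        if c > tol:                   # skip (numerically) dependent rows
            Q[i] = beta / c
            alpha[i] = c
            kept.append(i)
        L[i] = alpha
    return L[:, kept], Q[kept]        # drop all-zero columns of L / rows of Q

# Sanity check of Lemma [lem:Gram-Schmidt] on random data:
rng = np.random.default_rng(0)
P, R = rng.normal(size=(4, 6)), rng.normal(size=(6, 9))
L, Q = k_gram_schmidt(P, R @ R.T)
assert np.allclose(L @ Q @ R, P @ R)
assert np.allclose(Q @ R @ R.T @ Q.T, np.eye(len(Q)))
```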
![Error rates for [Oja-SON]{} with different sketch sizes on the splice dataset[]{data-label="fig:splice"}](splice.pdf){width=".6\textwidth"}

[^1]: Recent work by [@GhashamiLiPh16] also studies sparse updates for a more complicated variant of Frequent Directions, which is randomized and incurs extra approximation error.

[^2]: The stochastic setting assumes that the examples are drawn i.i.d. from a distribution.

[^3]: In the standard setting where ${\boldsymbol{w}}_t$ and ${\boldsymbol{x}}_t$ are restricted such that ${\left\|{{\boldsymbol{w}}_t}\right\|} \leq D$ and ${\left\|{{\boldsymbol{x}}_t}\right\|} \leq X$, the minimax regret is ${\mathcal{O}}(DXL\sqrt{T})$. This is clearly a special case of our setting with $C = DX$.

[^4]: For simplicity, we assume that $V_{t-1} + \Gamma_t V_{t-1}{{\widehat}{\boldsymbol{g}}}_t {{\widehat}{\boldsymbol{g}}}_t^\top$ is always of full rank, so that the orthonormalizing step does not reduce the dimension of $V_t$. \[fn:full\_rank\]

[^5]: An open source machine learning toolkit available at [http://hunch.net/\~ vw](http://hunch.net/~ vw).

[^6]: $D_1$ is defined as $0.1\times { \boldsymbol{I}_{d} }$ to avoid division by zero.

[^7]: By adding a suitable constant, these losses can always be made nonnegative while leaving the regret unchanged.

[^8]: See Appendix \[app:projection\] for the closed form of the projection step.
{ "pile_set_name": "ArXiv" }
--- abstract: | Let $g$ be a bounded symmetric measurable nonnegative function on $[0,1]^2$, and ${\left\lVertg\right\rVert} = \int_{[0,1]^2} g(x,y) dx dy$. For a graph $G$ with vertices $\{v_1,v_2,\ldots,v_n\}$ and edge set $E(G)$, we define $$t(G,g) \; = \; \int_{[0,1]^n} \prod_{\{v_i,v_j\} \in E(G)} g(x_i,x_j) \: dx_1 dx_2 \cdots dx_n \; .$$ We conjecture that $t(G,g) \geq {\left\lVertg\right\rVert}^{|E(G)|}$ holds for any graph $G$ and any function $g$ with nonnegative spectrum. We prove this conjecture for various graphs $G$, including complete graphs, unicyclic and bicyclic graphs, as well as graphs with $5$ vertices or less. [**Keywords**]{}: graphon, Sidorenko’s conjecture, doubly nonnegative matrix, subdivision, norming graph, locally dense graph. [**MSC**]{}: 05C35, 05C22, 26D20 author: - | Alexander Sidorenko\ Armonk, NY, U.S.A.\ sidorenko.ny@gmail.com title: Inequalities for doubly nonnegative functions --- Introduction ============ Let $\mu$ be the Lebesgue measure on $[0,1]$. Let $\mathcal{H}$ denote the space of bounded measurable real functions on $[0,1]^2$, and $\mathcal{G} \subset \mathcal{H}$ denote the subspace of symmetric functions. Let $\mathcal{H}_+$ and $\mathcal{G}_+$ denote the subsets of nonnegative functions in $\mathcal{H}$ and $\mathcal{G}$, respectively. Let $G$ be a simple graph with vertices $\{v_1,v_2,\ldots,v_n\}$ and edge set $E(G)$. We would like to know what conditions on $G$ and $g\in\mathcal{G}_+$ guarantee that $$\label{eq:CI2} t(G,g) \overset{\underset{\mathrm{def}}{}}{=} \int_{[0,1]^n} \prod_{\{v_i,v_j\} \in E(G)} g(x_i,x_j) \: d\mu^n \; \geq \; \left( \int_{[0,1]^2} g \: d\mu^2 \right)^{|E(G)|} .$$ One approach is to ask what graphs $G$ satisfy \[eq:CI2\] for every function $g\in\mathcal{G}_+$. It is easy to show that such graphs can not have odd cycles, so only graphs with chromatic number $2$ are suitable candidates. It led to \[conj:SC\] Let $H$ be a bipartite graph with two vertex sets $V=\{v_1,v_2,\ldots,v_n\}$, $W=\{w_1,w_2,\ldots,w_m\}$ and edge set $E(H) \subseteq V \times W$. Then for any function $h\in\mathcal{H}_+$ (not necessarily symmetric) $$\label{eq:SC} t(H,h) \overset{\underset{\mathrm{def}}{}}{=} \int_{[0,1]^{n+m}} \prod_{\{v_i,w_j\} \in E(H)} h(x_i,y_j) \: d\mu^{n+m} \; \geq \; \left( \int_{[0,1]^2} h \: d\mu^2 \right)^{|E(H)|} .$$ We discuss \[conj:SC\] in \[sec:SC\]. For a (simple or bipartite) graph $G$, let $E(G)$ denote its edge set, and ${\rm e}(G)=|E(G)|$. For a simple graph $G$, let $V(G)$ denote its vertex set, and ${\rm v}(G)=|V(G)|$. The 1-[*subdivision*]{} of a simple graph $G$ is a bipartite graph $H={\rm Sub}(G)$ with vertex sets $V(G)$ and $E(G)$, where $v \in V(G)$ and $e \in E(G)$ form an edge in $H$ if $v \in e$ in $G$. We call a bipartite graph $H$ [*symmetric*]{} if it has an automorphism $\phi$ which switches its vertex-sets $V$ and $W$: $\phi(V)=W$, $\phi(W)=V$. We call a function $g\in\mathcal{G}_+$ [*doubly nonnegative*]{} if there is a function $h\in\mathcal{H}$ such that $g(x,y) = \int_{[0,1]} h(x,z)h(y,z) d\mu(z)$. Equivalently, a doubly nonnegative function is a nonnegative symmetric function with nonnegative spectrum. We call a function $g\in\mathcal{G}_+$ [*completely positive*]{} if there is a function $h\in\mathcal{H}_+$ such that $g(x,y) = \int_{[0,1]} h(x,z)h(y,z) d\mu(z)$. The terms “doubly nonnegative” and “completely positive” come from matrix theory; there exist functions which are doubly nonnegative but not completely positive (see \[sec:matrices\]). 
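For readers who prefer a finite-dimensional picture of these two classes (they are treated at the matrix level in \[sec:matrices\]), the following small NumPy check may help; the particular matrices are ours and serve only as an illustration.

```python
import numpy as np

# Symmetric and entrywise nonnegative does not imply doubly nonnegative:
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.linalg.eigvalsh(P))        # [-1.  1.]: the spectrum is not nonnegative

# B @ B.T with B entrywise nonnegative is completely positive,
# hence doubly nonnegative (nonnegative entries and nonnegative spectrum):
B = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.3]])
A = B @ B.T
print(A.min() >= 0, np.linalg.eigvalsh(A).min() >= -1e-12)   # True True
```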
In this article, we study two problems: (a) what functions $g\in\mathcal{G}_+$ satisfy $t(G,g) \geq {\left\lVertg\right\rVert}^{{\rm e}(G)}$ for all simple graphs $G$ (we call such functions [*nice*]{}); and (b) what graphs $G$ satisfy the same inequality for any doubly nonnegative function $g$ (we call such graphs [*good*]{}). If $G$ is good, then \[conj:SC\] holds for $H={\rm Sub}(G)$. We show in \[sec:SC\] that for a fixed $G$, inequality (\[eq:CI2\]) holds for any completely positive function $g$ if and only if \[conj:SC\] holds for $H={\rm Sub}(G)$. Thus, it is reasonable to expect that all completely positive functions are nice. \[conj:CI\] All doubly nonnegative functions are nice. All simple graphs are good. Our \[th:permutation\] demonstrates that there are nice functions which are not doubly nonnegative. If chromatic number $\chi(G)=2$, then goodness of $G$ should follow from \[conj:SC\]. In Sections \[sec:norming\]-\[sec:small\], we give examples of good graphs $G$ with $\chi(G)\geq 3$. In particular, we prove that complete graphs, graphs whose complements consist of disjoint edges, unicyclic and bicyclic graphs, generalized theta graphs, and graphs with $\leq 5$ vertices are all good. In \[sec:extra-good\], we consider a strengthened variant of inequality (\[eq:CI2\]) which in many instances is easier to prove than the original one. We say that a simple graph $G$ is [*extra-good*]{} if for any doubly nonnegative function $g$ and any bounded measurable nonnegative functions $f_1,\ldots,f_{{\rm v}(G)}$ on $[0,1]$, $$\begin{gathered} \label{eq:extra-good} \int_{[0,1]^{{\rm v}(G)}} \prod_{\{v_i,v_j\} \in E(G)} g(x_i,x_j) \: \prod_{i=1}^{{\rm v}(G)} f_i(x_i) \: d\mu^{{\rm v}(G)} \\ \geq \; \left( \int_{[0,1]^2} f(x) g(x,y) f(y) \: d\mu^2 \right)^{{\rm e}(G)} ,\end{gathered}$$ where $f(x) = \left(\prod_{i=1}^{{\rm v}(G)} f_i(x)\right)^{1/(2{\rm e}(G))}$. Obviously, if $G$ is extra-good, then $G$ is good. For each graph that we proved to be good, we also were able to prove that it is extra-good. It is possible that every good graph is extra-good. A connection to Kohayakawa–Nagle–Rödl-Schacht conjecture is discussed in \[sec:KNRS\]. In \[sec:multivar\], we discuss generalizations of inequality \[eq:CI2\] for bounded measurable nonnegative symmetric functions of $r \geq 3$ variables. Doubly nonnegative and\ completely positive matrices {#sec:matrices} ============================ A [*doubly nonnegative matrix*]{} is a real positive semidefinite square matrix with nonnegative entries. A [*completely positive matrix*]{} is a doubly nonnegative matrix which can be factorized as $A=BB^{T}$ where $B$ is a nonnegative (not necessarily square) matrix. It is well known (see [@Berman:2003]) that for any $k\geq 5$ there exist doubly nonnegative $k \times k$ matrices which are not completely positive. For a $k \times k$ matrix $A=[a_{ij}]$, we define a function $g_A$ on $[0,1]^2$ as $g(x,y)=a_{ij}$ for $(i-1)/k < x \leq i/k$, $(j-1)/k < y \leq j/k$, and $g(x,y)=0$ if $xy=0$. Obviously, $g_A$ is a doubly nonnegative (completely positive) function if and only if $A$ is a doubly nonnegative (completely positive) matrix. If $A(H)$ is the adjacency matrix of a $k$-vertex simple graph $H$, then $t(G,g_{A(H)}) k^{{\rm v}(G)}$ is the number of homomorphisms from $G$ to $H$. Notice, that if a nonzero $k \times k$ matrix $A$ has zero diagonal, then $g_A$ is not nice, since $t(G,g_A)=0$ for any graph $G$ with chromatic number $\chi(G) > k$. 
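Since step functions make all the integrals finite sums, these quantities are easy to evaluate by brute force. The following Python sketch (ours, purely for illustration; the graphs and matrices are arbitrary small examples) computes $t(G,g_A)$ directly from the definition and illustrates the homomorphism-counting interpretation above.

```python
import numpy as np
from itertools import product

def t_step(edges, n, A):
    """t(G, g_A) for the step function g_A of a k x k symmetric nonnegative matrix A.

    For step functions the defining integral becomes the average, over all maps
    phi: {0,...,n-1} -> {0,...,k-1}, of prod_{(i,j) in E(G)} A[phi(i), phi(j)].
    """
    k = A.shape[0]
    return sum(
        np.prod([A[phi[i], phi[j]] for i, j in edges])
        for phi in product(range(k), repeat=n)
    ) / k ** n

triangle = [(0, 1), (1, 2), (0, 2)]            # G = K_3, e(G) = 3

# t(G, g_{A(H)}) * k^{v(G)} counts homomorphisms from G into H:
C5 = np.array([[0, 1, 0, 0, 1],
               [1, 0, 1, 0, 0],
               [0, 1, 0, 1, 0],
               [0, 0, 1, 0, 1],
               [1, 0, 0, 1, 0]], dtype=float)
print(t_step(triangle, 3, C5) * 5 ** 3)        # 0.0: the 5-cycle has no triangle, so g_{A(C_5)} is not nice

# For a completely positive A the bound t(G, g_A) >= ||g_A||^{e(G)} holds here
# (K_3 is shown to be good later in the paper); ||g_A|| is just the mean entry of A:
rng = np.random.default_rng(0)
B = np.abs(rng.normal(size=(4, 2)))
A = B @ B.T                                    # completely positive by construction
print(t_step(triangle, 3, A), A.mean() ** 3)
```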
We are going to demonstrate that presence of a single positive diagonal entry can be sufficient to make $g_A$ nice. \[th:permutation\] Let $P$ be a symmetric permutation matrix of order $k$ with $a \geq 1$ diagonal entries equal to $1$, and $b \geq 1$ pairs of off-diagonal entries equal to $1$ $(a+2b=k)$. Then $g_P$, while not being positive semidefinite, is a nice function. $P$ has eigenvalues $1$ with multiplicity $a+b$, and $-1$ with multiplicity $b\geq 1$. Therefore, $P$ is not positive semidefinite. If graph $G$ has connected components $G_1,G_2,\ldots,G_m$ then $t(G,g) = \prod_{i=1}^m t(G_i,g)$. Hence, to prove \[eq:CI2\] for $g=g_P$ it is sufficient to consider connected graphs $G$. If $G$ is a tree, then validity of \[eq:CI2\] follows from \[eq:SC\] (\[conj:SC\] has been proved for trees by various authors; a short proof can be found in [@Jagger:1996]). Hence, we may assume that $G$ is not a tree. If $n={\rm v}(G)$, then ${\rm e}(G) \geq n$. As $P$ has $a \geq 1$ diagonal entries equal to $1$, we get $t(G,g_P) \geq a \: (1/k)^n \geq (1/k)^{{\rm e}(G)}$ and $\int_{[0,1]^2} g_P \: d\mu^2 = 1/k$. More on \[conj:SC\] {#sec:SC} =================== The earliest known works where inequalities of type \[eq:CI2,eq:SC\] appear are [@Mulholland:1959] and [@Atkinson:1960]. In 1959, Mulholland and Smith [@Mulholland:1959] proved that for any symmetric nonnegative matrix $A$ and any nonnegative vector ${\bf z}$, $$\label{eq:MS} ({\bf z}^{\mathsf{T}}\! A^k {\bf z}) \cdot ({\bf z}^{\mathsf{T}} {\bf z})^{k-1} \; \geq \; ({\bf z}^{\mathsf{T}}\! A {\bf z})^k \; ,$$ where equality takes place if and only if ${\bf z}$ is an eigenvector of $A$ or a zero vector. Note that (\[eq:MS\]) is a particular case of (\[eq:CI2\]) where $H$ is the $k$-edge path $P_k$. Almost at the same time, Atkinson, Watterson and Moran [@Atkinson:1960] proved that $ nm \cdot s(A A^{\mathsf{T}}\! A) \; \geq \; s(A)^3 \: , $ where $A$ is an (asymmetric) nonnegative $(n \times m)$-matrix, and $s(A)$ is the sum of entries of $A$. They presented their inequality in both matrix and integral form, and conjectured validity of \[eq:SC\] for $H=P_k$ with $k \geq 3$. In 1965, Blakley and Roy [@Blakley:1965], being unaware of the article [@Mulholland:1959], rediscovered (\[eq:MS\]). Lately, \[conj:SC\] has been proved for various bipartite graphs (see [@Conlon:2010; @Conlon:2018; @Conlon:2017; @Conlon:2019; @Kim:2016; @Li:2011; @Lovasz:2011; @Parczyk:2014; @Sidorenko:1992; @Sidorenko:1993; @Sidorenko:1991; @Szegedy:2014; @Szegedy:2015]), among them: trees, complete bipartite graphs, and graphs with $9$ vertices or less. Some of the authors restricted \[eq:SC\] to symmetric functions $h$. Nevertheless, the proofs of their results can be extended to asymmetric $h$ as well. Let $\mathfrak{S}$ be the class of bipartite graphs that satisfy \[conj:SC\], and $\mathfrak{S}_*$ be the class of bipartite graphs $H$ that satisfy \[eq:SC\] for all $h\in\mathcal{G}_+$. Obviously, $\mathfrak{S}\subseteq\mathfrak{S}_*$. It would be nice to prove $\mathfrak{S}_* \backslash \mathfrak{S} = \emptyset$. \[th:symm\] If $H\in\mathfrak{S}_*$ is symmetric, then $H\in\mathfrak{S}$. In the proof of \[th:symm\], we will use the so called “tensor-trick” lemma. \[th:tensor\_trick\] If there exists a constant $c = c_H > 0$ such that $t(H,h) \geq c \cdot \left(\int_{[0,1]^2} h d\mu^2\right)^{{\rm e}(H)}$ for any $h\in\mathcal{H}_+$, then $H\in\mathfrak{S}$. It is sufficient to consider the case when $H$ is connected. 
Denote by $n$ the size of each vertex set of $H$, so the total number of vertices is $2n$. Let $h\in\mathcal{H}_+$. Define its “transpose” $h^{\mathsf{T}}$ as $h^{\mathsf{T}}(x,y) = h(y,x)$. As $H$ is symmetric, $t(H,h)=t(H,h^{\mathsf{T}})$. Define symmetric function $\tilde{h}\in\mathcal{G}_+$ as follows: $$\tilde{h}(x,y) = \begin{cases} 0 & \mbox{if } 0 \leq x,y < 1/2; \\ h(2x,2y-1) & \mbox{if } 0 \leq x < 1/2 \leq y \leq 1; \\ h(2y,2x-1) & \mbox{if } 0 \leq y < 1/2 \leq x \leq 1; \\ 0 & \mbox{if } 1/2 \leq x,y \leq 1. \end{cases}$$ Notice that $\int_{[0,1]^2} \tilde{h} d\mu^2 = (1/2) \int h_{[0,1]^2} \: d\mu^2$. As $H$ is connected, $$t(H,\tilde{h}) = 2^{-2n} (t(H,h) + t(H,h^{\mathsf{T}})) = 2^{1-2n} \: t(H,h) .$$ Since $H \in \mathfrak{S}_{*}$, we get $t(H,\tilde{h}) \geq \left(\int_{[0,1]^2} \tilde{h} d\mu^2\right)^{{\rm e}(H)}$. Hence, $$t(H,h) \geq 2^{2n-1-{\rm e}(H)} \left(\int_{[0,1]^2} h d\mu^2\right)^{{\rm e}(H)} ,$$ and by \[th:tensor\_trick\], $H\in\mathfrak{S}$. \[th:measure\] It is a classical fact that there exists a measure preserving bijection between any two atomless measure spaces with total measure $1$. In particular, if $\mu_1$ and $\mu_2$ are atomless measures on $[0,1]$, and a bipartite graph $H\in\mathfrak{S}$ has vertex sets of sizes $n$ and $m$, then for any bounded non-negative function $h$ on $[0,1]^2$, measurable with respect to $\mu_1 \otimes \mu_2$, $$\int_{[0,1]^{n+m}} \prod_{\{v_i,w_j\} \in E(H)} h(x_i,y_j) \: d\mu_1^n d\mu_2^m \; \geq \; \left( \int_{[0,1]^2} h \: d\mu_1 d\mu_2 \right)^{{\rm e}(H)} .$$ \[th:same\_degree\] If \[conj:SC\] holds for a bipartite graph $H$, and all vertices from the first vertex set of $H$ have the same degree $a$, then $t(H,h) \geq t(K_{1,a},h)^{{\rm e}(H)/a}$ for any $h\in\mathcal{H}_+$. If all vertices from the second vertex set of $H$ have the same degree $b$, then $t(H,h) \geq t(K_{b,1},h)^{{\rm e}(H)/b}$ for any $h\in\mathcal{H}_+$. We will prove the first part of the statement (the proof of the second part is similar). Notice that ${\rm e}(H)=na$. It is sufficient to consider functions $h\in\mathcal{H}_+$ that are separated from zero: $\inf_{[0,1]^2} h > 0$. Denote $\varphi(x)=\int_{[0,1]} h(x,y)d\mu(y)$. Then $c = \int_{[0,1]} \varphi(x)^{a} d\mu > 0$, and $f(x) = \varphi(x)^{a} / c$ is positive and bounded on $[0,1]$. Consider a measure $\mu_*$ on $[0,1]$ defined by $d\mu_* = fd\mu$, so $\mu_*([0,1])=1$. Denote $\widehat{h}(x,y) = h(x,y) f(x)^{-1/a}$. Clearly, $\widehat{h}$ is bounded and measurable with respect to $\mu_* \otimes \mu$. By \[th:measure\], $$\begin{aligned} t(H,h)^{1/n} & = & \left( \int_{[0,1]^{n+m}} \prod_{(v_i,w_j) \in E(H)} h(x_i,y_j) d\mu^n d\mu^m \right)^{1/n} \\ & = & \left( \int_{[0,1]^{n+m}} \prod_{(u_i,w_j) \in E(H)} \widehat{h}(x_i,y_j) d\mu_*^n d\mu^m \right)^{1/n} \\ & \geq & \left( \int_{[0,1]^2} \widehat{h}(x,y) d\mu_*(x) d\mu(y) \right)^a \\ & = & \left( \int_{[0,1]^2} h(x,y) f(x)^{-1/a} d\mu_*(x) d\mu(y) \right)^a \\ & = & \left( \int_{[0,1]^2} h(x,y) f(x)^{1-(1/a)} d\mu^2 \right)^a \\ & = & \left( \int_{[0,1]} \varphi(x) f(x)^{1-(1/a)} d\mu \right)^a \\ & = & c^{1-a} \left( \int_{[0,1]} \varphi(x)^a d\mu \right)^a \; = \; \int_{[0,1]} \varphi(x)^a d\mu \; = \; t\left(K_{1,a},h\right) \; .\end{aligned}$$ \[th:sub\] For a fixed graph $G$, inequality \[eq:CI2\] holds for any completely positive function $g$ if and only if \[conj:SC\] holds for $H={\rm Sub}(G)$. Suppose that \[conj:SC\] holds for $H={\rm Sub}(G)$, and a function $g$ is completely positive. 
There exists $h\in\mathcal{H}_+$ such that $g(x,y) = \int_{[0,1]} h(x,z)h(y,z) d\mu(z)$. Then $t(G,g)=t(H,h)$. Every vertex in the second vertex set of $H$ has degree $2$. By \[th:same\_degree\], we have $t(H,h) \geq t(K_{2,1},h)^{{\rm e}(H)/2}$. As ${\rm e}(G) = {\rm e}(H)/2$ and $t(K_{2,1},h) = \int_{[0,1]^2} g d\mu^2$, we get \[eq:CI2\]. Now suppose that \[eq:CI2\] holds for any completely positive function $g$. Let $h\in\mathcal{H}_+$. Set $g(x,y) = \int_{[0,1]} h(x,z)h(y,z) d\mu(z)$. Then $t(H,h) = t(G,g) \geq (\int_{[0,1]^2} g d\mu^2)^{{\rm e}(G)} \geq (\int_{[0,1]^2} h d\mu^2)^{2{\rm e}(G)} = (\int_{[0,1]^2} h d\mu^2)^{{\rm e}(H)}$. Subdivisions that are norming {#sec:norming} ============================= We say that a bipartite graph $H$ with vertex sets $V=\{v_1,v_2,\ldots,v_n\}$ and $W=\{w_1,w_2,\ldots,w_m\}$ has the [*Hölder property*]{} if for any assignment $f: E(H) \to \mathcal{H}$, $$\label{eq:norming} \left(\int_{[0,1]^{n+m}} \prod_{e = \{v_i,w_j\} \in E(H)} f_e(x_i,y_j) \: d\mu^{n+m} \right)^{{\rm e}(H)} \leq \prod_{e \in E(H)} t(H, f_e) .$$ It is known (see [@Hatami:2010; @Lovasz:2010]) that every graph $H$ with the Hölder property (except a star with even number of edges) is a [*norming graph*]{}: $t(H,h)^{1/{\rm e}(H)}$ is a norm on $\mathcal{H}$. Conversely, every norming graph has the Hölder property. \[th:norming\] If ${\rm Sub}(G)$ has the Hölder property then $G$ is good. Let $H={\rm Sub}(G)$. If $g(x,y) = \int_{[0,1]} h(x,z)h(y,z) d\mu(z)$ then $t(G,g) = t(H,h)$. Select a pair of edges $e',e''$ in $H$ which subdivide the same edge of $G$. Assign $f_{e'} = f_{e''} = h$ and $f_e = 1$ for $e \neq e',e''$. Then the left hand side of \[eq:norming\] is $\left(\int_{[0,1]^2} g d\mu^2\right)^{2{\rm e}(G)}$, and the right hand side is $t(G,g)^2$. The 1-subdivision of cycle $C_n$ is an even cycle $C_{2n}$ which is a norming graph. The 1-subdivision of the octahedron $K_{2,2,2}$ is norming (see [@Conlon:2017 Example 4.15]). Hence, $C_n$ and $K_{2,2,2}$ are good graphs. In a norming graph, the degrees of vertices are even (see Observation 2.5 in [@Hatami:2010]). Hence, ${\rm Sub}(K_{2r})$ is not norming. While ${\rm Sub}(K_3)=C_6$ is norming, it is not known whether ${\rm Sub}(K_{2r+1})$ with $r \geq 2$ is norming. We will prove in the next section that all complete graphs are good. Complete graphs are good {#sec:clique} ======================== \[th:G+v\] If graph $G$ is good, and graph $G_1$ is obtained from $G$ by adding a new vertex adjacent to all vertices of $G$, then $G_1$ is good. \[th:clique\] The complete graphs are good. \[th:G+vw\] If graph $G$ is good, and graph $G_2$ is obtained from $G$ by adding two vertices adjacent to all vertices of $G$ but not to each other, then $G_2$ is good. Any graph whose complement is a set of independent edges is good. To prove \[th:G+v,th:G+vw\] we need a couple of auxiliary results. For $g\in\mathcal{G}_+$, we denote ${\left\lVertg\right\rVert} = \int_{[0,1]^2} g d\mu^2$. \[th:m\_1\_2\] If function $g$ is doubly nonnegative, then $t(K_3,g) \! \geq t(K_{1,2},g)^{3/2}$ and $t(K_4-e,g) \geq t(K_{2,2},g)^{5/4}$. As $g$ is doubly nonnegative, $g(x,y) = \int_{[0,1]} h(x,z)h(y,z) d\mu(z)$. Then $t(K_3,g)=t(C_6,h)$, $t(K_{1,2},g)=t(P_4,h)$, $t(K_{2,2},g)=t(C_8,h)$, where $P_4$ denotes the $4$-edge path. As $C_6$ is norming, and $P_4$ is a subgraph of $C_6$, we have $t(C_6,h)^{1/6} \geq t(P_4,h)^{1/4}$. By the Cauchy–Schwarz inequality, $t(K_4-e,g) {\left\lVertg\right\rVert} \geq t(K_3,g)^2 \!= t(C_6,h)^2$. 
As $t(C_{2k}(h)^{1/2k}$ is the $(2k)$-th Schatten norm of $h$, we get $t(C_6,h)^{1/6} \geq t(C_8,h)^{1/8}$. Hence, $t(K_4-e,g) \geq t(K_{2,2},g)^{3/2} {\left\lVertg\right\rVert}^{-1} \geq t(K_{2,2},g)^{5/4}$. For $n > m \geq 1$, let $K_n-K_m$ denote the complement of $K_m$ in $K_n$. It follows from \[th:m\_1\_2\] that for $m=1,2$ and any doubly nonnegative function $g$, $$\label{eq:m_1_2} t(K_{m+2}-K_m,g) \; \geq \; t(K_{2,m},g) \cdot {\left\lVertg\right\rVert} \; .$$ Thus, \[th:G+v,th:G+vw\] follow from the next proposition. \[th:G\_m\] Let integer $m$ be such that \[eq:m\_1\_2\] holds for all doubly nonnegative functions. If graph $G$ is good, and graph $G_m$ is obtained from $G$ by adding a group of $m$ independent vertices that are adjacent to all vertices of $G$, then $G_m$ is good. We may assume ${\rm v}(G) \geq 2$. It is sufficient to consider functions $g$ that are separated from zero: $\inf_{[0,1]^2} g > 0$. Then function $\varphi(x_1,\ldots,x_m) = \int_{[0,1]} \prod_{i=1}^m g(x_i,y) d\mu(y)$ is positive and bounded on $[0,1]^m$. For each ${\bf x}=(x_1,\ldots,x_m) \in [0,1]^m$, consider measure $\mu_{\bf x}$ on $[0,1]$, defined by $d\mu_{\bf x} = f_{\bf x} d\mu$, where $f_{\bf x}(y) = \prod_{i=1}^m g(x_i,y) \varphi(x_1,\ldots,x_m)^{-1}$. It is easy to see that $\mu_{\bf x}([0,1])=1$, and $g$ is bounded and measurable with respect to $\mu_{\bf x} \otimes \mu_{\bf x}$. By \[th:measure\], $$t(G,g,\mu_{\bf x}) \overset{\underset{\mathrm{def}}{}}{=} \int_{[0,1]^{{\rm v}(G)}} \prod_{\{v_i,v_j\} \in E(G)} g(y_i,y_j) \: d\mu_{\bf x}^{{\rm v}(G)} \; \geq \; \left( \int_{[0,1]^2} g \: d\mu_{\bf x}^2 \right)^{{\rm e}(G)} .$$ Hence, $$\begin{gathered} t(G_m,g) = \int_{[0,1]^m} t(G,g,\mu_{x_1,\ldots,x_m}) \varphi(x_1,\ldots,x_m)^{{\rm v}(G)} \: d\mu(x_1) \cdots d\mu(x_m) \\ \geq \int_{[0,1]^m} \left( \int_{[0,1]^2} g \: d\mu_{x_1,\ldots,x_m}^2 \right)^{{\rm e}(G)} \varphi(x_1,\ldots,x_m)^{{\rm v}(G)} \: d\mu(x_1) \cdots d\mu(x_m) \\ = \int_{[0,1]^m} \left( \int_{[0,1]^2} g(y,z) \prod_{i=1}^m \left( g(x_i,y) g(x_i,z) \right) \: d\mu(y) d\mu(z) \right)^{{\rm e}(G)} \\ \times \varphi(x_1,\ldots,x_m)^{{\rm v}(G) - 2{\rm e}(G)} \: d\mu(x_1) \cdots d\mu(x_m) .\end{gathered}$$ As $1 + {\rm e}(G) - {\rm v}(G)/2 \leq {\rm e}(G)$, by using the Hölder inequality, we get $$\begin{gathered} t(G_m,g) \cdot t(K_{2,m},g)^{{\rm e}(G) - {\rm v}(G)/2} \\ = \; t(G_m,g) \cdot \left(\int_{[0,1]^m} \varphi^2(x_1,\ldots,x_m) \: d\mu(x_1) \cdots d\mu(x_m) \right)^{{\rm e}(G) - {\rm v}(G)/2} \\ \geq \; \left(\int_{[0,1]^{m+2}} \! g(y,z) \prod_{i=1}^m \left( g(x_i,y) g(x_i,z) \right) \: d\mu(y) d\mu(z) d\mu(x_1) \cdots d\mu(x_m) \right)^{{\rm e}(G)} \\ = \; t(K_{m+2}-K_m,g)^{{\rm e}(G)} \: .\end{gathered}$$ As \[conj:SC\] holds for complete bipartite graphs (see [@Sidorenko:1992]), $K_{2,m}$ is good. By using \[eq:m\_1\_2\], we get $$\begin{gathered} t(G_m,g) \: \geq \: t(K_{2,m},g)^{-{\rm e}(G) + {\rm v}(G)/2} \: t(K_{2,m},g)^{{\rm e}(G)} {\left\lVertg\right\rVert}^{{\rm e}(G)} \\ = \: t(K_{2,m},g)^{{\rm v}(G)/2} {\left\lVertg\right\rVert}^{{\rm e}(G)} \: \geq \: {\left\lVertg\right\rVert}^{m{\rm v}(G)} {\left\lVertg\right\rVert}^{{\rm e}(G)} \: = \: {\left\lVertg\right\rVert}^{{\rm e}(G_m)} \: .\end{gathered}$$ Extra-good graphs {#sec:extra-good} ================= If $G$ is vertex-transitive and good, then $G$ is extra-good. We denote $n={\rm v}(G)$, $f(x) = \left(\prod_{i=1}^n f_i(x)\right)^{1/(2{\rm e}(G))}$ and $\tilde{g}(x,y) = \linebreak f(x)g(x,y)f(y)$. Notice that $\tilde{g}$ is doubly nonnegative. 
Since $G$ is vertex-transitive, any permutation of the functions $f_1,f_2,\ldots,f_n$ does not change the value of the integral on the left hand side of \[eq:extra-good\]. By applying the Hölder inequality to the geometric mean of all $n!$ possible integrals, we get $$\begin{gathered} \int_{[0,1]^n} \prod_{\{v_i,v_j\} \in E(G)} g(x_i,x_j) \: \prod_{i=1}^n f_i(x_i) \: d\mu^n \\ \geq \; \int_{[0,1]^n} \prod_{\{v_i,v_j\} \in E(G)} g(x_i,x_j) \: \prod_{i=1}^n f(x_i)^{2{\rm e}(G)/n} \: d\mu^n \\ = \; \int_{[0,1]^n} \prod_{\{v_i,v_j\} \in E(G)} \tilde{g}(x_i,x_j) \: d\mu^n \; .\end{gathered}$$ Since $G$ is good, $$\int_{[0,1]^n} \prod_{\{v_i,v_j\} \in E(G)} \tilde{g}(x_i,x_j) \: d\mu^n \; \geq \; \left( \int_{[0,1]^2} \tilde{g}(x,y) \: d\mu^2 \right)^{{\rm e}(G)} .$$ Cycles and complete graphs are extra-good. \[th:leaf\] If vertex $v$ is a leaf in graph $G$, and $G-v$ is extra-good, then $G$ is extra-good too. Let ${\rm v}(G)=n+1$, $V(G)=\{v_1,\ldots,v_n,v_{n+1}\}$, and $v=v_{n+1}$ is a leaf. For a set of functions $f_1,\ldots,f_n,f_{n+1}$, define $\tilde{f_i} = f_i$ for $i=1,\ldots,n-1$, and $\tilde{f_n}(x) = f_n(x) \int_{[0,1]} g(x,y) f_{n+1}(y) d\mu(y)$. Then $$\begin{gathered} \int_{[0,1]^{n+1}} \prod_{\{v_i,v_j\} \in E(G)} g(x_i,x_j) \prod_{i=1}^{n+1} f_i(x_i) d\mu^{n+1} \\ = \; \int_{[0,1]^n} \prod_{\{v_i,v_j\} \in E(G-v)} g(x_i,x_j) \prod_{i=1}^n \tilde{f}_i(x_i) d\mu^n \\ \geq \; \left( \int_{[0,1]^2} \tilde{f}(x) g(x,y) \tilde{f}(y) \: d\mu^2 \right)^{{\rm e}(G)-1} = \; I_0^{{\rm e}(G)-1},\end{gathered}$$ where $\tilde{f}(x)=\left(\prod_{i=1}^n \tilde{f}_i(x)\right)^{1/(2{\rm e}(G)-2)}$. Set $\varphi(x) = \int_{[0,1]} g(x,y) f_{n+1}(y) d\mu(y)$, $I_1 = \int_{[0,1]^2} \varphi(x)^{-1} g(x,y) f_{n+1}(y) d\mu^2$, $I_2 = \int_{[0,1]^2} f_{n+1}(x) g(x,y) \varphi(y)^{-1} d\mu^2$. It is easy to see that $I_1 = \int_{[0,1]} \varphi(x)^{-1} \varphi(x) d\mu = 1$, and similarly, $I_2 = 1$. By the Hölder inequality, $$I_0^{{\rm e}(G)-1} \cdot 1 \cdot 1 \; = \; I_0^{{\rm e}(G)-1} \cdot I_1^{1/2} \cdot I_2^{1/2} \; \geq \; \left( \int_{[0,1]^2} f(x) \: g(x,y) \: f(x) \: d\mu^2 \right)^{{\rm e}(G)} ,$$ where $$\begin{aligned} f^{2{\rm e}(G)} & = & (\tilde{f})^{2{\rm e}(G)-2} \: \varphi^{-1} f_{n+1} = \prod_{i=1}^n \tilde{f}_i \cdot \varphi^{-1} f_{n+1} = \prod_{i=1}^{n-1} f_i \cdot (f_n \varphi) \cdot \varphi^{-1} f_{n+1} \\ & = & \prod_{i=1}^{n+1} f_i \; .\end{aligned}$$ A connected graph $G$ is called [*unicyclic*]{} if ${\rm e}(G) = {\rm v}(G)$, and [*bicyclic*]{} if ${\rm e}(G) = {\rm v}(G)+1$. \[th:unicyclic\] Trees and unicyclic graphs are extra-good. Let $P(k_1,k_2,\ldots,k_r)$ denote a graph consisting of two vertices joined by $r$ internally disjoint paths of length $k_1,k_2,\ldots,k_r$. Such a graph is called [*theta graph*]{} (see [@Blinco:2004]). If $r=2$, then $P(k_1,k_2)$ is simply a cycle of length $k_1+k_2$. \[th:multipath\] $P(k_1,k_2,\ldots,k_r)$ is extra-good. Let $C(k_1,k_2,m)$ denote a graph which consists of two cycles of length $k_1$ and $k_2$ connected by a path of length $m \geq 0$ (when $m=0$, the cycles share a vertex). Notice, that $P(k_1,k_2,k_3)$ and $C(k_1,k_2,m)$ are the only bicyclic graphs without leaves. In view of \[th:leaf,th:multipath\], in order to prove that all bicyclic graphs are extra-good, it is sufficient to show that $C(k_1,k_2,m)$ is extra-good. This will follow from the next result. \[th:cycle\] Let graph $G_0$ be formed by attaching a cycle to one of the vertices of graph $G$. If $G$ is extra-good, then $G_0$ is extra-good. 
\[th:bicyclic\] Bicyclic graphs are extra-good. The proofs of \[th:multipath,th:cycle\] are given in the appendix. Consider a tree $T$ whose vertices are arbitrarily colored in black and white so that at least one vertex is black. Take $r$ disjoint copies of $T$ and glue together “sister” black vertices from different copies. We call the resulting graph a [*multitree*]{}. For example $P(k,k,\ldots,k)$ is a multitree. The case of even cycle in \[th:cycle\] is a particular case of the following statement. \[th:multitree\] Let graph $G_0$ be formed by gluing a black vertex of a multitree to one of the vertices of graph $G$. If $G$ is extra-good, then $G_0$ is extra-good. \[th:G+v\_extra\] If graph $G$ is extra-good, and graph $G_1$ is obtained from $G$ by adding a new vertex adjacent to all vertices of $G$, then $G_1$ is extra-good. \[th:G+vw\_extra\] If graph $G$ is extra-good, and graph $G_2$ is obtained from $G$ by adding two vertices adjacent to all vertices of $G$ but not to each other, then $G_2$ is extra-good. Theorems \[th:multitree\], \[th:G+v\_extra\], and \[th:G+vw\_extra\] are not used in the rest of the article. We omit their proofs as they are very similar to the proofs of Theorems \[th:cycle\], \[th:G+v\], and \[th:G+vw\]. Graphs with small number of vertices {#sec:small} ==================================== \[th:small\] All graphs with $5$ vertices or less are good. If graph $G$ has connected components $G_1,G_2,\ldots,G_m$ then $t(G,g) = \prod_{i=1}^m t(G_i,g)$. Hence, it is sufficient to consider connected graphs $G$ only. As \[conj:SC\] has been proved for bipartite graphs with $9$ vertices or less, it is sufficient to consider connected simple graphs with at least one odd cycle. The results of \[sec:clique,sec:extra-good\] cover all such graphs with $\leq 5$ vertices. A table of all $5$-vertex graphs that do not have isolated vertices can be found in [@Adams:2008 Figure 6]. Some $6$-vertex graphs are not covered by the results of \[sec:clique,sec:extra-good\], but for almost all of them we were able to prove that they are good by using the Cauchy–Schwarz and Hölder inequalities. The only graph with $6$ vertices that we were unable to prove to be good is the complement of the $5$-edge path. Locally dense graphs {#sec:KNRS} ==================== A simple graph $H$ is called $(\varepsilon,d)$-[*dense*]{} if every subset $X \subseteq V(H)$ of size $|X| \geq \varepsilon |V(H)|$ spans at least $\frac{d}{2} |X|^2$ edges. \[conj:KNRS\] For any graph $G$ and $\delta,d \in (0,1)$, there exists $\varepsilon = \varepsilon(\delta,d,G)$ such that there are at least $(d^{{\rm e}(G)} - \delta) {\rm v}(H)^{{\rm v}(G)}$ homomorphisms of $G$ into any sufficiently large $(\varepsilon,d)$-dense graph $H$. When $\chi(G)=2$, \[conj:KNRS\] follows from \[conj:SC\]. It is known that \[conj:KNRS\] holds for complete graphs and multipartite complete graphs (see[@Kohayakawa:2010; @Reiher:2014]). Christian Reiher [@Reiher:2014] proved \[conj:KNRS\] for odd cycles. Joonkyung Lee [@Lee:2019b] proved that adding an edge to a cycle or a tree produces graphs that satisfy the conjecture. He also proved that \[conj:KNRS\] holds for a class of graphs obtained by gluing complete multipartite graphs (or odd cycles) in a tree-like way. All graphs with $5$ vertices or less satisfy the conjecture. 
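The $(\varepsilon,d)$-dense condition above is easy to check mechanically on small graphs; the following brute-force Python sketch (ours, exponential in the number of vertices and intended only to restate the definition) may be a useful reference point.

```python
import math
from itertools import combinations

def is_locally_dense(adj, eps, d):
    """Check the (eps, d)-dense condition for a small graph given by a 0/1 adjacency matrix.

    Every vertex subset X with |X| >= eps * n must span at least (d / 2) * |X|**2 edges.
    """
    n = len(adj)
    for size in range(max(1, math.ceil(eps * n)), n + 1):
        for X in combinations(range(n), size):
            edges = sum(adj[i][j] for i, j in combinations(X, 2))
            if edges < d / 2 * size ** 2:
                return False
    return True

# Example: K_6 with eps = 0.5 and d = 0.5; every X with |X| >= 3
# spans |X|(|X|-1)/2 >= 0.25 * |X|^2 edges, so the check passes.
K6 = [[int(i != j) for j in range(6)] for i in range(6)]
print(is_locally_dense(K6, eps=0.5, d=0.5))    # True
```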
For a nonnegative symmetric $k \times k$ matrix $A$, we define its [*density*]{} $d(A)$ as the minimum of ${\bf x}^{\mathsf{T}} A {\bf x}$ over all nonegative $k$-dimensional vectors ${\bf x}$ with the sum of entries equal to $1$. Clearly, $d(A) \leq {\left\lVertg_A\right\rVert}$. We call a graph $G$ [*density-friendly*]{} if $t(G,g_A) \geq d(A)^{{\rm e}(G)}$ for any $A$. It is easy to see that any density-friendly graph satisfies \[conj:KNRS\], but the converse is not obvious. While \[conj:CI,conj:KNRS\] looks very different, the sets of graphs which are known to satisfy them are surprisingly similar. One can try to build a bridge between these two topics by defining $$c(A) \: = \: \inf_G t(G,g_A)^{1/{\rm e}(G)} \: .$$ Then \[conj:CI\] claims that $c(A)={\left\lVertg_A\right\rVert}$ for any doubly nonnegative matrix $A$, and \[conj:KNRS\] claims that $c(A) \geq d(A)$ for any nonnegative symmetric matrix $A$. Functions of $r$ variables {#sec:multivar} ========================== Let $G$ be an $r$-uniform hypergraph with vertex set $V(G)=\{v_1,v_2,\ldots,v_n\}$ and edge set $E(G)$ (edges are $r$-element subsets of the vertex set). Let $g$ be a bounded symmetric measurable nonnegative function defined on $[0,1]^r$. Denote ${\left\lVertg\right\rVert} = \int_{[0,1]^r} g d\mu^r$ and $$t(G,g) \; = \; \int_{[0,1]^n} \prod_{\{v_{i_1},v_{i_2},\ldots,v_{i_r}\}\in E(G)} g(x_{i_1},x_{i_2},\ldots,x_{i_r}) \: d \mu^n \; .$$ Characterize functions $g$ such that $$\label{eq:CIr} t(G,g) \; \geq \; {\left\lVertg\right\rVert}^{|E(G)|}$$ holds for every $r$-uniform hypergraph $G$. When $r=1$, it is obvious that \[eq:CIr\] holds for any nonnegative function $h$ on $[0,1]$. The [*incidence graph*]{} of an $r$-uniform hypergraph $G$ is a bipartite graph ${\rm Inc}(G)$ with vertex sets $V(G)$ and $E(G)$, where $v \in V(G)$ and $e \in E(G)$ form an edge $\{v,e\}$ in ${\rm Inc}(G)$ if and only if $v \in e$ in $G$. If there is a function $h\in\mathcal{H}$ such that $$\label{eq:multivar} g(x_1,x_2,\ldots,x_r) \; = \; \int_{[0,1]} \prod_{i=1}^r h(x_i,y) \: d \mu(y) \: ,$$ then $t(G,g) = t({\rm Inc}(G),h)$. Similarly to \[th:sub\], if ${\rm Inc}(G)$ satisfies \[conj:SC\], then \[eq:CIr\] holds for functions $g$ that have representation \[eq:multivar\] with nonnegative $h$. Similarly to \[th:norming\], if ${\rm Inc}(G)$ is norming (it requires $r$ to be even), then \[eq:CIr\] holds for functions $g$ that have representation \[eq:multivar\], where $h\in\mathcal{H}$ can take negative values. [**Acknowledgement.**]{} The author would like to thank Joonkyung Lee for his valuable comments and suggestions. [99]{} P. Adams, D. Bryant, and M. Buchanan. A survey on the existence of [$G$]{}-designs. *J. Combin. Designs*, 16(5):373–410, 2008. [[`doi:10.1002/jcd.20170`](http://dx.doi.org/10.1002/jcd.20170)]{}. F. V. Atkinson, G. A. Watterson, and P. A. D. Moran. A matrix inequality. *Quarterly J. of Math.*, 11(42):137–140, 1960. [[`doi:10.1093/qmath/11.1.137`](http://dx.doi.org/10.1093/qmath/11.1.137)]{}. A. Berman and N. Shaked-Monderer. *Completely Positive Matrices*, World Scientific, 2003. [[`doi:10.1142/5273`](http://dx.doi.org/10.1142/5273)]{}. G. R. Blakley and P. Roy. Hölder type inequality for symmetric matrices with nonnegative entries. *Proc. Amer. Math. Soc.*, 16(6):1244–1245, 1965. [[`doi:10.1090/S0002-9939-1965-0184950-9`](http://dx.doi.org/10.1090/S0002-9939-1965-0184950-9)]{}. A. Blinco. Theta graphs, graph decompositions and related graph labelling techniques. *Bull. Austral. Math. Soc.*, 69(1):173–175, 2004. 
[[`doi:10.1017/S0004972700034377`](http://dx.doi.org/10.1017/S0004972700034377)]{}. D. Conlon, J. Fox, and B. Sudakov. An approximate version of Sidorenko’s conjecture. *Geom. Funct. Anal.*, 20(6):1354–1366, 2010. [[`doi:10.1007/s00039-010-0097-0`](http://dx.doi.org/10.1007/s00039-010-0097-0)]{}. D. Conlon, J. H. Kim, C. Lee, and J. Lee. Some advances on Sidorenko’s conjecture. *J. London Math. Soc.*, 98(3):593–608, 2018. [[`doi:10.1112/jlms.12142`](http://dx.doi.org/10.1112/jlms.12142)]{}. D. Conlon and J. Lee. Finite reflection groups and graph norms. *Adv. Math.*, 315(31):130–165, 2017. [[`doi:10.1016/j.aim.2017.05.009`](http://dx.doi.org/10.1016/j.aim.2017.05.009)]{}. D. Conlon and J. Lee. Sidorenko’s conjecture for blow-ups. [[`arXiv:1809.01259`](http://arxiv.org/abs/1809.01259)]{}, 2018. H. Hatami. Graph norms and Sidorenko’s conjecture. *Israel J. Math.*, 175(1):125–150, 2010. [[`doi:10.1007/s11856-010-0005-1`](http://dx.doi.org/10.1007/s11856-010-0005-1)]{}. C. Jagger, P. Šťoviček, and A. Thomason. Multiplicities of subgraphs. *Combinatorica*, 16(1):123–141, 1996. [[`doi:10.1007/BF01300130`](http://dx.doi.org/10.1007/BF01300130)]{}. J. H. Kim, C. Lee, and J. Lee. Two approaches to Sidorenko’s conjecture. *Trans. Amer. Math. Soc.*, 368(7):5057–5074, 2016. [[`doi:10.1090/tran/6487`](http://dx.doi.org/10.1090/tran/6487)]{}. Y. Kohayakawa, B. Nagle, V. Rödl, and M. Schacht. Weak hypergraph regularity and linear hypergraphs. *J. Combin. Theory Ser. B*, 100(2):151–160, 2010. [[`doi:10.1016/j.jctb.2009.05.005`](http://dx.doi.org/10.1016/j.jctb.2009.05.005)]{}. J. Lee. On some graph densities in locally dense graphs. [[`arXiv:1707.02916`](http://arxiv.org/abs/1707.02916)]{}, 2019. J. L. X. Li and B. Szegedy. On the logarithmic calculus and Sidorenko’s conjecture. [[`arXiv:1107.1153`](http://arxiv.org/abs/1107.1153)]{}, 2011. L. Lovász. *Large networks and graph limits*, volume 60 of *Colloquium Publications*. AMS, 2012. [[`doi:10.1090/coll/060`](http://dx.doi.org/10.1090/coll/060)]{}. L. Lovász. Subgraph densities in signed graphons and the local Sidorenko conjecture. *Electr. J. Combin.*, 18(1) \#P127, 2011. H. P. Mulholland and C. A. B. Smith. An inequality arising in genetical theory. *Amer. Math. Monthly*, 66(8):673–683, 1959. [[`doi:10.2307/2309342`](http://dx.doi.org/10.2307/2309342)]{}. O. Parczyk. On Sidorenko’s conjecture, Master’s thesis, Freie Universität Berlin, 2014. <http://www.uni-frankfurt.de/58522166>. C. Reiher. Counting odd cycles in locally dense graphs. *J. Combin. Theory Ser. B*, 105:1–5, 2014. [[`doi:10.1016/j.jctb.2013.12.002`](http://dx.doi.org/10.1016/j.jctb.2013.12.002)]{}. A. Sidorenko. Inequalities for functionals generated by bipartite graphs. *Discrete Math. Appl.*, 2(5):489–504, 1992. [[`doi:10.1515/dma.1992.2.5.489`](http://dx.doi.org/10.1515/dma.1992.2.5.489)]{}. A. Sidorenko. A correlation inequality for bipartite graphs. *Graphs and Combinatorics*, 9(2):201–204, 1993. [[`doi:10.1007/BF02988307`](http://dx.doi.org/10.1007/BF02988307)]{}. A. Sidorenko. An analytic approach to extremal problems for graphs and hypergraphs. In *Extremal Problems for Finite Sets (Visegrád, 1991)*, volume 3 of *Bolyai Soc. Math. Stud.*, pages 423–455. János Bolyai Math. Soc., Budapest, 1994. B. Szegedy. An information theoretic approach to Sidorenko’s conjecture. [[`arXiv:1406.6738`](http://arxiv.org/abs/1406.6738)]{}, 2014. B. Szegedy. Sparse graph limits, entropy maximization and transitive graphs. [[`arXiv:1504.00858`](http://arxiv.org/abs/1504.00858)]{}, 2015. 
Proofs of \[th:multipath,th:cycle\] =================================== The proof would be easy if all $k_i$ were even: then we could use the Cauchy–Schwarz inequality and the fact that the tree formed by paths of length $k_1/2,k_2/2,\ldots,k_r/2$ connected at one endpoint is an extra-good graph. To deal with odd values of $k_i$, we are going to subdivide the middle edge of the path. We assume that $k_1,\ldots,k_s$ are odd, and $k_{s+1},\ldots,k_r$ are even ($0 \leq s \leq r$). We denote the vertices of the $i$th path by $v_{i0},v_{i1},\ldots,v_{ik_i}$. Gluing together vertices $v_{i0}$ for all $i$ produces vertex $v_0$. Gluing together vertices $v_{ik_i}$ produces vertex $v_\infty$. The total number of vertices is $n = 2 + \sum_{i=1}^r (k_i - 1)$, and the number of edges is $m = \sum_{i=1}^r k_i$. Let $g$ be a doubly nonnegative function: $g(x,y) = \int_{[0,1]} h(x,z)h(y,z) d\mu(z)$. Let $f_0,f_\infty,f_{ij}$ ($1 \leq i \leq r$, $1 \leq j \leq k_i-1$) be bounded measurable nonnegative functions on $[0,1]$. Let $$f \; = \; \left( f_0 \cdot f_\infty \cdot \prod_{i=1}^r \prod_{j=1}^{k_i-1} f_{ij} \right)^{1/(2m)}$$ For $i=1,2,\ldots,r$, denote $$P_i \; = \; g(x_0,x_{i1}) \left( \prod_{j=2}^{k_i-1} g(x_{i,j-1},x_{i,j}) \right) g(x_{i,k_i-1},x_\infty) \prod_{j=1}^{k_i-1} f_{i,j}(x_{i,j}) \; .$$ We need to prove $I \geq J^m$, where $$I = \int_{[0,1]^n} f_0(x_0) f_\infty(x_\infty) \prod_{i=1}^r P_i \: d\mu^n , \;\;\;\;\;\; J = \int_{[0,1]^2} f(x) g(x,y) f(y) \: d\mu^2 .$$ We can assume that $f_0=f_\infty$, and $f_{i,j} = f_{i,k_i-j}$ for all $i$ and $j$. Indeed, we could swap $f_0$ with $f_\infty$, and $f_{i,j}$ with $f_{i,k_i-j}$ for all $i,j$ (this would not change the value of $I$), and then apply the Cauchy–Schwarz inequality to the geometric mean of both expressions for $I$. For $i=1,2,\ldots,s$ (that is, when $k_i$ is odd), we can add a variable $z_i$ and replace $g(x_{i,(k_i-1)/2},\: x_{i,(k_i+1)/2}) $ in the expression for $P_i$ with the product $ h(x_{i,(k_i-1)/2},z_i) \: h(x_{i,(k_i+1)/2},z_i) \; $: $$\begin{gathered} P_i \; = \; g(x_0,x_{i,1}) \left( \prod_{j=2}^{(k_i-1)/2} g(x_{i,j-1},x_{i,j}) \right) h(x_{i,(k_i-1)/2},z_i) \: h(x_{i,(k_i+1)/2},z_i) \\ \times \left( \prod_{j=(k_i+3)/2}^{k_i-1} g(x_{i,j-1},x_{i,j}) \right) g(x_{i,k_i-1},x_\infty) \prod_{j=1}^{k_i-1} f_{i,j}(x_{i,j}) \; .\end{gathered}$$ Now in the expression for $I$ we have to integrate over $n+s$ variables. Define for $i \leq s$, $$Q_i \; = \; g(x_0,x_{i1}) \prod_{j=2}^{(k_i-1)/2} g(x_{i,j-1},x_{i,j}) \; h(x_{i,(k_i-1)/2},z_i) \prod_{j=1}^{(k_i-1)/2} f_{i,j}(x_{i,j}) \: ,$$ and for $i > s$, $$Q_i \; = \; g(x_0,x_{i,1}) \prod_{j=2}^{k_i/2} g(x_{i,j-1},x_{i,j}) \prod_{j=1}^{k_i/2-1} f_{i,j}(x_{i,j}) \; \left(f_{i,k_i/2}(x_{i,k_i/2})\right)^{1/2} .$$ Let $A(z_1,\ldots,z_s,\: x_{s+1,k_{s+1}/2},\ldots,x_{r,k_r/2})$ denote the integral of $f_0(x_0) \prod_{i=1}^r Q_i$ over all variables except $z_1,\ldots,z_s,\: x_{s+1,k_{s+1}/2},\ldots,x_{r,k_r/2}$. Set $$\begin{gathered} B(y_1,\ldots,y_s,\: x_{s+1,k_{s+1}/2},\ldots,x_{r,k_r/2}) \\ = \; \int_{[0,1]^s} \!
A(z_1,\ldots,z_s,\: x_{s+1,k_{s+1}/2},\ldots,x_{r,k_r/2}) \prod_{i=1}^s h(y_i,z_i) \: d\mu(z_1) \cdots d\mu(z_s) \: .\end{gathered}$$ Then $$I \; = \; \int_{[0,1]^r} A(z_1,\ldots,z_s,\: x_{s+1,k_{s+1}/2},\ldots,x_{r,k_r/2})^2 d\mu^r \: ,$$ and $$\begin{gathered} I \cdot J^s \; = \; I \cdot \int_{[0,1]^s} \prod_{i=1}^s \left( \int_{[0,1]} f(y_i) \: h(y_i,z_i) \: d\mu(y_i) \right)^2 d\mu(z_1) \cdots d\mu(z_s) \\ \geq \; \left( \int_{[0,1]^r} B(y_1,\ldots,y_s,\: x_{s+1,k_{s+1}/2},\ldots,x_{r,k_r/2}) \: d\mu^r \right)^2 .\end{gathered}$$ Notice that $\int_{[0,1]^r} B d\mu^r$ is the left hand side of \[eq:extra-good\] for the tree $G$ formed by paths of length $(k_1+1)/2,\ldots,(k_s+1)/2,\: k_{s+1}/2,\ldots,k_r/2$ connected at one endpoint. By \[th:unicyclic\], $G$ is extra-good, so $\int_{[0,1]^r} B d\mu^r \geq J^{{\rm e}(G)}$. As ${\rm e}(G) = (m+s)/2$, we get $$I \; \geq \; J^{-s} \left(\int_{[0,1]^r} B d\mu^r\right)^2 \; \geq \; J^m \; .$$ Let $G_0$ be formed by attaching a $k$-edge cycle $(v_0,v_1,\ldots,v_{k-1},v_k=v_0)$ to a vertex $v$ of graph $G$, so $v=v_0=v_k$. Let $g$ be a doubly nonnegative function. There exists $h \in {\mathcal H}$ such that $g(x,y) = \int_{[0,1]} h(x,z)h(y,z) d\mu(z)$. Assign bounded measurable nonnegative functions on $[0,1]$ to all vertices of $G$, and denote by $F(x)$ the integral on the left hand side of \[eq:extra-good\] taken over all variables except the one that corresponds to $v$. Assign bounded measurable nonnegative functions $f_1,f_2,\ldots,f_{k-1}$ to vertices $v_1,v_2,\ldots,v_{k-1}$. For functions $\gamma_1,\gamma_2,\ldots,\gamma_r$, denote $$I_r(x_0,x_r; \gamma_1,\gamma_2,\ldots,\gamma_r) \: = \int_{[0,1]^{r-1}} \prod_{i=1}^r \left( g(x_{i-1},x_i) \: \gamma_i(x_i) \right) d\mu(x_1) \cdots d\mu(x_{r-1}) .$$ Let $I_{G_0}$ be the value of the integral on the left hand side of \[eq:extra-good\] for $G_0$. Then $$I_{G_0} \: = \int_{[0,1]} F(x) \: I_k(x,x; f_1,f_2,\ldots,f_{k-1},1) \: d\mu(x) \: .$$ Let $f_0$ be the product of all ${\rm v}(G_0)$ functions assigned to vertices of $G_0$. We need to prove $I_{G_0} \geq J^{{\rm e}(G_0)}$, where $J=\int_{[0,1]^2} f(x)g(x,y)f(y)d\mu^2$ and $f=(f_0)^{1/(2{\rm e}(G_0))}$. Set $\gamma_i = \sqrt{f_i f_{k-i}}$, so $\gamma_{k-i} = \gamma_i$. By the Cauchy–Schwarz inequality, $$\begin{gathered} I_k(x,x; f_1,f_2,\ldots,f_{k-1},1)^2 \\ = \: I_k(x,x; f_1,f_2,\ldots,f_{k-1},1) \cdot I_k(x,x; f_{k-1},f_{k-2},\ldots,f_1,1) \\ \geq \: I_k(x,x; \gamma_1,\gamma_2,\ldots,\gamma_{k-1},1)^2 \: .\end{gathered}$$ If $k=2a$, then $$\begin{aligned} I_{2a}(x,x; \gamma_1,\ldots,\gamma_{2a-1},1) & = & \int_{[0,1]} I_a(x,y; \gamma_1,\ldots,\gamma_{a-1},\sqrt{\gamma_a})^2 \: d\mu(y) \\ & \geq & \left( \int_{[0,1]} I_a(x,y; \gamma_1,\ldots,\gamma_{a-1},\sqrt{\gamma_a}) \: d\mu(y) \right)^2 .\end{aligned}$$ Construct graph $G_a$ from graph $G$ by attaching two disjoint $a$-edge paths to vertex $v$. By \[th:leaf\], $G_a$ is extra-good. Assign functions $\gamma_1,\ldots,\gamma_{a-1}, \linebreak \sqrt{\gamma_a}$ to vertices of each of the two paths. Let $I_{G_a}$ be the value of the integral on the left hand side of \[eq:extra-good\] for $G_a$. Then $$I_{G_a} \: = \int_{[0,1]} F(x) \left( \int_{[0,1]} I_a(x,y; \gamma_1,\ldots,\gamma_{a-1},\sqrt{\gamma_a}) \: d\mu(y) \right)^2 d\mu(x) \: \leq \: I_{G_0} \: .$$ The right hand side of \[eq:extra-good\] is the same for both $G_0$ and $G_a$, so we are done with the case $k=2a$. 
If $k=2a+1$, we have $$\begin{gathered} I_{2a+1}(x,x; \gamma_1,\ldots,\gamma_{2a},1) \\ = \int_{[0,1]^2} I_a(x,y'; \gamma_1,\ldots,\gamma_a) \: g(y',y'') \: I_a(x,y''; \gamma_{2a},\ldots,\gamma_{a+1}) \: d\mu(y') d\mu(y'') \\ = \int_{[0,1]} \left( I_a(x,y; \gamma_1,\ldots,\gamma_a) \: h(y,z) \: d\mu(y) \right)^2 d\mu(z) \: ,\end{gathered}$$ and $$J \: = \int_{[0,1]^2} f(s') \: g(s',s'') \: f(s'') \: d\mu^2 = \int_{[0,1]} \left( \int_{[0,1]} h(s,z) \: f(s) \: d\mu(s) \right)^2 d\mu(z) \: .$$ Hence, by the Cauchy–Schwarz inequality, $$\begin{gathered} I_{2a+1}(x,x; \gamma_1,\ldots,\gamma_{2a},1) \cdot J \: \geq \\ \geq \left( \int_{[0,1]^3} I_a(x,y; \gamma_1,\ldots,\gamma_a) \: h(y,z) \: h(s,z) \: f(s) \: d\mu(y) d\mu(z) d\mu(s) \right)^2 \\ = \left( \int_{[0,1]} I_{a+1}(x,s; \gamma_1,\ldots,\gamma_a,f) \: d\mu(s) \right)^2 .\end{gathered}$$ Construct $G_{a+1}$ from $G$ by attaching two disjoint $(a+1)$-edge paths to vertex $v$. By \[th:leaf\], $G_{a+1}$ is extra-good. Assign functions $\gamma_1,\ldots,\gamma_a,f$ to vertices of each of the two paths. Let $I_{G_{a+1}}$ be the value of the integral on the left hand side of \[eq:extra-good\] for $G_{a+1}$. Then $$I_{G_{a+1}} \: = \int_{[0,1]} F(x) \left( \int_{[0,1]} I_{a+1}(x,s; \gamma_1,\ldots,\gamma_a,f) \: d\mu(s) \right)^2 d\mu(x) \: \leq \: I_{G_0} \cdot J \: .$$ The right hand side of \[eq:extra-good\] is equal to $J^{{\rm e}(G_0)}$ for $G_0$, and $J^{{\rm e}(G_{a+1})} = J \cdot J^{{\rm e}(G_0)}$ for $G_{a+1}$. As \[eq:extra-good\] holds for $G_{a+1}$, it holds for $G_0$, too.
{ "pile_set_name": "ArXiv" }
--- address: | $^{a}$ Physics Department, University of Athens, Athens, Greece\ $^{b}$ Dipartimento IA di Fisica dell’Universit[à]{} e del Politecnico di Bari and INFN, Bari, Italy\ $^{c}$ Fysisk Institutt, Universitetet i Bergen, Bergen, Norway\ $^{d}$ H[ø]{}gskolen i Bergen, Bergen, Norway\ $^{e}$ University of Birmingham, Birmingham, UK\ $^{f}$ Comenius University, Bratislava, Slovakia\ $^{g}$ University of Catania and INFN, Catania, Italy\ $^{h}$ CERN, European Laboratory for Particle Physics, Geneva, Switzerland\ $^{i}$ Institute of Experimental Physics, Slovak Academy of Science, Košice, Slovakia\ $^{j}$ P.J. Šafárik University, Košice, Slovakia\ $^{k}$ Fysisk Institutt, Universitetet i Oslo, Oslo, Norway\ $^{l}$ University of Padua and INFN, Padua, Italy\ $^{m}$ Collège de France, Paris, France\ $^{n}$ Institute of Physics, Prague, Czech Republic\ $^{o}$ University “La Sapienza” and INFN, Rome, Italy\ $^{p}$ Dipartimento di Scienze Fisiche “E.R. Caianiello” dell’Universit[à]{} and INFN, Salerno, Italy\ $^{q}$ State University of St. Petersburg, St. Petersburg, Russia\ $^{r}$ IReS/ULP, Strasbourg, France\ $^{s}$ Utrecht University and NIKHEF, Utrecht, The Netherlands author: - | T. Virgili, for the NA57 collaboration\ The NA57 Collaboration:\ F Antinori$^{l}$, P Bacon$^{e}$, A Badal[à]{}$^{g}$, R Barbera$^{g}$, A Belogianni$^{a}$, A Bhasin$^{e}$, I J Bloodworth$^{e}$, M Bombara$^{i}$, G E Bruno$^{b}$, S A Bull$^{e}$, R Caliandro$^{b}$, M Campbell$^{h}$, W Carena$^{h}$, N Carrer$^{h}$, R F Clarke$^{e}$, A Dainese$^{l}$, A P de Haas$^{s}$, P C de Rijke$^{s}$, D Di Bari$^{b}$, S Di Liberto$^{o}$, R Divià$^{h}$, D Elia$^{b}$, D Evans$^{e}$, G A Feofilov$^{q}$, R A Fini$^{b}$, P Ganoti$^{a}$, B Ghidini$^{b}$, G Grella$^{p}$, H Helstrup$^{d}$, K F Hetland$^{d}$, A K Holme$^{k}$, A Jacholkowski$^{g}$, G T Jones$^{e}$, P Jovanovic$^{e}$, A Jusko$^{e}$, R Kamermans$^{s}$, J B Kinson$^{e}$, K Knudson$^{h}$, A A Kolozhvari$^{q}$, V Kondratiev$^{q}$, I Králik$^{i}$, A Kravčáková$^{j}$, P Kuijer$^{s}$, V Lenti$^{b}$, R Lietava$^{e}$, G Løvhøiden$^{k}$, V Manzari$^{b}$, G Martinská$^{j}$, M A Mazzoni$^{o}$, F Meddi$^{o}$, A Michalon$^{r}$, M Morando$^{l}$, E Nappi$^{b}$, F Navach$^{b}$, P I Norman$^{e}$, A Palmeri$^{g}$, G S Pappalardo$^{g}$, B Pastirčák$^{i}$, J Pišút$^{f}$, N Pišútová$^{f}$, R J Platt$^{e}$, F Posa$^{b}$, E Quercigh$^{l}$, F Riggi$^{g}$, D Röhrich$^{c}$, G Romano$^{p}$, K Šafařík$^{h}$, L Šándor$^{i}$, E Schillings$^{s}$, G Segato$^{l}$, M Sené$^{m}$, R Sené$^{m}$, W Snoeys$^{h}$, F Soramel$^{l}$ [^1], M Spyropoulou-Stassinaki$^{a}$, P Staroba$^{n}$, T A Toulina$^{q}$, R Turrisi$^{l}$, T S Tveter$^{k}$, J Urbán$^{j}$, F F Valiev$^{q}$, A van den Brink$^{s}$, P van de Ven$^{s}$, P Vande Vyvre$^{h}$, N van Eijndhoven$^{s}$, J van Hunen$^{h}$, A Vascotto$^{h}$, T Vik$^{k}$, O Villalobos Baillie$^{e}$, L Vinogradov$^{q}$, T Virgili$^{p}$, M F Votruba$^{e}$, J Vrláková$^{j}$ and P Závada$^{n}$. title: 'Recent results from NA57 on strangeness production in p-A and Pb-Pb collisions at 40 and 158 $A$ GeV/c' --- Introduction ============ The experimental programme with heavy-ion beams at CERN SPS aims at the study of hadronic matter under extreme conditions of temperature, pressure and energy density. NA57 at the CERN SPS is a dedicated second-generation experiment for the study of the production of strange and multi-strange particles in Pb-Pb and p-Be collisions [@NA57proposal]. In this paper we present results on strangeness enhancements at 40 $A$ GeV/$c$ and 158 $A$ GeV/$c$. 
A study of the transverse mass ($m_{\tt T}=\sqrt{p_{\tt T}^2+m^2}$) spectra for $\Lambda$, $\Xi$, $\Omega$ hyperons, their antiparticles and $K^0_s$ measured in Pb-Pb collisions at 158 $A$ GeV/$c$ is also discussed. The multiplicity of charged particles in the central rapidity region has been measured in Pb–Pb collisions at both beam momenta: 158 A GeV/[*c*]{} and 40 A GeV/[*c*]{}. The value of $dN_{ch}/d\eta$ at the maximum and its behaviour as a function of centrality are presented here for the first time. Analysis and results ==================== The NA57 apparatus has been described in detail elsewhere [@MANZ]. The strange particle signals are extracted by reconstructing the weak decays into final states containing only charged particles, using geometric and kinematic constraints, with a method similar to that used in the WA97 experiment [@WA97PhysLettB433]. For each particle species we define the fiducial acceptance window using a Monte Carlo simulation of the apparatus and excluding the border regions. All data are corrected for geometrical acceptance and for detector and reconstruction inefficiencies on an event-by-event basis, with the procedure described in reference [@QM02Manzari]. Multiplicity measurement ------------------------ The procedure for the measurement of the multiplicity distribution and the determination of the collision centrality for each class is described in reference [@Multiplicity]. As a measure of the collision centrality we use the number of wounded nucleons $N_{wound}$. The distribution of the charged particle multiplicity measured in Pb-Pb interactions has been divided into five centrality classes (0,1,2,3,4), class 0 being the most peripheral and class 4 being the most central. The fractions of the inelastic cross section for the five classes are given in table \[tab:InvMSD\]. The charged multiplicity measured in the central unit of pseudorapidity $\eta$ is also used to determine the maximum of the pseudorapidity distribution ($dN_{ch}/d\eta|_{max}$). This is the variable most frequently used to characterize the multiplicity of the interaction; $dN_{ch}/d\eta|_{max}$ is about 2% larger than the charged multiplicity in the central unit of $\eta$. In fig. \[fig:mult1\] the values of $<dN_{ch}/d\eta |_{max}>$ are reported as a function of $N_{wound}$ for both 40 GeV/c (right) and 158 GeV/c (left) beam momenta. In the same figure are reported the values measured by the NA50 [@NA50] and NA49 [@NA49] collaborations. At 158 $A$ GeV/[*c*]{} a reasonable agreement is observed, with a small discrepancy for the most central classes. At 40 $A$ GeV/[*c*]{} a strong disagreement among the three experiments is observed. The numbers of participants for a given fraction of the total inelastic cross-section determined by the three experiments are similar. \ In proton–proton collisions, the charged multiplicity at central rapidity is found to scale approximately with the logarithm of the centre of mass energy [@Eskola]. Assuming the same dependence one would expect: $dN_{ch}/d\eta |_{max}$(158 $A$ GeV/[*c*]{})/$dN_{ch}/d\eta |_{max}$ (40 $A$ GeV/[*c*]{})$\simeq \ln (17.3)/\ln (8.77)$=1.31. The value measured in NA57 for the most central class is 1.37$\pm$0.05.
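The logarithmic-scaling estimate quoted above can be reproduced from the beam momenta alone. The short Python sketch below is only an illustration and is not part of the NA57 analysis; it uses the standard fixed-target expression $s_{NN}=2m_N^2+2m_N E_{lab}$ with $m_N\simeq 0.938$ GeV$/c^2$, and the function name is an arbitrary choice.

```python
import math

m_N = 0.938  # nucleon mass in GeV/c^2

def sqrt_s_NN(p_lab):
    """NN centre-of-mass energy (GeV) for a fixed-target beam momentum p_lab in GeV/c."""
    E_lab = math.sqrt(p_lab ** 2 + m_N ** 2)
    return math.sqrt(2 * m_N ** 2 + 2 * m_N * E_lab)

s_158 = sqrt_s_NN(158.0)   # ~17.3 GeV
s_40 = sqrt_s_NN(40.0)     # ~8.8 GeV
ratio = math.log(s_158) / math.log(s_40)
print(f"sqrt(s_NN) = {s_158:.1f} GeV and {s_40:.1f} GeV, expected ratio = {ratio:.2f}")
# Prints a ratio close to 1.31, to be compared with the measured 1.37 +/- 0.05.
```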
Transverse mass spectra in Pb-Pb at 158 $A$ GeV/$c$ --------------------------------------------------- The double-differential $(y,m_{\tt T})$ distributions for each of the measured particle species can be parametrized using the expression $$\label{eq:expo} \frac{d^2N}{m_{\tt T}\,dm_{\tt T} dy}=f(y) \hspace{1mm} \exp\left(-\frac{m_{\tt T}}{T_{app}}\right).$$ Assuming the rapidity distribution to be flat within our acceptance region ($f(y)={\rm const}$), the inverse slope parameters $T_{app}$ (“apparent temperature”) have been extracted by means of maximum likelihood fits of equation \[eq:expo\] to the data. The $1/m_{\tt T} \, dN/dm_{\tt T} $ distributions are well described by exponential functions [@BlastPaper]. The inverse slope parameters $T_{app}$ are given in table \[tab:InvMSD\] as a function of centrality, which is expressed for Pb-Pb interactions in terms of % of inelastic cross section. An increase of $T_{app}$ with centrality is observed in Pb-Pb for $\Lambda$, $\Xi$ and possibly also for $\bar\Lambda$. Inverse slopes for p-Be and p-Pb collisions [@Slope-p] are also given in table \[tab:InvMSD\]. In central and semi-central Pb-Pb collisions (i.e. classes 1 to 4) the baryon and antibaryon $m_{\tt T}$ distributions have similar slopes. This suggests that strange baryons and antibaryons are produced by a similar mechanism. Within the blast-wave model [@BlastRef] the apparent temperature can be interpreted as due to the thermal motion coupled with a collective transverse flow of the fireball. The model predicts a double differential cross-section of the form: $$\frac{d^2N_j}{m_{\tt T} dm_{\tt T} dy} = \mathcal{A}_j \int_0^{R_G}{ m_{\tt T} K_1\left( \frac{m_{\tt T} \cosh \rho}{T} \right) I_0\left( \frac{p_{\tt T} \sinh \rho}{T} \right) r \, dr} \label{eq:Blast}$$ where $\rho(r)=\tanh^{-1} \beta_{\perp}(r)$ is a transverse boost, $K_1$ and $I_0$ are modified Bessel functions, $R_G$ is the transverse geometric radius of the source at freeze-out and $\mathcal{A}_j$ is a normalization constant. The transverse velocity field $\beta_{\perp}(r)$ has been parametrized according to a power law: $$\beta_{\perp}(r) = \beta_S \left[ \frac{r}{R_G} \right]^{n} \quad \quad \quad r \le R_G \label{eq:profile}$$ With this type of profile the numerical value of $R_G$ does not influence the shape of the spectra but just the absolute normalization (i.e. the $\mathcal{A}_j$ constant). The parameters which can be extracted from a fit of equation \[eq:Blast\] to the experimental spectra are thus the thermal freeze-out temperature $T$ and the [*surface*]{} transverse flow velocity $\beta_S$. Assuming a uniform particle density, the latter can be replaced by the [*average*]{} transverse flow velocity, $ <\beta_{\perp}> = \frac{2}{2+n} \beta_S $ [@BlastPaper]. The use of the three profiles $n=0$, $n=1/2$ and $n=1$ results in similar values of the freeze-out temperatures and of the average transverse flow velocities, with good values of $\chi^2/ndf$. The quadratic profile is disfavoured by our data [@BlastPaper]. The global fit of equation \[eq:Blast\] with $n=1$ to the spectra of all the measured strange particles describes the data with $\chi^2/ndf=37.2/48$, yielding the following values for the two parameters $T$ and $ <\beta_\perp>$ for the most central class: $ T = 118 \pm 13 {\rm MeV} \, , \quad <\beta_\perp>=0.45 \pm 0.02$. The $T$ and $<\beta_\perp>$ parameters are statistically anti-correlated.
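To illustrate how equation \[eq:Blast\] is evaluated in practice, the following Python sketch computes the un-normalised blast-wave spectrum for a linear velocity profile ($n=1$) with the fitted values $T=118$ MeV and $<\beta_\perp>=0.45$. This is not the NA57 fitting code: the particle mass, the radial grid and the omission of the normalisation constant $\mathcal{A}_j$ are simplifying assumptions made only for illustration.

```python
import numpy as np
from scipy.special import i0, k1

T = 0.118                          # freeze-out temperature in GeV
beta_avg = 0.45                    # average transverse flow velocity
n = 1                              # linear velocity profile
beta_S = 0.5 * (2 + n) * beta_avg  # surface velocity, since <beta> = 2/(2+n) * beta_S
m = 1.116                          # illustrative particle mass (Lambda) in GeV/c^2

def blast_wave(mT, n_r=200):
    """Un-normalised d^2N/(mT dmT dy) of the blast-wave formula, with R_G = 1."""
    pT = np.sqrt(max(mT ** 2 - m ** 2, 0.0))
    r = (np.arange(n_r) + 0.5) / n_r           # radial grid on [0, 1]
    rho = np.arctanh(beta_S * r ** n)          # transverse boost profile rho(r)
    integrand = mT * k1(mT * np.cosh(rho) / T) * i0(pT * np.sinh(rho) / T) * r
    return integrand.mean()                    # midpoint approximation of the r-integral

for mT in np.linspace(m, m + 1.5, 7):          # GeV
    print(f"mT = {mT:.3f} GeV  ->  spectrum ~ {blast_wave(mT):.3e}")
```

Since $R_G$ only affects the overall normalisation, setting it to $1$ does not change the shape of the computed spectrum.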
The systematic errors on $T$  and $<\beta_{\perp}>$  are correlated; they are estimated to be $10\%$ and $3\%$, respectively. Strangeness enhancement ----------------------- By using equation \[eq:expo\] we can extrapolate the yield measured in the selected acceptance window to a common phase space window covering full $p_{\tt T}$ and one unit of rapidity centered at midrapidity: $$Y=\int_{m}^{\infty} {\rm d}m_{\tt T} \int_{y_{cm}-0.5}^{y_{cm}+0.5} {\rm d}y \frac{{\rm d}^2N}{{\rm d}m_{\tt T} {\rm d}y}. \label{eq:yield}$$ The enhancement $E$  is defined as $$E={\left( \frac{Y}{<N_{wound}>} \right)_{Pb-Pb}} / { \left( \frac{Y}{<N_{wound}>} \right)_{p-Be} } \label{eq:enh2}$$ In figure \[fig:HypEnh1\] and figure \[fig:HypEnh2\] we show the enhancements as a function of $N_{wound}$  for 158 and 40 $A$ GeV/$c$ respectively. \ The enhancements are shown separately for particles containing at least one valence quark in common with the nucleon (left) and for those with no valence quark in common with the nucleon (right). The 158 $A$ GeV/$c$ results confirm the picture which emerged from WA97 — the enhancement increases with the strangeness content of the hyperon — and extend the measurements to lower centrality. For all the particles except for $\bar\Lambda$ we observe a significant centrality dependence of the enhancements, although a saturation cannot be excluded for the two or three most central classes. A significant enhancement of strangeness production when going from p-Be to Pb-Pb is observed also in the 40 $A$ GeV/$c$ data. For the $\bar\Xi$ particle, due to the limited statistics in p-Be collisions at 40 GeV/$c$, we could estimate only an upper limit to the production yield. This limit for the four most central classes at 95% confidence level is indicated by the arrow in figure \[fig:HypEnh2\] (right). The enhancement pattern follows the same hierarchy with the strangeness content observed at 158 GeV/$c$: $ E(\Lambda) < E(\Xi)$ and $E(\bar\Lambda) < E(\bar\Xi)$. Comparing the measurements at the two beam momenta: for the most central collisions (bins $3$ and $4$) the enhancements are higher at 40 than at 158 GeV/$c$, the increase with $N_{wound}$ is steeper at 40 than at 158 GeV/$c$. Conclusions =========== We have reported an enhanced production of $\Lambda$, $\bar\Lambda$, $\Xi$ and $\bar\Xi$  when going from p-Be to Pb-Pb collisions at 40 $A$ GeV/$c$. The enhancement pattern follows the same hierarchy with the strangeness content as at 158 GeV/$c$: $ E(\Lambda) < E(\Xi)$, $ E(\bar\Lambda) < E(\bar\Xi)$. For central collisions (classes $3$ and $4$) the enhancement is larger at 40 GeV/$c$. In Pb-Pb collisions the hyperon yields increase with $N_{wound}$ faster at 40 than at 158 $A$ GeV/$c$. The analysis of the transverse mass spectra at 158 $A$ GeV/$c$  in the framework of the blast-wave model suggests that after a central collision the system expands explosively and then it freezes-out when the temperature is of the order of 120 MeV with an average transverse flow velocity of about one half of the speed of light. Finally, the measurements of the charged particle multiplicity indicate that $dN_{ch}/d\eta$ at the maximum is close to a logarithmic scaling with the centre of mass energy. [33]{} Caliandro R [*et al.*]{}, NA57 proposal, 1996 [*CERN/SPSLC 96-40, SPSLC/P300*]{}. Manzari V [*et al.*]{} 2001 J. Phys. G [**27**]{} 383; T. Virgili [*et al.*]{} 2001 (NA57 Coll.), Nucl. Phys. A [**681**]{} 165c. Andersen E [*et al.*]{} 1998 Phys. Lett. B [**433**]{} 209; Lietava R [*et al.*]{} 1999 J. 
Phys. G [**25**]{} 181; Fini R A [*et al.*]{} 2001 J. Phys. G [**27**]{} 375. Manzari V [*et al.*]{} 2003 Nucl. Phys. A [**715**]{} 140c. Carrer N [*et al.*]{} 2001 J. Phys. G [**27**]{} 391; Antinori F [*et al.*]{} 2004 submitted to J. Phys. G. M.C. Abreu [*et al.*]{} 2002, Phys. Lett. B [**530**]{} 33; M.C. Abreu [*et al.*]{} 2002, Phys. Lett. B [**530**]{} 43. S.V. Afanasiev [*et al.*]{}, 2002, Phys.Rev. C [**66**]{} 054902; T. Anticic [*et al.*]{} 2004, Phys.Rev. C [**69**]{} 024902. K.J. Eskola, Nucl. Phys. A [**698**]{} (2002) 78. Antinori F [*et al.*]{} 2004 J. Phys. G [**30**]{} 823. Fini R A [*et al.*]{} 2001 Nucl. Phys. A [**681**]{} 141c. Schnedermann E, Sollfrank J and Heinz U 1993 Phys. Rev. C [**48**]{} 2462; Schnedermann E, Sollfrank J and Heinz U 1994 Phys. Rev. C [**50**]{} 1675. [^1]: Permanent address: University of Udine, Udine, Italy
{ "pile_set_name": "ArXiv" }
--- abstract: 'We deal with a random graph model evolving in discrete time steps by duplicating and deleting the edges of randomly chosen vertices. We prove the existence of an a.s. asymptotic degree distribution, with stretched exponential decay; more precisely, the proportion of vertices of degree $d$ tends to some positive number $c_d>0$ almost surely as the number of steps goes to infinity, and $c_d\sim (e\pi)^{1/2} d^{1/4} e^{-2\sqrt d}$ holds as $d\to\infty$.' address: - | Department of Probability Theory and Statistics\ Eötvös Loránd University\ Pázmány P. s. 1/C, H-1117 Budapest, Hungary - | Department of Probability Theory and Statistics\ Eötvös Loránd University\ Pázmány P. s. 1/C, H-1117 Budapest, Hungary author: - Ágnes Backhausz - 'Tamás F. Móri' date: 30 August 2013 title: Asymptotic properties of a random graph with duplications --- [ scale free, duplication, deletion, random graphs, martingales. ]{} Introduction ============ In the last decades, inspired by the examination of large real networks, various types of random graph models with preferential attachment dynamics (meaning that vertices with larger degree have larger chance to get new edges as the graph evolves randomly) were introduced and analysed. After some early work [@yule; @sim; @szym], this started with the seminal papers of Barabási and Albert [@ba 1999] and Bollobás, Riordan, Spencer and Tusnády [@bb 2001]. Among many others, we may mention the model of Cooper and Frieze for the Internet [@cf 2003] or that of Sridharan, Yong Gao, Kui Wu and Nastos [@sridharan 2011] for social networks. An important feature of these graph sequences is the scale-free property: the proportion of vertices of degree $d$ tends to some positive number $c_d$ almost surely as the number of steps goes to infinity, and $c_d\sim Kd^{-\gamma}$ holds as $d\rightarrow \infty$ (throughout this paper, $a_d\sim b_d$ means that $a_d/b_d\rightarrow 1$ as $d\rightarrow \infty$). To put it in another way, the asymptotic degree distribution $(c_d)$ is polynomially decaying. See also [@ba; @fal; @notices] and the references therein about the scale-free property of the internet. However, scale-free property captures only the behavior of the degrees of vertices, and does not examine other kinds of structures. For example, especially in biological networks, e.g., proteomes, it happens that we can find groups of vertices having a similar neighborhood, that is, most of their neighbors are the same. One can say that these networks are highly clustered; loosely speaking, there are large cliques, in which almost every vertex is connected to almost every other one, and there are only a few edges going between cliques. A simple way to generate cliques is duplication: when a new vertex is added, we choose an old vertex randomly, and connect the new vertex to the neighbors of the old one. In other words, the new vertex becomes a copy of the old vertex. Note that if the old vertex is chosen uniformly at random, then the probability that a vertex of degree $d$ gets a new edge is just the probability that one of its neighbors is chosen, which is proportional to its actual degree. Hence this model is also driven by a kind of preferential attachment dynamics. After the duplication, we can add some extra edges randomly, or we can delete some of them to guarantee that the network remains sparse. The graph may still have some large cliques due to the duplication. Duplication is not only a technical step that proved to be useful: it is inherent. 
“This may be because duplication of the information in the genome is a dominant evolutionary force in shaping biological networks (like gene regulatory networks and protein–protein interaction networks)” [@chung]. These kinds of models – where the duplicated vertex is chosen uniformly at random – were examined for example by Kim, Krapivsky, Kahng, and Redner [@kim 2002]. In their model the new vertex is connected to each neighbor of the chosen one with probability $1-\delta$, independently. In addition, the new vertex is connected to each old one independently with probability $\beta/n$ at the $n$th step ($\delta, \beta$ are the parameters of the model). Scale free property is claimed for this model. However, Pastor-Satorras, Smith and Solé [@pastor 2003] stated that, instead of polynomial decay, for the limit $c_d$ of the expected value of the proportion of vertices of degree $d$ we have $c_d\sim Kd^{-\gamma} e^{-\lambda d}$ with some positive constants $K, \gamma, \lambda$; that is, the degree distribution has a polynomial decay with exponential cut-off. On the contrary, Chung, Lu, Dewey, and Galas [@chung 2003] claimed that for $\beta=0$, when we do not have any extra edges, the asymptotic degree distribution exists, and $(c_d)$ is decaying polynomially. None of these papers contained a mathematically rigorous proof. Bebek, Berenbrink, Cooper, Friedetzky, Nadeau and Sahinalp [@bebek 2006] disclaimed the above mentioned results of [@pastor] and [@chung]. In the latter case, they showed that the fraction of isolated vertices (that have no edges) increases with time in the pure duplication model, where $\beta=0$. They modified the model to avoid singletons by adding a fixed number of edges to the new vertex, chosen uniformly at random. They assumed without any proof that the asymptotic degree distribution exists, and they claimed that it is decaying polynomially. Hamdi, Krishnamurthy and Yin [@hamdi 2013+] present a model where the probabilities of adding a duplicated edge depends on the state of a hidden Markov chain. Polynomial decay is stated for the limit of the mean of degree distribution. We also mention the somewhat different model of Jordan [@jordan 2011], and the duplication model of Cohen, Jordan and Voliotis [@cohen 2010], where the duplicated vertex is chosen not uniformly, but with probabilities proportional to the actual degrees. In our paper, we present a simple random graph model based on the duplication of a vertex chosen uniformly at random, and the erasure of the edges of another vertex also chosen uniformly at random. We prove that for all $d$, the proportion of vertices of degree $d$ tends to some $c_d$ with probability $1$ as the number of steps goes to infinity. Here $c_d$ is a positive number; we will formulate it as an integral, and then we will determine the asymptotics of the sequence $(c_d)$ as $d\rightarrow \infty$, showing that it has a stretched exponential decay. Hence this model does not have the scale free property. We use methods of martingale theory for proving almost sure convergence, and generating function and Taylor series techniques for deriving the integral representation and the asymptotics of the sequence $(c_d)$. Definition of the model and main results ======================================== Our model has two different versions. Both of them start with a single vertex. The graph evolves in discrete time steps; each step has a duplication and an erasure part. 
At each step a new vertex will be born; therefore the number of vertices after $n$ steps is $n+1$. The graph is always a simple graph; it has neither multiple edges nor loops. At each step we do the following. *Version 1*. We choose two (not necessarily different) old vertices independently, uniformly at random. Then the new vertex is added to the graph; we connect it to the first vertex and to all its neighbors. After that we delete all edges emanating from the second old vertex we have selected, with the possible exception that edges of the new vertex cannot be deleted. *Version 2*. We choose two (not necessarily different) old vertices independently, uniformly at random. The new vertex is connected to the first one and to its neighbors. Then we delete all edges of the second vertex without any exceptions. That is, the new edges are protected in the erasure part of the same step in version 1, but they might be deleted immediately in version 2. We will see that the version 2 graph has a simple structure that enables us to describe its asymptotic degree distribution. Then, using this and a coupling of the two models, we can prove similar results for version 1. Let us remark that the presence of deletion makes the analysis more difficult than in the usual recursive graph models, since it causes intensive fluctuation in the model’s behavior. Our model is a kind of coagulation–fragmentation one: the effect of duplication is coagulation, and deletion results in fragmentation. Coagulation–fragmentation models are frequently used in several areas, see e.g. [@dong]. Ráth and Tóth applied these models to random graphs [@bal], namely, to the Erdős–Rényi model, which is completely different from ours. The basic property of version 2 is that the evolving graph always consists of separated complete graphs. That is, it is a disjoint union of cliques. Within a component, every pair of vertices is connected, and there are no edges between the components. Indeed, we start from a single vertex, which is a clique of size one, and both duplication and erasure make cliques from cliques. Moreover, it is easy to see that if we start the model with an arbitrary graph, all edges of the initial configuration are deleted after a while, and after that the graph will consist of separated cliques. So the initial configuration does not make any difference asymptotically. We may formulate the second version as follows. At each step we choose two components independently such that the probability that a given clique is chosen is proportional to its size. The new vertex is attached to the first clique, so its size is increased by $1$; the size of the clique chosen second is decreased by $1$, and an isolated vertex (the deleted one) comes into existence. Note that if we choose an isolated vertex to be deleted, then it remains isolated. This structure of version 2 makes it easier to handle, as the number of $d$-cliques does not vary as wildly as the number of degree $d$ vertices; the fluctuation is bounded by $2$. This will lead to the description of the asymptotic degree distribution of version 1 in an almost sure sense. Our main results are the following. \[dupt1\] Denote by $X[n,d]$ the number of vertices of degree $d$ after $n$ steps in version 1.
Then $$\frac{X[n,d]}{n+1}\rightarrow c_d$$ holds almost surely as $n\rightarrow\infty$, where $(c_d)$ is a sequence of positive numbers satisfying $$\label{duprekurzio} c_0= \frac{1+c_1}{3}; \quad c_d=\frac{d+1}{2d+3}\bigl(c_{d-1}+c_{d+1}\bigr) \ \ \ \ (d\geq 2).$$ For the asymptotic analysis we first present an integral representation for the limiting sequence $(c_d)$. As a corollary, we get that the sum of this sequence is $1$; it is really a probability distribution. \[int\] For the sequence $(c_d)$ of Theorem $\ref{dupt1}$ we have $$c_d=(d+1)\int_0^\infty\frac{y^d e^{-y}}{(1+y)^{d+2}}\,dy \qquad (d\geq 0),$$ and $\sum_{d=0}^{\infty} c_d=1$. Using this formula we can derive the asymptotics of $c_d$. \[asymp\] For the sequence $(c_d)$ of Theorem $\ref{dupt1}$ we have $$c_d\sim (e\pi)^{1/2}\,d^{1/4}\,e^{-2\sqrt{d}},\quad \text{as}\ d\to\infty.$$ Our model is invented to ensure high degree clustering. Finally, let us quantify this property. The local clustering coefficient of a vertex of degree $d$ is defined to be the fraction of connections that exist between the $\binom{d}{2}$ pairs of neighbors (taken to be $0$ when $d<2$). Watts and Strogatz [@WS] define the clustering coefficient of the whole graph as the average of the local clustering coefficients of all the vertices. Let us call this quantity the average clustering coefficient. Another possibility for such a measure is the ratio of $3$ times the number of triangles divided by the number of connected triplets (paths of length $2$), see [@hofstad]. This version is sometimes called transitivity; we will refer to it as the global clustering coefficient. Since the graph in version 2 consists of disjoint cliques, its global clustering coefficient is obviously $1$, while the average clustering coefficient is equal to the proportion of vertices with degree at least $2$. By Theorem \[dupt1\] it converges to $1-c_0-c_1=2-4c_0$ almost surely, as $n\to\infty$. We note that the limit is equal to $0.38538\dots$ by Theorem \[int\]. These results can be transferred to version 1. \[clust\] In version 1, the global clustering coefficient converges to $1$, and the average clustering coefficient to $1-c_0-c_1$, almost surely, as $n\to\infty$. The high clustering property of our model shows that it is a so-called small-world graph [@WS]. Proofs ====== Preliminaries. {#preliminaries. .unnumbered} -------------- First we formulate the lemma from martingale theory that we will use several times and whose proof can be found in [@publ]. \[publ1\] Let $(\mathcal F_n)$ be a filtration, $(\xi_n)$ a nonnegative adapted process. Suppose that $$\label{lemmafelt} \mathbb{E}\bigl((\xi_n-\xi_{n-1})^2\bigm|\mathcal F_{n-1}\bigr)=O\left( n^{1-\delta}\right)$$ holds with some $\delta>0$. Let $(u_n)$, $(v_n)$ be nonnegative predictable processes such that $u_n<n$ for all $n\geq 1$. Finally, let $(w_n)$ be a regularly varying sequence of positive numbers with exponent $\mu\ge 0$. $(a)$ Suppose that $$\mathbb{E}(\xi_n\mid\mathcal F_{n-1})\le \Bigl(1-\dfrac{u_n}{n}\Bigr)\xi_{n-1}+v_n,$$ and $\lim_{n\rightarrow \infty} u_n=u$, $\limsup_{n\rightarrow \infty} v_n/w_n\le v$ with some random variables $u>0,\ v\geq 0$. Then $$\limsup_{n\rightarrow\infty}\frac{\xi_n}{nw_n}\le \frac{v}{u+\mu+1} \quad a.s.$$ $(b)$ Suppose that $$\mathbb{E}(\xi_n\mid\mathcal F_{n-1})\ge \Bigl(1-\dfrac{u_n}{n}\Bigr)\xi_{n-1}+v_n,$$ and $\lim_{n\rightarrow \infty}u_n=u$, $\liminf_{n\rightarrow \infty} v_n/w_n\ge v$ with some random variables $u>0,\ v\geq 0$.
Then $$\liminf_{n\rightarrow\infty}\frac{\xi_n}{nw_n}\ge \frac{v}{u+\mu+1} \quad a.s.$$ Asymptotic degree distribution in version 2. {#asymptotic-degree-distribution-in-version-2. .unnumbered} -------------------------------------------- Recall that in this case the graph is always disjoint union of complete graphs. First we prove the following analogue of Theorem \[dupt1\]. \[dupall1\] Denote by $Y[n,k]$ the number of cliques of size $k$ after $n$ steps in version 2. Then for all positive integers $k$ we have $$\frac{Y[n,k]}{n}\rightarrow y_k \quad {\text\ almost\ surely\ as\ } n\rightarrow \infty,$$ where $(y_k)$ is a sequence of positive numbers satisfying $$\label{dupe0} y_1=\frac{1+2y_2}{3}, \qquad y_k=\frac{(k-1)y_{k-1}+(k+1)y_{k+1}}{2k+1} \quad (k\geq 2).$$ Note that (as well as equation ) is not a recursion. This prevents us proceeding simply in the usual, direct way, with induction over $k$. **Proof.** For $n=0$ we have $Y[0,1]=1$, all the other ones are equal to zero. The total number of vertices is $n$ after $n-1$ steps. Let $\mathcal F_n$ denote the $\sigma$-field generated by the first $n$ steps. We enumerate the events that can happen to the cliques of different sizes at a step. At the $n$th step an isolated vertex may become - a clique of size 2 (increased but not decreased) with probability $\frac1n \big(1-\frac 1n\big)$; - an isolated vertex (any other cases). A clique of size $k\geq 2$ may become a clique of size - $k-1$ (not increased but decreased) with probability $\frac kn \big(1-\frac kn\big)$; - $k+1$ (increased but not decreased) with probability $\frac kn \big(1-\frac kn\big)$; - $k$ (any other cases). The deleted vertex will be a new isolated one unless one of them is chosen for erasure but not for duplication, which has probability $\frac1n \big(1-\frac 1n\big)$ for each of them. Putting this together with the fact that the random choices are independent and probabilities are proportional to clique sizes, we can compute the conditional expectation of $Y[n,k]$ with respect to $\mathcal F_{n-1}$, which is the $\sigma$-field generated by the first $n-1$ steps. $$\begin{aligned} E( Y[n,1]\vert \mathcal F_{n-1})&=Y[n-1,1]\bigg[1-\frac{1}{n}\bigg(1-\frac{1}{n}\bigg )-\frac{1}{n}\bigg(1-\frac{1}{n}\bigg )\bigg ]\\ &\quad +1+ Y[n-1,2]\cdot \frac{2}{n}\bigg(1-\frac 2n\bigg); \\ E(Y[n,k]\vert \mathcal F_{n-1})&=Y[n-1,k]\bigg [1-2\cdot \frac kn\bigg(1-\frac{k}{n}\bigg)\bigg]\\ &\quad +Y[n-1,k-1] \cdot \frac{k-1}{n} \bigg (1-\frac{k-1}{n}\bigg)\\ &\quad+Y[n-1,k+1]\cdot \frac{k+1}{n}\bigg(1-\frac{k+1}{n}\bigg) \quad (k\geq 2).\end{aligned}$$ Let $A_k=\liminf_{n\rightarrow\infty} \frac{Y[n,k]}{n}$ and $B_k=\limsup_{n\rightarrow\infty} \frac{Y[n,k]}{n}$ for $k\geq 1$. It is clear that $0\leq A_k\leq B_k\leq 1$ holds for these random variables. We will give a sequence of lower bounds for $(A_k)$, and similarly, a sequence of upper bounds for $(B_k)$; the we will show that their limits are equal to each other. First, let $a_k^{(0)}=0$ for $k\geq 1$. Having constructed the sequence $(a_k^{(j)})_{k\geq 1}$, we define $$\label{dupe1} a_1^{(j+1)}=\frac{1+2a_2^{(j)}}{3}\,, \quad a_k^{(j+1)}=\frac{(k-1)a_{k-1}^{(j)}+(k+1)a_{k+1}^{(j)}}{2k+1} \quad (k\geq 2).$$ We get $a_k^{(j)}$ recursively for every $k\geq 1$ and $j\geq 1$. We prove by induction on $j$ that $a_k^{(j)}\leq A_k \ (k\geq 1)$. Since $Y[n,k]\geq 0$, this is clear for $j=0$. Suppose that this is satisfied for some $j$ for every $k$. 
For $k=1$ we apply Lemma \[publ1\] with $$\xi_n={Y[n,1]}, \ u_n=2-\frac{2}{n}\rightarrow 2, \ v_n=1+Y[n-1,2]\cdot\frac{2}{n}\bigg(1-\frac 2n\bigg).$$ Now $(\xi_n)$ is nonnegative adapted. $(u_n)$ and $(v_n)$ are clearly nonnegative predictable sequences; we can choose $w_n=1$, $\mu=0$, $u=2>0$ and finally, $v=1+2a_2^{(j)}\geq 0$ due to the induction hypothesis. Note that at each step at most one of the isolated points vanishes and at most two may appear. Thus is clearly satisfied. Lemma \[publ1\] implies that $$A_1=\liminf_{n\rightarrow \infty} \frac{Y[n,1]}{n}=\liminf_{n\rightarrow \infty} \frac{\xi_n}{n} \geq \frac{v}{u+1}=\frac{1+2a_2^{(j)}}{3}=a_1^{(j+1)}$$ almost surely. Similarly, for $k\geq 2$, if we have $A_k\geq a_k^{(j)}$ for some $j\geq 1$, we can choose $$\begin{aligned} \xi_n&={Y[n,k]}, \ u_n=2k-\frac{2k^2}{n}\rightarrow 2k,\\ v_n&=Y[n-1,k-1]\cdot\frac{k-1}{n}\bigg (1-\frac{k-1}{n}\bigg)\\ &\quad +Y[n-1,k+1]\cdot \frac{k+1}{n}\bigg(1-\frac{k+1}{n}\bigg),\\ v&=(k-1)a_{k-1}^{(j)}+(k+1)a_{k+1}^{(j)}.\end{aligned}$$ At each step at most three cliques are changed, which implies that holds. Thus in this case from Lemma \[publ1\] we obtain that $$A_k=\liminf_{n\rightarrow \infty} \frac{Y[n,k]}{n} \geq \frac{v}{u+1}= \frac{(k-1)a_{k-1}^{(j)}+(k+1)a_{k+1}^{(j)}}{2k+1}=a_k^{(j+1)}$$ almost surely. By induction on $j$ we get that $A_k\geq a_k^{(j)}$ holds almost surely for $k\geq 1$ and $j\geq 0$. Now we verify that for fixed $k$ the sequence $(a_k^{(j)})$ is monotone increasing in $j$. Since $a_k^{(0)}=0$ for every $k$, from equations it is clear that $a_k^{(1)}\geq a_k^{(0)}$. Suppose that for some $j\geq 1$ we have $a_k^{(j)}\geq a_k^{(j-1)}$ for every $k$. Then $$\begin{aligned} a_1^{(j+1)}&=\frac{1+2a_2^{(j)}}{3}\geq \frac{1+2a_2^{(j-1)}}{3}= a_1^{(j)};\\ a_k^{(j+1)}&=\frac{(k-1)a_{k-1}^{(j)}+(k+1)a_{k+1}^{(j)}}{2k+1}\\ &\geq \frac{(k-1)a_{k-1}^{(j-1)}+(k+1)a_{k+1}^{(j-1)}}{2k+1} =a_k^{(j)}\end{aligned}$$ follows from equations . Thus by induction on $j$ we get that $a_k^{(j)}\geq a_k^{(j-1)}$ for $k, j\geq 1$. It is clear that the sequence $(a_k^{(j)})_{j\geq 0}$ is uniformly bounded from above by $1$. Using monotonicity we can define $$a_k= \lim_{j\rightarrow \infty }a_k^{(j)} \qquad (k\geq 1) .$$ From equation it follows that $(a_k)$ satisfies , that is, $$a_1=\frac{1+2a_2}{3}, \qquad a_k=\frac{(k-1)a_{k-1}+(k+1)a_{k+1}}{2k+1} \quad (k\geq 2).$$ On the other hand, since $A_k\geq a_k^{(j)}$ for $k\geq 1$ and $j\geq 0$, we have $A_k\geq a_k$ almost surely. Similarly, we define $b_k^{(0)}=1$ for every $k$, and then $$b_1^{(j+1)}=\frac{1+2b_2^{(j)}}{3}, \qquad b_k^{(j+1)}= \frac{(k-1)b_{k-1}^{(j)}+(k+1)b_{k+1}^{(j)}}{2k+1} \quad (k\geq 2).$$ Using part (a) of Lemma \[publ1\] it follows by induction on $j$ that $B_k\leq b_k^{(j)}$ holds almost surely. In this case, for fixed $k$ the sequence $b_k^{(j)}$ is decreasing, and for the limits $b_k=\lim_{j\rightarrow \infty} b_k^{(j)}$ we also have $$b_1=\frac{1+2b_2}{3}, \qquad b_k=\frac{(k-1)b_{k-1}+(k+1)b_{k+1}}{2k+1} \quad (k\geq 2).$$ In addition, $B_k\leq b_k$ almost surely. By definition, $0\leq A_k\leq B_k\leq 1$ and $0\leq a_k\leq b_k\leq 1$ hold. Let $d_k=b_k-a_k\geq 0$ for all $k$. We have the same equations for $(a_k)$ and $(b_k)$. 
This yields $$d_1=\frac{2d_2}{3}, \qquad d_k=\frac{(k-1)d_{k-1}+(k+1)d_{k+1}}{2k+1} \quad (k\geq 2).$$ By rearranging we get that $$\label{dupe4} d_2=\frac{3}{2}d_1, \qquad d_{k+1}=\frac{(2k+1)d_{k}-(k-1)d_{k-1}}{k+1} \quad (k\geq 2).$$ Suppose that $d_k\geq \frac{k+1}{k} d_{k-1}$ holds for some $k\geq 2$. (For $k=2$ this is true with equality.) Since $d_{k-1}$ is nonnegative, $d_k\geq d_{k-1}$ also follows from this assumption. We obtain from equation \[dupe4\] that $$d_{k+1}\geq \frac{(k+2)d_k}{k+1}.$$ Therefore this inequality holds for every $k$. This implies that $d_k\geq (k+1) d_1$ for every $k$. Since $0\leq d_k=b_k-a_k\leq 1$, it follows that $d_1=0$. From \[dupe4\] we obtain that $d_k=0$ for all $k$, which implies that $a_k=b_k$. Since these were the lower and upper bounds for the limit inferior and limit superior of $\frac{Y[n,k]}{n}$, we get that the latter must converge almost surely as $n\to\infty$, and the limits satisfy \[dupe0\]. \[dupk2\] In version 2, the proportion of vertices of degree $d$ tends to $c_d$ satisfying \[duprekurzio\] almost surely as $n\rightarrow \infty$. **Proof.** For a fixed $d$ we have $d+1$ vertices of degree $d$ in each clique of size $k=d+1$. Therefore the proportion of vertices of degree $d$ tends to $(d+1)y_{d+1}$ by Proposition \[dupall1\]. From equations \[dupe0\] we obtain that $$c_0=y_1=\frac{1+2y_2}{3}=\frac{1+c_1}{3};$$ $$c_d=(d+1)y_{d+1}=\frac{d+1}{2d+3}\bigl(c_{d-1}+c_{d+1}\bigr) \quad (d\geq 2).$$ Asymptotic degree distribution in version 1. {#asymptotic-degree-distribution-in-version-1. .unnumbered} -------------------------------------------- When proving the results for version 2 we essentially used the property that the graph consists of a disjoint union of cliques: at most three of the cliques may change at a step, but the number of vertices whose degree is changed is not bounded uniformly. However, we can push through the results by a kind of coupling of versions 1 and 2. [**Proof of Theorem \[dupt1\].**]{} Both in versions 1 and 2 two old vertices are selected with replacement, independently, uniformly at random. Thus we can couple the models such that the selected vertices are the same in all steps. The duplication part is the same in the two versions. The difference is in the deletion: in version 1, the edges of the new vertex cannot be deleted. So in version 1, we do the following. In the deletion part, we colour an edge red if it would be deleted in version 2. That is, if it connects the new vertex with the old vertex to be deleted. In the duplication part, copies of red edges are also red: if there is a red edge between the duplicated vertex and one of its neighbors, then the new edge connecting this neighbor to the new vertex is also red. All other new edges are originally black, but they may turn red in the deletion part of the same step. The colouring is defined in such a way that the graph sequence of the black edges is a realization of version 2. Indeed, edges turning red are deleted and hence their copies do not appear in this model, but all other edges are black. Our goal is to prove that the number of vertices having red edges divided by $n$ tends to zero almost surely. This implies that the results of Corollary \[dupk2\] hold for version 1 as well. First we need an upper bound for the total number of edges. \[lemma2\] Denote by $S_n$ the number of edges (both black and red ones) after $n$ steps in version 1. Then for all $\varepsilon>0$ we have $S_n=O\bigl(n\log^{1+\varepsilon}n\bigr)$ with probability $1$. [**Proof.**]{} Let $\delta_n=S_n-S_{n-1}$.
As before, $\mathcal F_n$ denotes the $\sigma$-field generated by the first $n$ steps, and $X[n,d]$ is the number of vertices of degree $d$ after $n$ steps. Let $U_n$, resp. $V_n$, denote the degree of the old vertex selected for duplication, resp. deletion, at step $n$. The new vertex is connected to the duplicated one with an edge that cannot be deleted; this increases the number of edges by $1$ for sure. Thus, $\delta_n=U_n-V_n+1$. Clearly, $U_n$ and $V_n$ are conditionally i.i.d. with respect to $\mathcal F_{n-1}$, hence $S_n-n=\sum_{j=1}^n(\delta_j-1)$ is a zero mean martingale. Consequently, $ES_n=n$ for every $n$. Clearly, $$E\bigl(|\delta_n-1|\bigm|\mathcal F_{n-1}\bigr)\le 2 E(U_n\vert\mathcal F_{n-1})=2\sum_{d=0}^n \frac{X[n-1,d]}{n}\,d= \frac{4S_{n-1}}{n}\,.$$ Hence $$E\left(\sum_{n=2}^\infty\frac{|\delta_n-1|}{n\log^{1+\varepsilon}n} \right)<\infty,$$ therefore the series $$\sum_{n=2}^\infty\frac{\delta_n-1}{n\log^{1+\varepsilon}n}$$ is convergent with probability $1$. Then Kronecker’s lemma [@sh Lemma IV.3.2] implies that $$\frac{S_n-n}{n\log^{1+\varepsilon}n}\to 0\quad\text{a.s.}$$ as $n\to\infty$. Now we will colour some of the vertices red in such a way that the remaining black vertices cannot have any red edges. We will be able to give an upper bound for the number of red vertices. At the duplication step the new vertex becomes red if and only if the duplicated vertex is red. If this old vertex is black and has no red edges, the same holds for the new vertex at the moment. After that, if there is an edge between the new vertex and the deleted one, this edge may turn red, as we defined before. We colour both endpoints of this new red edge red. On the other hand, if the old vertex chosen for deletion loses all its edges, then its new colour will be black. Note that black vertices still have only black edges, but it may happen that an old vertex has only one red edge which is deleted, because its other endpoint is chosen for deletion; in this case the vertex stays red without having any red edges. The proof continues by giving an upper bound for the number of red vertices. \[lemma3\] Denote by $Z_n$ the number of red vertices after $n$ steps. Then for all $\varepsilon>0$ we have $Z_n=O(\log^{2+\varepsilon}n)$ almost surely. [**Proof.**]{} At each step, every old vertex has the same probability of being duplicated or deleted. If a red vertex is duplicated, then the new vertex becomes red; if it is deleted, then $Z_n$ decreases by 1 unless the deleted vertex is connected to the new one, which turns this edge red. Therefore without the exceptional new red edge, the conditional expectation of $Z_n$ with respect to $\mathcal F_{n-1}$ would be equal to $Z_{n-1}$. The deleted vertex and the new one are connected if and only if the deleted and duplicated vertices are the same or they are connected to each other. Since we did sampling with replacement, the probability of the first event is $1/n$; while the probability of the second event is $2S_{n-1}/n^2$. In the first case, the new vertex is red originally, but the other one stays red instead of turning back to black when deleted; $Z_n$ is increased by an extra 1. In the other case, both endpoints of the edge turning red may be red vertices in addition. To sum up, we obtain that $$E(Z_n\vert \mathcal F_{n-1})\leq Z_{n-1}+\frac 1n+ 4\cdot\frac{S_{n-1}}{n^2}.$$ We set $\eta_n=Z_n-Z_{n-1}$.
With this notation $$\label{Ed_n} E(\eta_n\vert \mathcal F_{n-1})\leq \frac 1n+ 4\cdot\frac{S_{n-1}}{n^2}$$ We have already shown that $ES_{n-1}=n-1$, hence $E\eta_n\le 5/n$, and $EZ_n=O(\log n)$. Note that the number of red vertices cannot change by more than three at a single step, because if an old vertex is neither deleted, nor duplicated, it cannot be coloured red. Hence $|\eta_n|\leq 3$ for all $n$. Moreover, we can give an upper bound on the probability that the number of red vertices changes at step $n$. Namely, it can change only if - we duplicate and delete the same vertex; this has (conditional) probability $1/n$. - the duplicated and the deleted vertices are connected to each other; this has probability $2S_{n-1}/n^2$, because there are $S_{n-1}$ edges. - a red vertex is duplicated; this has probability $Z_{n-1}/n$. - a red vertex is deleted; this has probability $Z_{n-1}/n$. Thus $$\label{Vd_n} P(Z_n\neq Z_{n-1}\vert \mathcal F_{n-1})\leq \frac 1n +2\frac{S_{n-1}}{n^2}+2\frac{Z_{n-1}}{n},$$ therefore $$E|\eta_n|\le 3P(Z_n\ne Z_{n-1})=O\Bigl(\frac{\log n}{n}\Bigr),$$ which implies that $$E\left(\sum_{n=2}^\infty\frac{|\eta_n|}{\log^{2+\varepsilon}n}\right) <\infty.$$ The proof can be completed by the help of Kronecker’s lemma, just like in the proof of Lemma \[lemma2\]. Now we can finish the proof of Theorem \[dupt1\]. The total number of vertices is $n+1$ after $n$ steps, hence the proportion of red vertices converges to 0 almost surely as $n\rightarrow \infty$. Since we defined the colours in such a way that red edges are exactly the edges that are present in version 1 but are not present in version 2, and only red vertices may have red edges, it follows that the proportion of vertices having different degree in the two versions converges to 0. Corollary \[dupk2\] states that for every $d$ the proportion of vertices of degree $d$ in version 2 converges almost surely to $c_d$. Now the same follows for version 1, which is the statement of Theorem \[dupt1\]. We could have given an upper bound for the conditional expectation of the number of red edges. The advantage of using red vertices is the uniform bound on the total change in their number; there is no such bound for the change in the number of red edges. It follows that version 1 has a quite specific structure: it consists of cliques that are connected with relatively few edges (those are coloured red). An edge can be red only if both its endpoints are red, hence Lemma \[lemma3\] gives an $O\bigl(\log^{4+\varepsilon}n\bigr)$ bound for the number of red edges. This is not sharp; however, the estimates of Lemmas \[lemma2\] and \[lemma3\] can be further improved, which might be, as pointed out above, of independent interest. Thus, before turning to the proof of Theorem \[int\], we present the following improvement. \[sharp\] $S_n\sim n$, and $Z_n=O\bigl(\log^{1+\varepsilon}n\bigr)$ for every $\varepsilon >0$ almost surely, as $n\to\infty$. **Proof.** First we give a crude bound for the maximal degree $M_n=\max\{d: X[n,d]>0\}$. According to Lemma \[lemma2\], $S_n=O\bigl(n\log^{1+\varepsilon}n\bigr)$ also holds for the number of edges in version 2. Since a clique of size $k$ contains $\binom{k}{2}$ edges, it follows that the size of the maximal clique is $O\bigl(n^{1/2+\varepsilon}\bigr)$. The same holds for the maximal degree in version 2; and, by Lemma \[lemma3\], in version 1, too. Thus $M_n=O\bigl(n^{1/2+\varepsilon}\bigr)$ for every $\varepsilon>0$. 
Next, consider the martingale $S_n-n=\sum_{j=1}^n(\delta_j-1)$ from the proof of Lemma \[lemma2\]. In order to prove that $S_n-n=o(\gamma_n)$ for a positive increasing predictable sequence $(\gamma_n)$ it is sufficient to show that $$\sum_{n=1}^\infty\gamma_n^{-2}\,E\bigl((\delta-1)^2\bigm|\mathcal F_{n-1} \bigr)<\infty$$ with probability $1$ [@sh Theorem VII.5.4]. To this end we need to estimate the conditional variance of the martingale differences. $$\begin{gathered} \mathrm{Var}(\delta_n-1\vert\mathcal F_{n-1})= 2\mathrm{Var}(U_n\vert\mathcal F_{n-1})\le 2E(U_n^2\vert\mathcal F_{n-1})\\ =2\sum_{d=1}^n \frac{X[n-1,d]}{n}\,d^2\le \frac 2n\,M_{n-1} \sum_{d=1}^n X[n-1,d]\,d\\ =\frac 2n\,M_{n-1}S_{n-1}= O\bigl(n^{1/2+\varepsilon}\bigr),\end{gathered}$$ for every positive $\varepsilon$. Hence $$\sum_{n=1}^\infty \frac{E\bigl((\delta-1)^2\bigm|\mathcal F_{n-1} \bigr)}{n^{3/2+\varepsilon}}<\infty,$$ implying $$S_n-n=o\bigl(n^{3/4+\varepsilon}\bigr)$$ Thus $S_n\sim n$ a.s., indeed. Finally, let us consider the martingale $\zeta_n=\sum_{j=1}^n\bigl(\eta_j-E(\eta_j|\mathcal F_{j-1})\bigr)$, where $\eta_n=Z_n-Z_{n-1}$, and derive an upper bound for the conditional variance of the differences. Keeping in mind that $|\eta_n|\le 3$ and using we have $$\begin{gathered} E\bigl((\zeta_n-\zeta_{n-1})^2\bigm| \mathcal F_{n-1}\bigr)= \textrm{Var}(\eta_n\vert \mathcal F_{n-1})\\ \leq E((Z_n-Z_{n-1})^2\vert \mathcal F_{n-1}) \le 9 P(Z_n\neq Z_{n-1}\vert \mathcal F_{n-1})\\ \le 9\bigg ( \frac 1n +2\frac{S_{n-1}}{n^2}+2\frac{Z_{n-1}}{n}\bigg) =O\Bigl(\frac{1+Z_{n-1}}{n}\Bigr).\end{gathered}$$ Now suppose that $Z_n=O(\log^\alpha n)$ is satisfied for some $\alpha>0$. Then $$E\bigl((\zeta_n-\zeta_{n-1})^2\bigm| \mathcal F_{n-1}\bigr)= O\Bigl(\frac{\log^{\alpha}n}{n}\Bigr),$$ hence $$\sum_{n=2}^\infty \frac{E\bigl((\zeta_n-\zeta_{n-1})^2\bigm| \mathcal F_{n-1}\bigr)}{\log^{\alpha+1+\varepsilon}n}<\infty$$ with probability $1$. Again, by [@sh Theorem VII.5.4] we have $$\label{zeta} \zeta_n=o\bigl(\log^{(\alpha+1)/2+\varepsilon}\bigr)\quad\text{a.s.}$$ for every positive $\varepsilon$. Clearly, $$Z_n=\sum_{j=1}^n \eta_j=\zeta_n+\sum_{j=1}^n E(\eta_j|\mathcal F_{j-1}),$$ where the last sum can be estimated by the help of in the following way. Since $S_{n-1}\sim n$, we have $E(\eta_n|\mathcal F_{n-1})=O(1/n)$, hence $$\sum_{j=1}^n E(\eta_j|\mathcal F_{j-1})=O(\log n).$$ This, combined with gives that $Z_n= O\bigl(\log^{(\alpha+1)/2+\varepsilon}\bigr)$ holds almost surely for all $\varepsilon>0$. By Lemma \[lemma3\] we can start from $\alpha=2+\varepsilon$, and repeating the argument we finally end up with the a.s. estimation $Z_n=O\bigl(\log^{1+\varepsilon}\bigr)$, for all $\varepsilon>0$. [**Proof of Theorem \[int\].**]{} Let $G(z)$ denote the generating function of the sequence $(c_d)$, that is, $$G(z)=\sum_{d=0}^{\infty}c_dz^d,\quad |z|\le 1.$$ Multiplying equation $(d+1)(c_{d-1}+c_{d+1})=(2d+3)c_d$ by $z^d$, then summing up from $d=1$ to $\infty$ and using that $c_0=(1+c_1)/3$, we obtain an inhomogeneous linear differential equation for $G(z)$. 
$$(1-z)^2G^{\prime}(z)=(3-2z)G(z)-1,\quad G(0)=c_0.$$ Solving this equation we get the following expression $$G(z)=\frac{c(z)}{(1-z)^2}\,\exp\Bigl(\frac{z}{1-z}\Bigr),$$ where $$c(z)=c_0-\int_0^z\exp\Bigl(-\frac{y}{1-y}\Bigr)\,dy.$$ Since $G(1)=\sum_{d=0}^{\infty} c_d\le 1$, it follows that $$c_0=\int_0^1\exp\Bigl(-\frac{y}{1-y}\Bigr)\,dy,$$ hence, via the substitution $x=1-y$, $$c(z)=\int_z^1\exp\Bigl(-\frac{y}{1-y}\Bigr)\,dy= \int_0^{1-z}\exp\Bigl(1-\frac{1}{x}\Bigr)\,dx.$$ Thus we have $$G(z)=\int_0^{1-z}\exp\Bigl(1-\frac{1}{x}\Bigr)\,dx\ \frac{1}{(1-z)^2}\, \exp\Bigl(\frac{z}{1-z}\Bigr),$$ from which, by substituting $y=\frac 1x-\frac{1}{1-z}$, we obtain $$\begin{gathered} \label{genfv} G(z)=\int_0^\infty\frac{e^{-y}}{(1+(1-z)y)^2}\ dy\\ =\int_0^\infty\frac{e^{-y}}{(1+y)^2\bigl(1-z\, \frac{y}{1+y}\bigr)^2}\ dy\\ =\int_0^\infty\sum_{d=0}^\infty(d+1)\,\frac{z^dy^d\,e^{-y}} {(1+y)^{d+2}}\ dy\\ =\sum_{d=0}^\infty z^d\,(d+1)\int_0^\infty\frac{y^d e^{-y}} {(1+y)^{d+2}}\ dy,\end{gathered}$$ completing the proof of the first statement of the theorem. In addition, note that the first equality of immediately implies that $\sum_{d=0}^{\infty} c_d=G(1)=1$. [**Proof of Theorem \[asymp\].**]{} In order to approximate the integral of Theorem \[int\] we first analyse the behavior of the integrand around the point where it attains its maximum. Let $$y_d=\arg\max\frac{y^d e^{-y}}{(1+y)^{d+2}}=\arg\max f(y),$$ where $$f(y)=d\log y-(d+2)\log(1+y)-y.$$ Clearly, $$\begin{aligned} f^\prime(y)&=\frac dy-\frac{d+2}{y+1}-1=-\frac{y^2+3y-d}{y(y+1)}\,,\\ f''(y)&=-\frac{d}{y^2}+\frac{d+2}{(y+1)^2}=\frac{2y^2-2dy-d}{y^2(y+1)^2} \,,\\ f'''(y)&=\frac{2d}{y^3}-\frac{2(d+2)}{(y+1)^3}\,.\end{aligned}$$ Since $y_d$ satisfies $f^\prime(y_d)=0$, we get that $$y_d=-\frac 32 + \sqrt{d+\frac 94}=\sqrt{d}-\frac 32+o(1).$$ Let us write $y$ in the form $y=y_d+y_d^{1/2}t$. Then $$g(t):=f(y)-f(y_d)=\frac{y_d}{2}\,f''(y_d+\theta y_d^{1/2}t)\,t^2,$$ where $\theta=\theta(d,t)$ belongs to the interval $[0;1]$. For every fixed $t$ $$f''(y_d+\theta y_d^{1/2}t)\sim -2y_d^{-1},$$ thus $g(t)\to -t^2$ as $d\to\infty$. Moreover, for $y\le y_d$, that is, for $y_d^{1/2}\le t\le 0$ we have $f'(y) \ge 0$. Thus $d/y - (d+2)/(y+1) > 0$ holds, and after rearranging we get that $(d+2)/d < (y+1)/y$. This yields that $(d+2)/d < (y+1)^3/y^3$ is satisfied, which implies that $f'''(y)\ge 0$. Hence $$g(t)\le\frac{y_d}{2}\,f''(y_d)\,t^2=a_d\,t^2,$$ where $a_d\to -1$, as $d\to\infty$. On the other hand, let $y_d\le y\le \frac 32\,y_d$, that is, $0\le t\le \frac 12\,y_d^{1/2}$. In this domain $f'''$ is increasing, hence $f'''(y)\le f'''(y_d)\sim 6dy_d^{-4}\sim 6d^{-1}$. Thus, $$\begin{gathered} g(t)\le\frac{y_d}{2}\,f''(y_d)\,t^2+\frac 16\,y_d^{3/2}f'''(y_d)\,t^3\\ \le\left(\frac{y_d}{2}\,f''(y_d)+\frac{y_d^2}{12}\,f'''(y_d)\right)t^2 =b_dt^2,\end{gathered}$$ where $b_d\to -1/2$, as $d\to\infty$. 
Thus, by the dominated convergence theorem, $$\begin{gathered} \int_0^{3y_d/2}e^{f(y)}\,dy=y_d^{1/2}\int_{-y_d^{1/2}}^{\frac 12\,y_d^{1/2}} \exp\bigl(f(y_d)+g(t)\bigr)\,dt\\ \sim y_d^{1/2}\exp\bigl(f(y_d)\bigr)\int_{-\infty}^{+\infty} \exp\bigl(-t^2\bigr)\,dt=\sqrt{\pi}\,y_d^{1/2} \exp\bigl(f(y_d)\bigr).\end{gathered}$$ Here $$f(y_d)=-2\log y_d-(d+2)\log\Bigl(1+\frac{1}{y_d}\Bigr)-y_d,$$ and $$\begin{aligned} (d+2)\log\Bigl(1+\frac{1}{y_d}\Bigr)&=(d+2)\Bigl(\frac{1}{y_d}- \frac{1}{2y_d^2}\Bigr)+o(1)\\ &=y_d+\frac{(d+2)(2y_d-1)-2y_d^3}{2y_d^2}+o(1)\\ &=y_d+\frac{(y_d^2+3y_d+2)(2y_d-1)-2y_d^3}{2y_d^2}+o(1)\\ &=y_d+\frac{5y_d^2+y_d-2}{2y_d^2}+o(1),\end{aligned}$$ where we used that $y_d^2+3y_d=d$. Thus, $$f(y_d)=-2\log y_d-2y_d-\frac 52+o(1)=-2\log y_d-2\sqrt{d}+\frac 12+o(1).$$ Finally, $$\begin{aligned} \int_{3y_d/2}^\infty e^{f(y)}\,dy&\le \bigl(2y_d\bigr)^{-2} \int_{3y_d/2}^\infty \Bigl(1-\frac{1}{1+y}\Bigr)^{\!d}\,e^{-y}\,dy\\ &\leq\bigl(2y_d\bigr)^{-2}\int_{3y_d/2}^\infty \exp\Bigl(-\frac{d}{y+1} -y\Bigr)\,dy.\end{aligned}$$ The exponent on the right-hand side can be estimated with the help of the AM–GM inequality as follows. $$-\frac{d}{y+1}-y=-\frac{d}{y+1}-\frac{y+1}{2}-\frac{y-1}{2}\le -\sqrt{2d}-\frac{y-1}{2},$$ hence $$\int_{3y_d/2}^\infty e^{f(y)}\,dy\le \bigl(2y_d\bigr)^{-2}\, \exp\Bigl(-\sqrt{2d}+\frac 12-\frac 34\,y_d\Bigr)= o\Bigl(y_d^{-2}\,\exp\bigl(-2\sqrt{d}\bigr)\Bigr).$$ From all these we obtain that $$c_d=(d+1)\int_0^\infty e^{f(y)}\,dy\sim (e\pi)^{1/2}\,d^{1/4}\, e^{-2\sqrt{d}},$$ as claimed. **Proof of Theorem \[clust\].** Black vertices have the same local clustering coefficient in both versions. Since the proportion of red vertices tends to be negligible as $n\to\infty$, the limit of the average clustering coefficient is also the same in both versions. The global clustering coefficient of version 2 is identically equal to $1$. In its defining fraction the numerator and the denominator are proportional to $n$. When turning to version 1 the denominator have to be increased by the number of triplets containing at least one red edge. Such a triplet must have a red central vertex and at least one more red vertex. Hence the increment of the denominator cannot exceed $M_nZ_n^2$, where $M_n$ denotes the maximal degree, and $Z_n$ the number of red vertices. In the proof of Proposition \[sharp\] we have shown that $M_n=O(n^{1/2+\varepsilon})$ and $Z_n=O(\log^{1+\varepsilon}n)$, thus the increment of the denominator is asymptotically negligible with respect to $n$. Hence the global clustering coefficient of version 1 must converge to $1$. [99]{} Backhausz, Á., and Móri, T. F., A random model of publication activity, *Discrete Appl. Math.* **162** (2014), 78–89. Barab[á]{}si, A-L., and Albert, R., Emergence of scaling in random networks, *Science* **286** (1999), 509–512. Cooper, C., and Frieze, A., A general model of web graphs, *Random Structures Algorithms*, **22** (2003), 311–335. Bebek, G., Berenbrink, P., Cooper, C., Friedetzky, T., Nadeau, J. and Sahinalp, S. C., The degree distribution of the generalized duplication model. *Theor. Comput. Sci.*, **369** (2006), 234–249. Bollobás, B., Riordan, O., Spencer, J., and Tusnády, G., The degree sequence of a scale-free random graph process, *Random Structures Algorithms*, **18** (2001), 279–290. Chung, F., Lu, L., Dewey, T. G., and Galas, D. J., Duplication models for biological networks, *J. Comput. Biol.*, **16** (2003), 677–687. Cohen, N., Jordan, J., and Voliotis, M., Preferential duplication graphs, *J. Appl. 
Probab.*, **47** (2010), 572–585. Dong, R., Goldschmidt, C., and Martin, J. B., Coagulation-fragmentation duality, Poisson–Dirichlet distributions and random recursive trees. *Ann. Appl. Probab.* **16** (2006), 1733–1750. Durrett, R., *Random graph dynamics*, Cambridge University Pres, 2006. Faloutsos, M., Faloutsos, P., and Faloutsos, C., On power-law relationships of the internet topology, in [ *Proceedings of the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, SIGCOMM ’99*]{}. ACM, New York, 1999, 251–262. Hamdi, M., Krishnamurthy, V., Yin, G. G., Tracking a Markov-modulated stationary degree distribution of a dynamic random graph, [*§IEEE Trans. Inform. Theory*]{} [**60**]{} (2014), no. 10, 6609–6625. Jordan, J., Randomised reproducing graphs. *Electron. J. Probab.*, **16** (2011), 1549–1562. Kim, J., Krapivsky, P. L., Kahng, B. and Redner, S., Infinite-order percolation and giant fluctuations in a protein interaction network. *Phys. Rev.*, E66: 055101(R), 2002. Pastor-Satorras, R., Smith, E., and Solé, R. V., Evolving protein interaction networks through gene duplication. *J. Theor. Biol.*, **222** (2003), 199–210. Ráth, B. and Tóth, B., Erdős–Rényi random graphs + forest fires = self-organized criticality. *Electron. J. Probab.,*, **14** (2009), 1290–1327. Shiryaev, A. N., *Probability*, 2nd ed., Springer, New York, 1996. Simon, H. A., On a class of skew distribution functions. *Biometrika*, **42** (1955), 425–440. Sridharan, A., Gao, Y., Wu, K., and Nastos, J., Statistical behavior of embeddedness and communities of overlapping cliques in online social networks. In: *2011 Proceedings IEEE INFOCOM*, 546–550. Szymańsky, J., On a nonuniform random recursive tree, in *Random graphs ’85, Poznań 1985*, North-Holland, Amsterdam, 1985, 297–306. Yule, G. U., A mathematical theory of evolution, based on the conclusions of Dr. J. C. Willis, F.R.S., *Philos. Trans. R. Soc. Lond. Ser. B.*, **213** (1925), 402–410. van der Hofstad, R., *Random graphs and complex networks.* Preprint, `http://www.win.tue.nl/~rhofstad/NotesRGCN.pdf`. Watts, D. J., Strogatz, S. H., Collective dynamics of ’small-world’ networks. *Nature* **393** (1998), 440–442. Willinger, W., Alderson, D., and Doyle, J. C., Mathematics and the Internet: A source of enormous confusion and great potential, *Notices of the AMS* **56(5)** (2009), 586–599.
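The limit theorems of the paper above can also be checked by direct simulation. The Python sketch below is an illustrative addition, not part of the paper: it runs version 2 of the model, in which the graph is a disjoint union of cliques, and compares the empirical degree proportions after $n$ steps with the limiting values $c_d=(d+1)\int_0^\infty y^d e^{-y}(1+y)^{-(d+2)}\,dy$ of Theorem \[int\]; the number of steps, the random seed and all identifiers are arbitrary choices.

```python
import math
import random
from collections import Counter
from scipy.integrate import quad

def simulate_version2(n, seed=1):
    """Run n steps of version 2; return the clique label of each vertex and the clique sizes."""
    rng = random.Random(seed)
    clique_of = [0]   # the initial graph is a single vertex, i.e. one clique of size 1
    sizes = [1]
    for step in range(1, n + 1):
        v = rng.randrange(step)          # old vertex chosen for duplication
        w = rng.randrange(step)          # old vertex chosen for deletion (with replacement)
        clique_of.append(clique_of[v])   # the new vertex joins the clique of v
        sizes[clique_of[v]] += 1
        cw = clique_of[w]
        if sizes[cw] > 1:                # a deleted isolated vertex simply stays isolated
            sizes[cw] -= 1
            sizes.append(1)              # w becomes a new singleton clique
            clique_of[w] = len(sizes) - 1
    return clique_of, sizes

def c_limit(d):
    """Limiting proportion c_d from the integral representation of Theorem [int]."""
    value, _ = quad(lambda y: y**d * math.exp(-y) / (1.0 + y)**(d + 2), 0.0, math.inf)
    return (d + 1) * value

n = 200_000
clique_of, sizes = simulate_version2(n)
empirical = Counter(sizes[c] - 1 for c in clique_of)   # degree = clique size - 1
for d in range(6):
    print(f"d = {d}:  empirical {empirical[d] / (n + 1):.4f}   limit c_d {c_limit(d):.4f}")
```

With a run of this length the empirical proportions typically agree with the limiting values to within a few parts in a thousand, consistent with the almost sure convergence stated in Corollary \[dupk2\].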
--- abstract: 'In most introductory courses on electrodynamics, one is taught that the electric charge is quantised but no theoretical explanation related to this law of nature is offered. Such an explanation is postponed to graduate courses on electrodynamics, quantum mechanics and quantum field theory, where the famous Dirac quantisation condition is introduced, which states that a single magnetic monopole in the Universe would explain the electric charge quantisation. Even though this condition assumes the existence of a not-yet-detected magnetic monopole, it provides the most accepted explanation for the observed quantisation of the electric charge. However, the usual derivation of the Dirac quantisation condition involves the subtle concept of an “unobservable” semi-infinite magnetised line, the so-called “Dirac string,” which may be difficult to grasp on a first encounter with the subject. The purpose of this review is to survey the concepts underlying the Dirac quantisation condition, in a way that may be accessible to advanced undergraduate and graduate students. Some of the discussed concepts are gauge invariance, singular potentials, single-valuedness of the wave function, undetectability of the Dirac string and quantisation of the electromagnetic angular momentum. Five quantum-mechanical and three semi-classical derivations of the Dirac quantisation condition are reviewed. In addition, a simple derivation of this condition involving heuristic and formal arguments is presented.' author: - title: 'Dirac quantisation condition: a comprehensive review' --- Magnetic monopoles; charge quantisation; gauge invariance. Introduction {#1} ============ In the early months of 1931, Dirac was seeking an explanation of the observed fact that the electric charge is always quantised [@1]. In his quest to explain this mysterious charge quantisation, he incidentally came across the idea of magnetic monopoles, which turned out to be of vital importance for his ingenious explanation presented in his 1931 paper [@2]. In this seminal paper, Dirac envisioned hypothetical nodal lines to be semi-infinite magnetised lines with vanishing wave function and having the same end point, which is the singularity of the magnetic field where the monopole is located (see Figure \[Fig1\]). A quantum-mechanical argument on these nodal lines led him to his celebrated quantisation condition: $q g=n\hbar c/2$. Here, $q$ and $g$ denote electric and magnetic charges, $\hbar$ is the reduced Planck’s constant, $c$ is the speed of light, $n$ represents an integer, and we are adopting Gaussian units. Dirac wrote [@2]: “Thus at the end point \[of nodal lines\] there will be a magnetic pole of strength \[$g=n\hbar c/(2q)$\].” This is the original statement by which magnetic monopoles entered into the field of quantum mechanics. In 1948, Dirac [@3] presented a relativistic extension of his theory of magnetic monopoles, in which he drew one of his most famous conclusions: “Thus the mere existence of one pole of strength $[g]$ would require all electric charges to be quantised in units of $[\hbar c/ (2g)].$” ![Nodal lines as envisioned by Dirac.[]{data-label="Fig1"}](Fig1.eps){width="208pt"} For the modern reader, the Dirac argument for the quantisation of the electric charge involving the elusive magnetic monopole is indeed ingenious.
The basis of this argument is the interaction of an electric charge with the vector potential of a magnetic monopole attached to an infinitely long and infinitesimally thin solenoid, the so-called “Dirac string,” which is shown to be undetectable by assuming the single-valuedness of the wave function of the electric charge, and as a consequence the Dirac quantisation condition $q g=n\hbar c/2$ is required. According to this condition, the existence of just one monopole anywhere in the Universe would explain why the electric charge is quantised. Indeed, if we identify the elementary magnetic charge with $g_0,$ then $q=n\hbar c/(2 g_0)$. Now for $n=1,$ we have the elementary electric charge $e=\hbar c/(2 g_0)$, which combines with $q=n\hbar c/(2 g_0)$ to give the law expressing the quantisation of the electric charge: $q=n e$. At the present time, the Dirac quantisation condition provides the most accepted explanation for the electric charge quantisation even though it relies on the existence of still undetected magnetic monopoles. It is pertinent to note that there are excellent books [@4; @5; @6; @7] and reviews [@8; @9; @10; @11; @12; @13; @14; @15; @16; @17] on magnetic monopoles, which necessarily touch on the subject of the Dirac quantisation condition and the Dirac string. So far, however, a review paper dealing with the Dirac condition rather than with magnetic monopoles does not seem to have appeared in the standard literature. The present review attempts to fill this gap for the benefit of the non-specialist. Typically, the Dirac condition is discussed in graduate texts on electrodynamics [@18; @19; @20], quantum mechanics [@21] and quantum field theory [@22; @23; @24; @25]. The topic is rarely discussed in undergraduate textbooks [@26]. The purpose of this review is to survey the ideas underlying the Dirac quantisation condition, in a way that may be accessible to advanced undergraduate as well as graduate students. After commenting on the status of the Dirac quantisation condition, i.e., its past and present impact on theoretical physics, we find it convenient to review the derivation of the Dirac condition given in Jackson’s book [@18]. We next present a heuristic derivation of this condition in which we attempt to follow Feynman’s teaching philosophy that if we cannot provide an explanation for a topic at the undergraduate level then it means we do not really understand this topic [@27]. We then review four quantum-mechanical and three semi-classical derivations of the Dirac quantisation condition. Some of the relevant calculations involved in these derivations are detailed in Appendices. We think it worthwhile to gather together the basic ideas underlying these derivations in a review, which may be accessible to advanced undergraduate and graduate students. Status of the Dirac quantisation condition: past and present {#2} ============================================================ To appreciate the relevance of the method followed by Dirac to introduce his quantisation condition, let us briefly outline the historical context in which Dirac derived this condition. As is well known, Maxwell built his equations on the assumption that no free magnetic charges exist, which is formally expressed by the equation ${\boldsymbol{\nabla}}\cdot\v B=0$.
With the advent of quantum mechanics, magnetic charges were virtually excluded because the coupling of quantum mechanics with electrodynamics required the inclusion of the vector potential $\v A $ defined through $\v B={\boldsymbol{\nabla}}\times \v A.$ But it was clear that this equation precluded magnetic monopoles because of the well-known identity ${\boldsymbol{\nabla}}\cdot({\boldsymbol{\nabla}}\times \v A)\!\equiv\!0.$ Before 1931, magnetic monopoles were irreconcilable within an electrodynamics involving the potential $\v A,$ and hence with quantum mechanics [@28]. Furthermore, for quantum physicists of the early twentieth century, magnetic monopoles were mere speculations lacking physical content and were therefore not of interest at all in quantum theory prior to 1931. This was the state of affairs when Dirac suggested in his 1931 paper [@2] to reconsider the idea of magnetic monopoles. Using an innovative method, Dirac was able to reconcile the equations ${\boldsymbol{\nabla}}\cdot\v B\not=0$ and $\v B={\boldsymbol{\nabla}}\times \v A,$ and therefore he was successful in showing that the interaction of an electron with a magnetic monopole was an idea fully consistent with both classical and quantum physics. According to Dirac, the introduction of monopoles in quantum mechanics required magnetic charges to be necessarily quantised in terms of the electric charge and that quantisation of the latter should be in terms of the former. In his own words [@2]: “Our theory thus allows isolated magnetic poles \[$g$\], but the strength of such poles must be quantised, the quantum \[$g_0$\] being connected with the electronic charge $e$ by $[g_0=\hbar c/(2e)]$ ... The theory also requires a quantisation of electric charge ....” In his 1931 paper [@2], Dirac seems to favour the monopole concept when he pointed out: “... one would be surprised if Nature had made no use of it.” As Polchinski has noted [@29]: “From the highly precise electric charge quantisation that is seen in nature, it is then tempting to infer that magnetic monopoles exist, and indeed Dirac did so”. However, Dirac was very aware that isolated magnetic monopoles were still undetected and he proposed a physical explanation for this fact. When interpreting his result $g_0=(137/2)e$, he pointed out: “This means that the attractive force between two one-quantum poles of opposite sign is $4692\frac{1}{4}$ times that between electron and proton. This very large force may perhaps account for why poles of opposite sign have never yet been separated.” Let us emphasise that the true motivation of Dirac in his 1931 paper was twofold: on the one hand, he wanted to explain the electric charge quantisation and, on the other, to find the reason why the elementary electric charge had its reported experimental value. Such motivations were explicitly clarified by Dirac in 1978 [@30]: “I was not searching for anything like monopoles at the time. What I was concerned with was the fact that electric charge is always observed in integral multiples of the electronic charge $e$, and I wanted some explanation for it. There must be some fundamental reason in nature why that should be so, and also there must be some reason why the charge $e$ should have just the value that it does have. It has the value that makes $[\hbar c/e^2]$ approximately 137.
And I was looking for some explanation of this 137.” In his 1948 paper [@3], Dirac stressed the idea that each magnetic monopole is attached to the end of an “unobservable” semi-infinite string (a refinement of the nodal lines introduced in his 1931 paper [@2]). In retrospect, one can imagine that the idea of an unobservable string might have seemed strange at that time, and if additionally the theory was based on the existence of undetected magnetic monopoles, then it is not difficult to understand why this theory was received sceptically by some of Dirac’s contemporaries. At first, Pauli disliked the idea of magnetic monopoles and sarcastically referred to Dirac as “Monopoleon”. But some years later, he reconsidered his opinion, saying [@31]: “This title \[Monopoleon\] shall indicate that I have a friendlier view to his theory of ‘monopoles’ than earlier: There is some mathematical beauty in this theory.” On the other hand, Bohr, unlike Dirac, thought that one would be surprised if Nature had made use of magnetic monopoles [@32]. After Dirac’s 1931 seminal paper, Saha [@33] presented in 1936 a semi-classical derivation of the Dirac quantisation condition based on the quantisation of the electromagnetic angular momentum associated with the static configuration formed by an electric charge and a magnetic charge separated by a finite distance, the so-called Thomson dipole ([@34], see also [@35]). This same derivation was independently presented in 1949 by Wilson [@36; @37]. In 1944, Fierz [@38] derived the Dirac condition by quantising the electromagnetic angular momentum arising from the classical interaction of a moving charge in the field of a stationary magnetic monopole. Schwinger [@39] in 1969 used a similar approach to derive a duality-invariant form of the Dirac condition by assuming the existence of particles possessing both electric and magnetic charges, the so-called dyons. On the other hand, the Aharonov–Bohm effect [@40], suggested in 1959, has been recurrently used to show the undetectability of the Dirac string [@1; @8; @9; @10; @11; @12; @14; @15; @16; @17; @22; @23; @41; @42], giving a reversible argument. If Dirac’s condition holds then the string is undetectable, and vice versa, if the string is undetectable then Dirac’s condition holds. The path-integral approach to quantum mechanics, suggested by Dirac in 1933 [@43], formally started by Feynman in his 1942 Ph.D. thesis [@44] and completed by him in 1948 [@45], has also been used to obtain the Dirac condition [@22]. Several authors have criticised the Dirac argument because of its unpleasant feature that it necessarily involves singular gauge transformations [@9]. A formal approach presented by Wu and Yang [@46] in 1975 avoids such an annoying feature by considering non-singular potentials, using the single-valuedness of the wave function and then deriving the Dirac condition without using the Dirac string [@4; @8; @9; @11; @12; @13; @16; @24]. Other derivations of the Dirac condition have been presented over the years, including those by Goldhaber [@47], Wilczek [@48; @49] and Jackiw [@50; @51; @52]. Remarkably, in 1974 ’t Hooft [@53] and Polyakov [@54] independently discovered monopole solutions in spontaneously broken non-Abelian gauge theories. This originated another way to understand why electric charge is quantised in grand unified theories, where monopoles are necessarily present.
If the electromagnetic $U(1)$ gauge group is embedded into a non-Abelian gauge group, then charge quantisation is automatic on group-theoretical grounds [@4; @11]. It is not surprising then that charge quantisation is now considered an argument in support of grand unified theories [@4; @29; @55]. In the context of unified theories, Polchinski goes even further, arguing that [@29] “In any theoretical framework that requires charge to be quantised, there will exist magnetic monopoles.” On the other hand, it has been noted that the integer $n$ in Dirac’s condition can be identified as a winding number, which gives a topological interpretation of this condition [@4; @11; @56]. Finally, it is pertinent to mention the recent claim that the Dirac condition also holds in the Proca electrodynamics with non-zero photon mass [@57], reflecting the general character of this quantisation condition. The preceding comments allow us to put in context the review presented here on the basic ideas underpinning the Dirac quantisation condition, such as gauge invariance, singular vector potentials, single-valuedness of the wave function, undetectability of the Dirac string and the quantisation of the electromagnetic angular momentum. The present review is organised as follows. In Section \[3\], we closely review Jackson’s treatment of the Dirac quantisation condition. In Sections \[4\]-\[6\], we present a new derivation of the Dirac condition based on heuristic and formal arguments, which does not consider the Dirac string. The specific gauge function required in this heuristic derivation is discussed. In Section \[7\], we examine in detail the Dirac strings by explicitly identifying their singular sources. In Section \[8\], we study the classical interaction of the electric charge with the Dirac string and conclude that this string has a mathematical rather than a physical meaning. In Section \[9\], we examine the quantum-mechanical interaction of the electric charge with the Dirac string and show that if the string is undetectable then the Dirac quantisation condition holds. We review in Section \[10\] the Aharonov–Bohm effect and show how it can be used to derive the Dirac condition. In Section \[11\], we outline Feynman’s path integral approach to quantum mechanics and show how it can be used to obtain the Dirac condition. In Section \[12\], we briefly discuss the Wu–Yang approach that allows us to derive the Dirac condition without recourse to the Dirac string. In Section \[13\], we review three known semi-classical derivations of the Dirac condition. The first one makes use of the Thomson dipole. The second one considers the interaction between a moving charge and the field of a stationary monopole, and the third one considers the interaction between a moving dyon and the field of a stationary dyon. In Section \[14\], we make some final remarks on the Dirac quantisation condition. In Section \[15\], we make a final comment on the concept of nodal lines and in Section \[16\], we present our conclusions. In Appendices \[A\]–\[E\], we perform some calculations involved in the derivations of the Dirac condition. Jackson’s treatment of the Dirac quantisation condition {#3} ======================================================= The [*first quantum-mechanical derivation of the Dirac condition*]{} we will review is that given in Jackson’s book [@18].
The magnetic monopole is imagined either as one particle to be at the end of a line of dipoles or at the end of a tightly wound solenoid that stretches off to infinity, as shown in Figure \[Fig2\]. Any of these equivalent configurations can be described by the vector potential of a magnetic dipole $\v A(\v x)=[\v m\times (\v x- \v x')]/|\v x-\v x'|^3$, where $\v x$ is the field point, $\v x'$ is the source point and $\v m$ is the magnetic dipole moment. The line of dipoles is a string formed by infinitesimal magnetic dipole moments $d\v m$ located at $\v x'$ whose vector potential is $d\v A(\v x)=-d\v m\times{\boldsymbol{\nabla}}\big(1/|\v x-\v x'|\big)$, where we have used ${\boldsymbol{\nabla}}\big(1/|\v x-\v x'|\big)=-(\v x-\v x')/|\v x-\v x'|^3$. With the identification $d\v m= g d\v l'$, with $g$ being the magnetic charge and $d\v l'$ a line element, the total vector potential for a string or solenoid lying on the curve $L$ reads $$\begin{aligned} \v A_L=-g\int_L d\v l'\times {\boldsymbol{\nabla}}\bigg(\frac{1}{|\v x-\v x'|} \bigg).\end{aligned}$$ Using the result ${\boldsymbol{\nabla}}\times (d \v l'/|\v x\!-\!\v x'|) = -d\v l' \times {\boldsymbol{\nabla}}(1/|\v x\! -\!\v x'|)$, we can write Equation (1) as $$\begin{aligned} \v A_L=g{\boldsymbol{\nabla}}\times \int_L \frac{d\v l'}{|\v x-\v x'|}.\end{aligned}$$ Notice that this potential is already in the Coulomb gauge: ${\boldsymbol{\nabla}}\cdot \v A_L=0$ because ${\boldsymbol{\nabla}}\cdot [{\boldsymbol{\nabla}}\times (\;\;)]\equiv 0$. ![ Representation of a magnetic monopole $g$ as the end of a line of dipoles or as the end of a tightly wound solenoid that stretches off to infinity. []{data-label="Fig2"}](Fig2.eps){width="228pt"} In Appendix \[A\], we show that the curl of this potential gives $$\begin{aligned} {\boldsymbol{\nabla}}\times\v A_L=\frac{g}{R^2}\hat{\v R} + 4\pi g\!\int_L\! \delta(\v x-\v x')\,d\v l',\end{aligned}$$ where $\delta(\v x-\v x')$ is the Dirac delta function, $R\!=\!|\v x-\v x'|$ and $\hat{\v R}\!=\!(\v x-\v x')/R$. To have a clearer meaning of Equation (3), it is convenient to write this equation as $$\begin{aligned} \v B_\texttt{mon}\!=\! {\boldsymbol{\nabla}}\times \v A_L-\v B_\texttt{string},\end{aligned}$$ where $$\begin{aligned} \v B_\texttt{mon}=\frac{g}{R^2}\hat{\v R},\end{aligned}$$ is the field of the magnetic monopole $g$ located at the point $\v x'$ and $$\begin{aligned} \v B_\texttt{string}= 4\pi g\!\int_L \!\delta(\v x-\v x')\,d\v l',\end{aligned}$$ is a singular magnetic field contribution along the curve $L$. By taking the divergence to $\v B_\texttt{mon}$ it follows $$\begin{aligned} {\boldsymbol{\nabla}}\cdot\v B_\texttt{mon}=&{\boldsymbol{\nabla}}\cdot\bigg(\frac{g}{R^2}\hat{\v R}\bigg)= 4\pi g\delta(\v x\!-\!\v x'),\end{aligned}$$ where we have used ${\boldsymbol{\nabla}}\cdot (\hat{\v R}/R^2)=4 \pi \delta (\v x - \v x').$ Similarly, if we take the divergence to $\v B_\texttt{string},$ we obtain the result $$\begin{aligned} \nonumber {\boldsymbol{\nabla}}\cdot\v B_\texttt{string}=& {\boldsymbol{\nabla}}\cdot \bigg(4\pi g\!\int_L \!\delta(\v x-\v x')\,d\v l'\bigg)\\ =&\nonumber -4\pi g\!\int_L\! {\boldsymbol{\nabla}}'\delta(\v x\!-\!\v x')\cdot d\v l'\\ =&-4\pi g\,\delta(\v x\!-\!\v x'),\end{aligned}$$ where we have used ${\boldsymbol{\nabla}}\delta(\v x\!-\!\v x')=-{\boldsymbol{\nabla}}'\delta(\v x\!-\!\v x').$ When Equations (7) and (8) are used in the divergence of Equation (3) we verify the expected result ${\boldsymbol{\nabla}}\cdot({\boldsymbol{\nabla}}\times\v A_L)=0$. 
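Before proceeding, Equations (5) and (7) can be checked with a few lines of computer algebra. The following sketch (a minimal check using sympy; the variable names are ours and purely illustrative) verifies that the monopole field of Equation (5) is divergence-free away from the origin and that its flux through a sphere centred on the monopole equals $4\pi g$, which is the integrated content of the delta function in Equation (7).

```python
# Minimal sympy check of Equations (5) and (7): away from the origin the field
# B_mon = g r_hat / r^2 is divergence-free, while its flux through any sphere
# centred on the monopole equals 4*pi*g, as expected for a point magnetic
# charge g located at r = 0.
import sympy as sp

x, y, z, g = sp.symbols('x y z g', real=True)
theta, phi = sp.symbols('theta phi', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Cartesian components of B_mon = g r_hat / r^2 = g (x, y, z) / r^3
Bx, By, Bz = g*x/r**3, g*y/r**3, g*z/r**3

# Divergence vanishes for r != 0 (the delta function at the origin is not seen here)
div_B = sp.simplify(sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z))
print(div_B)   # -> 0

# Flux through a sphere of radius R: the integrand (g/R^2) R^2 sin(theta) = g sin(theta)
flux = sp.integrate(g*sp.sin(theta), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
print(flux)    # -> 4*pi*g
```

This flux is precisely the one appearing in Equation (9) below.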
Expressed in an equivalent way, the fluxes of the fields $\v B_\texttt{mon}$ and $\v B_\texttt{string}$ mutually cancel: $$\begin{aligned} \oint_S\v B_\texttt{mon}\cdot d\v a=&\int_V{\boldsymbol{\nabla}}\cdot\v B_\texttt{mon}\, d^3x= 4\pi g,\\ \oint_S\v B_\texttt{string}\cdot d\v a=&\int_V{\boldsymbol{\nabla}}\cdot\v B_\texttt{string}\,d^3x= -4\pi g,\end{aligned}$$ where $d\v a$ and $d^3x$ denote the differential elements of surface and volume, and the Gauss theorem has been used. As a particular application, let us consider the case in which the string lays along the negative $z$-axis and the magnetic monopole is at the origin. In this case $d\v l'=dz'\hat{\bf z}$, and the corresponding potential is $$\begin{aligned} \v A_L=g {\boldsymbol{\nabla}}\times \hat{\v z}\int\limits_{-\infty}^{0}\frac{ dz'}{|\v x-z'\hat{\v z}|}.\end{aligned}$$ In Appendix \[A\], we show that the curl of Equation (11) yields $$\begin{aligned} {\boldsymbol{\nabla}}\times \v A_L =\frac{g}{r^2}\hat{\v r} + 4\pi g\delta(x)\delta(y)\Theta(-z)\hat{\v z},\end{aligned}$$ where now $r=|\v x|, \hat{\v r}=\v x/r$, and $\Theta(z)$ is the step function which is undefined at $z=0$ but it is defined as $\Theta(z)\!=\!0$ if $z<\!0$ and $\Theta(z)\!=\!1$ if $z\!>\!0$. The highly singular character of the magnetic field of the string is clearly noted in the second term on the right of Equation (12). It is interesting to note that in his original paper [@2], Dirac wrote the following solution for the vector potential in spherical coordinates $\v A_L= (g/r)\tan (\theta/2)\hat{\phi}$ and noted that this potential gives the radial field $g\hat{\v r}/r^2$. He pointed out: “This solution is valid at all points except along the line $\theta = \pi$, where \[$\v A_L$\] become infinite.” The solution considered by Dirac is equivalent to $$\begin{aligned} \v A_L= g\frac{1-\cos\theta}{r\sin\theta}\hat{\phi}.\end{aligned}$$ This expression can be obtained by performing the integration specified in Equation (11), which requires the condition $\sin\theta\not=0$. This is shown in Appendix \[B\]. Clearly, the curl of Equation (13) subjected to $\sin\theta\not=0$ gives only the field of the magnetic monopole ${\boldsymbol{\nabla}}\times \v A_L= g\hat{\v r}/r^2=\v B_\texttt{mon}$. This is so because the singularity originated by $\sin\theta=0$ is avoided in the differentiation process. As far as the computation of the total magnetic field of the configuration formed by a string laying along the negative $z$-axis and a magnetic monopole at the origin is concerned, it is simpler to take the curl to the implicit form of the potential defined by Equation (11) rather than taking the curl of a regularised form of the potential in Equation (13) \[see Appendix \[D\]\]. ![ Representation of a magnetic monopole $g$ as the end of a line of dipoles or as the end of a tightly wound solenoid that stretches off to infinity. The solid angle $\Omega_C$ is subtended by the curve $C = L - L'$, which embeds the area $S.$ The potentials $\v A_L$ and $\v A_{L'}$ correspond to the strings $L$ and $L'.$ []{data-label="Fig3"}](Fig3.eps){width="230pt"} If an electric charge is interacting with the potential given in Equation (2), then it is ultimately interacting with a magnetic monopole and a magnetised string. Dirac argued that the interaction must only be with the magnetic monopole and therefore the charge $q$ should never “see” the singular field $\v B_\texttt{string}$ defined by Equation (6). 
For this reason he postulated that the wave function must vanish along the string. But this requirement is certainly criticisable because it would mean that the string does not exist at all. This postulate is known as the “Dirac veto" which in an alternative form states that any interaction of the electric charge with the string is forbidden. In Dirac’s own words [@30]: “You must have the monopoles and the electric charges occupying distinct regions of space. The strings, which come out from the monopoles, can be drawn anywhere subject to the condition that they must not pass through a region where there is electric charge present.” The next step of the argument is to show that Equation (4) does not depend on the location of the string. To show this statement, consider two different strings $L'$ and $L$ with their respective vector potentials $\v A_{L'}$ and $\v A_L$. Evidently, the equivalence of these potentials will imply the equivalence of their respective strings indicating that the location of the string is irrelevant. The difference of the potentials $\v A_{L'}$ and $\v A_L$ can be obtained from Equation (2) with the integration taken along the closed curve $C=L'-L$ around the area $S$ as shown in Figure \[Fig3\]. The result can be written as [@18] $$\begin{aligned} \v A_{L'}-\v A_L= g{\boldsymbol{\nabla}}\times \oint_C \frac{d\v l'}{|\v x-\v x'|}= {\boldsymbol{\nabla}}(g\Omega_C),\end{aligned}$$ where $\Omega_C$ is the solid angle function subtended by the curve $C$. The integral specified in Equation (14) is done in Appendix \[C\]. The fact that $\v A_{L'}$ and $\v A_{L}$ are connected by the gradient of a function reminds us of the gauge transformation $\v A'=\v A+ {\boldsymbol{\nabla}}\Lambda$, where $\Lambda$ is a gauge function. Without any loss of generality, we can then write $\v A'\equiv\v A_{L'}, \v A\equiv\v A_{L}$ and $\Lambda\equiv g\Omega_C$. Notice that $\v A_{L'}$ and $\v A_{L}$ are in the Coulomb gauge: ${\boldsymbol{\nabla}}\cdot\v A_{L'}=0$ and ${\boldsymbol{\nabla}}\cdot\v A_{L}=0$. However, this does not prevent these potentials from being connected by a further gauge transformation whenever the gauge function $\Lambda$ is restricted to satisfy ${\boldsymbol{\nabla}}^2\Lambda=0$. We can verify that this is indeed the case by taking the divergence to Equation (14) and obtaining ${\boldsymbol{\nabla}}^2\Lambda=0,$ indicating that the potentials $\v A_{L'}$ and $\v A_{L}$ are connected by a restricted gauge transformation. The remarkable point here is that different string positions correspond to different choices of gauge, or a change in string from $L$ to $L'$ is equivalent to a gauge transformation from $\v A_{L}$ to $\v A_{L'}$ with the gauge function $\Lambda=g\Omega_C$. With the identification $\Lambda=g\Omega_C$, the associated phase transformation of the wave function $\Psi'= {\rm e}^{iq \Lambda/(\hbar c)}\Psi$ takes the form $\Psi'= {\rm e}^{iqg\Omega_C /(\hbar c)}\Psi.$ Now a crucial point of the argument. The solid angle $\Omega_C$ undergoes a discontinuous variation of $4\pi$ as the observation point (or equivalently the charge $q$) crosses the surface $S$. This makes the gauge function $\Lambda=g\Omega_C$ multi-valued which implies that ${\rm e}^{iqg\Omega_C}$ is also multi-valued, i.e., ${\rm e}^{iqg\Omega_C}\!\not=\!{\rm e}^{iqg(\Omega_C+4\pi)}.$ Thus the transformed wave function of the charge $q$ will be multi-valued when $q$ crosses $S$, unless we impose the condition ${\rm e}^{i4\pi qg /(\hbar c)}\!=\!1\!$. 
But this condition, together with ${\rm e}^{i 2\pi n}=1$ for $n$ an integer, implies $4\pi qg/(\hbar c)\! =\!2\pi n$, and hence the Dirac quantisation condition $q g=n\hbar c/2$ is obtained. Accordingly, the field of the monopole in Equation (4) does not depend on the location of the string. The price we must pay is the imposition of the Dirac condition. The lesson to be learned here is that gauge invariance and single-valuedness of the wave function are the basic pieces needed to assemble the Dirac quantisation condition. The above derivation of the Dirac condition puts emphasis on the idea that the location of the string is irrelevant. But the argument might equally put emphasis on the idea that the string is unobservable. In fact, consider the value $\Omega_1$ corresponding to one side of the surface $S$ and the value $\Omega_2$ corresponding to the other side. They are related by $\Omega_1=\Omega_2+4\pi$. It follows that ${\rm e}^{iqg\Omega_1/(\hbar c)}={\rm e}^{iqg(\Omega_2 +4\pi)/(\hbar c)}.$ This means that the wave function of the charge $q$ differs by the factor ${\rm e}^{i4\pi qg/(\hbar c)}$, and this would make the Dirac string observable as the charge crosses the surface, unless we impose the condition ${\rm e}^{i4\pi qg/(\hbar c)}=1$, which is satisfied if $q g=n\hbar c/2$ holds, i.e., the price we must pay for the unobservability of the string is the imposition of the Dirac condition. The standard derivation of the Dirac quantisation condition explained in this section is appropriate for graduate students. In Sections \[4\]-\[9\] we will suggest a presentation of the Dirac condition that encapsulates the main ideas underlying this condition, which may be suitable for advanced undergraduate students. How to construct a suitable quantisation condition {#4} ================================================== The origin of the letter $n$ appearing in the Dirac quantisation condition $qg=n\hbar c/2$ can be traced to the trigonometric identity $\cos{(2\pi n)}=1,$ where $n=0,\,\pm1,\,\pm2,\,\pm3,\ldots$ This trigonometric identity can be expressed as $$\begin{aligned} {\rm e}^{i2\pi n}=1,\end{aligned}$$ which follows from Euler’s formula ${\rm e}^{i\alpha}=\cos\alpha +i\sin\alpha$ with $\alpha=2\pi n$. Consider now spherical coordinates $(r,\theta,\phi)$ with their corresponding unit vectors $(\hat {\bf r},\hat{\theta},\hat{\phi})$. For fixed $r$ and $\theta$, the azimuthal angles $\phi$ and $\phi +2\pi$ represent the same point. This property allows us to define a single-valued function of the azimuthal angle $F(\phi)$ as one that satisfies $F(\phi)=F(\phi +2\pi)$. We note that the particular function $F(\phi)=\phi$ is not a single-valued function because $F(\phi)=\phi$ and $F (\phi +2\pi)=\phi+2\pi$ take different values: $F(\phi)\not=F(\phi+2\pi)$. We then say that $F=\phi$ is a multi-valued function. The complex function $F(\phi)={\rm e}^{i2k\phi}$ with $k$ being an arbitrary constant is not generally a single-valued function because $F(\phi)={\rm e}^{i2k\phi}$ and $F (\phi+2\pi)={\rm e}^{i2k(\phi +2\pi)}$ can take different values: $F(\phi)\not=F(\phi +2\pi)$. This is so because in general ${\rm e}^{i4\pi k}\not=1$ for arbitrary $k$. In this case, however, we can impose a condition on the arbitrary constant $k$ so that $F={\rm e}^{i2k\phi}$ becomes a single-valued function.
By considering Equation (15), we can see that ${\rm e}^{i4\pi k}=1$ holds when $k$ is dimensionless and satisfies the “quantisation” condition: $$\begin{aligned} k=\frac{n}{2}, \quad n=0,\pm1,\pm2,\pm3,....\end{aligned}$$ Under this condition, $F={\rm e}^{i2k\phi}$ becomes a single-valued function: $F(\phi)=F(\phi +2\pi)$. In short: the single-valuedness of $F={\rm e}^{i2k\phi}$ requires the quantisation condition specified in Equation (16). Notice that a specific value of $k$ may be obtained in principle by considering the basic equations of a specific physical theory. We will see that electrodynamics with magnetic monopoles and quantum mechanics conspire to yield the specific value of $k$ that leads to the Dirac quantisation condition. Gauge invariance and the Dirac quantisation condition {#5} ===================================================== We will now present a [*heuristic quantum-mechanical derivation of the Dirac condition*]{}. The Schrödinger equation for a non-relativistic particle of mass $m$ and electric charge $q$ coupled to a time-independent vector potential $\v A(\v x)$ is given by $$\begin{aligned} i \hbar \frac{\partial \Psi}{\partial t}= \frac{1}{2m}\bigg(\!-i\hbar {\boldsymbol{\nabla}}- \frac{q}{c}\v A \bigg)^{\!2}\Psi.\end{aligned}$$ This equation is invariant under the simultaneous application of the gauge transformation of the potential $$\begin{aligned} \v A' = \v A+ {\boldsymbol{\nabla}}\Lambda,\end{aligned}$$ and the local phase transformation of the wave function $$\begin{aligned} \Psi'= {\rm e}^{iq \Lambda/(\hbar c)}\,\Psi,\end{aligned}$$ where $\Lambda(\v x)$ is a time-independent gauge function. Equations (17)-(19) are well known in textbooks (see note at the end of this review). At first glance, Equations (17)-(19) do not seem to be related to any quantisation condition. But a comparison of the previously discussed function ${\rm e}^{i 2k\phi}$ with the phase factor ${\rm e}^{iq\Lambda/(\hbar c)}$ appearing in Equation (19), $$\begin{aligned} {\rm e}^{i 2k\phi}\;\;\longleftrightarrow \;\;{\rm e}^{iq\Lambda/(\hbar c)},\end{aligned}$$ suggests the possibility of constructing a specific quantisation condition connected with Equations (17)-(19). Consider first that $k$ is an arbitrary constant. Therefore ${\rm e}^{i 2 k\phi}$ is not generally a single-valued function. We recall that the gauge function $\Lambda$ in the phase ${\rm e}^{iq\Lambda/(\hbar c)}$ of the transformation in Equation (19) is an arbitrary function which may be single-valued or multi-valued. In view of the arbitrariness of $k$ and $\Lambda$, we can set the two functions equal: ${\rm e}^{iq \Lambda/(\hbar c)}={\rm e}^{i2 k\phi}$, which implies $$\begin{aligned} \Lambda q=2 k \hbar c \phi.\end{aligned}$$ This is the key equation for finding a quantisation condition that leads to the electric charge quantisation. The genesis of this remarkable equation is the gauge invariance of the interaction between the charge $q$ and the potential $\v A$. By direct substitution we can show that a particular solution of Equation (21) is given by the relations $$\begin{aligned} k=\frac{qg}{\hbar c},\end{aligned}$$ and $$\begin{aligned} \Lambda= 2g \phi,\end{aligned}$$ where the constant $g$ is introduced here to make the constant $k$ dimensionless. Notice that $\Lambda$ in Equation (23) is a multi-valued gauge function. We require now that the phase ${\rm e}^{iq \Lambda/(\hbar c)}$ be single-valued.
From ${\rm e}^{iq \Lambda/(\hbar c)}={\rm e}^{i 2k\phi}$ it follows that ${\rm e}^{i 2k\phi}$ must be single-valued and then $k$ must satisfy the quantisation condition displayed in Equation (16). In other words, by demanding the single-valuedness of ${\rm e}^{iq \Lambda/(\hbar c)}$, Equations (16) and (22) yield the quantisation condition $$\begin{aligned} qg=\frac{n}{2}\hbar c.\end{aligned}$$ If now the constant $g$ is assumed to be the magnetic charge then Equation (24) is the Dirac quantisation condition. Notice that according to the heuristic approach followed here, the derivation of Equation (24) relies on the existence of the gauge function $\Lambda= 2g \phi$. In the following section we will discuss the feasibility of this specific gauge function and argue the identification of $g$ with the magnetic charge. For now we observe that the heuristic approach uses the same two fundamental pieces discussed in Section \[3\], namely, the single-valuedness of the wave function and gauge invariance. However, the heuristic approach makes use of these two pieces in a simpler way. The gauge function $\Lambda= 2g \phi$ {#6} ===================================== It is convenient to assume first the existence of the gauge function $\Lambda= 2g \phi$ with the purpose of elucidating its associated gauge potentials. The gradient of $\Lambda= 2g \phi$ in spherical coordinates gives $$\begin{aligned} {\boldsymbol{\nabla}}\Lambda=\frac{2g}{r\sin{\theta}}\hat{\phi}.\end{aligned}$$ Notice that this gradient is singular at $r=0$. This is a real singular point which is not problematic and we agree it is allowed. However, this gradient is also singular at those values of the polar coordinate $\theta$ satisfying $\sin{\theta}=0$, which represent lines of singularities involving non-trivial consequences, which will be discussed in Section \[7\]. Presumably, there exist two vector potentials such that $$\begin{aligned} \v A'-\v A=\frac{2g}{r\sin{\theta}}\hat{\phi}.\end{aligned}$$ Both potentials $\v A'$ and $\v A$ must originate the same magnetic field $\v B$, i.e., ${\boldsymbol{\nabla}}\times\v A'={\boldsymbol{\nabla}}\times\v A=\v B$. From Equation (26) we can see that $\v A'$ and $\v A$ may be of the generic form $$\begin{aligned} \v A'= g\frac{1-f(\theta)}{r\sin\theta}\hat{\phi}, \quad \v A= -g\frac{1+f(\theta)}{r\sin\theta}\hat{\phi},\end{aligned}$$ where $f(\theta)$ is an unspecified function such that it does not change the validity of Equation (26). Notice that $\v A'$ and $\v A$ have singularities originated by $\sin \theta\!=\!0$. These will not be considered for now. We observe that $\v A'$ and $\v A$ in Equation (27) are of the form $\v A'=[0,0,A'_\phi(r,\theta)]=A'_\phi(r,\theta)\hat{\phi}$ and $\v A=[0,0,A_\phi(r,\theta)]=A_\phi(r,\theta)\hat{\phi}$. 
The curl of a generic vector of the form $\bfF=\bfF[0,0,F_\phi(r,\theta)]$ in spherical coordinates reads $$\begin{aligned} {\boldsymbol{\nabla}}\times \bfF=\frac{1}{r\sin{\theta}}\frac{\partial}{\partial \theta}\big(\sin{\theta}F_\phi\big)\hat{\v r}-\frac{1}{r}\frac{\partial}{\partial r}\big(r F_\phi\big)\hat{\theta}.\end{aligned}$$ When this definition is applied to $\v A'$ and $\v A$ and $\sin \theta\not=0$ is assumed we obtain $$\begin{aligned} {\boldsymbol{\nabla}}\times \v A'={\boldsymbol{\nabla}}\times \v A=-\frac{g}{r^2\sin{\theta}}\frac{\partial f}{\partial \theta}\hat{\v r},\end{aligned}$$ and therefore both potentials yield the same field $$\begin{aligned} \v B=-\frac{g}{r^2\sin{\theta}}\frac{\partial f}{\partial \theta}\hat{\v r}.\end{aligned}$$ In the particular case $f(\theta)=\cos\theta,$ this field becomes $$\begin{aligned} \v B=\frac{g}{r^2}\hat{\v r}.\end{aligned}$$ The nature of the constant $g$ is then revealed in this particular case. Equation (31) is the magnetic field produced by a magnetic charge $g$ located at the origin. In other words, the constant $g$ introduced by hand in Equations (22) and (23) is naturally identified with the magnetic monopole! The potentials $\v A'$ and $\v A$ in Equation (27) are in the Coulomb gauge. In fact, using the definition of the divergence of the generic vector $\bfF=\bfF[0,0,F_\phi(r,\theta)]$ in spherical coordinates ${\boldsymbol{\nabla}}\cdot \bfF=[1/(r\sin\theta)]\partial F_\phi/\partial \phi$, it follows that ${\boldsymbol{\nabla}}\cdot \v A'=0$ and ${\boldsymbol{\nabla}}\cdot \v A=0$. Here, there is a point that needs to be clarified. At first glance, there seems to be some inconsistency when connecting $\v A'$ and $\v A$ via a gauge transformation because both potentials are already in a specific gauge, namely, the Coulomb gauge. However, there is no inconsistency as explained in Section \[3\], because even for potentials satisfying the Coulomb gauge there is arbitrariness. Evidently, the restricted gauge transformation $\v A\to \v A'=\v A+{\boldsymbol{\nabla}}\Lambda$, where ${\boldsymbol{\nabla}}^2\Lambda=0$, preserves the Coulomb gauge. The definition of the Laplacian of the generic scalar function $f=f(\phi)$ in spherical coordinates reads ${\boldsymbol{\nabla}}^2 f=[1/(r\sin\theta)^2]\partial^2f/\partial \phi^2$. Using this definition with $f=\Lambda= 2g \phi,$ it follows that ${\boldsymbol{\nabla}}^2\Lambda=0,$ indicating that $\v A'$ and $\v A$ are connected by a restricted gauge transformation. Let us recapitulate. By assuming the existence of the gauge function $\Lambda=2 g\phi$, we have inferred the potentials $$\begin{aligned} \v A'= g\frac{1-\cos\theta}{r\sin\theta}\hat{\phi}, \quad \v A= -g\frac{1+\cos\theta}{r\sin\theta}\hat{\phi}.\end{aligned}$$ \[these are $\v A'$ and $\v A$ in Equation (27) with $f(\theta)=\cos\theta$\], which originate the same field given in Equation (31) whenever the condition $\sin \theta\not=0$ is assumed. This field is the Coulombian field due to a magnetic monopole $g$. With the identification of $g$ as the magnetic monopole, we can say that Equation (24) is the Dirac quantisation condition. Evidently, we can reverse the argument by introducing first the potentials $\v A'$ and $\v A$ by means of Equation (32) considering $\sin \theta\not=0$ and then proving they yield the same magnetic field in Equation (31). The existence of these potentials guarantees the existence of the gauge function $\Lambda=2 g\phi$.
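The chain of steps leading from Equation (28) to Equation (32) is easy to verify with computer algebra. The sketch below (a minimal sympy check of our own; the helper function is not part of the text) confirms that both potentials of Equation (32) have the same curl $g\hat{\v r}/r^2$ of Equation (31) wherever $\sin\theta\not=0$, and that the gauge function $\Lambda=2g\phi$ is harmonic, as required for a restricted gauge transformation.

```python
# Minimal sympy check of Equations (28)-(32): for sin(theta) != 0 the two
# potentials of Equation (32) have the same curl, the monopole field g/r^2
# in the radial direction, and Lambda = 2*g*phi satisfies Laplace's equation.
import sympy as sp

r, theta, phi, g = sp.symbols('r theta phi g', positive=True)

def curl_phi_only(F_phi):
    """Curl of F = F_phi(r, theta) phi_hat in spherical coordinates, Equation (28)."""
    F_r = sp.simplify(sp.diff(sp.sin(theta)*F_phi, theta)/(r*sp.sin(theta)))
    F_theta = sp.simplify(-sp.diff(r*F_phi, r)/r)
    return F_r, F_theta

A_prime = g*(1 - sp.cos(theta))/(r*sp.sin(theta))    # A' of Equation (32)
A       = -g*(1 + sp.cos(theta))/(r*sp.sin(theta))   # A  of Equation (32)

print(curl_phi_only(A_prime))   # -> (g/r**2, 0)
print(curl_phi_only(A))         # -> (g/r**2, 0)

# Laplacian of Lambda = 2*g*phi: only the phi-derivative term survives
Lambda_ = 2*g*phi
print(sp.simplify(sp.diff(Lambda_, phi, 2)/(r*sp.sin(theta))**2))   # -> 0
```

These few lines reproduce Equation (31) for both potentials and confirm that the transformation connecting them is indeed a restricted gauge transformation.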
Once the existence of the gauge function $\Lambda=2 g\phi$ has been justified with $g$ being the magnetic monopole, the heuristic derivation of the Dirac quantisation condition has been completed. However, we should note that this heuristic procedure involves an aspect that could be interpreted as an inconsistency. According to the traditional interpretation, the existence of magnetic monopoles implies ${\boldsymbol{\nabla}}\cdot\v B\not=0$ and therefore we cannot write $\v B={\boldsymbol{\nabla}}\times \v A$, at least not globally. This is so because ${\boldsymbol{\nabla}}\cdot({\boldsymbol{\nabla}}\times\v A)=0$. The origin of this apparent inconsistency deals with the singularity originated by the value $\sin \theta=0$ and its explanation will take us to one of the most interesting concepts in theoretical physics, the Dirac string, which will be discussed in the following section. Dirac strings {#7} ============= As previously pointed out, both potentials in Equation (32) yield the same magnetic field given in Equation (31) whenever $\sin \theta\not =0$ is assumed. The question naturally arises: What does $\sin \theta =0$ mean? The answer is simple: $\theta=0$ and $\theta=\pi$. The first value represents the positive semi-axis $z$, i.e., $z>0$, whereas the second value represents the negative semi-axis $z$, i.e., $z<0$. Therefore, the condition $\sin \theta\not=0$ means that the semi-axes $z>0$ and $z<0$ have been excluded in the heuristic treatment. Accordingly, when we took the curl to $\v A'$ and $\v A$, we obtained the magnetic field $\v B=g\hat{\v r}/r^2$ in all space except at $r=0$ (which we agree it is allowed) and except along the negative semi-axis in the case of $\v A'$, and also except along the positive semi-axis in the case of $\v A$. Expectably, if we additionally consider the field contributions associated to the Dirac strings located in the positive and negative semi-axes then we can reasonably assume the following equations: $$\begin{aligned} {\boldsymbol{\nabla}}\times\v A'= &\;\frac{g}{r^2}\hat{\v r} + \v B'({\rm along}\;z<0),\\ {\boldsymbol{\nabla}}\times\v A = &\;\frac{g}{r^2}\hat{\v r} + \v B({\rm along}\;z>0).\end{aligned}$$ Here $\v B'(z<0)$ and $\v B(z>0)$ represent magnetostatic fields produced by Dirac strings. The formal determination of these fields is not an easy task because they are highly singular objects. But, fortunately, heuristic considerations allow us to elucidate the explicit form of these fields. We note that the semi-axis $z<0$ can be represented by the singular function $-\delta(x)\delta(y)\Theta(-z)\hat{\v z}$ and the semi-axis $z>0$ by the singular function $\delta(x)\delta(y)\Theta(z)\hat{\v z}$. Therefore, the fields $\v B'(z<0)$ and $\v B(z>0)$ may be appropriately modelled by the singular functions $$\begin{aligned} \v B'(z<0)=& -K\delta(x)\delta(y)\Theta(-z)\hat{\v z},\\ \v B(z>0)=&\; K\delta(x)\delta(y)\Theta(z)\hat{\v z},\end{aligned}$$ where $K$ is a constant to be determined. Using Equations (33)-(36), we obtain $$\begin{aligned} {\boldsymbol{\nabla}}\times\v A'= &\;\frac{g}{r^2}\hat{\v r} - K\delta(x)\delta(y)\Theta(-z)\hat{\v z},\\ {\boldsymbol{\nabla}}\times\v A = &\;\frac{g}{r^2}\hat{\v r} + K\delta(x)\delta(y)\Theta(z)\hat{\v z}.\end{aligned}$$ The divergence of Equation (37) gives $$\begin{aligned} 0= 4\pi g\delta(\v x)+K\delta(\v x),\end{aligned}$$ where ${\boldsymbol{\nabla}}\cdot (\hat{\v r}/r^2)\!=\! 
4\pi\delta(\v x)$ with $\delta(\v x)=\delta(x)\delta(y)\delta(z)$ and $\partial\Theta(-z)/\partial z\!=\!-\delta(z)$ have been used. A similar calculation on Equation (38) gives Equation (39) again. From Equation (39), it follows that $K=-4\pi g$ and thus we get the final expressions $$\begin{aligned} {\boldsymbol{\nabla}}\times\v A'= &\;\frac{g}{r^2}\hat{\v r} + 4\pi g\delta(x)\delta(y)\Theta(-z)\hat{\v z},\\ {\boldsymbol{\nabla}}\times\v A = &\;\frac{g}{r^2}\hat{\v r} - 4\pi g\delta(x)\delta(y)\Theta(z)\hat{\v z}.\end{aligned}$$ We should emphasise that simple heuristic arguments have been used to infer Equations (40) and (41). We also note that Equation (40) is the same as Equation (12), which was in turn derived by the more complicated approach outlined in Section \[3\]. The advantage of the heuristic argument is that it has nothing to do with the idea of modelling a magnetic monopole either as the end of an infinite line of infinitesimal magnetic dipoles or as the end of a tightly wound solenoid that stretches off to infinity. Equation (40) is also formally derived in Appendix \[A\] by means of an integration process. Furthermore, Equation (40) can alternatively be obtained by differentiation, which is done in Appendix \[D\], where an appropriate regularisation of the potential $\v A'$ is required. Expressed differently, the potentials $\v A'$ and $\v A$ appearing in Equations (40) and (41) produce respectively the fields $\v B'_\texttt{ms}={\boldsymbol{\nabla}}\times\v A'$ and $\v B_\texttt{ms}={\boldsymbol{\nabla}}\times\v A$, and so we can write $$\begin{aligned} \v B'_\texttt{ms}= &\;\v B_\texttt{mon} +\v B'_\texttt{string},\\ \v B_\texttt{ms}= &\;\v B_\texttt{mon} +\v B_\texttt{string},\end{aligned}$$ where the respective magnetic fields are defined as $$\begin{aligned} \v B_\texttt{mon}=&\;\frac{g}{r^2}\hat{\v r},\\ \v B'_\texttt{string}=&\;4\pi g\delta(x)\delta(y)\Theta(-z)\hat{\v z},\\ \v B_\texttt{string}=&- 4\pi g\delta(x)\delta(y)\Theta(z)\hat{\v z}.\end{aligned}$$ Figures \[Fig4\] and \[Fig5\] show a pictorial representation of the fields appearing in Equations (42) and (43). It is conceptually important to identify the sources of the fields described by Equations (42) and (43). The magnetic field $\v B_\texttt{mon}$ in Equation (44) satisfies $$\begin{aligned} {\boldsymbol{\nabla}}\cdot\v B_\texttt{mon}&=4\pi g\delta(\v x),\\ {\boldsymbol{\nabla}}\times\v B_\texttt{mon}&=0,\end{aligned}$$ The magnetic field $\v B'_\texttt{string}$ in Equation (45) satisfies $$\begin{aligned} {\boldsymbol{\nabla}}\cdot\v B'_\texttt{string}=& -4\pi g\delta(\v x), \\ {\boldsymbol{\nabla}}\times\v B'_\texttt{string}=&\,4\pi g\Theta(-z)\big[\delta(x)\delta'(y) \hat{\v x}-\delta'(x)\delta(y)\hat{\v y}\big],\end{aligned}$$ where $\delta'(x)= d\delta(x)/dx$ and $\delta'(y)= d\delta(y)/dy$ are delta function derivatives. 
The field $\v B_\texttt{string}$ in Equation (46) is shown to satisfy $$\begin{aligned} {\boldsymbol{\nabla}}\cdot\v B_\texttt{string}&=-4\pi g\delta(\v x),\\ {\boldsymbol{\nabla}}\times\v B_\texttt{string}&=-4\pi g\Theta(z)\big[\delta(x)\delta'(y) \hat{\v x}-\delta'(x)\delta(y)\hat{\v y}\big].\end{aligned}$$ Therefore, the field $\v B'_\texttt{ms}$ defined by Equation (42) satisfies $$\begin{aligned} {\boldsymbol{\nabla}}\cdot \v B'_\texttt{ms}&= 0,\end{aligned}$$ $$\begin{aligned} {\boldsymbol{\nabla}}\times\v B'_\texttt{ms}&= 4\pi g\Theta(-z)\big[\delta(x)\delta'(y) \hat{\v x}-\delta'(x)\delta(y)\hat{\v y}\big],\end{aligned}$$ and the field $\v B_\texttt{ms}$ defined by Equation (43) satisfies $$\begin{aligned} {\boldsymbol{\nabla}}\cdot \v B_\texttt{ms}&= 0,\\ {\boldsymbol{\nabla}}\times\v B_\texttt{ms}&= -4\pi g\Theta(z)\big[\delta(x)\delta'(y) \hat{\v x}-\delta'(x)\delta(y)\hat{\v y}\big].\end{aligned}$$ Let us return to the Schr$\ddot{\rm o}$dinger equation defined by Equation (17). According to this equation, the electric charge $q$ interacts with the potential $\v A$. From the gauge function $\Lambda= 2g \phi,$ we inferred the potentials $\v A'$ and $\v A$ given in Equation (32). The curl of each of these potentials originates the field of the magnetic monopole plus the field of the respective string as may be seen in Equations (40) and (41). If any of these potentials is considered in Equation (17), then a question naturally arises: Does the electric charge interact only with the monopole or with the monopole and a Dirac string? In other words: Can the electric charge physically interact with a Dirac string? The answer is not as simple as might appear at first sight. The Dirac string is a subtle object whose physical nature has originated controversy and debate. Typically, the magnetic field of the Dirac string is discussed together with the Coulombian field of the magnetic monopole. But since we have identified the sources of the magnetic field of the string \[those given on the right of Equations (49) and (50) or also on the right of Equations (51) and (52)\], we can study the magnetic field of the Dirac string with no reference to the Coulombian field. In the following section, we will discuss the interaction of an electric charge with a Dirac string from classical and quantum-mechanical viewpoints. Classical interaction between the electric charge and the Dirac string {#8} ====================================================================== In order to understand the possible meaning of the Dirac string, we should first study the sources of the magnetostatic field produced by this string. Let us assume that the string lies along the negative $z$-axis. From Equations (49) and (50), we can see that this string has the associated charge and current densities: $$\begin{aligned} \rho_\texttt{string}&=-g\delta(\v x),\\ \v J_\texttt{string}&=cg\Theta(-z)\big[\delta(x)\delta'(y) \hat{\v x}-\delta'(x)\delta(y)\hat{\v y}\big],\end{aligned}$$ which generate the magnetic field $$\begin{aligned} \v B'_\texttt{string}= 4\pi g\delta(x)\delta(y)\Theta(-z)\hat{\v z}.\end{aligned}$$ A regularised vector potential in cylindrical coordinates for the field $\v B'_\texttt{string}$ reads $$\begin{aligned} \v A_\texttt{string}=\frac{2g\Theta(\rho-\varepsilon)\Theta(-z)}{\rho}\hat{\phi},\end{aligned}$$ where $\varepsilon >0$ is an infinitesimal quantity. 
Notice that the potential $\v A_\texttt{string}$ for $\rho>\varepsilon$ and $z<0$ is a pure gauge potential, i.e., it can be expressed as the gradient of a scalar field. To show that $\v A_\texttt{string}$ generates $\v B'_\texttt{string}$, consider the curl of the generic vector $\bfF= \bfF[0, F_\phi(\rho,z),0]$ in cylindrical coordinates $$\begin{aligned} {\boldsymbol{\nabla}}\times \bfF=-\frac{\partial F_\phi}{\partial z} \hat{\rho} + \frac{1}{\rho}\frac{\partial}{\partial \rho}\big(\rho F_\phi\big)\hat{\v z}.\end{aligned}$$ When this definition is applied to the potential $\v A_\texttt{string}$ defined by Equation (60), we obtain $$\begin{aligned} {\boldsymbol{\nabla}}\times\v A_\texttt{string}=&\, \frac{2g\Theta(\rho-\varepsilon)\delta(z)}{\rho}\hat{\rho} + \frac{2g\delta(\rho-\varepsilon)\Theta(-z)}{\rho}\hat{\v z}.\end{aligned}$$ Since we are only considering $z\!<\!0$ the first term vanishes and then $$\begin{aligned} {\boldsymbol{\nabla}}\times\! \v A_\texttt{string}&=\frac{2g\delta(\rho-\varepsilon)\Theta(-z)}{\rho}\hat{\v z}\nonumber\\ &= 4\pi g\delta(x)\delta(y)\Theta(-z)\hat{\v z}\nonumber\\ &=\v B'_\texttt{string},\end{aligned}$$ where we have used the formula [@58]: $$\begin{aligned} \delta(x)\delta(y)=\frac{\delta(\rho-\varepsilon)}{2\pi\rho},\end{aligned}$$ in which the limit $\varepsilon\to 0$ is understood. Having all the classical ingredients on the table, we will now proceed to interpret them from both mathematical and physical points of view. These ingredients are highly singular and therefore such interpretations are full of subtleties. Assuming the existence of magnetic monopoles, the classical interaction between a moving electric charge $q$ and the magnetic field $\v B'_\texttt{string}$ is given by the Lorentz force $\v F = q(\bfv/c)\times \v B'_\texttt{string}$. Expressing the velocity $\bfv$ of the charge in cylindrical coordinates $\bfv=(v_\rho, v_\phi,v_z)$ and using the regularised form of $\v B'_\texttt{string}={\boldsymbol{\nabla}}\times \v A_\texttt{string}$ defined in the first line of Equation (63), this force reads $$\begin{aligned} \v F= -\frac{2qg\Theta(-z)}{c}\frac{\delta(\rho-\varepsilon)}{\rho}\big[v_\phi \hat{\rho}-v_\rho\hat{\phi}\big].\end{aligned}$$ The singular character of this force becomes evident. If $\rho \neq\varepsilon$, this force vanishes and the charge $q$ is insensitive to the string. If the charge $q$ gets very close to the string, then $\rho\to \varepsilon$, which implies $\rho\to 0$ because $\varepsilon\to 0$. In this case, we have $$\begin{aligned} \lim_{\rho\to 0} \frac{\delta(\rho-\varepsilon)}{\rho}=0,\end{aligned}$$ and again the force in Equation (65) vanishes, indicating that the charge $q$ is also unaffected by the string in this extreme case. However, from a mathematical point of view, when $\rho=\varepsilon$ the force in Equation (65) becomes infinite $(\infty/0=\infty),$ which is physically unacceptable. Two conclusions can then be drawn. On the one hand, if the electric charge $q$ is outside the string, then $q$ does not feel the action of the magnetic field of the string. This is true even when the charge $q$ is very close to the string. On the other hand, if $\rho=\varepsilon,$ then the charge $q$ feels an infinite force due to the magnetic field of the string. The idea of an infinite force leads us to conclude that the Dirac string lacks any physical meaning. Thus the common statement that the Dirac string cannot be detected is meaningful on purely classical grounds.
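The regularised potential of Equation (60) and the curl formula of Equation (61) can also be checked symbolically. The short sketch below (again a sympy check of our own, not part of the original argument) reproduces the two singular terms of Equation (62) and shows that the circulation of $\v A_\texttt{string}$ around a circle of radius $\rho>\varepsilon$ at $z<0$ equals $4\pi g$, which is precisely the flux carried by $\v B'_\texttt{string}$ through the enclosed disc.

```python
# Minimal sympy check related to Equations (60)-(63): the circulation of the
# regularised string potential around a circle of radius rho > epsilon at z < 0
# equals 4*pi*g (the flux of B'_string through the enclosed disc), and the
# cylindrical curl of Equation (61) reproduces the delta terms of Equation (62).
import sympy as sp

rho, g, eps = sp.symbols('rho g epsilon', positive=True)
z, phi = sp.symbols('z phi', real=True)

A_phi = 2*g*sp.Heaviside(rho - eps)*sp.Heaviside(-z)/rho     # Equation (60)

# Circulation along a circle of radius rho at fixed z (line element rho*dphi):
print(sp.integrate(A_phi*rho, (phi, 0, 2*sp.pi)))
# -> 4*pi*g*Heaviside(rho - epsilon)*Heaviside(-z), i.e. 4*pi*g for rho > eps and z < 0

# Cylindrical curl of F = F_phi(rho, z) phi_hat, Equation (61):
curl_rho = -sp.diff(A_phi, z)                 # first  term of Equation (62)
curl_z   = sp.diff(rho*A_phi, rho)/rho        # second term of Equation (62)
print(sp.simplify(curl_rho))   # first term of Eq. (62): (2g/rho)*Theta(rho - eps)*delta(z)
print(sp.simplify(curl_z))     # second term of Eq. (62): (2g/rho)*Theta(-z)*delta(rho - eps)
```

For $\rho>\varepsilon$ and $z<0$ both delta terms vanish, in agreement with the conclusion above that the charge $q$ is insensitive to the string everywhere outside it.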
The interpretation of the potential in Equation (60) is also somewhat subtle. There is no problem when $\rho>\varepsilon$ because in this case $\v A_\texttt{string}=2g\Theta(-z)\hat{\phi}/\rho$ exhibits a regular behaviour, which is drawn in Figure \[Fig6\]. There is also no problem when $\rho<\varepsilon$ because in this case $\v A_\texttt{string}=0$. When $\rho\to\varepsilon,$ it follows that $\rho\to 0$ because $\varepsilon\to 0$. In this case $$\begin{aligned} \lim_{\rho\to 0} \frac{\Theta(\rho-\varepsilon)}{\rho}=0,\end{aligned}$$ and again $\v A_\texttt{string}$ vanishes. The problematic issue arises when $\rho=\varepsilon$ because in this case $\v A_\texttt{string}$ becomes undefined. Quantum-mechanical interaction between the electric charge and the Dirac string {#9} =============================================================================== The [*second quantum-mechanical derivation of the Dirac condition*]{} will now be reviewed. We have argued that the classical interaction of an electric charge with the Dirac string is not physically admissible. Now we will consider the possibility of a quantum-mechanical interaction between the electric charge and the string. Dirac [@2] noted that the interaction of an electric charge with a vector potential is given by the phase in the wave function $$\begin{aligned} \Psi={\rm e}^{i[q/(\hbar c)]\int_0^{\v x}\v A(\v x')\cdot\, d\v l'}\Psi_0,\end{aligned}$$ where $\Psi_0$ is the solution of the free Schrödinger equation and the line integral is taken along a path of the electric charge from the origin to the point $\v x$.
Integration of the left-hand side of Equation (70) with the potential defined by Equation (60) gives $$\begin{aligned} \frac{q}{\hbar c}\oint_C \v A_\texttt{string}\cdot\rho \,d\phi\, \hat{\phi}&=\frac{2qg}{\hbar c}\Theta(\rho-\varepsilon)\Theta(-z)\int\limits_0^{2\pi}d \phi\nonumber\\ &=\frac{4\pi qg}{\hbar c}\Theta(\rho-\varepsilon)\Theta(-z)\nonumber\\ &=\frac{4\pi qg}{\hbar c},\end{aligned}$$ for $\rho>\varepsilon$ and $z<0$. From Equations (70) and (71), we directly obtain the Dirac quantisation condition $qg=n\hbar c/2$. We then conclude that, from quantum-mechanical considerations, the unobservability of the string (already well argued classically) implies the Dirac condition. The argument can be reversed. If we start by imposing the Dirac condition, then the Dirac string turns out to be undetectable. The previous treatment of the Dirac string may be seen as a discussion complementary to the heuristic approach to the Dirac condition. In the following section, we will review some of the well-known derivations of the Dirac quantisation condition. Aharonov–Bohm effect and the Dirac quantisation condition {#10} ========================================================= We will now review the [*third quantum-mechanical derivation of the Dirac condition*]{}. According to the Aharonov–Bohm (AB) effect [@40], particles can be affected by a vector potential even in regions where the magnetic field vanishes. We observe that this effect and the derivation of the Dirac quantisation condition require similar objects: a long solenoid for the AB effect and a semi-infinite string for the Dirac condition. Therefore, we may think of the Dirac string as the AB solenoid and investigate whether the undetectability of the Dirac string can be demonstrated via a hypothetical AB interference experiment [@4; @8; @9; @10; @11; @12; @14; @15; @17; @22; @23; @41; @42]. Let us imagine a double-slit AB experiment with a Dirac string inserted between the slits as shown in Figure \[Fig7\]. Electric charges are emitted by a source at point A, pass through two slits 1 and 2 of the screen located at point B, and finally are detected at point C. The wave function in a region of zero vector potential is simply $\Psi=\Psi_1\!+ \!\Psi_2 $ where $\Psi_1$ and $\Psi_2$ are the wave functions of the charges passing through the slits 1 and 2. In the absence of the string, the wave functions of the charges combine coherently in such a way that the probability density at C reads $P=|\Psi_1+ \Psi_2|^2.$ Since the Dirac string is inserted between the two slits, it is clear that each of the wave functions $\Psi_1$ and $\Psi_2$ picks up a phase due to the string potential $\v A_\texttt{string}\equiv\v A_s$. 
Thus the wave function of the charges is now given by $$\begin{aligned} \Psi =&\,{\rm e} ^{(iq/\hbar c)\int_{1} \v A_s \cdot \rho\, d\phi\, \hat{\phi}}\,\Psi_1 + {\rm e}^{(iq/\hbar c)\int_{2} \v A_s \cdot \rho \,d\phi\, \hat{\phi}}\,\Psi_2 \nonumber\\=& \, \bigg(\Psi_1 + {\rm e}^{(iq/\hbar c)\oint_{C} \v A_s \cdot \rho\, d\phi\, \hat{\phi}}\,\Psi_2\bigg) \,{\rm e}^{(iq/\hbar c)\int_{1} \v A_s \cdot \rho\, d\phi\, \hat{\phi}}\nonumber\\ =& \, \bigg(\Psi_1 + {\rm e}^{i 4\pi q g/(\hbar c)}\,\Psi_2\bigg) \,{\rm e}^{(iq/\hbar c)\int_{1} \v A_s \cdot \rho\, d\phi\, \hat{\phi}},\end{aligned}$$ where we have used the expression for $\v A_\texttt{string}$ given in Equation (60) and written as $$\begin{aligned} \oint_{C} \v A_s \cdot \rho\, d\phi\, \hat{\phi}=\int\limits_{2} \v A_s \cdot \rho\, d\phi\, \hat{\phi}- \int\limits_{1} \v A_s \cdot \rho\, d\phi\, \hat{\phi}.\end{aligned}$$ It follows now that the probability density at C reads $$\begin{aligned} P= |\Psi_1 + {\rm e}^{i4\pi q g/(\hbar c)}\,\Psi_2|^2.\end{aligned}$$ The effect of the Dirac string would be unobservable if ${\rm e}^{i4\pi q g/(\hbar c)}=1$ and this implies the Dirac quantisation condition $qg=n\hbar c/2.$ Under this condition, the probability density becomes $P= |\Psi_1 + \Psi_2|^2$, meaning that no change in the interference pattern would be observed due to the Dirac string. In short: demanding that the Dirac string be undetectable implies the Dirac quantisation condition. The argument can be reversed: if the Dirac quantisation condition holds, then the Dirac string is unobservable. Feynman’s path integral approach and the Dirac quantisation condition {#11} ===================================================================== We will now discuss the [*fourth quantum-mechanical derivation of the Dirac condition*]{}. The path-integral approach to quantum mechanics, suggested by Dirac in 1933 [@43], formally started by Feynman in his 1942 Ph.D. thesis [@44] and fully discussed by him in 1948 [@45], provides an elegant procedure to obtain the Dirac condition, which is, to a certain extent, similar to the one based on the Aharonov–Bohm effect. Let us first briefly discuss the essence of the path-integral approach. Question [@59]: If a particle is at an initial position A, what is the probability that it will be at another position B at a later time? Schrödinger’s wave function tells us the probability of finding a particle at a certain point at a given time, but it does not tell us the transition probability for the particle to go from one point to another at a later time. We need to introduce a quantity that generalises the concept of wave function to include transition probabilities. According to Feynman, this concept is the “transition probability amplitude" (or amplitude for short), which relates the state of the system at the initial position and time $\ket{\Psi(\v x_\text{i}, t_\text{i})}$ to its state at the final position and time $\ket{\Psi(\v x_\text{f}, t_\text{f})}$, and is given by the inner product $K=\braket{\Psi(\v x_\text{f}, t_\text{f})|\Psi(\v x_\text{i}, t_\text{i})},$ where we have used Dirac’s “bra-ket" notation. It follows that the transition probability (or probability for short) is defined as $P=|K|^2.$ Dirac [@43] suggested that the amplitude for a given path is proportional to the exponential of the classical action associated to the path, ${\rm e}^{(i/\hbar) {\cal S}(\v x)},$ where ${\cal S}(\v x)= \int L(\v x, \dot{\v x})\,dt$ is the classical action, with $L$ being the Lagrangian. 
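Before developing the sum over paths, it may help to see the two-slit probability obtained above, $P=|\Psi_1+{\rm e}^{i4\pi qg/(\hbar c)}\Psi_2|^2$, at work numerically. The following Python sketch uses toy slit amplitudes and Gaussian-unit constants (both illustrative assumptions) to confirm that the fringe pattern is unchanged when $qg=n\hbar c/2$ and shifted otherwise.

```python
import numpy as np

hbar, c = 1.054571817e-27, 2.99792458e10     # Gaussian-unit constants (illustrative values)
q = 4.80320471e-10                           # electron charge in statcoulomb (example choice)

# Toy two-slit amplitudes at screen positions xs; this simple model of psi1 and psi2
# is an assumption of the sketch, not part of the text.
xs = np.linspace(-5.0, 5.0, 1001)
k, d = 20.0, 0.5
psi1 = np.exp(1j * k * np.sqrt(1 + (xs - d)**2))
psi2 = np.exp(1j * k * np.sqrt(1 + (xs + d)**2))

def probability(g):
    delta = 4 * np.pi * q * g / (hbar * c)   # phase picked up by psi2 due to the string
    return np.abs(psi1 + np.exp(1j * delta) * psi2)**2

P_free = np.abs(psi1 + psi2)**2
g_dirac = 3 * hbar * c / (2 * q)             # satisfies qg = n*hbar*c/2 with n = 3
g_other = 0.37 * hbar * c / q                # violates the Dirac condition

print(np.max(np.abs(probability(g_dirac) - P_free)))   # ~ 0 (rounding level): pattern unchanged
print(np.max(np.abs(probability(g_other) - P_free)))   # order one: fringes shifted
```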
But a particle can take any possible path from the initial to the final point (there is no reason for the particle to take the shortest path). Therefore, to compute the amplitude, Feynman proposed summing over the infinitely many paths that the particle can take. More specifically, the transition probability amplitude $K$ for a charged particle to propagate from an initial point A to a final point B is given by the integral over all possible paths $$\begin{aligned} K=\int \!\mathcal{D(\v x)}\,{\rm e}^{(i/\hbar) {\cal S}(\v x)},\end{aligned}$$ where $\int\mathcal{D(\v x)}$ is a shorthand indicating a product of integrals performed over all paths $\v x(t)$ leading from $\text{A}$ to $\text{B},$ and ${\cal S}$ is the classical action associated to each path. For example, consider two generic paths $\gamma_1$ and $\gamma_2$, each of which starts at $\text{A}$ and ends at $\text{B}.$ The amplitude is $$\begin{aligned} K=K_1+K_2=\int\limits_{\gamma_1} \!\mathcal{D(\v x)}\, {\rm e}^{(i/\hbar) {\cal S}^{(1)}(\v x)} + \int\limits_{\gamma_2}\!\mathcal{D(\v x)}\,{\rm e}^{(i/\hbar) {\cal S}^{(2)}(\v x)},\end{aligned}$$ where $K_1$ is the amplitude associated to the integration over all paths through $\gamma_1$ and $K_2$ is the amplitude associated to the integration over all paths through $\gamma_2.$ Consider first the action for a free particle ${\cal S}_0= \int \! m\dot{\v x}^2/2\,dt.$ In this case, there is no external interaction and therefore the probability is simply $P=|K_1+K_2|^2.$ Nothing really interesting happens in this case. Consider now the case where the electric charge is affected by the potential due to the magnetic monopole and the Dirac string given in Equation (2). Furthermore, suppose that the paths $\gamma_1$ and $\gamma_2$ pass one on each side of the Dirac string and form the boundary of a surface ${\rm S}$ as seen in Figure \[Fig8\]. 
![A Dirac string enclosed between two generic paths $\gamma_1$ and $\gamma_2$ starting at A, ending at B, and forming the boundary of the surface ${\rm S}$.[]{data-label="Fig8"}](Fig8.eps){width="260pt"} The external vector potential $\v A_L$ will affect the motion of the particle because the action acquires an interaction term $$\begin{aligned} {\cal S}= {\cal S}_0 + \frac{q}{c} \int \v A_L \cdot d \v l.\end{aligned}$$ Thus the amplitude becomes $$\begin{aligned} \nonumber K=& \int\limits_{\gamma_1} \!\mathcal{D}(\v x)\,{\rm e}^{(i/\hbar) ({\cal S}^{(1)}_{0}+ (q/c)\int_{(1)} \v A_L \cdot d \v l)} + \int\limits_{\gamma_2} \!\mathcal{D}(\v x)\,{\rm e}^{(i/\hbar) ({\cal S}^{(2)}_{0} + (q/c)\int_{(2)} \v A_L \cdot d \v l)} \\ =& \bigg(K_1 + {\rm e}^{ (i q/\hbar c)\oint_{C} \v A_L \cdot d \v l }\,K_2\bigg) {\rm e}^{ (i q/\hbar c)\int_{(1)} \v A_L \cdot d \v l},\end{aligned}$$ where we have written $$\begin{aligned} \oint_{C} \v A_L \cdot d \v l=\int_{(2)} \v A_L \cdot d \v l- \int_{(1)} \v A_L \cdot d \v l.\end{aligned}$$ Clearly, the contributions from $\gamma_1$ and $\gamma_2$ interfere, giving the interference term ${\rm e}^{(i q/\hbar c)\oint_{C} \v A_L \cdot d \v l }.$ Using Stokes’ theorem and Equation (4), we can write the integral in this exponent as $$\begin{aligned} \oint_{C} \v A_L \cdot d \v l = \int_{S} {\boldsymbol{\nabla}}\times \v A_L \cdot d\v a = \int_{\rm S} \v B_\texttt{mon} \cdot d \v a + \int_{S} \v B_\texttt{string} \cdot d\v a.\end{aligned}$$ Therefore, we may write the interference term as $$\begin{aligned} {\rm e}^{(i q/\hbar c)\oint_{C} \v A_L \cdot d \v l }= {\rm e}^{(iq/\hbar c) \int_{\rm S} \v B_\texttt{mon} \cdot d \v a} \, {\rm e}^{ (iq/\hbar c)\int_{\rm S} \v B_\texttt{string} \cdot d\v a}.\end{aligned}$$ The term ${\rm e}^{(iq/\hbar c) \int_{\rm S} \v B_\texttt{mon} \cdot d \v a}$ is perfectly fine because the charged particle should be influenced by the magnetic monopole. However, the second term must not contribute; otherwise the string would be observable. Therefore, we must demand ${\rm e}^{ (iq/\hbar c)\int_{\rm S} \v B_\texttt{string} \cdot d\v a}=1.$ But the flux through the string is $\int_{\rm S} \v B_\texttt{string} \cdot d\v a=4 \pi g$ so that ${\rm e}^{i4 \pi qg/\hbar c} =1,$ which implies the Dirac quantisation condition $qg= n \hbar c /2.$ As may be seen, the procedure to obtain the Dirac quantisation condition based on Feynman’s path integral approach is similar to the procedure based on the Aharonov–Bohm effect. If one first teaches the latter procedure in an advanced undergraduate course, then one may teach the former procedure in a graduate course, following Feynman’s opinion that [@45]: “there is a pleasure in recognising old things from a new point of view." The Wu–Yang approach and the Dirac quantisation condition {#12} ========================================================= We will now examine the [*fifth quantum-mechanical derivation of the Dirac condition*]{}. Let us rewrite Equations (40) and (41) as follows: $$\begin{aligned} \v B'\!&={\boldsymbol{\nabla}}\times\v A'= \;\frac{g}{r^2}\hat{\v r} + 4\pi g\delta(x)\delta(y)\Theta(-z)\hat{\v z},\\ \v B &={\boldsymbol{\nabla}}\times\v A = \;\frac{g}{r^2}\hat{\v r} - 4\pi g\delta(x)\delta(y)\Theta(z)\hat{\v z}.\end{aligned}$$ A direct look at these equations reveals an unpleasant but formal result: $\v B'\not=\v B.$ This result follows from the difference between the delta-field contributions of the respective strings. Therefore, the potentials $\v A'$ and $\v A$ are not equivalent. 
Strictly speaking, they are not gauge-equivalent potentials. However, it is possible to extend the gauge symmetry to include contributions due to strings [@8], but this possibility will not be discussed here. Using the property $\Theta(-z)=1-\Theta(z)$, the difference of the magnetic fields is given by $\v B'-\v B= 4\pi g\delta(x)\delta(y)\hat{\v z},$ where the right-hand side of this equation is a singular magnetic field attributable to an infinite string lying along the entire $z$-axis. The fact that $\v B'$ and $\v B$ are different is not an unexpected result because the current densities producing them are different, as may be seen in Equations (50) and (52). However, we have argued that the Dirac strings are unphysical and should therefore be unobservable. The question then arises: How should the potentials $\v A'$ and $\v A$ be interpreted? A rough answer would be that $\v A'$ and $\v A$ are equivalent because they produce the same magnetic field \[the first terms of Equations (82) and (83)\] and because the field contributions of the strings \[the last terms of Equations (82) and (83)\] can be physically ignored. But we must recognise that this answer is not very satisfactory from a formal point of view. In other words, $\v A'$ and $\v A$ are physically but not mathematically equivalent. Furthermore, it can be argued that the derivation of the Dirac condition involves some unpleasant features like singular gauge transformations and singular potentials [@9]. Fortunately, a procedure due to Wu and Yang [@46] avoids these unpleasant features and also leads to the Dirac condition. The Wu–Yang method does not deal with singular potentials or singular gauge transformations (except for the real singularity at the origin). The strategy of Wu and Yang was to use different vector potentials in different regions of space. In more colloquial terms, if the Dirac string is the cause of the difficulties and subtleties, then the Wu-Yang approach provides a simple solution: to get rid of the Dirac string via a formal procedure. In the Wu–Yang method the potentials $\v A'$ and $\v A$ displayed in Equation (32) are non-singular if we define them in an appropriate domain: $$\begin{aligned} \v A'=& \,\,g\frac{1-\cos\theta}{r \sin\theta}\hat{\phi},\qquad\quad R^N:\;0 \leq \theta < \frac{\pi}{2}+\frac\varepsilon2\\ \v A =& -g\frac{1+\cos\theta}{r \sin\theta}\hat{\phi},\quad \quad R^S:\;\frac{\pi}{2}-\frac\varepsilon2 < \theta \leq \pi\end{aligned}$$ where $\varepsilon>0$ is an infinitesimal quantity. The potentials $\v A'$ and $\v A$ are in the Coulomb gauge: ${\boldsymbol{\nabla}}\cdot \v A =0$ and ${\boldsymbol{\nabla}}\cdot \v A'=0$. Furthermore, these potentials are non-global functions since they are defined only on their respective domains: $R^N$ and $R^S$. The region $R^N$, where $\v A'$ is defined, excludes the string along the negative semi-axis $(\theta=\pi)$ and represents a northern hemisphere. The region $R^S$, where $\v A$ is defined, excludes the string along the positive semi-axis $(\theta=0)$ and represents a southern hemisphere. The union of the hemispheres $R^N\cup R^S$ covers the whole space (except for the origin, where the magnetic monopole sits). In the intersection $R^N\cap R^S$ (the “equator”) the two hemispheres slightly overlap. A representation of the Wu-Yang configuration is shown in Figure \[Fig9\]. 
![The Wu-Yang configuration describing a magnetic monopole without the Dirac strings.[]{data-label="Fig9"}](Fig9.eps){width="228pt"} Using Equation (28), the potentials $\v A'$ and $\v A$ defined by Equations (84) and (85) yield the field of a magnetic monopole: $\v B = {\boldsymbol{\nabla}}\times \v A'= {\boldsymbol{\nabla}}\times \v A= g \hat{\v r}/r^2$. Therefore, the potentials $\v A'$ and $\v A$ must be connected by a gauge transformation in the overlapped region $\pi/2 - \varepsilon/2 < \theta <\pi/2 + \varepsilon/2$, where both potentials are well defined. At first glance, $\v A'-\v A=2 g\hat{\phi}/(r \sin\theta)$. But in the overlapped region, we have $\lim \sin(\pi/2\pm\varepsilon/2)=1$ as $\varepsilon\to 0$ and thus $$\begin{aligned} \v A'-\v A=\frac{2 g}{r}\hat{\phi}={\boldsymbol{\nabla}}(2g\phi)={\boldsymbol{\nabla}}\Lambda,\end{aligned}$$ where $\Lambda =2 g \phi$ (the gauge function $\Lambda$ satisfies ${\boldsymbol{\nabla}}^2\Lambda=0$ indicating that $\v A'$ and $\v A$ are related by a restricted gauge transformation). Suppose now that an electric charge is in the vicinity of the magnetic monopole. In this case, we require two wave functions to describe the electric charge: $\Psi'$ for $ R^N$ and $\Psi$ for $ R^S$. In the overlapped region, the wave functions $\Psi'$ and $\Psi$ must be related by the phase transformation $\Psi'={\rm e}^{iq\Lambda/(\hbar c)}\,\Psi$, which is associated to the gauge transformation given in Equation (86). This phase transformation with $\Lambda =2 g \phi$ reads $$\begin{aligned} \Psi'={\rm e}^{i2qg \phi/(\hbar c)}\,\Psi.\end{aligned}$$ But the wave functions $\Psi'$ and $\Psi$ must be single-valued ($\Psi'|_{\phi} = \Psi'|_{\phi + 2\pi}\big),$ which requires ${\rm e}^{i4 \pi qg/ (\hbar c )}\!=\!1,$ and this implies the Dirac quantisation condition $qg=n\hbar c/2$. Remarkably, Equations (84)-(87) do not involve unpleasant singularities. The approach suggested by Wu and Yang constitutes a refinement of Dirac’s original approach. It is pertinent to say that the Wu–Yang approach has become popular in many treatments of the Dirac quantisation condition [@4; @8; @9; @11; @12; @13; @16; @24]. Semi-classical derivations of the Dirac quantisation condition {#13} ============================================================== We will now discuss the [*first semi-classical derivation of the Dirac condition*]{}. In 1936, Saha wrote [@33]: “If we take a point charge $e$ at A and a magnetic pole $\mu$ at B, classical electrodynamics tells us that the angular momentum of the system about the line AB is just $e\mu/c$. Hence, following the quantum logic, if we put this $= h/(2\pi)$, the fundamental unit of angular momentum, we have $\mu= ch/(4\pi e)$ which is just the result obtained by Dirac.” This relatively simple semi-classical argument to arrive at the Dirac condition \[with $n=1$\] remained almost ignored until 1949 when Wilson [@36; @37] used the same argument to obtain this condition \[now with $n$ integer\]. Let us develop in more detail the derivation of Dirac’s condition suggested by Saha and also by Wilson. When the Dirac condition is written as $qg/c=n\hbar/2,$ we can see that the left-hand side has units of angular momentum because the constant $\hbar$ has these units. 
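Before taking up this semi-classical route, the Wu–Yang construction above can be cross-checked symbolically. The Python/sympy sketch below (the Cartesian parametrisation and the restriction to positive coordinates are assumptions made only for the check, and sp.simplify may take a few seconds) verifies that both potentials of Equations (84) and (85) reproduce the monopole field $g\hat{\v r}/r^2$ away from the $z$-axis and that their difference is ${\boldsymbol{\nabla}}(2g\phi)$, as in Equation (86).

```python
import sympy as sp

x, y, z, g = sp.symbols('x y z g', positive=True)   # a point off the axes is enough for the check
r = sp.sqrt(x**2 + y**2 + z**2)
rho = sp.sqrt(x**2 + y**2)
phi_hat = sp.Matrix([-y/rho, x/rho, 0])

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

A_N = g*(1 - z/r)/rho * phi_hat        # Equation (84), regular off the negative z-axis
A_S = -g*(1 + z/r)/rho * phi_hat       # Equation (85), regular off the positive z-axis
B_mon = g/r**3 * sp.Matrix([x, y, z])  # g r_hat / r^2 in Cartesian components

print(sp.simplify(curl(A_N) - B_mon))  # -> zero vector
print(sp.simplify(curl(A_S) - B_mon))  # -> zero vector

# Gauge relation in the overlap region: A' - A = grad(2 g phi), with phi = atan2(y, x).
grad_Lambda = sp.Matrix([sp.diff(2*g*sp.atan2(y, x), v) for v in (x, y, z)])
print(sp.simplify(A_N - A_S - grad_Lambda))   # -> zero vector
```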
This suggests the possibility that the quantity $qg/c$ can be obtained from the electromagnetic angular momentum: $$\begin{aligned} \v L_\texttt{EM} = \frac{1}{4 \pi c} \int_{V} \v x \times (\v E \times \v B)\,d^3x,\end{aligned}$$ with the idea that the field $\v E$ is produced by the electric charge $q$ and the field $\v B$ by the magnetic charge $g$, both charges at rest and separated by a finite distance. This configuration was considered by Thomson [@34; @35] in 1904, and is now known as the “Thomson dipole.” More precisely stated, the Thomson dipole is a static dipole formed by an electric charge $q$ and a magnetic charge $g$ separated by the distance $a\!=\!|\v a|,$ where the vector $\v a$ is directed from the charge $q$ to the charge $g.$ For convenience, we place the charge $q$ at $\v x'=-{\v a}/2$ and the charge $g$ at $\v x'\!=\!{\v a}/2$ as seen in Figure \[Fig10\]. Clearly, there is no mechanical momentum associated to this dipole because it is at rest. In Appendix \[E\], we show that the electromagnetic angular momentum due to the fields of the charges $q$ and $g$ is given by $$\begin{aligned} \v L_\texttt{EM}= \frac{qg}{c}\hat{\v a},\end{aligned}$$ where $\hat{\v a}=\v a/a$. This equation was derived by Thomson [@34; @35]. Remarkably, the magnitude of $\v L_\texttt{EM}$ does not depend on the distance between the charges. We note that Equation (89) has been derived by several equivalent procedures [@60; @61]. Notice also that this angular momentum is conserved: $d \v L_\texttt{EM}/dt = 0.$ We now invoke a quantum mechanical argument: quantisation of the angular momentum. As is well known in quantum mechanics, the total (conserved) angular momentum operator $\widehat{\cJ}$ of a system reads [@21]: $\widehat{\cJ}= \widehat{\cL} + \widehat{\cS},$ where $\widehat{\cL}$ is the orbital angular momentum operator and $\widehat{\cS}$ is the spin angular momentum operator. In order to obtain $\widehat{\cJ}$ for a given system, we first identify its corresponding classical counterpart. Evidently, the Thomson dipole lacks an orbital angular momentum. We can therefore identify $\widehat{\cS}$ with $\widehat{\cJ}$ and make the substitution $\v L_\texttt{EM}\rightarrow \widehat{\cJ}.$ If we measure any of the three spatial components of $\widehat{\cJ}$, say the $z$ component, it takes the discrete values $J_z = n\hbar/2$ [@21]. Therefore, if we choose $\hat{\v a} = \hat{\v z}$ in Equation (89), then we can quantise the $z$ component of this equation. Following this argument, we obtain $J_z = qg/c =n\hbar/2,$ which yields the Dirac condition $qg=n \hbar c /2.$ We should emphasise that this method is semiclassical in the sense that the angular momentum $qg/c$ is first obtained from purely classical considerations and then it is made equal to $n\hbar/2$ by invoking a quantum argument. We will now examine the [*second semi-classical derivation of the Dirac condition*]{}. We can also arrive at the Dirac condition by another semiclassical method due to Fierz [@38]. Consider an electric charge $q$ moving with velocity $\dot{\v x}$ in the field of a monopole $g$ centred at the origin: $\v B= g\hat{\v r}/r^2.$ This configuration is illustrated in Figure \[Fig11\]. The charge $q$ experiences the Lorentz force $$\begin{aligned} \frac{d\v p}{dt} = q\bigg(\frac{\dot{\v x}}{c}\times \v B\bigg),\end{aligned}$$ where $\v p = m \dot{\v x}$ is the mechanical momentum associated to the charge $q$. 
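Before analysing the dynamics generated by this force, it is instructive to put a number on the magnetic charge implied by the condition $qg=n\hbar c/2$ just obtained from the Thomson dipole. The short Python sketch below (with $n=1$ and the electron charge as the example value of $q$, in Gaussian units) evaluates the elementary magnetic charge $g_0=\hbar c/(2e)$ and its ratio to $e$, which equals $1/(2\alpha)\approx 68.5$.

```python
# Size of the elementary magnetic charge implied by q g = n*hbar*c/2.
# Gaussian (CGS) units; n = 1 and q = e are example choices made for this sketch.
hbar = 1.054571817e-27   # erg s
c = 2.99792458e10        # cm/s
e = 4.80320471e-10       # statcoulomb

g0 = hbar * c / (2 * e)          # elementary magnetic charge for n = 1
alpha = e**2 / (hbar * c)        # fine structure constant

print(f"g0          = {g0:.3e} (Gaussian charge units)")   # ~ 3.29e-8
print(f"g0 / e      = {g0 / e:.1f}")                       # ~ 68.5
print(f"1/(2*alpha) = {1 / (2 * alpha):.1f}")              # same number, ~ 68.5
```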
The field of the monopole is spherically symmetric and therefore one should expect the total angular momentum of the system to be conserved. To see this, we take the cross product of Equation (90) with the position vector $\v x,$ use $\v x \times (d\v p/dt) = d(\v x\times \v p)/dt,$ and obtain the corresponding torque $$\begin{aligned} \frac{d (\v x\times \v p)}{dt}=&\, \frac{q}{c}\big(\v x \times (\dot{\v x}\times \v B)\big)= \frac{qg}{c}\bigg( \frac{\v x \times (\dot{\v x}\times \v x)}{r^3}\bigg)= \frac{d}{dt}\bigg(\frac{qg }{c}\hat{\v r} \bigg),\end{aligned}$$ where we have used the identity $$\begin{aligned} \frac{\v x \times (\dot{\v x}\times \v x)}{r^3} =\frac{d \hat{\v r}}{dt}.\end{aligned}$$ Clearly, the mechanical angular momentum $\v x \times \v p$ is not conserved: $d(\v x \times \v p)/dt \neq 0.$ This is an expected result because there is an extra contribution attributed to the angular momentum of the electromagnetic field. From Equation (91), it follows that $$\begin{aligned} \frac{d }{dt} \bigg(\v x\times \v p - \frac{qg }{c}\hat{\v r} \bigg)=0.\end{aligned}$$ Hence, the total (conserved) angular momentum is $$\begin{aligned} \v J = \v x \times \v p - \frac{qg }{c}\hat{\v r}.\end{aligned}$$ This interesting result was observed by Poincaré [@62] in 1896, although it was already anticipated by Darboux in 1878 [@63]. From Equation (94), it follows that the radial component of this angular momentum is constant: $\v J \cdot \hat{\v r} = - qg/c.$ With regard to the quantity $qg/c$, Fierz [@38] pointed out: “...the classic value $qg/c,$ must be in quantum theory equal to an integer or half-integer multiple of $\hbar.$” Following this argument, we can quantise the radial component of the angular momentum in Equation (94): $J_r=qg/c=n \hbar/2$ (the minus sign is absorbed by $n$), and this yields the Dirac condition $qg=n\hbar c/2.$ We will now review the [*third semi-classical derivation of the Dirac condition*]{}. Strictly speaking, we will review the derivation of a generalised duality-invariant form of this condition due to Schwinger [@39]. The approach followed by Schwinger is similar to that of Fierz but now applied to the case of dyons, which are particles with both electric and magnetic charge. The approach considers the interaction of a dyon of mass $m$ carrying an electric charge $q_1$ and a magnetic charge $g_1,$ moving with velocity $\dot{\v x}$ in the field of a stationary dyon with electric charge $q_2$ and magnetic charge $g_2$ centred at the origin, as seen in Figure \[Fig12\]. The Lorentz force on the moving dyon takes the duality-invariant form $$\begin{aligned} \frac{d\v p}{dt} = q_1\bigg(\v E + \frac{\dot{\v x}}{c}\times \v B\bigg)+ g_1 \bigg( \v B - \frac{\dot{\v x}}{c}\times \v E \bigg),\end{aligned}$$ where the electric and magnetic fields produced by the charges $q_2$ and $g_2$ of the stationary dyon are $$\begin{aligned} \v E =\frac{q_2}{r^2}\hat{\v r}, \quad \v B = \frac{g_2}{r^2}\hat{\v r}.\end{aligned}$$ Therefore, we may write Equation (95) as $$\begin{aligned} \frac{d\v p}{dt} =\big( q_1 q_2 + g_1 g_2\big)\frac{\hat{\v r}}{r^2} + \big(q_1g_2-q_2g_1 \big)\frac{\dot{\v x} \times \v x}{c \,r^3}.\end{aligned}$$ To find the conserved angular momentum of the system, we take the cross product of Equation (97) with the position vector $\v x$, use $\v x \times (d\v p/dt) = d(\v x\times \v p)/dt,$ and obtain $$\begin{aligned} \frac{d (\v x\times \v p)}{dt}= \, \frac{\big(q_1g_2-q_2g_1 \big)}{c}\,\frac{d \hat{\v r} }{dt},\end{aligned}$$ where we have used Equation (92). 
The conserved angular momentum is thus $$\begin{aligned} \v J= \v x\times \v p - \big(q_1g_2-q_2g_1 \big)\frac{\hat{\v r}}{c},\end{aligned}$$ whose radial component $ \v J \cdot \hat{\v r}= -(q_1g_2-q_2g_1)/c$ can be quantised: $J_r=(q_1g_2-q_2g_1)/c=n \hbar/2,$ yielding the Schwinger–Zwanziger quantisation condition $$\begin{aligned} q_1g_2-q_2g_1 = \frac{n}{2}\hbar c.\end{aligned}$$ In contrast to the Dirac condition $qg=n\hbar c/2$, which for a fixed value of $n$ is not invariant under the dual changes $q\to g$ and $g\to -q$, the Schwinger–Zwanziger condition is clearly invariant under these dual changes. Equation (100) was first obtained by Schwinger [@64] and independently by Zwanziger [@65]. Interestingly, both of these authors argued that the quantisation in Equation (100) should take integer and not half-integer values, i.e. Equation (100) should be written as $q_1g_2-q_2g_1 = n \hbar c.$ Final remarks on the Dirac quantisation condition {#14} ================================================= The advent of the Dirac quantisation condition brought us two pieces of news: one good and one bad. The good news is that this condition allows us to explain the observed quantisation of the electric charge. The bad news is that such an explanation is based on the existence of unobserved magnetic monopoles. One is left with the feeling that the undetectability of magnetic monopoles spoils the Dirac quantisation condition. Evidently, the fact that the Dirac condition explains the electric charge quantisation cannot be considered a proof of the existence of magnetic monopoles. Although it has recently been argued that magnetic monopoles may exist, not as elementary particles, but as emergent particles (quasiparticles) in exotic condensed matter magnetic systems such as “spin ice” [@66; @67; @68], there is still no direct experimental evidence of Dirac monopoles. However, experimental searches for monopoles continue to be of great interest [@69; @70; @71; @72; @73; @76]. It can be argued that the idea of undetected magnetic monopoles is too high a price to pay for explaining the observed charge quantisation. But equally it can be argued that magnetic monopoles constitute an attractive theoretical concept, which is not precluded by any fundamental theory and has been extremely useful in modern gauge field theories [@4; @29]. In any case, magnetic monopoles are like the Loch Ness monster, much talked about but never seen. Although many theoretical physicists would say that the idea of magnetic monopoles is too attractive to set aside, we think it would be desirable to have a convincing explanation for the electric charge quantisation without appealing to magnetic monopoles. It is interesting to note that the introduction of magnetic monopoles in Dirac’s 1931 paper [@2] was not taken fondly by Dirac himself. He wrote: “The theory leads to a connection, namely, $[eg_0=\hbar c/2]$, between the quantum of magnetic pole and the electronic charge. It is rather disappointing to find this reciprocity between electricity and magnetism, instead of a purely electronic quantum condition such as \[$\hbar c/e^2$\].” However, no satisfactory explanation for the charge quantisation was proposed between 1931 and 1948, and this seems to have led him to reinforce his idea about magnetic monopoles. In his 1948 paper he wrote [@3]: “The quantisation of electricity is one of the most fundamental and striking features of atomic physics, and there seems to be no explanation for it apart from the theory of poles. 
This provides some grounds for believing in the existence of these poles.” The story of the Dirac quantisation condition may be traced to the story of a man \[P. A. M. Dirac: the theorist of theorists!\] who wanted to know why the electric charge is quantised and why the electric charge of the electron had just the numerical value that makes the inverse of the fine structure constant acquire the value $\alpha^{-1}=\hbar c/e^2\approx 137$. Many years later, he expressed his frustration at not being able to find this magic number. He criticised his theory because it [@30]: “...did not lead to any value for this number $[\alpha^{-1}\approx 137],$ and, for that reason, my argument seemed to be a failure and I was disappointed with it.” But the idea of explaining this number seems to have always been important to him. With the confidence of a master, Dirac wrote [@30]: “The problem of explaining this number $\hbar c/e^2$ is still completely unsolved. Nearly 50 years have passed since then. I think it is perhaps the most fundamental unsolved problem of physics at the present time, and I doubt very much whether any really big progress will be made in understanding the fundamentals of physics until it is solved.” Although Dirac was not successful in explaining why the charge of the electron has its observed value, in the search for this ambitious goal, he envisioned a magnetic monopole attached to a semi-infinite string, which he required to be unobservable by a quantum argument, thus obtaining a condition that explains the electric charge quantisation. This is indeed a brilliant idea not attributable to an ordinary genius but rather to a magician, a person “whose inventions are so astounding, so counter to all the intuitions of their colleagues, that it is hard to see how any human could have imagined them” [@74]. A final comment on nodal lines {#15} ============================== Berry [@77] has pointed out that the nodal lines introduced by Dirac in his 1931 paper [@2] are an example of dislocations in the probability waves of quantum mechanics. The history can be traced back to 1974, when Nye and Berry [@78] observed that wavefronts can contain dislocation lines, closely analogous to those found in crystals. They defined these dislocation lines as those lines on which the phase of the complex wave function is undetermined, which requires that the amplitude be zero, indicating that dislocation lines are lines of singularity (or lines of zeros). Remarkably, the lines of singularity (also called wave dislocations, nodal lines, phase singularities and wave vortices) are generic features of waves of all kinds, such as light waves, sound waves and quantum mechanical waves. These lines involve two essential properties: on these lines the phase is singular (undetermined) and around these lines the phase changes by a multiple (typically $\pm1$) of $2\pi.$ Even though the concept of the line of singularity has been extensively discussed in the literature (see, for example, the collection of papers in the special issues mentioned in References [@79; @80; @81; @82]), its connection with the Dirac strings is not usually commented on. In his review on singularities in waves [@77], Berry has claimed: “He \[Dirac\] recognises that $\Psi_0$ \[appearing in Equation (68)\] can have nodal lines around which the phase $\chi_0$ in the absence of magnetic field changes by $2n\pi$, i.e. 
he recognises the existence of wavefront dislocations.” However, it should be emphasised that the semi-infinite nodal lines introduced by Dirac are unobservable because of the Dirac quantisation condition. But in the general case, the lines of singularity are physical and can form closed loops, which can be linked and knotted [@83]. Conclusion {#16} ========== In this review paper, we have discussed five quantum-mechanical derivations, three semiclassical derivations and a novel heuristic derivation of the Dirac quantisation condition. They are briefly summarised as follows. [*First quantum-mechanical derivation.*]{} In this derivation, the magnetic monopole is attached to an infinite line of dipoles, the so-called Dirac string [@18]. The vector potential of this configuration yields the field of the magnetic monopole plus a singular magnetic field due to the Dirac string. By assuming that the location of the string must be irrelevant, it is shown that the potentials associated to two arbitrary positions of the string are connected by a gauge transformation, meaning that the change from one string to another is equivalent to a gauge transformation involving a multi-valued gauge function. By demanding that the wave function in the phase transformation be single-valued, the Dirac condition follows. [*Heuristic derivation.*]{} (i) It starts with the relation ${\rm e}^{i 2k\phi}={\rm e}^{iq \Lambda/(\hbar c)}$, where $k$ is an arbitrary constant, $\phi$ the azimuthal angle and $\Lambda$ an unspecified gauge function; (ii) from this relation there follows the remarkable equation $\Lambda q/(\hbar c)=2 k \phi$. One solution of this equation is given by $k=qg/(\hbar c)$ and $\Lambda= 2g \phi$, where $g$ is a constant to be identified; (iii) if the phase ${\rm e}^{iq \Lambda/(\hbar c)}$ is required to be single-valued, then ${\rm e}^{i 2k\phi}$ must also be single-valued, and this implies the “quantisation" condition $k=n/2$ with $n$ being an integer; (iv) from this condition and $k=qg/(\hbar c),$ we get the relation $qg=n\hbar c/2$; (v) the function $\Lambda= 2g \phi$ with $g$ being the magnetic charge is shown to be a gauge function, and this allows us finally to identify $q g=n\hbar c/2$ with the Dirac quantisation condition; (vi) a weak point of this heuristic derivation is that the associated Dirac strings are excluded; (vii) classical considerations indicate that the Dirac string lacks physical meaning and is thus unobservable; (viii) quantum-mechanical considerations show that the undetectability of the Dirac string implies the Dirac condition. [*Second quantum-mechanical derivation.*]{} The quantum-mechanical interaction of an electric charge $q$ with the potential $\v A$ is given by the phase appearing in the wave function $\Psi={\rm e}^{i[q/(\hbar c)]\int_0^{\v x}\v A(\v x')\cdot d\v l'}\Psi_0,$ where $\Psi_0$ is the solution of the free Schrödinger equation and the line integral in the phase is taken along a path followed by $q$ from the origin to the point $\v x$. If $\v A=\v A_\texttt{string}=2g\Theta(\rho-\varepsilon)\Theta(-z)\hat{\phi}/\rho$ and the path is a closed line surrounding the string, we have $[q/(\hbar c)]\oint_C \v A_\texttt{string}\cdot \rho \,d\phi\, \hat{\phi}= 4\pi qg/(\hbar c)$ for $\rho>\varepsilon$ and $z<0$. If we now demand that this quantity be equal to $2\pi n,$ then the effect of the string on the charge $q$ disappears because ${\rm e}^{i 4\pi qg/(\hbar c)}={\rm e}^{i 2\pi n}=1$ and this implies the Dirac condition. [*Third quantum-mechanical derivation.*]{} This derivation is directly related to the Aharonov–Bohm double-slit experiment [@40] with the Dirac string inserted between the slits. 
Considering the vector potential of the string, it is shown that the corresponding probability density is $P= |\Psi_1 + {\rm e}^{i4\pi q g/(\hbar c)}\,\Psi_2|^2.$ The effect of the Dirac string is unobservable if ${\rm e}^{i4\pi q g/(\hbar c)}=1$ and this implies the Dirac condition. Vice versa, if this condition holds *a priori*, then the Dirac string is unobservable. [*Fourth quantum-mechanical derivation.*]{} According to Feynman’s path-integral approach to quantum mechanics [@45], the amplitude of a particle reads $K=\int \!\mathcal{D(\v x)}\,{\rm e}^{(i/\hbar) {\cal S}(\v x)}$, where $\int\mathcal{D(\v x)}$ indicates a product of integrals performed over all paths $\v x(t)$ going from $\text{A}$ to $\text{B},$ and ${\cal S}$ is the classical action associated to each path. For two such generic paths in free space, $\gamma_1$ and $\gamma_2$, we have $K=K_1+K_2=\!\int_{\gamma_1} \!\!\mathcal{D(\v x)}\, {\rm e}^{(i/\hbar){\cal S}^{(1)}(\v x)} \!+\! \int_{\gamma_2}\!\!\mathcal{D(\v x)}\,{\rm e}^{(i/\hbar) {\cal S}^{(2)}(\v x)}.$ Suppose that $\gamma_1$ and $\gamma_2$ pass one on each side of the Dirac string and form the boundary of a surface S. As a result, the action acquires an interaction term ${\cal S}\!=\! {\cal S}_0 + (q/c)\int \v A_L \cdot d \v l,$ where ${\cal S}_0$ is the action for the free path. Thus the amplitude becomes $K\!=\! \big(K_1\! + \!{\rm e}^{(i q/\hbar c)\oint_{C} \v A_L \cdot d \v l }{ K_2}\big) {\rm e}^{ (i q/\hbar c)\int_{(1)} \v A_L \cdot d \v l},$ and the interference term is ${\rm e}^{(i q/\hbar c)\oint_{C} \v A_L \cdot d \v l }.$ Using Stokes’ theorem and ${\boldsymbol{\nabla}}\times \v A_L\! =\!\v B_\texttt{mon}+\v B_\texttt{string}$, the interference term becomes ${\rm e}^{(i q/\hbar c)\oint_{C} \v A_L \cdot d \v l }\!=\! {\rm e}^{(iq/\hbar c) \int_{\rm S} \v B_\texttt{mon} \cdot d \v a} \, {\rm e}^{ (iq/\hbar c)\int_{\rm S} \v B_\texttt{string} \cdot d\v a}.$ The second exponential factor on the right should not contribute; otherwise the string would be observable. Thus we must demand ${\rm e}^{ (iq/\hbar c)\int_{S} \v B_\texttt{string} \cdot d\v a}=1.$ But the flux through the string is $\int_{\rm S} \v B_\texttt{string} \cdot d\v a=4 \pi g$ so that ${\rm e}^{i 4 \pi qg/\hbar c} =1,$ which implies Dirac’s condition. [*Fifth quantum-mechanical derivation.*]{} This derivation describes a magnetic monopole without Dirac strings [@46] using two non-singular potentials which are defined in two different regions of space. In the intersection region, both potentials are connected by a non-singular gauge transformation with the gauge function $\Lambda =2 g \phi$. The description of an electric charge in the vicinity of the magnetic monopole requires two wave functions $\Psi'$ and $\Psi$, which are related by the phase transformation $\Psi'={\rm e}^{i2qg \phi/(\hbar c)}\Psi$ in the overlapped region. But $\Psi'$ and $\Psi$ must be single-valued $\big(\Psi'|_{\phi}\!=\!\Psi'|_{\phi+2\pi}\big),$ which requires ${\rm e}^{i4 \pi qg/ (\hbar c )}\!=\!1,$ and this implies Dirac’s condition. [*First semi-classical derivation.*]{} This derivation considers the Thomson dipole [@34; @35], which is a static dipole formed by an electric charge $q$ and a magnetic charge $g$ separated by the distance $a\!=\!|\v a|$ [@60; @61]. The electromagnetic angular momentum of this dipole is given by $\v L_\texttt{EM}= qg\hat{\v a}/c.$ By assuming that any of the spatial components of the angular momentum must be quantised in integer multiples of $\hbar/2$, we obtain Dirac’s condition. [*Second semi-classical derivation.*]{} This derivation considers an electric charge $q$ moving with velocity $\dot{\v x}$ in the field of a monopole $g$ [@8; @38]. 
The associated Lorentz force $d\v p/dt= q\big(\dot{\v x}\times \v B/c\big)$ is used to obtain the total (conserved) angular momentum of this system, $\v J = \v x \times \v p - qg\hat{\v r}/c.$ The radial component $ \v J \cdot \hat{\v r}= -qg/c$ is then quantised, yielding Dirac’s condition. [*Third semi-classical derivation.*]{} This derivation considers a dyon of mass $m$ carrying an electric charge $q_1$ and a magnetic charge $g_1,$ moving with velocity $\dot{\v x}$ in the field of a stationary dyon with electric charge $q_2$ and magnetic charge $g_2$ located at the origin [@39]. Using the duality-invariant form of the Lorentz force $d\v p/dt = q_1\big(\v E + \dot{\v x}\times \v B/c\big)\!+\! g_1 \big( \v B -\dot{\v x}\times \v E/c \big)$, the total angular momentum of this system is found to be $\v J= \v x\times \v p - \big(q_1g_2-q_2g_1 \big)\hat{\v r}/c.$ The radial component $ \v J \cdot \hat{\v r}= -(q_1g_2-q_2g_1)/c$ is then quantised, yielding the Schwinger–Zwanziger condition $q_1g_2-q_2g_1 = n\hbar c/2$, which is a duality-invariant form of Dirac’s condition. Note {#note .unnumbered} ==== A derivation of Equations (17)-(19), which is more pedagogical than that appearing in the standard graduate textbooks (for example in Reference [@21]), is available on the author’s website: [www.ricardoheras.com](http://ricardoheras.com/). Acknowledgements {#acknowledgements .unnumbered} ================ I wish to thank Professor Michael V. Berry for drawing my attention to the important topic of wavefront dislocations and its connection with the Dirac strings. Notes on contributor {#notes-on-contributor .unnumbered} ==================== Ricardo Heras is an undergraduate student in Astrophysics at University College London. He has been inspired by Feynman’s teaching philosophy that if one cannot provide an explanation for a topic at the undergraduate level, then one doesn’t really understand this topic. His interest in understanding physics has led him to publish several papers in *The European Journal of Physics* on the teaching of electromagnetism and special relativity. He has also authored research papers on magnetic monopoles, pulsar astrophysics, and the history of relativity, as well as two essays in *Physics Today*. For Ricardo the endeavour of publishing papers in physics represents the first step towards becoming a physicist driven by “The pleasure of finding things out.” Derivation of Equations (3) and (12) {#A} ==================================== The curl of Equation (2) gives $$\begin{aligned} {\boldsymbol{\nabla}}\times \v A_L &={\boldsymbol{\nabla}}\times \bigg( {\boldsymbol{\nabla}}\times \bigg\{ \int_L \frac{g \,d\v l'}{|\v x-\v x'|} \bigg\}\bigg)\nonumber\\ \nonumber &=\, {\boldsymbol{\nabla}}\bigg(\!{\boldsymbol{\nabla}}\! \cdot\! \bigg\{ \! \int_L \!\frac{g \,d\v l'}{|\v x\!-\!\v x'|} \bigg\} \!\bigg)\!-\!{\boldsymbol{\nabla}}^2 \bigg\{ \!\int_L \!\frac{g \,d\v l'}{|\v x\!-\!\v x'|} \!\bigg\} \\ & = \, g{\boldsymbol{\nabla}}\!\!\int_L \! {\boldsymbol{\nabla}}\!\cdot\! \bigg(\!\frac{d\v l'}{|\v x\!-\!\v x'|}\!\bigg) \!-\! g\! \int_L \!
{\boldsymbol{\nabla}}^2\bigg(\!\frac{1}{|\v x\!-\!\v x'|}\!\bigg)d\v l'.\end{aligned}$$ Using the result ${\boldsymbol{\nabla}}\cdot (d \v l'/|\v x\!-\!\v x'|) = d\v l' \cdot {\boldsymbol{\nabla}}(1/|\v x\!-\!\v x'|),$ the first integral becomes $$\begin{aligned} \nonumber \int_L {\boldsymbol{\nabla}}\cdot \bigg(\frac{d\v l'}{|\v x\!-\!\v x'|}\bigg)=& \int_L {\boldsymbol{\nabla}}\bigg(\frac{1}{|\v x\!-\!\v x'|}\bigg)\cdot d\v l' \\ \nonumber = & - \int_L {\boldsymbol{\nabla}}'\bigg(\frac{1}{|\v x\!-\!\v x'|}\bigg)\cdot d\v l' \\ = &-\frac{1}{|\v x\!-\!\v x'|}.\end{aligned}$$ Considering Equation (A2), the first term of Equation (A1) yields the field of the magnetic monopole $$\begin{aligned} g{\boldsymbol{\nabla}}\int_L {\boldsymbol{\nabla}}\cdot \bigg(\frac{d\v l'}{|\v x\!-\!\v x'|}\bigg) = g {\boldsymbol{\nabla}}\bigg( \!\!-\frac{1}{|\v x\!-\!\v x'|} \bigg)= \frac{g}{R^2}\hat{\v R},\end{aligned}$$ where we have used ${\boldsymbol{\nabla}}(1/|\v x- \v x'|)= -\hat{\v R}/R^2.$ The second term of Equation (A1) yields the magnetic field of the Dirac string $$\begin{aligned} - g\! \int_L{\boldsymbol{\nabla}}^2\bigg(\frac{1}{|\v x-\v x'|}\bigg)d\v l'= 4 \pi g\! \int_L \delta(\v x \!-\! \v x')\, d\v l',\end{aligned}$$ where we have used ${\boldsymbol{\nabla}}^2( 1 / |\v x\!-\!\v x'|)=-4 \pi \delta(\v x \!-\! \v x').$ The Addition of Equations (A3) and (A4) yields Equation (3). To derive Equation (12), we first take the curl of Equation (11), $$\begin{aligned} \nonumber {\boldsymbol{\nabla}}\times \v A_L =&{\boldsymbol{\nabla}}\times \bigg({\boldsymbol{\nabla}}\times \bigg\{\uvec{z} \int\limits_{-\infty}^{0}\frac{g\, dz'}{|\v x\!-\!z'\hat{\v z}|}\bigg\}\bigg) \\ \nonumber=&\, {\boldsymbol{\nabla}}\bigg(\!{\boldsymbol{\nabla}}\! \cdot\! \bigg\{\hat{\v z} \!\! \int\limits_{-\infty}^{0}\!\!\frac{g \,dz'}{|\v x\!-\!z'\uvec{z}|}\bigg\} \bigg)\!-\!{\boldsymbol{\nabla}}^2 \bigg\{\!\uvec{z} \!\! \int\limits_{-\infty}^{0}\!\!\frac{g \,dz'}{|\v x\!-\!z'\uvec{z}|}\bigg\} \\= & \, g{\boldsymbol{\nabla}}\!\!\!\int\limits_{-\infty}^{0}\!\! \frac{\partial}{\partial z}\bigg(\! \frac{dz'}{|\v x\!-\!z'\uvec{z}|} \!\bigg)\!-\! g\,\hat{\v z} \!\!\!\int\limits_{-\infty}^{0}\!\!{\boldsymbol{\nabla}}^2\bigg(\frac{ dz'}{|\v x\!-\!z'\hat{\v z}|}\bigg).\end{aligned}$$ To simplify the first term we may write $$\begin{aligned} \frac{\partial}{\partial z}\bigg( \frac{1}{|\v x\!-\!z'\uvec{z}|} \bigg)= - \frac{z-z'}{\big(x^2+y^2+(z-z')^2\big)^{3/2}},\end{aligned}$$ so that $$\begin{aligned} \int\limits_{-\infty}^{0}\!\!\frac{\partial}{\partial z} \bigg(\frac{d z' }{|\v x\!-\!z'\uvec{z}|}\bigg)= -\!\!\int\limits_{-\infty}^{0}\! \frac{z-z'}{\big(x^2+y^2+(z-z')^2\big)^{3/2}} \,dz'.\end{aligned}$$ Consider the substitution $u(z')=x^2+y^2+(z-z')^2$. Hence, $du = -2(z-z')dz',$ and the right-hand side of the integral in Equation (A7) takes the form $$\begin{aligned} \frac{1}{2}\lim_{\beta\to\infty}\int_{u(z'=-\beta)}^{u(z'=0)}\!\frac{ du }{u^{3/2}} = \lim_{\beta\to\infty} \frac{-1}{\sqrt{u}}\bigg|^{u(z'=0)}_{u(z'=-\beta)} = -\frac{1}{|\v x|}+ \lim_{z'\to-\infty} \frac{1}{|\v x - z'\uvec{z}|}= -\frac{1}{r}.\end{aligned}$$ Using this result in the first term in Equation (A5) we obtain the monopole field $$\begin{aligned} g{\boldsymbol{\nabla}}\!\!\!\int\limits_{-\infty}^{0}\!\! \frac{\partial}{\partial z}\bigg(\! 
\frac{dz'}{|\v x\!-\!z'\uvec{z}|} \!\bigg)=g {\boldsymbol{\nabla}}\bigg(\!\!-\frac{1}{r} \bigg)= \frac{g}{r^2}\hat{\v r}.\end{aligned}$$ To simplify the second term in Equation (A5) consider $$\begin{aligned} \nonumber {\boldsymbol{\nabla}}^2\bigg(\frac{1}{|\v x\!-\!z'\uvec{z}|}\bigg)\!=&-4\pi \delta(\v x\!-\!z'\uvec{z})\! \\=&-4 \pi\delta(x)\delta(y)\delta(z\!-\!z').\end{aligned}$$ Using this equation in the second term of Equation (A5) we obtain the string field $$\begin{aligned} \nonumber - g\hat{\v z}\!\!\!\int\limits_{-\infty}^{0}\!\!{\boldsymbol{\nabla}}^2\bigg(\!\frac{d z'}{|\v x\!-\!z'\uvec{z}|}\!\bigg)=&4\pi g\delta(x)\delta(y)\bigg\{\!\!\int\limits_{-\infty}^{0}\!\delta(z\!-\!z')dz'\bigg\}\hat{\v z} \\ =&4 \pi g \delta(x)\delta(y)\Theta(-z)\hat{\v z},\end{aligned}$$ where in the last step we have used the integral representation of the step function $\Theta(\xi\!-\!\alpha) =\int_{- \infty}^{\xi} \delta(\tau\!-\!\alpha)d\tau$ to identify the quantity within the brackets $\{\,\,\,\}$ in Equation (A11) as $\Theta(-z)=\int_{-\infty}^{0}\delta(z\!-\!z')dz'.$ Addition of Equations (A9) and (A11) yields Equation (12). Derivation of Equation (13) {#B} =========================== Using Equation (11), we obtain $$\begin{aligned} \nonumber \v A_L=&\,g {\boldsymbol{\nabla}}\times \hat{\v z}\int\limits_{-\infty}^{0}\frac{ dz'}{|\v x-z'\hat{\v z}|} = \, g\bigg( \frac{\partial}{\partial y}\hat{\v x} - \frac{\partial}{\partial x} \hat{\v y}\bigg)\int\limits_{-\infty}^{0} \frac{dz'}{|\v x-z'\hat{\v z}|} \\ =& \,g\!\!\int\limits_{-\infty}^{0}\! \bigg\{\frac{\partial}{\partial y}\bigg(\frac{\hat{\v x}}{|\v x\!-\!z'\hat{\v z}|}\bigg)-\frac{\partial}{\partial x}\bigg(\frac{\hat{\v y}}{|\v x\!-\!z'\hat{\v z}|}\bigg)\bigg\} \,dz'.\end{aligned}$$ Now, $$\begin{aligned} \frac{\partial}{\partial y}\bigg(\frac{1}{|\v x-z'\hat{\v z}|}\bigg) =& -\frac{y}{(x^2+y^2 +(z-z')^2)^{3/2}},\\ \frac{\partial}{\partial x}\bigg(\frac{1}{|\v x-z'\hat{\v z}|}\bigg) =& -\frac{x}{(x^2+y^2 +(z-z')^2)^{3/2}}.\end{aligned}$$ Inserting these equations in Equation (B1) we obtain $$\begin{aligned} \v A_L=&g \big(\!- y\hat{\v x} + x\hat{\v y}\big)\int\limits_{-\infty}^{0}\frac{dz'}{(x^2+y^2 +(z-z')^2)^{3/2}}.\end{aligned}$$ The integral can be solved by a variable change and an appropriate substitution. We can write $(z-z')^2=(z'-z)^2.$ Now we let $u(z') = z'-z$ so that $du=dz'.$ Hence, the integral in Equation (B4) may be written as $$\begin{aligned} \lim_{\beta\to\infty}\int_{u(z'=-\beta)}^{u(z'=0)}&\frac{du}{(x^2+y^2 +u^2)^{3/2}}.\end{aligned}$$ An appropriate substitution for solving this integral is $u(v) = \sqrt{x^2 +y^2} \tan(v),$ where $v=\tan^{-1}( u/\sqrt{x^2+y^2}).$ This relation assumes $\sqrt{x^2 +y^2}\neq 0,$ indicating that the negative $z$-axis associated to the Dirac string has been avoided. 
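Before completing the trigonometric substitution by hand, the integral appearing in Equation (B4) can be cross-checked directly. The Python/sympy sketch below compares the symbolic result with the closed form obtained below in Equation (B8); the positivity assumptions and the numerical spot values are choices made only for this check.

```python
import sympy as sp

# a stands for sqrt(x^2 + y^2) > 0; taking z > 0 as well is an assumption made only to
# keep the symbolic integration simple (the closed form holds for all real z).
a, z = sp.symbols('a z', positive=True)
zp = sp.symbols('zp', real=True)

integrand = 1 / (a**2 + (z - zp)**2)**sp.Rational(3, 2)
closed_form = (1 - z / sp.sqrt(a**2 + z**2)) / a**2    # the result quoted in (B8)/(B9)

I_sym = sp.integrate(integrand, (zp, -sp.oo, 0))
print(sp.simplify(I_sym - closed_form))                # -> 0

# Numerical spot check at a = 1, z = 2 (arbitrary sample values).
vals = {a: 1, z: 2}
print(sp.Integral(integrand.subs(vals), (zp, -sp.oo, 0)).evalf(),
      closed_form.subs(vals).evalf())                  # both ~ 0.1056
```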
It follows that $du=\sqrt{x^2+y^2}\,\sec^2(v)\,dv$ and then the integral in Equation (B5) becomes $$\begin{aligned} \lim_{\beta\to\infty}\int_{v(u(z'=-\beta))}^{v(u(z'=0))}\frac{\sqrt{x^2+y^2} \sec^2(v)}{\big((x^2+y^2)(\tan^2(v)+1)\big)^{3/2}}\,dv.\end{aligned}$$ Using the identity $\sec^2(v)=\tan^2(v) +1$, the denominator in Equation (B6) simplifies to $(x^2+y^2)^{3/2}\sec^3(v).$ It follows that $$\begin{aligned} \nonumber \frac{1}{x^2+y^2}\lim_{\beta\to\infty}\int_{v(u(z'=-\beta))}^{v(u(z'=0))}\frac{dv}{\sec(v)} =& \frac{1}{x^2\!+\!y^2}\lim_{\beta\to\infty}\!\int_{v(u(z'=-\beta))}^{v(u(z'=0))} \!\cos(v)\,dv\\ =& \lim_{\beta\to\infty}\frac{\sin(v)}{x^2+y^2}\bigg|^{v(u(z'=0))}_{v(u(z'=-\beta))},\end{aligned}$$ where $\cos(v) = 1/\sec(v)$ has been used. Considering the identity $\sin\big(\tan^{-1}(\alpha)\big)=\alpha/\sqrt{\alpha^2+1}$, we can easily evaluate Equation (B7) $$\begin{aligned} \nonumber \lim_{\beta\to\infty}\,\frac{\sin(v)}{x^2+y^2}\bigg|^{v(u(z'=0))}_{v(u(z'=-\beta))} =&\, \bigg(\!\frac{1}{x^2+y^2}\!\bigg)\lim_{\beta\to\infty} \frac{u}{\sqrt{x^2+y^2}\sqrt{\frac{u^2}{x^2+y^2} +1}}\bigg|^{u(z'=0)}_{u(z'=-\beta)} \\ \nonumber=&\,\bigg(\!\frac{1}{x^2+y^2}\!\bigg) \lim_{\beta\to\infty}\frac{z'-z}{\sqrt{x^2+y^2+(z\!-\!z')^2}}\bigg|^{z'=0}_{z'=-\beta} \\ =&\,\frac{1}{x^2+y^2}\bigg(1 -\frac{z}{\sqrt{x^2+y^2+z^2}}\bigg).\end{aligned}$$ Inserting Equation (B8) into Equation (B4), we obtain $$\begin{aligned} \v A_L=\,g\frac{\big(\!- y\hat{\v x} + x\hat{\v y}\big)}{x^2+y^2}\bigg(1 -\frac{z}{\sqrt{x^2+y^2+z^2}}\bigg).\end{aligned}$$ Considering spherical coordinates $r=\sqrt{x^2+y^2+z^2},$ $r\sin\theta=\sqrt{x^2+y^2},$ $r\cos\theta=z$ and $\hat{\phi}=(- y\hat{\v x}+x\hat{\v y})/(\sqrt{x^2+y^2})$, Equation (B9) takes the form $\v A_L= g [(1 - \cos\theta)/(r \sin\theta)]\hat{\phi},$ which is Equation (13). Derivation of Equation (14) {#C} =========================== Consider the first equality in Equation (14) $$\begin{aligned} \v A_{L'}-& \v A_L = \,g{\boldsymbol{\nabla}}\times \oint_C \frac{d\v l'}{|\v x-\v x'|}.\end{aligned}$$ Using Stokes’ theorem and ${\boldsymbol{\nabla}}(1/|\v x \!-\! \v x'|)=-{\boldsymbol{\nabla}}'(1/|\v x \!-\!\v x'|),$ Equation (C1) becomes $$\begin{aligned} \nonumber \v A_{L'} - \v A_L =& -g {\boldsymbol{\nabla}}\times \int_{S} {\boldsymbol{\nabla}}'\bigg(\frac{1}{|\v x \!-\! \v x'|} \bigg)\times d\v a' \\ \nonumber = & \, {\boldsymbol{\nabla}}\times \bigg( {\boldsymbol{\nabla}}\times \bigg\{\int_{S} \frac{g \,d\v a'}{|\v x \!-\! \v x'|}\bigg\} \bigg) \\ =&\, {\boldsymbol{\nabla}}\bigg(\!{\boldsymbol{\nabla}}\! \cdot\! \bigg\{\!\int_{S} \frac{g \,d\v a'}{|\v x \!-\! \v x'|}\bigg\} \bigg)\!-\!{\boldsymbol{\nabla}}^2 \bigg\{ \!\int_{S} \frac{g \,d\v a'}{|\v x\!-\!\v x'|} \bigg\}.\end{aligned}$$ Making use of ${\boldsymbol{\nabla}}\cdot (d \v a'/|\v x\!-\!\v x'|) =d\v a' \cdot {\boldsymbol{\nabla}}(1/|\v x\!-\!\v x'|)$, Equation (C2) reads $$\begin{aligned} \nonumber \v A_{L'}\!-\! \v A_L =& \,g {\boldsymbol{\nabla}}\!\!\int_{S} \!{\boldsymbol{\nabla}}\bigg(\!\frac{1}{|\v x \!-\! \v x'|}\!\bigg) \!\cdot\! d \v a' \!-\! g \!\!\int_{S}\!{\boldsymbol{\nabla}}^2 \bigg(\!\frac{1}{|\v x \!-\! \v x'|}\!\bigg)d\v a' \\ = & \,g {\boldsymbol{\nabla}}\!\!\int_{S} \frac{(\v x' \!-\!\v x)\cdot d \v a'}{|\v x \!-\! \v x'|^3}+ 4\pi g \!\int_{S} \!
\delta(\v x \!-\!\v x')\,d\v a',\end{aligned}$$ where we have used ${\boldsymbol{\nabla}}(1/|\v x\!-\!\v x'|) = -(\v x \!-\!\v x')/|\v x\!-\!\v x '|^3$ and ${\boldsymbol{\nabla}}^2(1/|\v x\!-\!\v x'|)\!=\!-4 \pi \delta(\v x\!-\!\v x').$ The integral in the first term of Equation (C3) is the solid angle [@75] $$\begin{aligned} \Omega(\v x) = \int_{S} \frac{(\v x' \!-\!\v x)\cdot d \v a'}{|\v x \!-\! \v x'|^3},\end{aligned}$$ and therefore $$\begin{aligned} \v A_{L'}- \v A_L = \,g {\boldsymbol{\nabla}}\Omega + 4\pi g \!\int_{S} \!\delta(\v x \!-\!\v x')\,d\v a'.\end{aligned}$$ The delta integral contribution vanishes at any point $\v x$ not on the surface $S$ and can therefore be dropped [@7]. Thus we obtain $\v A_{L'}- \v A_L = \,g {\boldsymbol{\nabla}}\Omega,$ which is Equation (14). Discussions on Equation (C5) can be found in References [@7; @12; @84]. Derivation of Equation (40) {#D} =========================== Consider the first vector potential given in Equation (32), namely $\v A' = [g(1\!-\! \cos\theta)/(r\sin\theta)]\hat{\phi}$ which is valid for $z<0.$ For convenience, we express this potential in cylindrical coordinates $$\begin{aligned} \v A' = \frac{g}{\rho}\bigg(1- \frac{z}{\sqrt{\rho^2+z^2}} \bigg)\hat{\phi}.\end{aligned}$$ where we have used $\cos \theta= z/\sqrt{\rho^2 +z^2},$ and $r\sin \theta= \rho,$ with $\rho=\sqrt{x^2 + y^2}.$ A regularised form of this potential can be obtained by making the replacements [@58]: $1/\rho\rightarrow\Theta(\rho-\varepsilon)/\rho,$ and $z/\sqrt{\rho^2 +z^2}\rightarrow z/\sqrt{\rho^2 +z^2+\varepsilon^2},$ where $\Theta$ is the step function and $\varepsilon>0$ is an infinitesimal quantity. It follows $$\begin{aligned} \v A'_{\varepsilon} = \frac{g\,\Theta(\rho-\varepsilon)}{\rho}\bigg(1 - \frac{z}{\sqrt{\rho^2+z^2+\varepsilon^2}} \bigg)\hat{\phi}.\end{aligned}$$ Clearly, in the limit $\varepsilon \to 0$ we recover Equation (D1). Consider now the definition of the curl of the generic vector $\bfF= \bfF[0, F_\phi(\rho,z),0]$ in cylindrical coordinates given in Equation (61). 
Using this definition in Equation (D2) we obtain $$\begin{aligned} \nonumber{\boldsymbol{\nabla}}\times \v A'_{\varepsilon}=& -\frac{g \Theta (\rho-\varepsilon)}{\rho} \bigg( \frac{\rho^2 + \varepsilon^2}{(\rho^2+z^2+\varepsilon^2)^{3/2}} \bigg)\hat{\rho} + \frac{g\Theta(\rho - \varepsilon)}{\rho} \bigg( \frac{z}{(\rho^2 +z^2 +\varepsilon^2)^{3/2}} \bigg)\hat{\v z} \\ \nonumber & + \bigg\{\frac{g \delta(\rho - \varepsilon)}{\rho}- \frac{gz\delta(\rho-\varepsilon)}{\rho \sqrt{\rho^2+z^2+\varepsilon^2}}\bigg\}\hat{\v z} \\ \nonumber =\,& \frac{g \Theta(\rho-\varepsilon)}{(\rho^2 +z^2 + \varepsilon^2)}\bigg( \frac{\rho \hat{\rho}+z \hat{\v z}}{\sqrt{\rho^2+z^2 +\varepsilon^2}} \bigg) - \frac{\varepsilon^2 \,g \Theta (\rho\!-\!\varepsilon)\hat{\rho}}{\rho(\rho^2+z^2+\varepsilon^2)^{3/2}} \\ & + \bigg\{ \frac{g \delta(\rho-\varepsilon)}{\rho} - \frac{g z \delta(\rho-\varepsilon)}{\rho \sqrt{\rho^2+z^2+\varepsilon^2}} \bigg\}\hat{\v z}.\end{aligned}$$ In the last term enclosed within the brackets $\{\,\,\,\},$ we add the exact zero quantity $\big[g\delta(\rho-\varepsilon)/\rho- g\delta(\rho-\varepsilon)/\rho\big]\hat{\v z} \equiv 0,$ and obtain $$\begin{aligned} \nonumber{\boldsymbol{\nabla}}\times \v A'_{\varepsilon}= &\, \frac{g\, \Theta(\rho-\varepsilon)}{(\rho^2 +z^2 + \varepsilon^2)}\bigg( \frac{\rho \hat{\rho}+z \hat{\v z}}{\sqrt{\rho^2+z^2 +\varepsilon^2}} \bigg)+ \frac{2g\, \delta(\rho -\varepsilon)\hat{\v z}}{\rho} \\ & - \frac{\varepsilon^2 \,g \Theta (\rho\!-\!\varepsilon)\hat{\rho}}{\rho(\rho^2+z^2+\varepsilon^2)^{3/2}} -\frac{g \,\delta(\rho-\varepsilon)}{\rho}\bigg( \frac{\sqrt{\rho^2 +z^2 + \varepsilon^2}+z}{\sqrt{\rho^2 + z^2 +\varepsilon^2}} \bigg)\hat{\v z}.\end{aligned}$$ This is a regularised form of the magnetic field produced by the potential $\v A'_{\varepsilon}.$ The first two terms of Equation (D4) are the only non-vanishing terms in the limit $\varepsilon \rightarrow 0.$ The third term is shown to vanish easily because there is a term $\varepsilon^2$ in the numerator. However, it is not clear why the last term should vanish. Let us analyse this term. Consider an arbitrary point $z_0$ on the negative $z$-axis. For small $\varepsilon,$ we can make the replacement [@22]: $\sqrt{\rho^2 +z^2 + \varepsilon^2}+z \to (\rho^2+\varepsilon^2)/(2z_0).$ With this replacement, the last term in Equation (D4) becomes $$\begin{aligned} \bigg(\frac{g \,\delta(\rho-\varepsilon)\rho}{2 z_0\sqrt{\rho^2 + z^2 +\varepsilon^2}} + \frac{g \,\delta(\rho-\varepsilon)\,\varepsilon^2}{2\rho z_0\sqrt{\rho^2 + z^2 +\varepsilon^2}} \bigg) \hat{\v z}.\end{aligned}$$ In the limit $\varepsilon \rightarrow 0,$ it follows that Equation (D5) vanishes because $\varepsilon^2 \rightarrow 0$ and $\delta(\rho) \rho =0.$ Hence, $$\begin{aligned} \nonumber\lim_{\varepsilon\to 0} {\boldsymbol{\nabla}}\times \v A'_{\varepsilon} =& \lim_{\varepsilon\to 0}\bigg\{\frac{g \Theta(\rho-\varepsilon)}{(\rho^2 +z^2 + \varepsilon^2)}\bigg( \frac{\rho \hat{\rho}+z \hat{\v z}}{\sqrt{\rho^2+z^2+ \varepsilon^2}} \bigg) + \frac{2g \delta(\rho-\varepsilon)\hat{\v z}}{\rho}\bigg\} \\ = &\; g\frac{\hat{\v r}}{r^2} + 4 \pi g \delta(x)\delta(y)\Theta(-z)\hat{\v z},\end{aligned}$$ where we have used $\hat{\v r}=(\rho \hat{\rho} +z\hat{\v z}) /(\sqrt{\rho^2 +z^2}),$ and inserted $\Theta(-z)=1$ to specify that this expression is valid only for $z<0.$ Derivation of Equation (89) {#E} =========================== Consider the electromagnetic angular momentum of the Thomson dipole whose configuration is shown in Fig. \[Fig10\]. 
The electric and magnetic fields of this dipole are $$\begin{aligned} \v E = q\,\frac{(\v x + \v a /2)}{|\v x + \v a /2|^3}, \quad \v B = g\,\frac{(\v x - \v a /2)}{|\v x - \v a /2|^3}.\end{aligned}$$ These fields satisfy $$\begin{aligned} {\boldsymbol{\nabla}}\cdot \v E =&\, 4 \pi q \delta(\v x+\v a/2), \quad{\boldsymbol{\nabla}}\times \v E = 0, \\ {\boldsymbol{\nabla}}\cdot \v B =&\, 4 \pi g \delta(\v x-\v a/2), \quad {\boldsymbol{\nabla}}\times \v B = 0.\end{aligned}$$ In particular, the electric field can be expressed as the gradient of the electric potential $\v E = - {\boldsymbol{\nabla}}\Phi,$ where $$\begin{aligned} \Phi(\v x) = \frac{q}{|\v x + \v a /2|}.\end{aligned}$$ Using $\v E = - {\boldsymbol{\nabla}}\Phi,$ we write $\v E \times \v B = - {\boldsymbol{\nabla}}\Phi \times \v B,$ which combines with ${\boldsymbol{\nabla}}\times (\Phi \v B) =\Phi{\boldsymbol{\nabla}}\times \v B+{\boldsymbol{\nabla}}\Phi \times \v B $ to obtain $\v E \times \v B = - {\boldsymbol{\nabla}}\times (\Phi \v B).$ If we define the vector $\v W=\Phi\v B,$ then $\v E \times \v B = - {\boldsymbol{\nabla}}\times \v W.$ Using this expression in the integrand of Equation (88), we obtain $$\begin{aligned} \v x \times (\v E \times \v B)= - \v x \times ({\boldsymbol{\nabla}}\times \v W).\end{aligned}$$ To write Equation (E5) in an appropriate form, we can use the following identity expressed in index notation [@61]: $$\begin{aligned} \big[\v x \times \!\big({\boldsymbol{\nabla}}\!\times \!\v W\big)\big]^i = &\, -\partial_j\big(x^jW^i-2W^jx^i\big)+\partial^i\big( x_jW^j\big) -2 x^i\partial_jW^j.\end{aligned}$$ Here summation convention on repeated indices is adopted and $\varepsilon^{ijk}$ is the Levi-Civita symbol with $\varepsilon^{123}=1$ and $\delta^{i}_{j}$ is the Kronecker delta. Equation (E6) can be readily verified. First we write $$\begin{aligned} \nonumber \big[\v x \times \!\big({\boldsymbol{\nabla}}\!\times \!\v W\big)\big]^i=& \, \varepsilon^{ijk}x_j \big( {\boldsymbol{\nabla}}\times \v W\big)_k \\ \nonumber =&\,\varepsilon^{ijk} x_j \varepsilon_{klm} \partial^l W^m \\ \nonumber =& \, (\delta^{i}_{l} \delta^{j}_{m}-\delta^{j}_{l} \delta^{i}_{m}) \, x_j \partial^l W^m \\ =& \, x_m \partial^i W^m - (x_m\partial^m) W^i,\end{aligned}$$ where we have used the identity $\varepsilon^{ijk}\varepsilon_{klm}=\delta^{i}_{l} \delta^{j}_{m}-\delta^{j}_{l} \delta^{i}_{m}.$ Now, consider the identically zero quantities $$\begin{aligned} 2\big( \partial_m W^m x^i - \partial_m W^m x^i \big)\equiv 0, \\ \big(\partial^i x_m W^m + 2 W^m\partial_m x^i - \partial_m x^m W^i \big)\equiv 0.\end{aligned}$$ Adding Equations (E8) and (E9) to Equation (E7), we obtain Equation (E6). 
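For completeness (our own remark), the second of these quantities vanishes because the derivatives act only on the explicit coordinate factors: using $\partial^i x_m=\delta^i_m$, $\partial_m x^i=\delta^i_m$ and $\partial_m x^m=3$ in three dimensions, $$\begin{aligned} \partial^i x_m\, W^m + 2 W^m\partial_m x^i - \partial_m x^m\, W^i = W^i + 2W^i - 3W^i = 0,\end{aligned}$$ while Equation (E8) is zero term by term; adding these zeros to Equation (E7) and regrouping the result into total derivatives yields Equation (E6).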
When Equation (E6) is integrated over a volume, the first two terms of the right-hand side can be transformed into surface integrals which are shown to vanish for a large $r.$ Therefore, $$\begin{aligned} \nonumber \int_{V}\big[\v x \times \big(\v E \times \v B \big)\big]^i \,d^3x=&\, 2\int_{V} x^i\partial_jW^j\,d^3x =\,2 \int_{V}x^i(\partial_j\Phi B^j+\Phi\partial_jB^j)\,d^3x \\ =&-2\!\int_{V} \!x^i(E_j B^j\,)\,d^3x +2\!\int_{V}\!x^i\Phi(\partial_jB^j)\,d^3x.\end{aligned}$$ Using Equation (E10) in Equation (88), we obtain $$\begin{aligned} \v L_\texttt{EM}=-\frac{1}{2\pi c}\!\int_{V} \v x \,\big(\v E\!\cdot\!\v B\big)\,d^3x + \frac{1}{2\pi c}\!\int_{V}\v x\, \Phi ({\boldsymbol{\nabla}}\!\cdot\!\v B)\,d^3x = \,\frac{1}{2\pi c}\!\int_{V}\v x\, \Phi ({\boldsymbol{\nabla}}\!\cdot\!\v B)\,d^3x, \end{aligned}$$ where the integral in the first term has vanished because integrand is an odd function of $\v x$ for the chosen origin. Using Equations (E3) and (E4), we substitute ${\boldsymbol{\nabla}}\cdot\v B=4\pi g\delta(\v x-\v a/2)$ and $ \Phi=q/|\v x + \v a /2|$ into the second integral, obtaining the expected result $$\begin{aligned} \v L_\texttt{EM}=\, \frac{2qg}{c}\!\int_{V} \delta(\v x -\v a/2) \bigg(\frac{\v x}{|\v x + \v a /2|}\bigg)d^3x=\, \frac{2qg}{c}\,\, \frac{\v x}{|\v x + \v a /2|}\bigg|_{\v x = \v a /2}=\frac{qg}{c}\hat{\v a}.\end{aligned}$$ Farmelo G. The strangest man – the hidden life of Paul Dirac, quantum genius. London: Faber & Faber; 2009. Dirac PAM. Quantised singularities in the electromagnetic field. [Proc R Soc Lond A. 1931;](http://dx.doi.org/10.1098/rspa.1931.0130)\ [ 133: 60-72](http://dx.doi.org/10.1098/rspa.1931.0130). Dirac PAM. The theory of magnetic poles. [Phys Rev. 1948; 74: 817-830](https://doi.org/10.1103/PhysRev.74.817). Shnir YM. Magnetic monopoles. Berlin, Germany: Springer; 2005. Carrigan RA, Trower WP, editors. Magnetic monopoles. New York, US: Springer-Verlag; 1983. Craigie NS, Giacomelli G, Nahm W, et al. Theory and detection of magnetic monopoles in gauge theories. Singapore: World Scientific Publishing; 1986. Ripka G. Dual superconductor models of color confinement. [Lect Notes Phys. 2004; 639: 1](https://doi.org/10.1007/b94800). [arXiv:hep-ph/0310102](https://arxiv.org/abs/hep-ph/0310102). Goddard P, Olive DI. Magnetic monopoles in gauge field theories. [Rep Prog Phys. 1978;](https://doi.org/10.1088/0034-4885/41/9/001)\ [ 41: 1357-1437](https://doi.org/10.1088/0034-4885/41/9/001). Coleman SR. The magnetic monopole fifty years later. Les Houches 1981, Proceedings, Gauge Theories In High Energy Physics, Part 1, 461–552 and Harvard Univ. Cambridge–HUTP–82–A032 (82, REC. OCT.) 97p. Giacomelli G. Magnetic monopoles. [Riv Nuovo Cim. 1984; 7: 1–111](https://doi.org/10.1007/BF02724347). Preskill J. Magnetic monopoles. [Ann Rev Nucl Part Sci. 1984; 34: 461–530](https://doi.org/10.1146/annurev.ns.34.120184.002333). Blagojević M, Senjanović P. The quantum field theory of electric and magnetic charge. [Phys Rep. 1988; 157: 233–346](https://doi.org/10.1016/0370-1573(88)90098-1). Harvey JA. Magnetic Monopoles, Duality, and Supersymmetry. [arXiv:hep-th/9603086](https://arxiv.org/abs/hep-th/9603086). Alvarez–Gaumé L, Hassan SF. Introduction to S-Duality in $N =2$ supersymmetric gauge theories (A Pedagogical Reviewof the Work of Seiberg and Witten). [Fortsch Phys. 1997; 45: 159–236](https://doi.org/10.1002/prop.2190450302). [arXiv:hep-th/9701069](https://arxiv.org/abs/hep-th/9701069). Lynden–Bell D, Nouri–Zonoz M. 
Classical monopoles: Newton, NUT space, gravomagnetic lensing, and atomic spectra. [Rev Mod Phys. 1998; 70: 427–445](https://doi.org/10.1103/RevModPhys.70.427). [arXiv:gr-qc/9612049](https://arxiv.org/abs/gr-qc/9612049). Milton KA. Theoretical and experimental status of magnetic monopoles. [Rep Prog Phys.](https://doi.org/10.1088/0034-4885/69/6/R02)\ [2006; 69: 1637–1711](https://doi.org/10.1088/0034-4885/69/6/R02). [arXiv:hep-ex/0602040](https://arxiv.org/abs/hep-ex/0602040). Rajantie A. Introduction to magnetic monopoles. [Cont Phys. 2012; 53: 195–211](https://doi.org/10.1080/00107514.2012.685693). [arXiv:](https://arxiv.org/abs/1204.3077)\ [0906.3219](https://arxiv.org/abs/1204.3077). Jackson JD. Classical electrodynamics. 3rd ed. New York (NY): John Wiley & Sons: 1999. Schwinger J, DeRaad Jr. LL, Milton KA, et al. Classical electrodynamics. Reading (MA): Perseus Books: 1998. Müller–Kirsten HJW. Electrodynamics: An introduction including quantum effects. Singapure: World Scientific: 2004. Sakurai JJ. Modern quantum mechanics. Reading (MA): Addison–Wesley; 1994. Felsager B. Geometry, particles and fields. NewYork (NY): Springer; 1998. Banks T. Modern quantum field theory: a concise introduction. Cambridge: Cambridge University Press; 2008. Nakahara M. Geometry, topology and physics. 2nd ed. London: IoP Publishing; 2003. Zee A. Quantum field theory in a nutshell. Princeton: Princeton University Press; 2010. Lacava F. Classical electrodynamics: from image charges to the photon mass and magnetic monopoles. Switzerland: Springer International Publishing; 2016. Goodstein DL. Richard P. Feynman, teacher. [Phys Today. 1989; 42: 70–75](https://doi.org/10.1063/1.881195). Kragh H. The concept of the monopole. A historical and analytical case study. [Stud Hist Phil Sci A. 1981; 12: 141–172](https://doi.org/10.1016/0039-3681(81)90017-0). Polchinski J. Monopoles, duality, and string theory. [Int J Mod Phys A. 2004; 19: 145-154](https://doi.org/10.1142/S0217751X0401866X).\ [arXiv:hep-th/0304042](https://arxiv.org/abs/hep-th/0304042). Dirac PAM. The monopole concept. [Int J Theor Phys. 1978; 17: 235–247](https://doi.org/10.1007/BF00672870). Letter from W. Pauli to N. Bohr, 5 March 1949. In W. Pauli, Scientific Correspondence with Bohr, Einstein, Heisenberg a.o. Volume III: 1940-1949, Ed. K. von Meyenn, [Sources in the History of Mathematics and Physical Sciences, Band III (1993)](https://doi.org/10.1007/978-3-540-78802-7). Kragh H. Dirac: a scientific biography. Cambridge: Cambridge University Press; 1990, Chapter 10. Saha MN. The origin of mass in neutrons and protons. [Ind J Phys. 1936; 10: 145](http://dx.doi.org/10.1007/BF02838849). Thomson JJ. On momentum in the electric field. [Phil Mag. 2009; 8: 331–356](https://doi.org/10.1080/14786440409463203). Thomson JJ. Elements of themathematical theory of electricity and magnetism. 4th ed. Cambridge: Cambridge University Press; 1904. Wilson HA. Note on Dirac’s theory of magnetic poles. [Phys Rev. 1949; 75: 308](https://doi.org/10.1103/PhysRev.75.309). Saha MN. Note on Dirac’s theory of magnetic poles. [Phys Rev. 1949; 75: 309](https://doi.org/10.1103/PhysRev.75.1968). Fierz M. Zur theorie magnetisch geladener teilchen. Helv Phys Acta. 1944; 17: 27. Schwinger J. A magnetic model of matter. [Science. 1969; 165: 757–761](https://doi.org/10.1126/science.165.3895.757). Aharonov Y, Bohm D. Significance of electromagnetic potentials in the quantum theory. [Phys Rev. 1959; 115: 485–491](https://doi.org/10.1103/PhysRev.115.485). Kunstatter G. 
Monopole charge quantization and the Aharonov–Bohm effect. [Can J Phys. 1984; 62:737–740](https://doi.org/10.1139/p84-101). Weinberg EJ. Classical solutions in quantum field theory: solitons and instantons in high energy physics. Cambridge: Cambridge University Press; 2012. Chapter 5. Dirac PAM. The Lagrangian in quantum mechanics. S Phys Z Sowjetunion. 1933; 3: 64. Feynman RP. The principle of least action in quantum mechanics \[Ph.D. thesis\]. Princeton University; 1942. Feynman RP. Space–time approach to non–relativistic quantum mechanics. [Rev Mod](https://doi.org/10.1103/RevModPhys.20.367)\ [Phys. 1948; 20: 367–387](https://doi.org/10.1103/RevModPhys.20.367). Wu TT, Yang CN. Concept of nonintegrable phase factors and global formulation of gauge fields. [Phys Rev D. 1975; 12: 3845–3857](https://doi.org/10.1103/PhysRevD.12.3845). Goldhaber AS. Role of Spin in the monopole problem. [Phys Rev. 1965; 140: B1407-B1414](https://doi.org/10.1103/PhysRev.140.B1407). Wilczek F. Magnetic flux, angular momentum, and statistics. [Phys Rev Lett. 1982; 48:](https://doi.org/10.1103/PhysRevLett.48.1144)\ [1144–1146](https://doi.org/10.1103/PhysRevLett.48.1144). Kobe DH. Comment on ‘Magnetic flux, angular momentum, and statistics’. [Phys Rev](https://doi.org/10.1103/PhysRevLett.49.1592)\ [Lett. 1982 ;49: 1592](https://doi.org/10.1103/PhysRevLett.49.1592). Jackiw R. Three–cocycle in mathematics and physics. [Phys Rev Lett. 1985; 54: 159–162](https://doi.org/10.1103/PhysRevLett.54.159). Jackiw R. Dirac’s magnetic monopoles (again). [Int J Mod Phys A. 2004; 19S1: 137–143](https://doi.org/10.1142/9789812703996_0011). [arXiv:hep-th/0212058](https://arxiv.org/abs/hep-th/0212058). Jadczyk AZ. Magnetic charge quantization and generalized imprimitivity systems. [Int J Theor Phys. 1975; 14: 183–192](https://doi.org/10.1007/BF01807666). t’ Hooft G. Magnetic monopoles in unified gauge theories. [Nucl Phys B. 1974; 79: 276–](https://doi.org/10.1016/0550-3213(74)90486-6)\ [284](https://doi.org/10.1016/0550-3213(74)90486-6). Polyakov AM. Particle spectrum in quantum field theory. JETP Lett. 1974; 20: 194. Jan Smit. Introduction to quantum fields on a lattice. Cambridge: Cambridge University Press; 2002. Preskill J. Magnetic monopoles in particle physics and cosmology. In: E Kolb, D Schramm, M Turner, editors. Inner space/outer space. Chicago: University Chicago Press; 1985. p. 373. Goldhaber AS, Heras R. Dirac Quantization Condition Holds with Nonzero Photon Mass. [arXiv:1710.03321](https://arxiv.org/abs/1710.03321). Rowe EGP. Green’s functions in space and time. [Am J Phys. 1979; 47: 373](https://doi.org/10.1119/1.11827). MacKenzie R. Path Integral Methods and Applications. [arXiv:quant-ph/0004090](https://arxiv.org/abs/quant-ph/0004090). Adawi I. Thomson’s monopoles. [Am J Phys. 1976; 44: 762](https://doi.org/10.1119/1.10310). Brownstein KR. Angular momentum of a charge monopole pair. [Am J Phys. 1989; 57:](https://doi.org/10.1119/1.15994)\ [ 420–421](https://doi.org/10.1119/1.15994). Poincaré H. Remarques sur une expèrience de M. Birkeland. Comp Rend. 1896; 123: 530. Darboux G. Problème de Mécanique. Bull Sci Math Astro. 1878; 2: 433. Schwinger J. Sources and magnetic charge. [Phys Rev. 1968; 173: 1536–1544](https://doi.org/10.1103/PhysRev.173.1536). Zwanziger D. Quantum field theory of particles with both electric and magnetic charges. [Phys Rev. 1968; 176: 1489–1495](https://doi.org/10.1103/PhysRev.176.1489). Castelnovo C, Moessner R, Sondhi SL. Magnetic monopoles in spin ice. [Nature. 
2008;](https://doi.org/10.1038/nature06433)\ [451: 42–45](https://doi.org/10.1038/nature06433). Bramwell ST, Giblin SR, Calder S, et. al. Measurement of the charge and current of magnetic monopoles in spin ice. [Nature. 2009; 461: 956–959](https://doi.org/10.1038/nature08500). Jaubert LDC, Holdsworth PCW. Magnetic monopole dynamics in spin ice. J Phys Condens Matter. [2011; 23: 164222](https://doi.org/10.1088/0953-8984/23/16/164222). [arXiv:1010.0970](https://arxiv.org/abs/1010.0970). Patrizii L, Spurio M. Status of searches for magnetic monopoles. [Annu Rev Nucl Part](https://doi.org/10.1146/annurev-nucl-102014-022137)\ [Sci. 2015; 65: 279–302](https://doi.org/10.1146/annurev-nucl-102014-022137). [arXiv:1510.07125](https://arxiv.org/abs/1510.07125). Rajantie A. The search for magnetic monopoles. [Phys Today. 2016; 69 (10): 40–46](https://doi.org/10.1063/PT.3.3328). Fairbairn M, Pinfold JL. MoEDAL–a new light on the high–energy frontier. [Cont Phys. 2017; 58: 1–24](https://doi.org/10.1080/00107514.2016.1222649). Acharya B, Alexandre J, Baines S, et al. Search for magnetic monopoles with the MoEDAL forward trapping detector in 13 TeV proton–proton collisions at the LHC. [Phys Rev Lett. 2017; 118: 061801](https://doi.org/10.1103/PhysRevLett.118.061801). [arXiv:1611.06817](https://arxiv.org/abs/1611.06817). Acharya B, Alexandre J, Baines S, et al. Search for magnetic monopoles with the MoEDAL forward trapping detector in 2.11 $\text{fb}^{-1}$ of 13 TeV proton–proton collisions at the LHC. [Phys Lett B. 2018; 782: 510](https://doi.org/10.1016/j.physletb.2018.05.069). [arXiv:1712.09849](https://arxiv.org/abs/1712.09849). Berry MV. Paul Dirac: the purest soul in physics. [Phys. World. 1998; 11: 36–40](https://doi.org/10.1088/2058-7058/11/2/32). Kleinert H. Multivalued fields in condensed matter, electromagnetism, and gravitation. Singapore: World Scientific; 2008. Tanabashi M, et al. (Particle data group). Review of Particle Physics. [Phys Rev D. 2018;](https://doi.org/10.1103/PhysRevD.98.030001)\ [98: 030001](https://doi.org/10.1103/PhysRevD.98.030001). Berry MV. Singularities in Waves. In: R Balian, M Kléman, J–P Poirier, editors. 1981. Les Houches lecture series session XXXV. Amsterdam: North–Holland. p. 453–543. Nye JF, Berry MV. Dislocations in wave trains. [Proc R Soc Lond A. 1974; 336: 165](http://dx.doi.org/10.1098/rspa.1974.0012). Berry MV, Dennis MR, Soskin MS, editors. The plurality of optical singularities. [J Opt A: Pure Appl Opt. 2004; S155–6: 290](http://dx.doi.org/10.1088/1464-4258/6/5/E01). Dennis MR, Kivshar YS, Soskin MS, et al. Singular optics: more ado about nothing. [J Opt A: Pure Appl Opt. 2009; 11: 090201](http://dx.doi.org/10.1088/1464-4258/11/9/090201). Desyatnikov AS, Fadeyeva TA, Dennis MR, editors. Special issue on singular optics. [J Opt. 2013; 15: 040201](http://dx.doi.org/10.1088/2040-8978/15/4/040201). Soskin M, Boriskina SV, Chong Y, Dennis MR, et al. Singular optics and topological photonics. [J Opt. 2017; 19: 010401](http://dx.doi.org/10.1088/2040-8986/19/1/010401). Berry MV, Dennis MR. Knotted and linked phase singularities in monochromatic waves. [Proc R Soc A. 2001; 457: 2251](http://dx.doi.org/10.1098/rspa.2001.0826). Mansuripur M. Comment on Jackson’s analysis of electric charge quantization due to interaction with Dirac’s magnetic monopole. [arXiv:1701.00592](https://arxiv.org/abs/1701.00592).
--- abstract: 'We study an equivariant co-assembly map that is dual to the usual Baum–Connes assembly map and closely related to coarse geometry, equivariant Kasparov theory, and the existence of dual Dirac morphisms. As applications, we prove the existence of dual Dirac morphisms for groups with suitable compactifications, that is, satisfying the Carlsson–Pedersen condition, and we study a ${\textup K}$[-]{}-theoretic counterpart to the proper Lipschitz cohomology of Connes, Gromov and Moscovici.' address: - | Department of Mathematics and Statistics\ University of Victoria\ PO BOX 3045 STN CSC\ Victoria, B.C.\ Canada\ V8W 3P4 - | Mathematisches Institut\ Georg-August-Universität Göttingen\ Bunsenstraße 3–5\ 37073 Göttingen\ Deutschland author: - Heath Emerson - Ralf Meyer title: 'Coarse and equivariant co-assembly maps' --- [^1] Introduction {#sec:intro} ============ This is a sequel to the articles [@Emerson-Meyer:Dualizing; @Emerson-Meyer:Descent], which deal with a coarse co-assembly map that is dual to the usual coarse assembly map. Here we study an equivariant co-assembly map that is dual to the Baum–Connes assembly map for a group $G$. A rather obvious choice for such a dual map is the map $$\label{eq:dumb_co-assembly} p_{\mathcal EG}^*\colon {\textup{KK}}^G_*({\mathbb C},{\mathbb C})\to{\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$$ induced by the projection $p_{\mathcal EG}\colon {\mathcal EG}\to{\textup{point}}$. This map and its application to the Novikov conjecture go back to Kasparov ([@Kasparov:Novikov]). Nevertheless, is not quite the map that we consider here. Our map is closely related to the coarse co-assembly map of [@Emerson-Meyer:Dualizing]. It is an isomorphism if and only if the Dirac-dual-Dirac method applies to $G$. Hence there are many cases (groups with $\gamma\neq1$) where our co-assembly map is an isomorphism and  is not. Most of our results only work if the group $G$ is (almost) totally disconnected and has a $G$[-]{}-compact universal proper $G$[-]{}-space ${\mathcal EG}$. We impose this assumption throughout the introduction. First we briefly recall some of the main ideas of [@Emerson-Meyer:Dualizing; @Emerson-Meyer:Descent]. The new ingredient in the coarse co-assembly map is the *reduced stable Higson corona* ${\mathfrak c^\mathfrak{red}}(X)$ of a coarse space $X$. Its definition resembles that of the usual Higson corona, but its ${\textup K}$[-]{}-theory behaves much better. The coarse co-assembly map is a map $$\label{eq:coarse_co-assembly} \mu\colon {\textup K}_{*+1}\bigl({\mathfrak c^\mathfrak{red}}(X)\bigr) \to {\textup{KX}}^*(X),$$ where ${\textup{KX}}^*(X)$ is a coarse invariant of $X$ that agrees with ${\textup K}^*(X)$ if $X$ is uniformly contractible. If ${\lvertG\rvert}$ is the coarse space underlying a group $G$, then there is a commuting diagram $$\label{eq:non-equivariant_co-assembly} \begin{gathered} \xymatrix@C+2em{ {\textup{KK}}^G_*\bigl({\mathbb C},C_0(G)\bigr) \ar[r]^-{p_{\mathcal EG}^*} \ar@{<->}[d]^{\cong} & {\textup{RKK}}^G_*\bigl({\mathcal EG};{\mathbb C},C_0(G)\bigr) \ar@{<->}[d]^{\cong} \\ {\textup K}_{*+1}\bigl({\mathfrak c^\mathfrak{red}}({\lvertG\rvert})\bigr) \ar[r]^-{\mu} & {\textup{KX}}^*({\lvertG\rvert}). } \end{gathered}$$ In this situation, ${\textup{KX}}^*({\lvertG\rvert})\cong {\textup K}^*({\mathcal EG})$ because ${\lvertG\rvert}$ is coarsely equivalent to ${\mathcal EG}$, which is uniformly contractible.
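As a simple illustration of this diagram (our own example, using only standard facts): for $G={\mathbb Z}^n$ one may take ${\mathcal EG}={\mathbb R}^n$, which is uniformly contractible and coarsely equivalent to ${\lvert{\mathbb Z}^n\rvert}$, so that $$ {\textup{KX}}^*({\lvert{\mathbb Z}^n\rvert}) \cong {\textup K}^*({\mathbb R}^n) \cong \begin{cases} {\mathbb Z}, & *\equiv n \pmod{2},\\ 0, & \text{otherwise}. \end{cases}$$ Since ${\mathbb Z}^n$ is amenable and torsion-free with finite classifying space, the Dirac-dual-Dirac method applies to it, and this is a case where the co-assembly maps discussed below are isomorphisms.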
The commuting diagram , coupled with the reformulation of the Baum–Connes assembly map in [@Meyer-Nest:BC], is the source of the relationship between the coarse co-assembly map and the Dirac-dual-Dirac method mentioned above. If $G$ is a torsion-free discrete group with finite classifying space $BG$, the coarse co-assembly map is an isomorphism if and only if the Dirac-dual-Dirac method applies to $G$. A similar result for groups with torsion is available, but this requires working equivariantly with respect to compact subgroups of $G$. In this article, we work equivariantly with respect to the whole group $G$. The action of $G$ on its underlying coarse space ${\lvertG\rvert}$ by isometries induces an action on ${\mathfrak c^\mathfrak{red}}(G)$. We consider a $G$[-]{}-equivariant analogue $$\label{eq:equivariant_co-assembly} \mu\colon {\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}({\lvertG\rvert})\bigr) \to {\textup K}_*(C_0({\mathcal EG}){\mathbin\rtimes}G)$$ of the coarse co-assembly map ; here ${\textup K^\textup{top}}_*(G,A)$ denotes the domain of the Baum–Connes assembly map for $G$ with coefficients $A$. We avoid ${\textup K}_*({\mathfrak c^\mathfrak{red}}(X){\mathbin\rtimes}G)$ and ${\textup K}_*({\mathfrak c^\mathfrak{red}}(X){\mathbin{\rtimes_\textup r}}G)$ because we can say nothing about these two groups. In contrast, the group ${\textup K^\textup{top}}_*\bigl(G,{\mathfrak c^\mathfrak{red}}({\lvertG\rvert})\bigr)$ is much more manageable. The only *analytical* difficulties in this group come from coarse geometry. There is a commuting diagram similar to  that relates  to equivariant Kasparov theory. To formulate this, we need some results of [@Meyer-Nest:BC]. There is a certain $G$[-]{}-$C^*$-algebra ${\mathsf P}$ and a class ${\mathsf D}\in{\textup{KK}}^G({\mathsf P},{\mathbb C})$ called *Dirac morphism* such that the Baum–Connes assembly map for $G$ is equivalent to the map $${\textup K}_*\bigl( (A\otimes {\mathsf P}) {\mathbin{\rtimes_\textup r}}G\bigr) \to {\textup K}_*(A{\mathbin{\rtimes_\textup r}}G)$$ induced by Kasparov product with ${\mathsf D}$. The Baum–Connes conjecture holds for $G$ with coefficients in ${\mathsf P}\otimes A$ for any $A$. The Dirac morphism is a *weak equivalence*, that is, its image in ${\textup{KK}}^H({\mathsf P},{\mathbb C})$ is invertible for each compact subgroup $H$ of $G$. The existence of the Dirac morphism allows us to localise the (triangulated) category ${\textup{KK}}^G$ at the multiplicative system of weak equivalences. The functor from ${\textup{KK}}^G$ to its localisation turns out to be equivalent to the map $$p_{\mathcal EG}^*\colon {\textup{KK}}^G(A,B)\to{\textup{RKK}}^G({\mathcal EG}; A,B).$$ One of the main results of this paper is a commuting diagram $$\label{eq:maintheorem} \begin{gathered} \xymatrix{ {\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}({\lvertG\rvert})\bigr) \ar[r] \ar@{<->}[d]^{\cong} & {\textup K}^*({\mathcal EG}) \ar@{<->}[d]^{\cong} \\ {\textup{KK}}^G_*({\mathbb C},{\mathsf P}) \ar[r]^-{p_{\mathcal EG}^*} & {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathsf P}). } \end{gathered}$$ In other words, the equivariant coarse co-assembly map  is equivalent to the map $$p_{\mathcal EG}^*\colon {\textup{KK}}^G_*({\mathbb C},{\mathsf P}) \to {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathsf P}).$$ This map is our proposal for a dual to the Baum–Connes assembly map. We should justify why we prefer the map  over .
Both maps have isomorphic targets: $${\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathsf P}) \cong {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C}) \cong {\textup K}_*(C_0({\mathcal EG}){\mathbin\rtimes}G).$$ Even in the usual Baum–Connes assembly map, the analytical side involves a choice between full and reduced group $C^*$[-]{}-algebras and crossed products. Even though the full group $C^*$[-]{}-algebra has better functoriality properties and is sometimes preferred because it gives potentially finer invariants, the reduced one is used because its ${\textup K}$[-]{}-theory is closer to ${\textup K^\textup{top}}_*(G)$. In formulating a dual version of the assembly map, we are faced with a similar situation. Namely, the topological object that is dual to ${\textup K^\textup{top}}_*(G)$ is ${\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$. For the analytical side, we have some choices; we prefer ${\textup{KK}}^G_*({\mathbb C},{\mathsf P})$ over ${\textup{KK}}^G_*({\mathbb C},{\mathbb C})$ because the resulting co-assembly map is an isomorphism in more cases. Of course, we must check that this choice is analytical enough to be useful for applications. The most important of these is the Novikov conjecture. Elements of ${\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$ yield maps ${\textup K^\textup{top}}_*(G)\to{\mathbb Z}$, which are analogous to higher signatures. In particular, gives rise to such objects. The maps ${\textup K^\textup{top}}_*(G)\to{\mathbb Z}$ that come from a class in the range of  are known to yield homotopy invariants for manifolds because there is a pairing between ${\textup{KK}}^G_*({\mathbb C},{\mathbb C})$ and ${\textup K}_*({C^*_\textup{max}}G)$ (see [@Ferry-Ranicki-Rosenberg:Novikov]). But since factors through , the former also produces homotopy invariants. In particular, surjectivity of implies the Novikov conjecture for $G$. More is true: since ${\textup{KK}}^G({\mathbb C},{\mathsf P})$ is the home of a *dual-Dirac morphism*, yields that $G$ has a dual-Dirac morphism and hence a *$\gamma$[-]{}-element* if and only if  is an isomorphism. This observation can be used to give an alternative proof of the main result of [@Emerson-Meyer:Descent]. We call elements in the range of  *boundary classes*. These automatically form a graded ideal in the ${\mathbb Z/2}$-graded unital ring ${\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$. In contrast, the range of the unital ring homomorphism  need not be an ideal because it always contains the unit element of ${\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$. We describe two important constructions of boundary classes, which are related to compactifications and to the proper Lipschitz cohomology of $G$ studied in [@Connes-Gromov-Moscovici:Lipschitz; @Dranishnikov:Lipschitz]. Let ${\mathcal EG}\subseteq Z$ be a $G$[-]{}-equivariant compactification of ${\mathcal EG}$ that is compatible with the coarse structure in a suitable sense. Since there is a map $${\textup K^\textup{top}}_*\bigl(G,C(Z\setminus {\mathcal EG})\bigr) \to {\textup K^\textup{top}}_*\bigl(G,{\mathfrak c^\mathfrak{red}}({\mathcal EG})\bigr),$$ we get boundary classes from the boundary $Z\setminus {\mathcal EG}$. This construction also shows that $G$ has a dual-Dirac morphism if it satisfies the Carlsson–Pedersen condition. This improves upon a result of Nigel Higson ([@Higson:Bivariant]), which shows split injectivity of the Baum–Connes assembly map with coefficients under the same assumptions. 
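Concretely, the route from such a compactification to boundary classes is the composite (assembled from the maps already introduced, using the identification of ${\mathfrak c^\mathfrak{red}}({\mathcal EG})$ with ${\mathfrak c^\mathfrak{red}}({\lvertG\rvert})$ coming from the coarse equivalence between ${\mathcal EG}$ and ${\lvertG\rvert}$): $$ {\textup K^\textup{top}}_{*+1}\bigl(G,C(Z\setminus {\mathcal EG})\bigr) \to {\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}({\mathcal EG})\bigr) \xrightarrow{\;\mu\;} {\textup K}_*(C_0({\mathcal EG}){\mathbin\rtimes}G) \cong {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C}).$$ For orientation only (an aside on our part, not needed in what follows): word-hyperbolic groups are the standard examples here, with $Z$ obtained by attaching the Gromov boundary to a Rips complex model of ${\mathcal EG}$.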
Although we have discussed only ${\textup{KK}}^G_*({\mathbb C},{\mathsf P})$ so far, our main technical result is more general and can also be used to construct elements in Kasparov groups of the form ${\textup{KK}}^G_*\bigl({\mathbb C},C_0(X)\bigr)$ for suitable $G$[-]{}-spaces $X$. If $X$ is a proper $G$[-]{}-space, then we can use such classes to construct boundary classes in ${\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$. This provides a ${\textup K}$[-]{}-theoretic counterpart of the proper Lipschitz cohomology of $G$ defined by Connes, Gromov, and Moscovici in [@Connes-Gromov-Moscovici:Lipschitz]. Our approach clarifies the geometric parts of several constructions in [@Connes-Gromov-Moscovici:Lipschitz]; thus we substantially simplify the proof of the homotopy invariance of Gelfand–Fuchs cohomology classes in [@Connes-Gromov-Moscovici:Lipschitz]. Preliminaries {#sec:preliminaries} ============= Dirac-dual-Dirac method and Baum–Connes conjecture {#sec:Dirac_and_BC} -------------------------------------------------- First, we recall the *Dirac-dual-Dirac method* of Kasparov and its reformulation in [@Meyer-Nest:BC]. This is a technique for proving injectivity of the *Baum–Connes assembly map* $$\label{eq:BC_assembly} \mu\colon {\textup K^\textup{top}}_*(G,B) \to {\textup K}_*\bigl(C^*_r(G,B)\bigr),$$ where $G$ is a locally compact group and $B$ is a $C^*$[-]{}-algebra with a strongly continuous action of $G$ or, briefly, a *$G$[-]{}-$C^*$-algebra*. This method requires a proper $G$-$C^*$-algebra $A$ and classes $$d\in{\textup{KK}}^G(A,{\mathbb C}), \qquad \eta\in{\textup{KK}}^G({\mathbb C},A), \qquad \gamma {\mathrel{\vcentcolon=}}\eta \otimes_A d \in {\textup{KK}}^G({\mathbb C},{\mathbb C}),$$ such that $p_{\mathcal EG}^*(\gamma) = 1_{\mathbb C}$ in ${\textup{RKK}}^G({\mathcal EG};{\mathbb C},{\mathbb C})$. If these data exist, then the Baum–Connes assembly map  is injective for all $B$. If, in addition, $\gamma=1_{\mathbb C}$ in ${\textup{KK}}^G({\mathbb C},{\mathbb C})$, then the Baum–Connes assembly map is invertible for all $B$, so that $G$ satisfies the Baum–Connes conjecture with arbitrary coefficients. Let $A$ and $B$ be $G$[-]{}-$C^*$-algebras. An element $f\in {\textup{KK}}^G(A,B)$ is called a *weak equivalence* in [@Meyer-Nest:BC] if its image in ${\textup{KK}}^H(A,B)$ is invertible for each compact subgroup $H$ of $G$. The following theorem contains some of the main results of [@Meyer-Nest:BC]. \[meyernest\] Let $G$ be a locally compact group. Then there is a $G$[-]{}-$C^*$-algebra ${\mathsf P}$ and a class ${\mathsf D}\in{\textup{KK}}^G({\mathsf P},{\mathbb C})$ called *Dirac morphism* such that 1. ${\mathsf D}$ is a weak equivalence; 2. the Baum–Connes conjecture holds with coefficients in $A\otimes{\mathsf P}$ for any $A$; 3. the assembly map  is equivalent to the map $${\mathsf D}_*\colon {\textup K}_*(A\otimes{\mathsf P}{\mathbin{\rtimes_\textup r}}G) \to {\textup K}_*(A{\mathbin{\rtimes_\textup r}}G);$$ 4. the Dirac-dual-Dirac method applies to $G$ if and only if there is a class $\eta\in{\textup{KK}}^G({\mathbb C},{\mathsf P})$ with $\eta\otimes_{\mathbb C}D = 1_A$, if and only if the map $$\label{diracdualdiracmethod} {\mathsf D}^*\colon {\textup{KK}}^G_*({\mathbb C},{\mathsf P}) \to {\textup{KK}}^G_*({\mathsf P},{\mathsf P}), \qquad x\mapsto D\otimes x,$$ is an isomorphism. Whereas [@Emerson-Meyer:Descent] studies the invertibility of  by relating it to , here we are going to study the map  itself. 
It is shown in [@Meyer-Nest:BC] that the localisation of the category ${\textup{KK}}^G$ at the weak equivalences is isomorphic to the category ${\textup{RKK}}^G({\mathcal EG})$ whose morphism spaces are the groups ${\textup{RKK}}^G({\mathcal EG};A,B)$ as defined by Kasparov in [@Kasparov:Novikov]. This statement is equivalent to the existence of a Poincaré duality isomorphism $$\label{eq:PD_EG} {\textup{KK}}^G_*(A\otimes{\mathsf P},B) \cong {\textup{RKK}}^G_*({\mathcal EG};A,B)$$ for all $G$-$C^*$-algebras $A$ and $B$ (this notion of duality is analysed in [@Emerson-Meyer:Euler]). The canonical functor from ${\textup{KK}}^G$ to the localisation becomes the obvious functor $$p_{\mathcal EG}^*\colon {\textup{KK}}^G(A,B) \to {\textup{RKK}}^G({\mathcal EG}; A,B).$$ Since ${\mathsf D}$ is a weak equivalence, $p_{\mathcal EG}^*({\mathsf D})$ is invertible. Hence the maps in the following commuting square are isomorphisms for all $G$[-]{}-$C^*$-algebras $A$ and $B$: $$\xymatrix@C=4em{ {\textup{RKK}}^G_*({\mathcal EG}; A, B\otimes {\mathsf P}) \ar[r]_{\cong}^{{\mathsf D}_*} \ar[d]_{\cong}^{{\mathsf D}^*} & {\textup{RKK}}^G_*({\mathcal EG}; A,B) \ar[d]_{\cong}^{{\mathsf D}^*} \\ {\textup{RKK}}^G_*({\mathcal EG}; A\otimes{\mathsf P},B\otimes{\mathsf P}) \ar[r]_{\cong}^{{\mathsf D}_*} & {\textup{RKK}}^G_*({\mathcal EG}; A\otimes{\mathsf P},B). }$$ Together with  this implies $${\textup{KK}}^G_*(A\otimes{\mathsf P},B)\cong{\textup{KK}}^G_*(A\otimes{\mathsf P}, B\otimes{\mathsf P}).$$ In the following, it will be useful to turn the isomorphism $${\textup K^\textup{top}}_*(G,A) \cong {\textup K}_*\bigl((A\otimes{\mathsf P}){\mathbin{\rtimes_\textup r}}G\bigr)$$ in Theorem \[meyernest\].(c) into a definition. Group actions on coarse spaces {#sec:acts_coarse} ------------------------------ Let $G$ be a locally compact group and let $X$ be a right $G$[-]{}-space and a coarse space. We always assume that $G$ acts continuously and coarsely on $X$, that is, the set $\{(xg, yg)\mid g \in K, (x,y) \in E\}$ is an entourage for any compact subset $K$ of $G$ and any entourage $E$ of $X$. \[typesofactions\] We say that $G$ *acts by translations* on $X$ if $\{(x,gx)\mid x\in X,\ g\in K\}$ is an entourage for all compact subsets $K\subseteq G$. We say that $G$ acts by *isometries* if every entourage of $X$ is contained in a $G$[-]{}-invariant entourage. \[groupsascoarsespaces\] Let $G$ be a locally compact group. Then $G$ has a unique coarse structure for which the right translation action is isometric; the corresponding coarse space is denoted ${\lvertG\rvert}$. The generating entourages are of the form $$\bigcup_{g\in G} Kg\times Kg = \{(xg,yg) \mid g\in G, x,y\in K\}$$ for compact subsets $K$ of $G$. The *left* translation action is an action by translations for this coarse structure. \[exa:coarse\_on\_proper\_G-space\] More generally, any proper, $G$[-]{}-compact $G$[-]{}-space $X$ carries a unique coarse structure for which $G$ acts isometrically; its entourages are defined as in Example \[groupsascoarsespaces\]. With this coarse structure, the orbit map $G\to X$, $g\mapsto g\cdot x$, is a coarse equivalence for any choice of $x\in X$. If the $G$[-]{}-compactness assumption is omitted, the result is a $\sigma$[-]{}-coarse space. We always equip a proper $G$[-]{}-space with this additional structure. The stable Higson corona {#sec:Higson_corona} ------------------------ We next recall the definition of the *stable Higson corona* of a coarse space $X$ from [@Emerson-Meyer:Dualizing; @Emerson-Meyer:Descent]. 
Let $D$ be a $C^*$[-]{}-algebra. Let ${\mathcal M}(D\otimes{\mathbb K})$ be the multiplier algebra of $D\otimes{\mathbb K}$, and let ${\bar{\mathfrak B}^\mathfrak{red}}(X,D)$ be the $C^*$[-]{}-algebra of norm-continuous, bounded functions $f\colon X\to{\mathcal M}(D\otimes{\mathbb K})$ for which $f(x)-f(y)\in D\otimes{\mathbb K}$ for all $x,y\in X$. We also let $${\mathfrak B^\mathfrak{red}}(X,D) {\mathrel{\vcentcolon=}}{\bar{\mathfrak B}^\mathfrak{red}}(X,D)/C_0(X,D\otimes{\mathbb K}).$$ A function $f\in{\bar{\mathfrak B}^\mathfrak{red}}(X,D)$ has *vanishing variation* if the function $E \ni (x,y)\mapsto {\lVertf(x)-f(y)\rVert}$ vanishes at $\infty$ for any closed entourage $E\subseteq X\times X$. The *reduced stable Higson compactification* of $X$ with coefficients $D$ is the subalgebra ${\mathfrak{\bar c}^\mathfrak{red}}(X,D) \subseteq {\bar{\mathfrak B}^\mathfrak{red}}(X,D)$ of vanishing variation functions. The quotient $${\mathfrak c^\mathfrak{red}}(X,D) {\mathrel{\vcentcolon=}}{\mathfrak{\bar c}^\mathfrak{red}}(X,D)/C_0(X,D\otimes{\mathbb K}) \subseteq {\mathfrak B^\mathfrak{red}}(X,D)$$ is called *reduced stable Higson corona* of $X$. This defines a functor on the coarse category of coarse spaces: a coarse map $f\colon X\to X'$ induces a map ${\mathfrak c^\mathfrak{red}}(X',D)\to{\mathfrak c^\mathfrak{red}}(X,D)$, and two maps $X\to X'$ induce the same map ${\mathfrak c^\mathfrak{red}}(X',D)\to{\mathfrak c^\mathfrak{red}}(X,D)$ if they are close. Hence a coarse equivalence $X\to X'$ induces an isomorphism ${\mathfrak c^\mathfrak{red}}(X',D)\cong{\mathfrak c^\mathfrak{red}}(X,D)$. For some technical purposes, we must allow unions ${\mathscr X}=\bigcup X_n$ of coarse spaces such that the embeddings $X_n\to X_{n+1}$ are coarse equivalences; such spaces are called *$\sigma$[-]{}-coarse spaces*. The main example is the *Rips complex* ${\mathscr P}(X)$ of a coarse space $X$, which is used to define its coarse ${\textup K}$[-]{}-theory. More generally, if $X$ is a proper but not $G$[-]{}-compact $G$[-]{}-space, then $X$ may be endowed with the structure of a $\sigma$[-]{}-coarse space. For coarse spaces of the form ${\lvertG\rvert}$ for a locally compact group $G$ with a $G$[-]{}-compact universal proper $G$[-]{}-space ${\mathcal EG}$, we may use ${\mathcal EG}$ instead of ${\mathscr P}(X)$ because ${\mathcal EG}$ is coarsely equivalent to $G$ and uniformly contractible. Therefore, we do not need $\sigma$[-]{}-coarse spaces much; they only occur in Lemma \[lem:coarse\_gives\_classes\]. It is straightforward to extend the definitions of ${\mathfrak{\bar c}^\mathfrak{red}}(X,D)$ and ${\mathfrak c^\mathfrak{red}}(X,D)$ to $\sigma$[-]{}-coarse spaces (see [@Emerson-Meyer:Dualizing; @Emerson-Meyer:Descent]). Since we do not use this generalisation much, we omit details on this. Let $H$ be a locally compact group that acts coarsely and properly on $X$. It is crucial for us to allow non-compact groups here, whereas [@Emerson-Meyer:Descent] mainly needs equivariance for compact groups. Let $D$ be an $H$[-]{}-$C^*$-algebra, and let ${\mathbb K}_H{\mathrel{\vcentcolon=}}{\mathbb K}(\ell^2{\mathbb N}\otimes L^2H)$. Then $H$ acts on ${\bar{\mathfrak B}^\mathfrak{red}}(X,D\otimes {\mathbb K}_H)$ by $$(h\cdot f)(x){\mathrel{\vcentcolon=}}h\cdot \bigl(f(xh)\bigr),$$ where we use the obvious action of $H$ on $D\otimes{\mathbb K}_H$ and its multiplier algebra. 
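To illustrate the vanishing-variation condition in the simplest non-equivariant situation (our own example): take $X=[0,\infty)$ with its metric coarse structure, $D={\mathbb C}$, and a rank-one projection $e\in{\mathbb K}$, and set $$ f(x) = e^{i\sqrt{x}}\,e + (1-e) \in {\mathcal M}({\mathbb K}), \qquad f(x)-f(y) = \bigl(e^{i\sqrt{x}}-e^{i\sqrt{y}}\bigr)e \in {\mathbb K}.$$ Then $f\in{\bar{\mathfrak B}^\mathfrak{red}}(X,{\mathbb C})$, and on the entourage ${\lvert x-y\rvert}\le R$ we have ${\lVert f(x)-f(y)\rVert}\le {\lvert\sqrt{x}-\sqrt{y}\rvert}\le R/(\sqrt{x}+\sqrt{y})$ for $x+y>0$, which tends to $0$ at infinity; hence $f$ has vanishing variation and defines a unitary in the reduced stable Higson corona ${\mathfrak c^\mathfrak{red}}(X)$.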
The action of $H$ on ${\bar{\mathfrak B}^\mathfrak{red}}(X, D\otimes{\mathbb K}_H)$ need not be continuous; we let ${\bar{\mathfrak B}^\mathfrak{red}}_H(X,D)$ be the subalgebra of $H$[-]{}-continuous elements in ${\bar{\mathfrak B}^\mathfrak{red}}(X, D\otimes{\mathbb K}_H)$. We let ${\mathfrak{\bar c}^\mathfrak{red}}_H(X,D)$ be the subalgebra of vanishing variation functions in ${\bar{\mathfrak B}^\mathfrak{red}}_H(X,D)$. Both algebras contain $C_0(X, D\otimes{\mathbb K}_H)$ as an ideal. The corresponding quotients are denoted by ${\mathfrak B^\mathfrak{red}}_H(X,D)$ and ${\mathfrak c^\mathfrak{red}}_H(X,D)$. By construction, we have a natural morphism of extensions of $H$[-]{}-$C^*$-algebras $$\label{eq:forget-control} \begin{gathered} \xymatrix{ C_0(X, D\otimes{\mathbb K}_H)\ \ar@{>->}[r] \ar@{=}[d] & {\mathfrak{\bar c}^\mathfrak{red}}_H(X,D) \ar@{->>}[r] \ar[d]^{\subseteq} & {\mathfrak c^\mathfrak{red}}_H(X,D) \ar[d]^{\subseteq} \\ C_0(X, D\otimes{\mathbb K}_H)\ \ar@{>->}[r] & {\bar{\mathfrak B}^\mathfrak{red}}_H(X,D) \ar@{->>}[r] & {\mathfrak B^\mathfrak{red}}_H(X,D). } \end{gathered}$$ Concerning the extension of this construction to $\sigma$[-]{}-coarse spaces, we only mention one technical subtlety. We must extend the functor ${\textup K^\textup{top}}_*(H,{\text\textvisiblespace})$ from $C^*$[-]{}-algebras to $\sigma$[-]{}-$H$-$C^*$[-]{}-algebras. Here we use the definition $$\label{vanilla} {\textup K^\textup{top}}_*(H,A) \cong {\textup K}_*\bigl((A\otimes{\mathsf P}){\mathbin{\rtimes_\textup r}}H\bigr),$$ where ${\mathsf D}\in{\textup{KK}}^H({\mathsf P},{\mathbb C})$ is a Dirac morphism for $H$. The more traditional definition as a colimit of ${\textup{KK}}^G_*(C_0(X),A)$, where $X\subseteq{\mathcal EG}$ is $G$[-]{}-compact, yields a wrong result if $A$ is a $\sigma$[-]{}-$H$-$C^*$-algebra because colimits and limits do not commute. Let $H$ be a locally compact group, let $X$ be a coarse space with an isometric, continuous, proper action of $H$, and let $D$ be an $H$[-]{}-$C^*$-algebra. The $H$[-]{}-equivariant coarse ${\textup K}$[-]{}-theory ${\textup{KX}}^*_H(X,D)$ of $X$ with coefficients in $D$ is defined in [@Emerson-Meyer:Descent] by $$\label{definitionofcoarsektheory} {\textup{KX}}_H^*(X,D) {\mathrel{\vcentcolon=}}{\textup K^\textup{top}}_*\bigl(H, C_0({\mathscr P}(X),D)\bigr).$$ As observed in [@Emerson-Meyer:Descent], we have ${\textup K^\textup{top}}_*\bigl(H, C_0({\mathscr P}(X),D)\bigr) \cong {\textup K}_*(C_0({\mathscr P}(X),D){\mathbin\rtimes}H)$ because $H$ acts properly on ${\mathscr P}(X)$. For most of our applications, $X$ will be equivariantly uniformly contractible for all compact subgroups $K\subseteq H$, that is, the natural embedding $X\to{\mathscr P}(X)$ is a $K$[-]{}-equivariant coarse homotopy equivalence. In such cases, we simply have $$\label{eq:simplify_coarse_K-theory} {\textup{KX}}_H^*(X,D) \cong {\textup K^\textup{top}}_*\bigl(H,C_0(X,D)\bigr).$$ In particular, this applies if $X$ is an $H$[-]{}-compact universal proper $H$[-]{}-space (again, recall that the coarse structure is determined by requiring $H$ to act isometrically). The *$H$[-]{}-equivariant coarse co-assembly map for $X$ with coefficients in $D$* is a certain map $$\mu^*\colon {\textup K^\textup{top}}_{*+1}\bigl(H,{\mathfrak c^\mathfrak{red}}_H(X,D)\bigr) \to {\textup{KX}}^*_H(X,D)$$ defined in [@Emerson-Meyer:Descent]. 
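As a degenerate instance of the isomorphism ${\textup{KX}}_H^*(X,D)\cong{\textup K^\textup{top}}_*\bigl(H,C_0(X,D)\bigr)$ above (our own remark): if $H$ is compact, then the one-point space is an $H$[-]{}-compact universal proper $H$[-]{}-space, and $$ {\textup{KX}}^0_H({\textup{point}}) \cong {\textup K^\textup{top}}_0(H,{\mathbb C}) \cong {\textup K}_0\bigl(C^*(H)\bigr) \cong R(H), \qquad {\textup{KX}}^1_H({\textup{point}}) \cong 0,$$ the representation ring of $H$; in this case no coarse geometry remains and only the representation theory of $H$ is visible.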
In the special case where we have , this is simply the boundary map for the extension $C_0(X, D\otimes{\mathbb K}_H){\rightarrowtail}{\mathfrak{\bar c}^\mathfrak{red}}_H(X,D) {\twoheadrightarrow}{\mathfrak c^\mathfrak{red}}_H(X,D)$. We are implicitly using the fact that the functor ${\textup K^\textup{top}}_*(H,{\text\textvisiblespace})$ has long exact sequences for arbitrary extensions of $H$-$C^*$[-]{}-algebras, which is proved in [@Emerson-Meyer:Descent] using the isomorphism $${\textup K^\textup{top}}_*(H,B) {\mathrel{\vcentcolon=}}{\textup K}_*\bigl((B\otimes {\mathsf P}){\mathbin{\rtimes_\textup r}}H\bigr) \cong {\textup K}_*\bigl((B{\mathbin{\otimes_\textup{max}}}{\mathsf P}){\mathbin\rtimes}H\bigr)$$ and exactness properties of maximal $C^*$[-]{}-tensor products and full crossed products. There is also an alternative picture of the co-assembly map as a forget-control map, provided $X$ is uniformly contractible (see [@Emerson-Meyer:Descent]\*[2.8]{}). We have the following equivariant version of this result: \[forgettingcontrol\] Let $G$ be a totally disconnected group with a $G$[-]{}-compact universal proper $G$[-]{}-space ${\mathcal EG}$. Then the $G$[-]{}-equivariant coarse co-assembly map for $G$ is equivalent to the map $$j_*\colon {\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}_G({\mathcal EG},D)\bigr) \to{\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak B^\mathfrak{red}}_G({\mathcal EG},D)\bigr)$$ induced by the inclusion $j\colon {\mathfrak c^\mathfrak{red}}_G({\mathcal EG},D) \to{\mathfrak B^\mathfrak{red}}_G({\mathcal EG},D)$. The equivalence of the two maps means that there is a natural commuting diagram $$\xymatrix{ {\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}_G({\lvertG\rvert},D)\bigr) \ar[r]^{\mu^*} \ar@{<->}[d]^{\cong} & {\textup{KX}}_*^G({\lvertG\rvert},D) \ar@{<->}[d]^{\cong} \\ {\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}_G({\mathcal EG},D)\bigr) \ar[r]^{j^*} & {\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak B^\mathfrak{red}}_G({\mathcal EG},D)\bigr). }$$ Recall that $j$ is induced by the inclusion ${\mathfrak{\bar c}^\mathfrak{red}}_G({\mathcal EG},D)\to {\bar{\mathfrak B}^\mathfrak{red}}_G({\mathcal EG},D)$, which exactly forgets the vanishing variation condition. Hence $j_*$ is a forget-control map. We may replace ${\lvertG\rvert}$ by ${\mathcal EG}$ because ${\mathcal EG}$ is coarsely equivalent to ${\lvertG\rvert}$. The coarse ${\textup K}$[-]{}-theory of ${\mathcal EG}$ agrees with the usual ${\textup K}$[-]{}-theory of ${\mathcal EG}$ (see [@Emerson-Meyer:Descent]). A slight elaboration of the proof of [@Emerson-Meyer:Descent]\*[Lemma 15]{} shows that $${\textup K}^H_*\bigl({\bar{\mathfrak B}^\mathfrak{red}}_G({\mathcal EG},D)\bigr) \cong {\textup{KK}}^H_*\bigl({\mathbb C},{\bar{\mathfrak B}^\mathfrak{red}}_G({\mathcal EG},D)\bigr)$$ vanishes for all compact subgroups $H$ of $G$. This yields ${\textup K^\textup{top}}_*\bigl(G,{\bar{\mathfrak B}^\mathfrak{red}}_G({\mathcal EG},D)\bigr)=0$ by a result of [@Chabert-Echterhoff-Oyono:Going]. Now the assertion follows from the Five Lemma and the naturality of the ${\textup K}$[-]{}-theory long exact sequence for  as in [@Emerson-Meyer:Descent]. Classes in Kasparov theory from the stable Higson corona ======================================================== In this section, we show how to construct classes in equivariant ${\textup{KK}}$-theory from the ${\textup K}$[-]{}-theory of the stable Higson corona. 
The following lemma is our main technical device: \[lem:coarse\_gives\_classes\] Let $G$ and $H$ be locally compact groups and let $X$ be a coarse space equipped with commuting actions of $G$ and $H$. Suppose that $G$ acts by translations and that $H$ acts properly and by isometries. Let $A$ and $D$ be $H$[-]{}-$C^*$-algebras, equipped with the trivial $G$[-]{}-action. We abbreviate $$B_X{\mathrel{\vcentcolon=}}C_0(X,D\otimes{\mathbb K}_H{\mathbin{\otimes_\textup{max}}}A){\mathbin\rtimes}H, \qquad E_X{\mathrel{\vcentcolon=}}({\mathfrak{\bar c}^\mathfrak{red}}_H(X,D) {\mathbin{\otimes_\textup{max}}}A){\mathbin\rtimes}H$$ and similarly for ${\mathscr P}(X)$ instead of $X$. There are extensions $B_X{\rightarrowtail}E_X{\twoheadrightarrow}E_X/B_X$ and $B_{{\mathscr P}(X)}{\rightarrowtail}E_{{\mathscr P}(X)}{\twoheadrightarrow}E_{{\mathscr P}(X)}/B_{{\mathscr P}(X)}$ with $$E_{{\mathscr P}(X)}/B_{{\mathscr P}(X)} \cong E_X/B_X \cong ({\mathfrak c^\mathfrak{red}}_H(X,D) {\mathbin{\otimes_\textup{max}}}A){\mathbin\rtimes}H,$$ and a natural commuting diagram $$\label{eq:technical_diagram} \begin{gathered} \xymatrix{ {\textup K}_{*+1}(E_X/B_X) \ar[r]^-{\partial} \ar[d]^{\psi} & {\textup K}_*(B_{{\mathscr P}(X)}) \ar[d]^{\phi} \\ {\textup{KK}}^G_*({\mathbb C},B_X) \ar[r]^-{p_{\mathcal EG}^*} & {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},B_X). } \end{gathered}$$ The quotients $E_X/B_X$ and $E_{{\mathscr P}(X)}/B_{{\mathscr P}(X)}$ are as asserted and agree because $X\to{\mathscr P}(X)$ is a coarse equivalence and because maximal tensor products and full crossed products are exact functors in complete generality, unlike spatial tensor products and reduced crossed products. We let $\partial$ be the ${\textup K}$[-]{}-theory boundary map for the extension $B_{{\mathscr P}(X)}{\rightarrowtail}E_{{\mathscr P}(X)}{\twoheadrightarrow}E_X/B_X$. Since we have a natural map ${\mathfrak c^\mathfrak{red}}_H(X,D){\mathbin{\otimes_\textup{max}}}A\to {\mathfrak c^\mathfrak{red}}_H(X,D{\mathbin{\otimes_\textup{max}}}A)$, we may replace the pair $(D,A)$ by $(D{\mathbin{\otimes_\textup{max}}}A,{\mathbb C})$ and omit $A$ if convenient. Stabilising $D$ by ${\mathbb K}_H$, we can further eliminate the stabilisations. First we lift the ${\textup K}$[-]{}-theory boundary map for the extension $B_X{\rightarrowtail}E_X{\twoheadrightarrow}E_X/B_X$ to a map $\psi\colon {\textup K}_{*+1}(E_X/B_X)\to {\textup{KK}}^G_*({\mathbb C},B_X)$. The $G$[-]{}-equivariance of the resulting Kasparov cycles follows from the assumption that $G$ acts on $X$ by translations. We have to distinguish between the cases $*=0$ and $*=1$. We only write down the construction for $*=0$. Since the algebra $E_X/B_X$ is matrix-stable, ${\textup K}_1(E_X/B_X)$ is the homotopy group of unitaries in $E_X/B_X$ without further stabilisation. A cycle for ${\textup{KK}}^G_0({\mathbb C},B_X)$ is given by two $G$[-]{}-equivariant Hilbert modules ${\mathcal E}_\pm$ over $B_X$ and a $G$[-]{}-continuous adjointable operator $F\colon{\mathcal E}_+\to{\mathcal E}_-$ for which $1-FF^*$, $1-F^*F$ and $gF-F$ for $g\in G$ are compact; we take ${\mathcal E}_\pm = B_X$ and let $F\in E_X\subseteq {\mathcal M}(B_X)$ be a lifting for a unitary $u\in E_X/B_X$. Since $G$ acts on $X$ by translations, the induced action on ${\mathfrak c^\mathfrak{red}}_H(X,D)$ and hence on $E_X/B_X$ is trivial. Hence $u$ is a $G$[-]{}-invariant unitary in $E_X/B_X$.
For the lifting $F$, this means that $$1-FF^*,\ 1-F^*F,\ gF-F\in B_X.$$ Hence $F$ defines a cycle for ${\textup{KK}}_0^G({\mathbb C},B_X)$. We get a well-defined map $[u]\mapsto [F]$ from ${\textup K}_1(E_X/B_X)$ to ${\textup{KK}}^G_0({\mathbb C},B_X)$ because homotopic unitaries yield operator homotopic Kasparov cycles. Next we have to factor the map $p_{\mathcal EG}^*\circ\psi$ in  through ${\textup K}_0(B_{{\mathscr P}(X)})$. The main ingredient is a certain continuous map $\bar{c}\colon {\mathcal EG}\times X\to{\mathscr P}(X)$. We use the same description of ${\mathscr P}(X)$ as in [@Emerson-Meyer:Descent] as the space of positive measures on $X$ with $1/2 < \mu(X)\le1$; this is a $\sigma$[-]{}-coarse space in a natural way; we write it as ${\mathscr P}(X)=\bigcup {\textup P}_d(X)$. There is a function $c\colon {\mathcal EG}\to{\mathbb R}_+$ for which $\int_{\mathcal EG}c(\mu g)\,\textup{d}g=1$ for all $\mu\in{\mathcal EG}$ and $\operatorname{supp}c\cap Y$ is compact for $G$[-]{}-compact $Y\subseteq{\mathcal EG}$. If $\mu\in{\mathcal EG}$, $x\in X$, then the condition $$\langle \bar{c}(\mu,x), \alpha\rangle {\mathrel{\vcentcolon=}}\int_G c(\mu g) \alpha(g^{-1}x)\,\textup{d}g$$ for $\alpha\in C_0(X)$ defines a probability measure on $X$. Since such measures are contained in ${\mathscr P}(X)$, $\bar{c}$ defines a map $\bar{c}\colon {\mathcal EG}\times X\to{\mathscr P}(X)$. This map is continuous and satisfies $\bar{c}(\mu g,g^{-1}xh)=\bar{c}(\mu,x)h$ for all $g\in G$, $\mu\in{\mathcal EG}$, $x\in X$, $h\in H$. For a $C^*$[-]{}-algebra $Z$, let $C({\mathcal EG},Z)$ be the $\sigma$[-]{}-$C^*$-algebra of all continuous functions $f\colon {\mathcal EG}\to Z$ without any growth restriction. Thus $C({\mathcal EG},Z)=\varprojlim C(K,Z)$, where $K$ runs through the directed set of compact subsets of ${\mathcal EG}$. We claim that $(\bar{c}^* f)(\mu)(x) {\mathrel{\vcentcolon=}}f\bigl(\bar{c}(\mu,x)\bigr)$ for $f\in C_0({\mathscr P}(X),D)$ defines a continuous $^*$[-]{}-homomorphism $$\bar{c}^*\colon C_0({\mathscr P}(X),D) \to C\bigl({\mathcal EG},C_0(X,D)\bigr).$$ If $K\subseteq{\mathcal EG}$ is compact, then there is a compact subset $L\subseteq G$ such that $c(\mu\cdot g)=0$ for $\mu\in K$ and $g\notin L$. Hence $\bar{c}(\mu,x)$ is supported in $L^{-1}x$ for $\mu\in K$. Since $G$ acts on $X$ by translations, such measures are contained in a filtration level ${\textup P}_d(X)$. Hence $\bar{c}^*(f)$ restricts to a $C_0$-function $K\times X\to D$ for all $f\in C_0({\mathscr P}(X),D)$. This proves the claim. Since $\bar{c}$ is $H$[-]{}-equivariant and $G$[-]{}-invariant, we get an induced map $$B_{{\mathscr P}(X)} = C_0({\mathscr P}(X),D){\mathbin\rtimes}H \to (C\bigl({\mathcal EG},C_0(X,D)\bigr){\mathbin\rtimes}H)^G = C({\mathcal EG},B_X)^G,$$ where $Z^G\subseteq Z$ denotes the subalgebra of $G$[-]{}-invariant elements. We obtain an induced $^*$[-]{}-homomorphism between the stable multiplier algebras as well. An element of ${\textup K}_0\bigl(B_{{\mathscr P}(X)}\bigr)$ is represented by a self-adjoint bounded multiplier $F\in{\mathcal M}(B_{{\mathscr P}(X)}\otimes{\mathbb K})$ such that $1-FF^*$ and $1-F^*F$ belong to $B_{{\mathscr P}(X)}\otimes{\mathbb K}$.
Now $\tilde{F}{\mathrel{\vcentcolon=}}\bar{c}^*(F)$ is a $G$[-]{}-invariant bounded multiplier of $C({\mathcal EG},B_X\otimes{\mathbb K})$ and hence a $G$[-]{}-invariant multiplier of $C_0({\mathcal EG},B_X\otimes{\mathbb K})$, such that $\alpha\cdot (1-\tilde{F}\tilde{F}^*)$ and $\alpha\cdot (1-\tilde{F}^*\tilde{F})$ belong to $C_0({\mathcal EG},B_X\otimes{\mathbb K})$ for all $\alpha\in C_0({\mathcal EG})$. This says exactly that $\tilde{F}$ is a cycle for ${\textup{RKK}}^G_0({\mathcal EG};{\mathbb C},B_X)$. This construction provides the natural map $$\phi\colon {\textup K}_0(B_{{\mathscr P}(X)}) \to {\textup{RKK}}^G_0({\mathcal EG};{\mathbb C},B_X).$$ Finally, a routine computation, which we omit, shows that the two images of a unitary $u\in E_X/B_X$ differ by a compact perturbation. Hence the diagram  commutes. We are mainly interested in the case where $A$ is the source ${\mathsf P}$ of a Dirac morphism for $H$. Then ${\textup K}_{*+1}(E_X/B_X) = {\textup K^\textup{top}}_*\bigl(H,{\mathfrak c^\mathfrak{red}}_H(X,D)\bigr)$, and the top row in  is the $H$[-]{}-equivariant coarse co-assembly map for $X$ with coefficients $D$. Since we assume $H$ to act properly on $X$, we have a ${\textup{KK}}^G$-equivalence $B_X\sim C_0(X,D){\mathbin\rtimes}H$, and similarly for ${\mathscr P}(X)$. Hence we now get a commuting square $$\label{eq:notso_technical_diagram} \begin{gathered} \xymatrix@C=4em{ {\textup K^\textup{top}}_{*+1}\bigl(H,{\mathfrak c^\mathfrak{red}}_H(X,D)\bigr) \ar[r]^-{\partial} \ar[d]^{\psi} & {\textup{KX}}^*_H(X,D) \ar[d]^{\phi} \\ {\textup{KK}}^G_*({\mathbb C},C_0(X,D){\mathbin\rtimes}H) \ar[r]^-{p_{\mathcal EG}^*} & {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},C_0(X,D){\mathbin\rtimes}H). } \end{gathered}$$ If, in addition, $D={\mathbb C}$ and the action of $H$ on $X$ is free, then we can further simplify this to $$\label{eq:even_less_technical_diagram} \begin{gathered} \xymatrix@C=4em{ {\textup K^\textup{top}}_{*+1}\bigl(H,{\mathfrak c^\mathfrak{red}}_H(X)\bigr) \ar[r]^-{\partial} \ar[d]^{\psi} & {\textup{KX}}^*_H(X) \ar[d]^{\phi} \\ {\textup{KK}}^G_*\bigl({\mathbb C},C_0(X/H)\bigr) \ar[r]^-{p_{\mathcal EG}^*} & {\textup{RKK}}^G_*\bigl({\mathcal EG};{\mathbb C},C_0(X/H)\bigr). } \end{gathered}$$ We may also specialise the space $X$ to ${\lvertG\rvert}$, with $G$ acting by multiplication on the left, and with $H\subseteq G$ a compact subgroup acting on ${\lvertG\rvert}$ by right multiplication. This is the special case of  that is used in [@Emerson-Meyer:Descent]. The following applications will require other choices of $X$. Applications to Lipschitz classes {#sec:apply_main_lemma} --------------------------------- Now we use Lemma \[lem:coarse\_gives\_classes\] to construct interesting elements in ${\textup{KK}}^G_*\bigl({\mathbb C},C_0(X)\bigr)$ for a $G$[-]{}-space $X$. This is related to the method of Lipschitz maps developed by Connes, Gromov and Moscovici in [@Connes-Gromov-Moscovici:Lipschitz]. ### Pulled-back coarse structures {#sec:pull-back_coarse} Let $X$ be a $G$[-]{}-space, let $Y$ be a coarse space and let $\alpha\colon X\to Y$ be a proper continuous map. We pull back the coarse structure on $Y$ to a coarse structure on $X$, letting $E\subseteq X\times X$ be an entourage if and only if $\alpha_*(E)\subseteq Y\times Y$ is one. Since $\alpha$ is proper and continuous, this coarse structure is compatible with the topology on $X$. 
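For instance (our own illustration), take $X={\mathbb R}^n$ with the rotation action of $G=\textup{SO}(n)$, $Y=[0,\infty)$ with its metric coarse structure, and $\alpha(x)={\lVert x\rVert}$, which is proper and continuous. The entourages of the pulled-back coarse structure are exactly the subsets of $$ E_R = \bigl\{(x,x')\in{\mathbb R}^n\times{\mathbb R}^n \bigm| \bigl|{\lVert x\rVert}-{\lVert x'\rVert}\bigr|\le R \bigr\}, \qquad R>0,$$ so any two points on a common sphere are close; since $\alpha(gx)=\alpha(x)$ for all $g\in\textup{SO}(n)$, the group acts by translations for this coarse structure.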
For this coarse structure, $G$ *acts by translations* if and only if $\alpha$ satisfies the following *displacement condition* used in [@Connes-Gromov-Moscovici:Lipschitz]: for any compact subset $K\subseteq G$, the set $$\bigl\{\bigl(\alpha(gx),\alpha(x)\bigr)\in Y\times Y \bigm| x\in X,\ g\in K\bigr\}$$ is an entourage of $Y$. The map $\alpha$ becomes a coarse map. Hence we obtain a commuting diagram $$\xymatrix{ {\textup K}_{*+1}\bigl({\mathfrak c^\mathfrak{red}}(Y)\bigr) \ar[r]^{\alpha^*} \ar[d]^{\partial^Y} & {\textup K}_{*+1}\bigl({\mathfrak c^\mathfrak{red}}(X)\bigr) \ar[r]^-{\psi} \ar[d]^{\partial^X} & {\textup{KK}}^G_*\bigl({\mathbb C},C_0(X)\bigr) \ar[d]^{p_{\mathcal EG}^*} \\ {\textup{KX}}^*(Y) \ar[r]^{\alpha^*} & {\textup{KX}}^*(X) \ar[r]^-{\phi} & {\textup{RKK}}^G_*\bigl({\mathcal EG};{\mathbb C},C_0(X)\bigr). }$$ with $\psi$ and $\phi$ as in Lemma \[lem:coarse\_gives\_classes\]. The constructions of [@Connes-Gromov-Moscovici:Lipschitz]\*[I.10]{} only use $Y={\mathbb R}^N$ with the Euclidean coarse structure. The coarse co[-]{}-assembly map is an isomorphism for ${\mathbb R}^N$ because ${\mathbb R}^N$ is scalable. Moreover, ${\mathbb R}^N$ is uniformly contractible and has bounded geometry. Hence we obtain canonical isomorphisms $${\textup K}_{*+1}\bigl({\mathfrak c^\mathfrak{red}}({\mathbb R}^N)\bigr) \cong {\textup{KX}}^*({\mathbb R}^N) \cong {\textup K}^*({\mathbb R}^N).$$ In particular, ${\textup K}_{*+1}\bigl({\mathfrak c^\mathfrak{red}}({\mathbb R}^N)\bigr)\cong{\mathbb Z}$ with generator $[\partial{\mathbb R}^N]$ in ${\textup K}_{1-N}\bigl({\mathfrak c^\mathfrak{red}}({\mathbb R}^N)\bigr)$. This class is nothing but the usual dual-Dirac morphism for the locally compact group ${\mathbb R}^N$. As a result, any map $\alpha\colon X\to{\mathbb R}^N$ that satisfies the displacement condition above induces $$[\alpha]{\mathrel{\vcentcolon=}}\psi\bigl(\alpha^*[\partial{\mathbb R}^N]\bigr) \in{\textup{KK}}^G_{-N}\bigl({\mathbb C},C_0(X)\bigr).$$ The commutative diagram  computes $p_{\mathcal EG}^*[\alpha]\in {\textup{RKK}}^G_{-N}\bigl({\mathcal EG};{\mathbb C},C_0(X)\bigr)$ in purely topological terms. ### Principal bundles over coarse spaces {#sec:bundles_over_coarse} As in [@Connes-Gromov-Moscovici:Lipschitz], we may replace a fixed map $X\to{\mathbb R}^N$ by a section of a vector bundle over $X$. But we need this bundle to have a $G$[-]{}-equivariant spin structure. To encode this, we consider a $G$[-]{}-equivariant ${\textup{Spin}}(N)$[-]{}-principal bundle $\pi\colon E\to B$ together with actions of $G$ on $E$ and $B$ such that $\pi$ is $G$[-]{}-equivariant and the action on $E$ commutes with the action of $H{\mathrel{\vcentcolon=}}{\textup{Spin}}(N)$. Let $T{\mathrel{\vcentcolon=}}E\times_{{\textup{Spin}}(N)} {\mathbb R}^N$ be the associated vector bundle over $B$. It carries a $G$[-]{}-invariant Euclidean metric and spin structure. As is well-known, sections $\alpha\colon B\to T$ correspond bijectively to ${\textup{Spin}}(N)$-equivariant maps $\alpha'\colon E\to{\mathbb R}^N$; here a section $\alpha$ corresponds to the map $\alpha'\colon E\to{\mathbb R}^N$ that sends $y\in E$ to the coordinates of $\alpha\pi(y)$ in the orthogonal frame described by $y$. Since the group ${\textup{Spin}}(N)$ is compact, the map $\alpha'$ is proper if and only if $b\mapsto {\lVert\alpha(b)\rVert}$ is a proper function on $B$. 
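To make the correspondence between sections and equivariant maps concrete in the trivialised case (our own remark): if $E=B\times{\textup{Spin}}(N)$ and we identify $T\cong B\times{\mathbb R}^N$ via $[(b,s),v]\mapsto\bigl(b,\lambda(s)v\bigr)$, where $\lambda\colon{\textup{Spin}}(N)\to\textup{SO}(N)$ is the covering homomorphism, then a section $\alpha(b)=\bigl(b,v(b)\bigr)$ corresponds to $$ \alpha'(b,s) = \lambda(s)^{-1}v(b), \qquad \alpha'(b,sh) = \lambda(h)^{-1}\alpha'(b,s), \qquad {\lVert\alpha'(b,s)\rVert} = {\lVert\alpha(b)\rVert},$$ which also makes the properness criterion just stated evident.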
As in \[sec:pull-back\_coarse\], a ${\textup{Spin}}(N)$[-]{}-equivariant proper continuous map $\alpha'\colon E\to Y$ for a coarse space $Y$ allows us to pull back the coarse structure of $Y$ to $E$; then ${\textup{Spin}}(N)$ acts by isometries. The group $G$ acts by translations if and only if $\alpha'$ satisfies the displacement condition from \[sec:pull-back\_coarse\]. If $Y={\mathbb R}^N$, we can rewrite this in terms of $\alpha\colon B\to T$: we need $$\sup \bigl\{ {\lVertg\alpha(g^{-1}b)-\alpha(b)\rVert} \bigm| b\in B,\ g\in K \bigr\}$$ to be bounded for all compact subsets $K\subseteq G$. If the displacement condition holds, then we are in the situation of Lemma \[lem:coarse\_gives\_classes\] with $H={\textup{Spin}}(N)$ and $X=E$. Since $H$ acts freely on $E$, $C_0(E){\mathbin\rtimes}H$ is $G$[-]{}-equivariantly Morita–Rieffel equivalent to $C_0(B)$. We obtain canonical maps $$\begin{gathered} {\textup K}^{{\textup{Spin}}(N)}_{*+1}\bigl({\mathfrak c^\mathfrak{red}}_{{\textup{Spin}}(N)}({\mathbb R}^N)\bigr) \xrightarrow{(\alpha')^*} {\textup K}^{{\textup{Spin}}(N)}_{*+1}\bigl({\mathfrak c^\mathfrak{red}}_{{\textup{Spin}}(N)}(E)\bigr) \\ \xrightarrow{\psi} {\textup{KK}}^G_*\bigl({\mathbb C},C_0(E){\mathbin\rtimes}{\textup{Spin}}(N)\bigr) \cong {\textup{KK}}^G_*\bigl({\mathbb C},C_0(B)\bigr).\end{gathered}$$ The ${\textup{Spin}}(N)$[-]{}-equivariant coarse co-assembly map for ${\mathbb R}^N$ is an isomorphism by [@Emerson-Meyer:Descent] because the group ${\mathbb R}^N{\mathbin\rtimes}{\textup{Spin}}(N)$ has a dual-Dirac morphism. Using also the uniform contractibility of ${\mathbb R}^N$ and ${\textup{Spin}}(N)$-equivariant Bott periodicity, we get $${\textup K}^{{\textup{Spin}}(N)}_{*+1}\bigl({\mathfrak c^\mathfrak{red}}_{{\textup{Spin}}(N)}({\mathbb R}^N)\bigr) \cong {\textup{KX}}_{{\textup{Spin}}(N)}^*({\mathbb R}^N) \cong {\textup K}_{{\textup{Spin}}(N)}^*({\mathbb R}^N) \cong {\textup K}_{{\textup{Spin}}(N)}^{*+N}({\textup{point}}).$$ The class of the trivial representation in $\operatorname{Rep}({\textup{Spin}}N) \cong {\textup K}^{{\textup{Spin}}(N)}_0({\mathbb C})$ is mapped to the usual dual-Dirac morphism $[\partial {\mathbb R}^N]\in {\textup K}^{{\textup{Spin}}(N)}_{1-N}\bigl({\mathfrak c^\mathfrak{red}}_{{\textup{Spin}}(N)}({\mathbb R}^N)\bigr)$ for ${\mathbb R}^N$. As a result, any proper section $\alpha\colon B\to T$ satisfying the displacement condition induces $$[\alpha]{\mathrel{\vcentcolon=}}\psi\bigl(\alpha^*[\partial{\mathbb R}^N]\bigr) \in {\textup{KK}}^G_{-N}\bigl({\mathbb C},C_0(B)\bigr).$$ Again, the commutative diagram  computes $p_{\mathcal EG}^*[\alpha]\in {\textup{RKK}}^G_{-N}\bigl({\mathcal EG};{\mathbb C},C_0(X)\bigr)$ in purely topological terms. ### Coarse structures on jet bundles {#sec:coarse_jet} Let $M$ be an oriented compact manifold and let $\operatorname{Diff}^+(M)$ be the infinite-dimensional Lie group of orientation-preserving diffeomorphisms of $M$. Let $G$ be a locally compact group that acts on $M$ by a continuous group homomorphism $G\to\operatorname{Diff}^+(M)$. The *Gelfand–Fuchs cohomology* of $M$ is part of the group cohomology of $\operatorname{Diff}^+(M)$ and by functoriality maps to the group cohomology of $G$. It is shown in [@Connes-Gromov-Moscovici:Lipschitz] that the range of Gelfand–Fuchs cohomology in the cohomology of $G$ yields homotopy-invariant higher signatures. 
This argument has two parts; one is geometric and concerns the construction of a class in ${\textup{KK}}^G_*\bigl({\mathbb C},C_0(X)\bigr)$ for a suitable space $X$; the other uses cyclic homology to construct linear functionals on ${\textup K}_*(C_0(X){\mathbin{\rtimes_\textup r}}G)$ associated to Gelfand–Fuchs cohomology classes. We can simplify the first step; the second has nothing to do with coarse geometry. Let $\pi^k\colon {\textup J}^k_+(M)\to M$ be the *oriented $k$[-]{}-jet bundle* over $M$. That is, a point in ${\textup J}^k_+(M)$ is the $k$th order Taylor series at $0$ of an orientation-preserving diffeomorphism from a neighbourhood of $0\in{\mathbb R}^n$ into $M$. This is a principal $H$[-]{}-bundle over $M$, where $H$ is a connected Lie group whose Lie algebra $\mathfrak{h}$ is the space of polynomial maps $p\colon {\mathbb R}^n\to{\mathbb R}^n$ of order $k$ with $p(0)=0$, with an appropriate Lie algebra structure. The maximal compact subgroup $K\subseteq H$ is isomorphic to $\textup{SO}(n)$, acting by isometries on ${\mathbb R}^n$. It acts on $\mathfrak{h}$ by conjugation. Since our construction is natural, the action of $G$ on $M$ lifts to an action on ${\textup J}^k_+(M)$ that commutes with the $H$[-]{}-action. We let $H$ act on the right and $G$ on the left. Define $X_k{\mathrel{\vcentcolon=}}{\textup J}^k_+(M)/K$. This is the bundle space of a fibration over $M$ with fibres $H/K$. Gelfand–Fuchs cohomology can be computed using a chain complex of $\operatorname{Diff}^+(M)$-invariant differential forms on $X_k$ for $k\to\infty$. Using this description, Connes, Gromov, and Moscovici associate to a Gelfand–Fuchs cohomology class a functional ${\textup K}_*(C_0(X_k){\mathbin{\rtimes_\textup r}}G)\to{\mathbb C}$ for sufficiently high $k$ in [@Connes-Gromov-Moscovici:Lipschitz]. Since ${\textup J}^k_+(M)/H\cong M$ is compact, there is a unique coarse structure on ${\textup J}^k_+(M)$ for which $H$ acts isometrically (see \[sec:acts\_coarse\]). With this coarse structure, ${\textup J}^k_+(M)$ is coarsely equivalent to $H$. The compactness of ${\textup J}^k_+(M)/H\cong M$ also implies easily that $G$ acts by translations. We have a Morita–Rieffel equivalence $C_0(X_k)\sim C_0({\textup J}^k_+ M){\mathbin\rtimes}K$ because $K$ acts freely on ${\textup J}^k_+(M)$. We want to study the map $${\textup K}_{*+1}({\mathfrak c^\mathfrak{red}}_K({\textup J}^k_+ M){\mathbin\rtimes}K) \xrightarrow{\psi} {\textup{KK}}^G_*({\mathbb C},C_0({\textup J}^k_+ M){\mathbin\rtimes}K) \cong {\textup{KK}}^G_*\bigl({\mathbb C},C_0(X_k)\bigr)$$ produced by Lemma \[lem:coarse\_gives\_classes\]. Since $H$ is almost connected, it has a dual-Dirac morphism by [@Kasparov:Novikov]; hence the $K$[-]{}-equivariant coarse co-assembly map for $H$ is an isomorphism by the main result of [@Emerson-Meyer:Descent]. Moreover, $H/K$ is a model for ${\mathcal EG}$ by [@Abels:Slices]. We get $${\textup K}_{*+1}({\mathfrak c^\mathfrak{red}}_K({\textup J}^k_+ M){\mathbin\rtimes}K) \cong {\textup K}_{*+1}({\mathfrak c^\mathfrak{red}}_K({\lvertH\rvert}){\mathbin\rtimes}K) \cong {\textup{KX}}^*_K({\lvertH\rvert}) \cong {\textup K}^*_K(H/K).$$ Let $\mathfrak{h}$ and $\mathfrak{k}$ be the Lie algebras of $H$ and $K$. There is a $K$[-]{}-equivariant homeomorphism $\mathfrak{h}/\mathfrak{k}\cong H/K$, where $K$ acts on $\mathfrak{h}/\mathfrak{k}$ by conjugation. Now we need to know whether there is a $K$[-]{}-equivariant spin structure on $\mathfrak{h}/\mathfrak{k}$. One can check that this is the case if $k\equiv 0,1 \bmod 4$. 
Since we can choose $k$ as large as we like, we can always assume that this is the case. The spin structure allows us to use Bott periodicity to identify ${\textup K}^*_K(H/K) \cong {\textup K}_{*-N}^H({\mathbb C})$, which is the representation ring of $K$ in degree $-N$, where $N=\dim \mathfrak{h}/\mathfrak{k}$. Using our construction, the trivial representation of $K$ yields a canonical element in ${\textup{KK}}^G_{-N}\bigl({\mathbb C},C_0(X_k)\bigr)$. This construction is much shorter than the corresponding one in [@Connes-Gromov-Moscovici:Lipschitz] because we use Kasparov’s result about dual-Dirac morphisms for almost connected groups. Much of the corresponding argument in [@Connes-Gromov-Moscovici:Lipschitz] is concerned with proving a variation on this result of Kasparov. Computation of KKG(C,P) {#sec:compute_KKGCP} ======================= So far, we have merely used the diagram  to construct certain elements in ${\textup{KK}}^G_*({\mathbb C},B)$. Now we show that this construction yields an isomorphic description of ${\textup{KK}}^G_*({\mathbb C},{\mathsf P})$. This assertion requires $G$ to be a totally disconnected group with a $G$[-]{}-compact universal proper $G$[-]{}-space. We assume this throughout this section. \[hatsoff\] In the situation of Lemma , suppose that $X = {\lvertG\rvert}$ with $G$ acting by left translations and that $H\subseteq G$ is a compact subgroup acting on $X$ by right translations; here ${\lvertG\rvert}$ carries the coarse structure of Example . Then the maps $\psi$ and $\phi$ are isomorphisms. We reduce this assertion to results of [@Emerson-Meyer:Descent]. The $C^*$[-]{}-algebras ${\mathfrak c^\mathfrak{red}}_H({\lvertG\rvert},D){\mathbin\rtimes}H$ and ${\mathfrak c^\mathfrak{red}}_H({\lvertG\rvert}, D)^H$ are strongly Morita equivalent, whence have isomorphic ${\textup K}$[-]{}-theory. It is shown in [@Emerson-Meyer:Descent] that $$\label{car} {\textup K}_{*+1}\bigl({\mathfrak c^\mathfrak{red}}_H({\lvertG\rvert},D)^H\bigr) \cong {\textup{KK}}^G_*({\mathbb C},\operatorname{Ind}_H^G D).$$ Finally, $\operatorname{Ind}_H^G(D) = C_0(G,D)^H$ is $G$[-]{}-equivariantly Morita–Rieffel equivalent to $C_0(G,D){\mathbin\rtimes}H$. Hence we get $$\begin{gathered} {\textup K}_{*+1}({\mathfrak c^\mathfrak{red}}_H({\lvertG\rvert},D){\mathbin\rtimes}H) \cong {\textup K}_{*+1}\bigl({\mathfrak c^\mathfrak{red}}_H({\lvertG\rvert},D)^H\bigr) \cong {\textup{KK}}^G_*\bigl({\mathbb C}, \operatorname{Ind}_H^G(D)\bigr) \\ \cong{\textup{KK}}^G_*\bigl({\mathbb C}, C_0(G,D){\mathbin\rtimes}H\bigr). \end{gathered}$$ It is a routine exercise to verify that this composition agrees with the map $\psi$ in . Similar considerations apply to the map $\phi$. We now set $X={\lvertG\rvert}$, and let $G=H$. The actions of $G$ on ${\lvertG\rvert}$ on the left and right are by translations and isometries, respectively. Lemma \[lem:coarse\_gives\_classes\] yields a map $$\label{map} \Psi_*\colon {\textup K}_{*+1}\bigl(({\mathfrak c^\mathfrak{red}}({\lvertG\rvert},D){\mathbin{\otimes_\textup{max}}}A){\mathbin\rtimes}G\bigr) \to {\textup{KK}}^G_*\bigl({\mathbb C},C_0({\lvertG\rvert},D{\mathbin{\otimes_\textup{max}}}A){\mathbin\rtimes}G\bigr).$$ for all $A,D$, where we use the $G$-equivariant Morita–Rieffel equivalence between $C_0({\lvertG\rvert},D){\mathbin\rtimes}G$ and $D$. 
It fits into a commuting diagram $$\xymatrix@C=4em{ {\textup K}_{*+1}\bigl(({\mathfrak c^\mathfrak{red}}_G({\lvertG\rvert},D){\mathbin{\otimes_\textup{max}}}A){\mathbin\rtimes}G\bigr) \ar[r]^-{\partial} \ar[d]^{\Psi^{D,A}_*} & {\textup K}_*(C_0({\mathcal EG},D{\mathbin{\otimes_\textup{max}}}A){\mathbin\rtimes}G) \ar[d]_{\cong} \\ {\textup{KK}}_*^G({\mathbb C},D{\mathbin{\otimes_\textup{max}}}A) \ar[r]^-{p_{\mathcal EG}^*} & {\textup{RKK}}_*^G({\mathcal EG};{\mathbb C},D{\mathbin{\otimes_\textup{max}}}A). }$$ \[lem:Psi\_iso\] The class of $G$-$C^*$-algebras $A$ for which $\Psi_*^{D,A}$ is an isomorphism for all $D$ is triangulated and thick and contains all $G$-$C^*$[-]{}-algebras of the form $C_0(G/H)$ for compact open subgroups $H$ of $G$. The fact that this category of algebras is triangulated and thick means that it is closed under suspensions, extensions, and direct summands. These formal properties are easy to check. Since $H\subseteq G$ is open, there is no difference between $H$[-]{}-continuity and $G$[-]{}-continuity. Hence $$\begin{aligned} \bigl({\mathfrak c^\mathfrak{red}}_G({\lvertG\rvert},D){\mathbin{\otimes_\textup{max}}}C_0(G/H)\bigr){\mathbin\rtimes}G &\cong \bigl({\mathfrak c^\mathfrak{red}}_H({\lvertG\rvert},D){\mathbin{\otimes_\textup{max}}}C_0(G/H)\bigr){\mathbin\rtimes}G \\ &\sim \bigl({\mathfrak c^\mathfrak{red}}_H({\lvertG\rvert},D) {\mathbin\rtimes}H, \end{aligned}$$ where $\sim$ means Morita–Rieffel equivalence. Similar simplifications can be made in other corners of the square. Hence the diagram for $A=C_0(G/H)$ and $G$ acting on the right is equivalent to a corresponding diagram for trivial $A$ and $H$ acting on the right. The latter case is contained in Lemma \[hatsoff\]. \[the:Kreg\] Let $G$ be an almost totally disconnected group with $G$[-]{}-compact ${\mathcal EG}$. Then for every $B\in{\textup{KK}}^G$, the map $$\Psi_*\colon {\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}({\lvertG\rvert},B)\bigr) \to {\textup{KK}}_*^G({\mathbb C},B\otimes{\mathsf P})$$ is an isomorphism and the diagram $$\xymatrix{ {\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}({\lvertG\rvert},B)\bigr) \ar[r]^-{\mu^*} \ar[d]_{\cong}^{\Psi_*} & {\textup{KX}}^*_G({\lvertG\rvert},B) \ar[d]_{\cong} \\ {\textup{KK}}_*^G({\mathbb C},B\otimes{\mathsf P}) \ar[r]^-{p_{\mathcal EG}^*} & {\textup{RKK}}_*^G({\mathcal EG};{\mathbb C},B\otimes{\mathsf P}) }$$ commutes. In particular, ${\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}({\lvertG\rvert})\bigr)$ is naturally isomorphic to ${\textup{KK}}^G_*({\mathbb C},{\mathsf P})$. It is shown in [@Emerson-Meyer:Descent] that for such groups $G$, the algebra ${\mathsf P}$ belongs to the thick triangulated subcategory of ${\textup{KK}}^G$ that is generated by $C_0(G/H)$ for compact subgroups $H$ of $G$. Hence the assertion follows from Lemma \[lem:Psi\_iso\] and our definition of ${\textup K^\textup{top}}$. \[cor:coassembly-equivalent\] Let ${\mathsf D}\in{\textup{KK}}^G({\mathsf P},{\mathbb C})$ be a Dirac morphism for $G$. 
Then the following diagram commutes $$\xymatrix{ {\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}_G({\lvertG\rvert})\bigr) \ar@/^3em/[rr]^-{\mu^*} \ar[d]^{\Psi_*}_{\cong} \ar[r]^-{\partial} & {\textup K}_*(C_0({\mathcal EG},{\mathsf P}){\mathbin\rtimes}G) \ar[d]_{\cong} \ar[r]^{{\mathsf D}_*}_{\cong} & {\textup K}_*(C_0({\mathcal EG}){\mathbin\rtimes}G) \ar[d]_{\cong} \\ {\textup{KK}}_*^G({\mathbb C},{\mathsf P}) \ar[r]^-{p_{\mathcal EG}^*} \ar[dr]_-{{\mathsf D}_*} & {\textup{RKK}}_*^G({\mathcal EG};{\mathbb C},{\mathsf P}) \ar[r]^{{\mathsf D}_*}_{\cong} & {\textup{RKK}}_*^G({\mathcal EG};{\mathbb C},{\mathbb C}) \\ & {\textup{KK}}^G_*({\mathbb C},{\mathbb C}) \ar[ur]_-{p_{\mathcal EG}^*}, }$$ where $\Psi_*$ is as in Theorem  and $\mu^*$ is the $G$[-]{}-equivariant coarse co-assembly map for ${\lvertG\rvert}$, and the indicated maps are isomorphisms. This follows from Theorem \[the:Kreg\] and the general properties of the Dirac morphism discussed in \[sec:Dirac\_and\_BC\]. We call $a\in {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$ 1. a *boundary class* if it lies in the range of $$\mu^*\colon {\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}({\lvertG\rvert})\bigr)\to {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C});$$ 2. *properly factorisable* if $a = p_{\mathcal EG}^*(b\otimes_A c)$ for some proper $G$-$C^*$-algebra $A$ and some $b\in{\textup{KK}}^G_*({\mathbb C},A)$, $c\in{\textup{KK}}^G_*(A,{\mathbb C})$; 3. *proper Lipschitz* if $a = p_{\mathcal EG}^*(b\otimes_{C_0(X)}c)$, where $b\in{\textup{KK}}^G_*\bigl({\mathbb C},C_0(X))$ is constructed as in \[sec:pull-back\_coarse\] and \[sec:bundles\_over\_coarse\] and $c\in {\textup{KK}}^G_*(C_0(X),{\mathbb C})$ is arbitrary. Let $G$ be a totally disconnected group with $G$[-]{}-compact ${\mathcal EG}$. 1. A class $a\in {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$ is properly factorisable if and only if it is a boundary class. 2. Proper Lipschitz classes are boundary classes. 3. The boundary classes form an ideal in the ring ${\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$. 4. The class $1_{\mathcal EG}$ is a boundary class if and only if $G$ has a dual-Dirac morphism; in this case, the $G$[-]{}-equivariant coarse co-assembly map $\mu^*$ is an isomorphism. 5. Any boundary class lies in the range of $$p_{\mathcal EG}^*\colon {\textup{KK}}^G_*({\mathbb C},{\mathbb C})\to{\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$$ and hence yields homotopy invariants for manifolds. 
By Corollary \[cor:coassembly-equivalent\], the equivariant coarse co-assembly map $$\mu^*\colon {\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}({\lvertG\rvert})\bigr) \to {\textup K}_*\bigl(C_0({\mathcal EG}{\mathbin\rtimes}G)\bigr) \cong {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$$ is equivalent to the map $${\textup{KK}}^G_*({\mathbb C},{\mathsf P}) \xrightarrow{p_{\mathcal EG}^*} {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathsf P}) \cong {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C}).$$ If we combine this with the isomorphism ${\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C}) \cong {\textup{KK}}^G_*({\mathsf P},{\mathsf P})$, the resulting map $${\textup{KK}}^G_*({\mathbb C},{\mathsf P}) \to {\textup{KK}}^G_*({\mathsf P},{\mathsf P})$$ is simply the product (on the left) with ${\mathsf D}\in{\textup{KK}}^G({\mathsf P},{\mathbb C})$; this map is known to be an isomorphism if and only if it is surjective, if and only if $1_{\mathsf P}$ is in its range, if and only if the $H$[-]{}-equivariant coarse co-assembly map $${\textup K}_*({\mathfrak c^\mathfrak{red}}_H({\lvertG\rvert}){\mathbin\rtimes}H) \to {\textup{KX}}^*_H({\lvertG\rvert})$$ is an isomorphism for all compact subgroups $H$ of $G$ by [@Emerson-Meyer:Descent]. For any $G$[-]{}-$C^*$-algebra $B$, the ${\mathbb Z/2}$[-]{}-graded group ${\textup K^\textup{top}}_*(G,B)\cong {\textup K}_*\bigl((B\otimes{\mathsf P}){\mathbin{\rtimes_\textup r}}G\bigr)$ is a graded module over the ${\mathbb Z/2}$[-]{}-graded ring ${\textup{KK}}^G_*({\mathsf P},{\mathsf P})\cong {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$ in a canonical way; the isomorphism between these two groups is a ring isomorphism because it is the composite of the two ring isomorphisms $${\textup{KK}}^G_*({\mathsf P},{\mathsf P})\xrightarrow[\cong]{p_{\mathcal EG}^*} {\textup{RKK}}^G_*({\mathcal EG};{\mathsf P},{\mathsf P})\xleftarrow[\cong]{{\text\textvisiblespace}\otimes{\mathsf P}} {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C}).$$ Hence we get module structures on ${\textup K^\textup{top}}_*\bigl(G,{\mathfrak c^\mathfrak{red}}({\lvertG\rvert})\bigr)$ and $${\textup K^\textup{top}}_*\bigl(G,C_0({\mathcal EG},{\mathbb K})\bigr) \cong {\textup K}_*(C_0({\mathcal EG}){\mathbin\rtimes}G) \cong {\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C}).$$ The latter isomorphism is a module isomorphism; thus ${\textup K^\textup{top}}_*\bigl(G,C_0({\mathcal EG},{\mathbb K})\bigr)$ is a free module of rank $1$. The equivariant co-assembly map is natural in the formal sense, so that it is an ${\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$-module homomorphism. Hence its range is a submodule, that is, an ideal in ${\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$ (since this ring is graded commutative, there is no difference between one- and two-sided graded ideals). This also yields that $\mu^*$ is surjective if and only if it is bijective, if and only if the unit class $1_{\mathcal EG}$ belongs to its range; we already know this from [@Emerson-Meyer:Descent]. If $A$ is a proper $G$-$C^*$[-]{}-algebra, then ${\textup{id}}_A\otimes{\mathsf D}\in{\textup{KK}}^G(A\otimes{\mathsf P},A)$ is invertible ([@Meyer-Nest:BC]). 
If $b\in{\textup{KK}}^G_*({\mathbb C},A)$ and $c\in{\textup{KK}}^G_*(A,{\mathbb C})$, then we can write the Kasparov product $b\otimes_A c$ as $${\mathbb C}\xrightarrow{b} A \xleftarrow[\cong]{{\textup{id}}_A\otimes{\mathsf D}} A\otimes{\mathsf P}\xrightarrow{c\otimes{\textup{id}}_{\mathsf P}} {\mathsf P}\xrightarrow{{\mathsf D}} {\mathbb C},$$ where the arrows are morphisms in the category ${\textup{KK}}^G$. Therefore, $b\otimes_A c$ factors through ${\mathsf D}$ and hence is a boundary class by Theorem \[the:Kreg\]. Dual-Dirac morphisms and the Carlsson–Pedersen condition {#sec:Carlsson-Pedersen} ======================================================== Now we construct boundary classes in ${\textup{RKK}}^G_*({\mathcal EG};{\mathbb C},{\mathbb C})$ from more classical boundaries. We suppose again that ${\mathcal EG}$ is $G$[-]{}-compact, so that ${\mathcal EG}$ is a coarse space. A *metrisable compactification* of ${\mathcal EG}$ is a metrisable compact space $Z$ with a homeomorphism between ${\mathcal EG}$ and a dense open subset of $Z$. It is called *coarse* if all scalar-valued functions on $Z$ have vanishing variation; this implies the corresponding assertion for operator-valued functions because $C(Z,D)\cong C(Z)\otimes D$. Equivalently, the embedding ${\mathcal EG}\to Z$ factors through the Higson compactification of ${\mathcal EG}$. A compactification is called *$G$[-]{}-equivariant* if $Z$ is a $G$[-]{}-space and the embedding ${\mathcal EG}\to Z$ is $G$[-]{}-equivariant. An equivariant compactification is called *strongly contractible* if it is $H$[-]{}-equivariantly contractible for all compact subgroups $H$ of $G$. The *Carlsson–Pedersen condition* requires that there should be a $G$[-]{}-compact model for ${\mathcal EG}$ that has a coarse, strongly contractible, and $G$[-]{}-equivariant compactification. Typical examples of such compactifications are the Gromov boundary for a hyperbolic group (viewed as a compactification of the Rips complex), or the visibility boundary of a CAT(0) space on which $G$ acts properly, isometrically, and cocompactly. \[the:dual\_Dirac\_compactify\] Let $G$ be a locally compact group with a $G$[-]{}-compact model for ${\mathcal EG}$ and let ${\mathcal EG}\subseteq Z$ be a coarse, strongly contractible, $G$[-]{}-equivariant compactification. Then $G$ has a dual-Dirac morphism. We use the $C^*$[-]{}-algebra ${\bar{\mathfrak B}^\mathfrak{red}}_G(Z)$ as defined in \[sec:Higson\_corona\]. Since $Z$ is coarse, there is an embedding ${\bar{\mathfrak B}^\mathfrak{red}}_G(Z)\subseteq{\mathfrak{\bar c}^\mathfrak{red}}_G({\mathcal EG})$. Let $\partial Z{\mathrel{\vcentcolon=}}Z\setminus {\mathcal EG}$ be the boundary of the compactification. Identifying $${\bar{\mathfrak B}^\mathfrak{red}}_G(\partial Z)\cong{\bar{\mathfrak B}^\mathfrak{red}}_G(Z)/C_0({\mathcal EG},{\mathbb K}_G),$$ we get a morphism of extensions $$\xymatrix{ 0 \ar[r] & C_0({\mathcal EG},{\mathbb K}_G) \ar[r] \ar@{=}[d] & {\mathfrak{\bar c}^\mathfrak{red}}_G({\mathcal EG}) \ar[r] & {\mathfrak c^\mathfrak{red}}_G({\mathcal EG}) \ar[r] & 0 \\ 0 \ar[r] & C_0({\mathcal EG},{\mathbb K}_G) \ar[r] & {\bar{\mathfrak B}^\mathfrak{red}}_G(Z) \ar[r] \ar[u]_{\subseteq} & {\bar{\mathfrak B}^\mathfrak{red}}_G(\partial Z) \ar[r] \ar[u]_{\subseteq} & 0. }$$ Let $H$ be a compact subgroup. Since $Z$ is compact, we have ${\bar{\mathfrak B}}_G(Z)=C(Z,{\mathbb K})$. 
Since $Z$ is $H$[-]{}-equivariantly contractible by hypothesis, ${\bar{\mathfrak B}}_G(Z)$ is $H$[-]{}-equivariantly homotopy equivalent to ${\mathbb C}$. Hence ${\bar{\mathfrak B}^\mathfrak{red}}_G(Z)$ has vanishing $H$[-]{}-equivariant ${\textup K}$[-]{}-theory. This implies ${\textup K^\textup{top}}_*\bigl(G,{\bar{\mathfrak B}^\mathfrak{red}}_G(Z)\bigr)= 0$ by [@Chabert-Echterhoff-Oyono:Going], so that the connecting map $$\label{pineapple} {\textup K^\textup{top}}_{*+1}\bigl(G,{\bar{\mathfrak B}^\mathfrak{red}}_G(\partial Z)\bigr) \to {\textup K^\textup{top}}_*\bigl(G,C_0({\mathcal EG})\bigr) \cong {\textup K}_*(C_0({\mathcal EG}){\mathbin\rtimes}G)$$ is an isomorphism. This in turn implies that the connecting map $${\textup K^\textup{top}}_{*+1}\bigl(G,{\mathfrak c^\mathfrak{red}}_G({\mathcal EG})\bigr) \to {\textup K^\textup{top}}_*\bigl(G,C_0({\mathcal EG})\bigr)$$ is surjective. Thus we can lift $1\in {\textup{RKK}}^G_0({\mathcal EG};{\mathbb C},{\mathbb C}) \cong {\textup K}_0(C_0({\mathcal EG}){\mathbin\rtimes}G)$ to $$\alpha\in {\textup K^\textup{top}}_1\bigl(G,{\mathfrak c^\mathfrak{red}}_G({\mathcal EG})\bigr) \cong {\textup K^\textup{top}}_1\bigl(G,{\mathfrak c^\mathfrak{red}}_G({\lvertG\rvert})\bigr).$$ Then $\Psi_*(\alpha)\in{\textup{KK}}^G_0({\mathbb C},{\mathsf P})$ is the desired dual-Dirac morphism. The group ${\textup K^\textup{top}}_*\bigl(G,{\bar{\mathfrak B}^\mathfrak{red}}_G(\partial Z)\bigr)$ that appears in the above argument is a *reduced* topological $G$[-]{}-equivariant ${\textup K}$[-]{}-theory for $\partial Z$ and hence differs from ${\textup K^\textup{top}}_*\bigl(G,C(\partial Z)\bigr)$. The relationship between these two groups is analysed in [@Emerson-Meyer:Euler]. [^1]: This research was partially carried out at the [Westfälische Wilhelms-Universität Münster]{} and supported by the EU-Network *Quantum Spaces and Noncommutative Geometry* (Contract HPRN-CT-2002-00280) and the *Deutsche Forschungsgemeinschaft* (SFB 478).
{ "pile_set_name": "ArXiv" }
--- abstract: 'The shape of a neutron star (NS) is closely linked to its internal structure and the equation of state of supranuclear matters. A rapidly rotating, asymmetric NS in the Milky Way undergoes free precession, making it a potential source for [*multimessenger*]{} observation. The free precession could manifest in (i) the spectra of continuous gravitational waves (GWs) in the kilohertz band for ground-based GW detectors, and (ii) the timing behavior and pulse-profile characteristics if the NS is monitored as a pulsar with radio and/or X-ray telescopes. We extend previous work and investigate in great detail the free precession of a triaxially deformed NS with analytical and numerical approaches. In particular, its associated continuous GWs and pulse signals are derived. Explicit examples are illustrated for the continuous GWs, as well as timing residuals in both time and frequency domains. These results are ready to be used for future multimessenger observation of triaxially-deformed freely-precessing NSs, in order to extract scientific implication as much as possible.' author: - | Yong Gao,$^{1,2}$ Lijing Shao,$^{2,3}$[^1] Rui Xu,$^{2}$ Ling Sun,$^{4}$ Chang Liu,$^{1,2}$ and Ren-Xin Xu$^{1,2}$\ $^{1}$Department of Astronomy, School of Physics, Peking University, Beijing 100871, China\ $^{2}$Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, China\ $^{3}$National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, China\ $^{4}$LIGO Laboratory, California Institute of Technology, Pasadena, California 91125, USA bibliography: - 'refs.bib' date: 'Accepted XXX. Received YYY; in original form ZZZ' title: 'Triaxially-deformed Freely-precessing Neutron Stars: Continuous electromagnetic and gravitational radiation' --- \[firstpage\] gravitational waves – pulsars: general – methods: analytical Introduction {#sec:intro} ============ Pulsars are magnetized rotating neutron stars (NSs). Using the so-called pulsar timing technique, the Hulse-Taylor pulsar provided the first validation for the existence of gravitational waves [GWs; @Taylor:1979zz]. In the new era after the direct observation of GWs with ground-based laser interferometric detectors [@TheLIGOScientific:2014jea; @TheVirgo:2014hva; @Abbott:2016blz; @TheLIGOScientific:2017qsa; @LIGOScientific:2018mvr], pulsars continue to play an important role in the context of GW astrophysics. They can be perceived as GW [*sources*]{} radiating continuous GWs, as well as GW [*detectors*]{} in the form of pulsar timing arrays [@Janssen:2014dka; @Perera:2019sca]. In this work, we are interested in freely precessing, asymmetric NSs, which can produce both modulated pulse signals and continuous GW radiations with characteristic features [@Zimmermann:1979ip; @Zimmermann:1980ba; @Jones:2000ud; @Jones:2001yg], and thus become potential multimessenger sources of great scientific interest. The most compelling evidence for NS free precession comes from PSR B1828$-$11. Timing observation over 13 years for this isolated pulsar showed strong Fourier power at periods of about 250, 500 and 1000 days [@Stairs:2000zz], which could be an indication for free precession. @Link:2001zr suggested the period at 500 days as the precession period of a biaxial NS with a precessing angle of $\sim 3^{\circ}$ and a dipole moment nearly orthogonal to the symmetric axis. They also interpreted the period at 250 days as a result of the electromagnetic dipole torque. 
Timing data of another pulsar, PSR B1642$-$03, provided additional support for the idea of NS free precession [@1993ASPC...36...43C; @Shabanova:2001ud]. While more evidence is needed to solidify the phenomenon of NS free precession, it is nowadays certainly interesting to study it in the context of GW astrophysics. The conventional model for NS structure consists of a liquid core and a thin solid crust. @Jones:2000ud conjectured that the core of a NS does not participate in the free precession. Assuming the crust-only precession, they constructed a simple model with a thin radio beam fixed on the body of a biaxially deformed NS, and they assumed that the beam is aligned with the dipole moment. The model was applied to some potential candidates which may be undergoing free precession [@Jones:2000ud]. Although our understanding related to NSs has advanced remarkably [@2018RPPh...81e6902B], NS structure is still unclear and alternative models have already been proposed. NSs could actually be strange stars if Witten’s conjecture is correct [@Witten:1984rs], and they could globally be in a solid state if quarks are condensed in position space [@Xu:2003xe] or momentum space [@Mannarelli:2007bs], resulting in highly elastic quadrupole deformations [@Owen:2005fn] and thus large free precession amplitudes. Therefore, a multimessenger study of freely-precessing NSs would help in understating the equation of state of cold matter at supranuclear density and distinguish between different models. Previous studies dominantly focused on biaxial NSs. In the most generic case, the deformation of a NS does not need to be biaxial. A triaxially deformed NS can demonstrate new features in its free precession. We extend the simple model in @Jones:2000ud and study the timing residual of a freely precessing triaxial NS in this work. The internal dissipation from the frictional-type coupling between the crust and the core may damp the wobble angle in a relatively short timescale [@Jones:2001yg]. As an illustrative work, we do not consider the damping here, but use different wobble angles in our calculation, from large ones to small ones, to display the modulations of spin period and spin period derivative in both time and frequency domains. The precession modulates the pulse width as well, which provides a good way to probe the beam shape of the pulsar radiation [@Link:2001zr; @Desvignes:2019uxs]. We use a simple cone model [@Gil:1984ads; @Lorimer:2005misc] to study the pulse-width modulation of triaxially-deformed freely-precessing NSs, and investigate the change of pulse width with different choices of wobble angles. From the GW perspectives, precessing NSs have been recognized as potential sources of continuous GWs for decades [@Zimmermann:1979ip; @Zimmermann:1980ba; @Alpar:1985kz]. In the new era of GW astronomy, the detection of GWs from precessing NSs with ground-based detectors is imminent. Using the Advanced LIGO data from its first and second observing runs, the search of continuous GWs at once and twice rotation frequencies from 222 pulsars has been performed [@Authors:2019ztc]. Stringent upper limits are set on the GW amplitude, the fiducial ellipticity, and the mass quadrupole moment via the search at the twice of the rotation frequency. These results can be used for testing various alternatives to the General Relativity [e.g., @Xu:2019gua]. @Zimmermann:1980ba treated precessing triaxial NSs as rigid bodies and derived the quadrupole waveform for them. 
In addition, he simplified the waveform assuming a small wobble angle, and showed that the spectral lines of the continuous GWs are located at angular frequencies of $\Omega_{\textrm{r}}+\Omega_{\textrm{p}}$ and $2\Omega_{\textrm{r}}$, where $\Omega_{\textrm{r}}$ is the rotation angular frequency, and $\Omega_{\textrm{p}}$ is the free precession angular frequency of the NS. The first-order spectral lines yield little information about other physical properties of NSs beyond the rotation and precession frequencies. Based on @Zimmermann:1980ba, @VanDenBroeck:2004wj obtained a third angular frequency at $2\left(\Omega_{\textrm{r}}+\Omega_{\textrm{p}}\right)$ by expanding the waveform to the second order of the wobble angle. The feasibility to detect continuous GWs from precessing triaxial NSs was re-examined, and it is found that the deviation from axisymmetry, the oblateness, and the wobble angle can be determined if the second-order line is observed [@VanDenBroeck:2004wj]. Following @Zimmermann:1980ba and @VanDenBroeck:2004wj, in this work we use a Newtonian treatment for the precession, augmented with the GW radiation formalism in GR [@Misner:1974qy]. In this problem, the Newtonian treatment is indeed also valid for strong-field objects like NSs, if the GR expressions for the integrals of various moments are used [@Thorne:1980ru]. Similarly to @VanDenBroeck:2004wj, we expand the GW waveform to the second order of the precessing angle. However, unlike @VanDenBroeck:2004wj, where a hierarchy for small parameters is assumed, more generically we treat the deviation from axisymmetry as a small parameter independent of the wobble angle in the expansion. Consequently, we obtain three more frequencies in the continuous GW spectra that are useful for a more complete extraction of physical information. The structure of this paper is as follows. In Section \[sec:free\_prec\], we provide both analytical and numerical solutions for freely precessing triaxial rigid bodies. Estimations of the oblateness, the nonaxisymmetry, and the wobble angle for elastically deformed NSs are given based on existing literature. In Section \[sec:pulsar\_radiation\], we show the timing residuals and pulse-width modulations of precessing triaxial NSs. These features could be identified if the NS is observed as a radio and/or X-ray pulsar. In Section \[sec:gw\], after briefly reviewing the quadrupole formula in @Zimmermann:1980ba, we expand the waveform to the second order assuming a small wobble angle, a small nonaxisymmetry and a small oblateness. Because of the relaxation in the assumption about the small quantities, three new spectral lines are obtained with respect to previous studies. In Section \[sec:disc\], we discuss the extraction of physical information of NSs from radio signals and continuous GWs. We briefly summarize our work in Section \[sec:sum\]. Free precession of triaxial rigid bodies {#sec:free_prec} ======================================== In general, the rotation axis of a rigid body does not coincide with its principal axes. As a consequence, a freely rotating rigid body precesses around the direction of the total angular momentum [@landau1960course]. The motion of the body can be described by three Euler angles, $\theta$, $\phi$, and $\psi$, and their time derivatives. In Fig. 
\[fig.euler\_angles\] we denote the coordinates of the inertial reference frame by uppercase letters, $\textrm{X}$, $\textrm{Y}$, and $\textrm{Z}$, with unit basis vectors $\widehat{e}_{\textrm{X}}$, $\widehat{e}_{\textrm{Y}}$, and $\widehat{e}_{\textrm{Z}}$. The vector $\widehat{e}_{\textrm{Z}}$ is chosen to be in the direction of the angular momentum of the rigid body, $\mathbf{J}$. We use lowercase letters, $x_{1}$, $x_{2}$, and $x_{3}$, to denote the coordinates in the body frame, which is attached on the rigid body. Their unit basis vectors are $\widehat{e}_{1}$, $\widehat{e}_{2}$, and $\widehat{e}_{3}$, chosen to be parallel to the three individual eigenvectors of the moment of inertia tensor. We use $I_{1}$, $I_{2}$, and $I_{3}$ as the diagonal components of the moment of inertia tensor in the body frame. ![The inertial and body coordinate systems for the rigid body. Uppercase letters, $\textrm{X}$, $\textrm{Y}$, and $\textrm{Z}$, denote the inertial frame coordinates, while lowercase letters, $x_{1}$, $x_{2}$, and $x_{3}$, denote the coordinates in the body frame. Three Euler angles, $\theta$, $\phi$, and $\psi$, are defined as shown. []{data-label="fig.euler_angles"}](fig_euler_angles.pdf){width="8.2cm"} For freely precessing rigid bodies, the dynamical equations of motion in the body frame are [@landau1960course], $$\begin{aligned} I_1 \dot {\omega}_1 - \left( I_2 - I_3 \right) \omega_2 \omega_3 &= 0 \label{eqn:euler_body1}\,, \\ I_2 \dot {\omega}_2 - \left( I_3 - I_1 \right) \omega_3 \omega_1 &= 0 \label{eqn:euler_body2}\,, \\ I_3 \dot {\omega}_3 - \left( I_1 - I_2 \right) \omega_1 \omega_2 &= 0 \label{eqn:euler_body3}\,,\end{aligned}$$ where $\omega_1$, $\omega_2$, and $\omega_3$ represent the angular velocities along $\widehat{e}_{1}$, $\widehat{e}_{2}$, and $\widehat{e}_{3}$. The dots denote the derivatives with respect to time $t$. Considering the kinematics of the rigid body, the evolution of the three Euler angles can be described by [@landau1960course], $$\begin{aligned} \omega_{1} &={\dot \phi} \sin \theta \sin \psi +{\dot \theta}\cos \psi \label{eqn:kinetic1} \,, \\ \omega_{2} &={\dot \phi} \sin \theta \cos \psi-{\dot \theta}\sin \psi \label{eqn:kinetic2}\,, \\ \omega_{3} &={\dot \phi} \cos \theta+ \dot \psi \label{eqn:kinetic3}\,,\end{aligned}$$ where the Euler angles are defined in Fig. \[fig.euler\_angles\]. As the rigid body is torque free, both the kinetic energy $$E=\frac{1}{2}\left(I_{1} \omega_{1}^{2}+I_{2} \omega_{2}^{2}+I_{3} \omega_{3}^{2}\right)\,,$$ and the angular momentum $$J =\left(I_{1}^{2} \omega_{1}^{2}+I_{2}^{2} \omega_{2}^{2}+I_{3}^{2} \omega_{3}^{2}\right)^{1 /2}\,,$$ are conserved. In the following, we assume that the principal moments of inertia satisfy $I_{1} < I_{2} < I_{3}$. We also assume $J^{2}>2 E I_{2}$, which is equivalent to the condition that the tail of the angular momentum $\mathbf{J}$ moves around $\widehat{e}_{3}$ along a closed curve in the body frame [@landau1960course]. Results for other choices can be obtained by properly relabeling the indices. The motion of a rigid body described by Eqs. (\[eqn:euler\_body1\]–\[eqn:kinetic3\]) is an initial value problem. In principle, one can obtain the evolution of the orientation of the triaxial rigid body at any time once the initial values of the three Euler angles and the angular velocities are specified. In subsequent calculations, we choose the initial values such that at $t=0$, one has $\phi = 0$, $\psi=\pi/2$, and $\theta$ is at its minimum value $\theta_{\rm min}$. 
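As a quick aside (an illustration added here rather than part of the derivation), Eqs. (\[eqn:euler\_body1\]–\[eqn:euler\_body3\]) can be integrated directly and the constancy of $E$ and $J$ used as a numerical sanity check. The following minimal Python sketch does exactly this; the moments of inertia and initial angular velocities are generic illustrative values (the same generic values adopted later for the example in Fig. \[fig.dynamics\_example\]), not NS parameters.

```python
# Minimal sketch (not from the original derivation): integrate the torque-free
# Euler equations, Eqs. (euler_body1)-(euler_body3), and monitor the conserved
# quantities E and J.  The moments of inertia and initial angular velocities
# below are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

I1, I2, I3 = 1.0 / 3.0, 2.0 / 3.0, 1.0   # assumed principal moments, I1 < I2 < I3
w0 = np.array([1.0, 0.0, 1.0])           # (omega_1, omega_2, omega_3) = (a, 0, b) at t = 0

def euler_rhs(t, w):
    """Right-hand side of the torque-free Euler equations in the body frame."""
    w1, w2, w3 = w
    return [(I2 - I3) * w2 * w3 / I1,
            (I3 - I1) * w3 * w1 / I2,
            (I1 - I2) * w1 * w2 / I3]

sol = solve_ivp(euler_rhs, (0.0, 20.0), w0, rtol=1e-10, atol=1e-12, max_step=0.01)

w1, w2, w3 = sol.y
E = 0.5 * (I1 * w1**2 + I2 * w2**2 + I3 * w3**2)         # kinetic energy
J = np.sqrt((I1 * w1)**2 + (I2 * w2)**2 + (I3 * w3)**2)  # angular-momentum magnitude
print("fractional drift in E:", np.ptp(E) / E[0])
print("fractional drift in J:", np.ptp(J) / J[0])
```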
The initial values of the angular velocities in the body frame are denoted as $\omega_1=a$, $\omega_2=0$, and $\omega_3=b$ at $t=0$. These assumptions can easily be extended to generic cases. Now, we discuss the analytical solution and the numerical method to solve the equations of motion in Eqs. (\[eqn:euler\_body1\]–\[eqn:kinetic3\]). Analytical solution {#sec:analy_solution} ------------------- The exact analytical solution to Eqs. (\[eqn:euler\_body1\]–\[eqn:kinetic3\]) for a precessing triaxial rigid body has been obtained in terms of the elliptic functions [@landau1960course; @Zimmermann:1980ba; @whittaker1988treatise; @VanDenBroeck:2004wj; @Akgun:2005nd; @Pina2015DrawingTF; @Lasky:2013bpa]. Here we briefly review the solution according to @landau1960course for readers’ convenience. The angular velocities in the body frame are, $$\begin{aligned} &\omega_{1}(\tau)= a\, \mathtt{cn} (\tau, m)\,,\\ &\omega_{2}(\tau)=a\left[\frac{I_{1}\left(I_{3}-I_{1}\right)}{I_{2} \left(I_{3} - I_{2}\right)} \right]^{1 / 2} \mathtt{sn} (\tau, m)\,,\\ &\omega_{3}(\tau)=b \, \mathtt{dn}(\tau, m)\,,\end{aligned}$$ where $\tau$ is the dimensionless time variable, $$\tau =t \sqrt{\frac{\left(I_{3}-I_{2}\right)\left(J^{2}-2 E I_{1}\right)}{I_{1} I_{2} I_{3}}} \,,$$ and $\mathtt{sn}$, $\mathtt{cn}$, and $\mathtt{dn}$ are the Jacobi elliptic functions [see e.g., @olver2010nist]. The parameter $m$ can be expressed as, $$\label{eqn:modulus} m =\frac{\left(I_{2}-I_{1}\right) I_{1} a^{2}}{\left(I_{3}-I_{2}\right) I_{3} b^{2}}\,.$$ The angular velocities in the body frame are periodic with a period, $$T=\frac{4 K(m)}{b}\left[\frac{I_{1} I_{2}}{\left(I_{3}-I_{1}\right)\left(I_{3}-I_{2}\right)}\right]^{1 / 2}\,,$$ where $K(m)$ is the complete elliptic integral of the first kind [@olver2010nist]. The period $T$ is the free precession period. If $I_{1}$ is nearly equal to $I_{2}$, the parameter $m$ is close to zero. In this case, the period $T$ is approximately $2 \pi I_{1} /\left[\omega_{3}\left(I_{3} - I_{1}\right)\right]$, which is the well-known free precession period for a biaxial body. The Euler angles $\theta$ and $\psi$ are also periodic, and can be expressed as, $$\begin{aligned} &\cos \theta=\frac{I_{3} b}{J} \mathtt{dn} (\tau,m)\,,\\ &\tan \psi = \left[ \frac{I_{1}\left( I_{3}-I_{2}\right)}{I_{2} \left(I_{3}-I_{1}\right)}\right]^{1 / 2} \frac{\mathtt{cn}(\tau, m)}{\mathtt{sn}(\tau, m)}\,.\end{aligned}$$ From the above two equations, one finds that the angle $\theta$ has a period of $T/2$, while the angle $\psi$ has a period of $T$. In contrast, the angle $\phi$ is not periodic. It can be represented as a sum of two parts, $\phi = \phi_{1} + \phi_{2}$. The “periodic” part $\phi_{1}$ has a period of $T/2$, and is defined via, $$\label{eq:phi1} \exp \left[2 {\rm i} \phi_{1}(t)\right]=\frac{\vartheta_{4}\left(\frac{2 \pi t}{T}+ {\rm i} \pi \alpha, q \right)}{\vartheta_{4}\left(\frac{2 \pi t}{T}- {\rm i} \pi \alpha ,q \right)}\,,$$ where $\vartheta_{4}$ is the fourth Jacobi theta function with nome $q=\exp\left[-\pi K(1-m)/K(m)\right]$. In Eq.
(\[eq:phi1\]), $\alpha$ is determined via $$\mathtt{sn} \left[2 \mathrm{i} \alpha K(m) \right]=\frac{\mathrm{i} I_{3} b}{I_{1} a}\,.$$ The “linear-in-time” part $\phi_{2}$ is given by, $$\label{eq:phi2} {\phi_{2}=\frac{2 \pi t}{T_{1}}} = \left( \frac{J}{I_{1}}+\frac{2\pi \mathrm{i}}{T} \frac{\vartheta_{4}^{\prime}(\mathrm{i} \pi \alpha, q)}{\vartheta_{4}(\mathrm{i} \pi \alpha, q)} \right) t \,,$$ where $\vartheta_{4}^{\prime}(u,q)$ is the derivative of $\vartheta_{4}(u,q)$ with respect to $u$.[^2] As $I_{1}$ approaches $I_{2}$, the period $T_{1}$ in Eq. (\[eq:phi2\]) approaches $2\pi I_{1}/J$. Generally, $T$ and $T_{1}$ are not commensurate with each other, so the motion of the body is not periodic in the inertia frame. For simplicity, we define [@Zimmermann:1980ba; @VanDenBroeck:2004wj] $$\begin{aligned} &\Omega_{\mathrm{p}} \equiv\frac{2 \pi}{T} =\frac{\pi b}{2 K(m)}\left[\frac{\left(I_{3}-I_{2}\right)\left(I_{3}-I_{1}\right)}{I_{1} I_{2}}\right]^{1 / 2}\label{eqn:precession_angular}\,,\\ &\Omega_{\mathrm{r}}\equiv \frac{2 \pi}{T_{1}}-\frac{2 \pi}{T} = \frac{J}{I_{1}}+\frac{2\pi \mathrm{i}}{T} \frac{\vartheta_{4}^{\prime}(\mathrm{i} \pi \alpha, q)}{\vartheta_{4}(\mathrm{i} \pi \alpha, q)}-\Omega_{\mathrm{p}} \label{eqn:rotation_angular}\,,\end{aligned}$$ for later use. Numerical approach using quaternions {#sec:numerical_solutions} ------------------------------------ Although the analytical solution given in the above subsection is exact, the use of it is not intuitive. Here we discuss a numerical method to integrate the equations of motion. There are two reasons for the need of a numerical method. First, numerical methods can avoid the use of the elliptic functions. Second, they can be easily applied to general equations of motion where precessions are much involved with torques in consideration. In generic cases with torques, analytical solutions usually do not exist. In our numerical calculation, we employ the numerical method solving 3-dimensional rotations through the use of quaternions, which is a mathematically equivalent formalism to the aforementioned one using the Euler angles in describing rotations in the 3-dimensional space [@arribas2006quaternions]. The rotation matrix can be either written in terms of trigonometric functions of the Euler angles, or expressed by a specific quaternion whose time evolution is determined by the angular velocity of the rigid body. In numerical integrations, the latter is preferred because it produces stable results more efficiently [@arribas2006quaternions]. A quaternion, $q = q_0 + q_{1}\mathbf{i} + q_{2}\mathbf{j}+q_{3}\mathbf{k} $, in the quaternion basis, $\{ 1,\, \mathbf{i},\, \mathbf{j},\,\mathbf{k} \}$, is usually denoted as $$q=\left(q_{0}, \,\mathbf{q}\right)\,,$$ with $\mathbf{q} = (q_1, \, q_2, \, q_3)$ being a 3-vector when used in calculating 3-dimensional rotations. The rotation transformation from $\mathbf{r}$ to $\mathbf{r^{\prime}}$ is performed via [@coutsias2004quaternions] $$\begin{aligned} \label{eqn:quaternion} (0, \mathbf{r^{\prime}}) &=q\left(0,\mathbf{r}\right)\tilde{q} \nonumber\\ &=\left(0,\big(q_{0}^{2}-\mathbf{q} \cdot \mathbf{q}\big) \mathbf{r}+2 \mathbf{q}(\mathbf{q} \cdot \mathbf{r})+2 q_{0} \mathbf{q} \times \mathbf{r} \right)\nonumber\\ &=\left(0,\mathcal{R}\,\mathbf{r}\right)\, ,\end{aligned}$$ where $\tilde{q}=\left(q_{0},\mathbf{-q}\right)$ is the conjugate quaternion of $q$. 
Note that from the first to the second line we have used the multiplication rule of quaternions, and from the second to the third line the usual dot product and cross product of 3-vectors are applied. Explicitly, the rotation matrix $\mathcal{R}$ in Eq. (\[eqn:quaternion\]) is $$\label{eqn:rotation} {\mathcal{R}=}\left(\begin{array}{ccc} q_{0}^{2}+q_{1}^{2}-q_{2}^{2}-q_{3}^{2} & 2q_{1} q_{2}-2q_{0} q_{3} & 2q_{1} q_{3} + 2q_{0} q_{2} \\ 2q_{1} q_{2} + 2q_{0} q_{3} & q_{0}^{2}-q_{1}^{2}+q_{2}^{2}-q_{3}^{2} & 2q_{2} q_{3} - 2q_{0} q_{1} \\ 2q_{1} q_{3} - 2q_{0} q_{2} & 2q_{2} q_{3} + 2q_{0} q_{1} & q_{0}^{2}-q_{1}^{2}-q_{2}^{2} + q_{3}^{2} \end{array}\right)\,,$$ which equals the normal Euler rotation matrix. Comparing the elements of them, we can relate the quaternion $q$ with the Euler angles through $$\begin{aligned} &{q_{0}=\cos \frac{\theta}{2} \cos \left(\frac{1}{2}(\phi+\psi)\right)} \,, \label{eqn:euler_to_qua1} \\ &{q_{1}=\sin \frac{\theta}{2} \cos \left(\frac{1}{2}(\phi-\psi)\right)} \,, \label{eqn:euler_to_qua2}\\ &{q_{2}=\sin \frac{\theta}{2} \sin \left(\frac{1}{2}(\phi-\psi)\right)} \,, \label{eqn:euler_to_qua3}\\ &{q_{3}=\cos \frac{\theta}{2} \sin \left(\frac{1}{2}(\phi+\psi)\right)}\,. \label{eqn:euler_to_qua4}\end{aligned}$$ In addition, the differential equation that governs the time evolution of the quaternion $q$ can be expressed as [@coutsias2004quaternions; @betsch2009rigid] $$\label{eqn:diff_quaternion} \frac{{{\rm d}}q}{{{\rm d}}t}=\frac{1}{2} q \left(0,\boldsymbol{\omega}\right)=\frac{1}{2} \left(\begin{array}{cccc} {0} & {-\omega_{1}} & {-\omega_{2}} & {-\omega_{3}} \\ {\omega_{1}} & {0} & {\omega_{3}} & {-\omega_{2}} \\ {\omega_{2}} & {-\omega_{3}} & {0} & {\omega_{1}} \\ {\omega_{3}} & {\omega_{2}} & {-\omega_{1}} & {0} \end{array}\right) \left(\begin{array}{l} {q_{0}} \\ {q_{1}} \\ {q_{2}} \\ {q_{3}} \end{array}\right)\,.$$ ![ ([*Upper*]{}) The numerically solved time evolution for cosines of $\theta$, $\psi$, and $\phi$. ([*Lower*]{}) Absolute error of the numerical results with respect to the analytical results for cosines displayed in the upper panel. In this plot, for illustrative purposes we have taken generic values for the rigid body: $I_{1}/I_{3}=1/3$, $I_{2}/I_{3}=2/3$, and $a=b=1\,\textrm{rad\,s}^{-1}$. With these values, we have a free-precession period $T=6.9\,\textrm{s}$.[]{data-label="fig.dynamics_example"}](fig_dynamics_example.pdf){width="8cm"} Once the initial orientation of the rigid body is given, one can translate it into the initial value of the quaternion $q$ using Eqs. (\[eqn:euler\_to\_qua1\]–\[eqn:euler\_to\_qua4\]). Together with the initial values of $\omega_1$, $\omega_2$, and $\omega_3$, Eqs. (\[eqn:euler\_body1\]–\[eqn:euler\_body3\]) and Eq. (\[eqn:diff\_quaternion\]) can be integrated to obtain the angular velocities in the body frame and the time evolution of the quaternion $q(t)$, hence the elements of the rotation matrix $\mathcal{R}$ at any given time. The Euler angles can be recovered by the elements of the matrix $\mathcal{R}$ via $$\begin{aligned} &{\phi=\tan ^{-1}\left(-\frac{\mathcal{R}_{13}}{\mathcal{R}_{23}}\right)} \,,\\ &{\theta=\tan ^{-1}\left(\frac{\sqrt{1-\mathcal{R}_{33}^{2}}}{\mathcal{R}_{33}}\right)} \,,\\ &{\psi=\tan ^{-1}\left(\frac{\mathcal{R}_{31}}{\mathcal{R}_{32}}\right)}\,,\end{aligned}$$ where $\mathcal{R}_{ij}$ $(i,j = 1,2,3)$ is the component of the matrix $\mathcal{R}$. In Fig. 
\[fig.dynamics\_example\] we present an explicit example, where the numerical result and the absolute error relative to the analytical solution are shown. Without dedicated efforts in obtaining Fig. \[fig.dynamics\_example\], the numerical accuracy is already well below a few parts per trillion in this case. Note that this example is for a very generic case of free precession for a triaxially deformed rigid body, while the realistic situation for NSs is much milder, as we will discuss below; thus we expect significantly better numerical accuracy than what is shown in Fig. \[fig.dynamics\_example\]. Physical parameters for triaxial NSs {#sec:dynamics_NS} ------------------------------------ Before closing this section, let us discuss some typical values for relevant physical parameters of triaxial NSs. To describe the precession of triaxial NSs, one can define three small parameters out of $I_1, \, I_2,\, I_3, \, a$, and $b$, appearing in the previous equations. They are usually taken as the oblateness $$\epsilon \equiv \frac{I_{3}-I_{1}}{I_{3}}\,,$$ the nonaxisymmetry $$\delta \equiv \frac{I_{2}-I_{1}}{I_{3}-I_{2}} \, ,$$ and the tangent of the initial wobble angle $$\gamma \equiv \tan \theta_{\rm min} = \frac{I_{1} a}{I_{3} b}\, .$$ We first discuss the oblateness due to elastic deformations of NSs. For a conventional NS with a liquid core and a solid crust, the oblateness is [@1971AnPhy..66..816B; @Jones:2000ud], $$\label{eqn:epsilon} \epsilon_{\mathrm{elast}}=\eta \,\epsilon_{0}\,,$$ where $\eta$ is the so-called rigidity parameter [@Jones:2000ud], and $\epsilon_0$ is the zero-strain oblateness. The rigidity parameter $\eta$ is unity for a perfectly rigid star and zero for a liquid star. By assuming the shear modulus in the crust to be constant, the rigidity parameter for a NS with a liquid core can be approximated as [@1971AnPhy..66..816B] $$\label{eqn:rigidity} \eta \simeq \frac{57 \mu V_{\mathrm{c}}}{10 |E_{g}|} \simeq 2.3 \times 10^{-5} \left(\frac{\mu}{ 10^{30} \, \mathrm{erg \, cm^{-3}}}\right) R_{6}^{4}\,M_{1.4}^{-2}\,,$$ where $\mu$ is the shear modulus of the crust, $V_{\textrm{c}}$ is the volume of the crust, and $E_{g}=- 3GM^{2}/5R$ is the gravitational binding energy of the NS. The notations $M_{1.4}$ and $R_{6}$ represent the dimensionless NS mass $M_{1.4} \equiv M/\left(1.4\,M_{\odot}\right)$ and the dimensionless NS radius $R_6 \equiv R/(10^{6}\,\mathrm{cm})$. @Cutler:2002np adopted a relativistic NS structure and solved for the strain field in the crust which evolves as the NS spins down. They found that $\eta$ is smaller than the estimation in @1971AnPhy..66..816B by a factor of $\sim 40$. The estimation of the zero-strain oblateness $\epsilon_{0}$ is [@Cutler:2002np; @VanDenBroeck:2004wj], $$\label{eqn:ep0} \epsilon_{0} \simeq \frac{ \Omega_{\rm r}^{2} R^{3} }{G M} = 2.1\times10^{-3}\left(\frac{f_{\rm r}}{100\, \mathrm{Hz}}\right)^{2}R_{6}^{3}M_{1.4}^{-1}\,,$$ where $f_{\rm r} = \Omega_{\rm r}/2\pi $ is the spin frequency of the NS, and $\Omega_{\rm r}$ is defined in Eq. (\[eqn:rotation\_angular\]). Combining Eqs. (\[eqn:epsilon\]–\[eqn:ep0\]), we obtain the oblateness due to the elastic deformation $$\label{eqn:oblateness} \epsilon_{\mathrm{elast}} \simeq 4.9 \times 10^{-8} \left(\frac{f_{\rm r}}{100\, \mathrm{Hz}}\right)^{2} \left(\frac{\mu}{ 10^{30} \, \mathrm{erg \, cm^{-3}}}\right) R_{6}^{7}\,M_{1.4}^{-3} \,.$$ The oblateness for a NS due to elastic deformation is also limited by the breaking strain $\sigma_{\textrm{break}}$.
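For orientation, the scalings in Eqs. (\[eqn:rigidity\]), (\[eqn:ep0\]), and (\[eqn:epsilon\]) can be evaluated with a few lines of Python; the snippet below is only an illustrative sketch, and all input values are the fiducial normalisations quoted above rather than measured quantities.

```python
# Minimal sketch (not part of the original calculation): evaluate the
# elastic-deformation scalings of Eqs. (rigidity), (ep0) and (epsilon)
# at the fiducial normalisations quoted in the text.  All inputs are
# assumptions, not measurements.
mu  = 1.0e30   # crustal shear modulus [erg cm^-3]
M14 = 1.0      # mass in units of 1.4 M_sun
R6  = 1.0      # radius in units of 10^6 cm
f_r = 100.0    # spin frequency [Hz]

eta  = 2.3e-5 * (mu / 1.0e30) * R6**4 / M14**2   # Eq. (rigidity); @Cutler:2002np argue ~40x smaller
eps0 = 2.1e-3 * (f_r / 100.0)**2 * R6**3 / M14   # Eq. (ep0)
eps_elast = eta * eps0                           # Eq. (epsilon); cf. Eq. (oblateness)

print(f"eta ~ {eta:.1e},  epsilon_0 ~ {eps0:.1e},  epsilon_elast ~ {eps_elast:.1e}")
```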
According to @Owen:2005fn, the largest oblateness is $$\begin{aligned} \epsilon_{\mathrm{max} } = 3.4 \times 10^{-7}\left(\frac{\sigma_{\mathrm{break} }}{10^{-2}}\right) \frac{ M_{1.4}^{-2.2}\,R_{6}^{4.26} }{1+0.7M_{1.4}R_{6}^{-1}} \,.\end{aligned}$$ The value of $\sigma_{\textrm{break}}$ is uncertain. Early estimations are in the range from $10^{-4}$ to $10^{-2}$ [@ruderman1992structure]. However, @Horowitz:2009ya found that $\sigma_{\textrm{break}}$ is around 0.1 by simulating the crust as Coulomb solids. Note that even with this extreme value of $\sigma_{\textrm{break}}$, the largest oblateness is about $\epsilon_{\textrm{max}}=2\times 10^{-6} $. Now we turn to the nonaxisymmetry. NSs are biaxial when $\delta$ is zero or infinity, and triaxial when $\delta$ has a finite value. Due to the complex evolution and relaxation of the crust after the star’s birth and during the accretion in the late lifetime [@Link:1998km; @Link:2002jk; @Akgun:2005nd], deformed NSs can plausibly be triaxial, characterized by finite values of $\delta$. The magnetic stresses might contribute to the triaxiality as well [@Wasserman:2002ec]. The nonaxisymmetry depends on the evolution and relaxation of the crust and the magnetic stresses, which are complex, especially during dynamical or explosive processes. Because of our lack of knowledge about $\delta$, a measurement of it would be particularly exciting. As for the wobble angle $\theta$, there is no physical limitation for slowly rotating NSs. However, for a fast-rotating one, as the rotational bulge of the NS grows larger, more matter needs to be displaced during the precession, leading to a larger crust strain [@Jones:2000ud; @VanDenBroeck:2004wj]. In order to keep the strain below the limit of $\sigma_{\textrm{break}}$, the star can only possess a small wobble angle. @Jones:2000ud estimated the maximum allowed wobble angle, $$\label{eqn:theta_constrain} \theta_{\textrm{max}} \approx 0.45\left(\frac{100 \, \mathrm{Hz}}{f_{\rm r}}\right)^{2}\left(\frac{\sigma_{\mathrm{break}}}{10^{-3}}\right) M_{1.4}R_{6}^{-3}\,.$$ The constraint on the wobble angle depends on the rotation frequency and the breaking strain $\sigma_{\textrm{break}}$. For a NS with $f_{\rm r}=100 \, \textrm{Hz}$ and $\sigma_{\textrm{break}}=10^{-3}$, the wobble angle is smaller than $0.45 \,\textrm{radians}$. If we take the breaking strain in the extreme case where $\sigma_{\textrm{break}}=0.1$, the wobble angle is basically unlimited even for a fast-rotating NS at a spin frequency of $f_{\rm r}=500\,\textrm{Hz}$. For the theoretical analysis in subsequent sections, we apply a series expansion to the trigonometric functions of the three Euler angles in Section \[sec:analy\_solution\], assuming a small oblateness, a small nonaxisymmetry, and a small wobble angle. The benefit of the perturbative treatment is the great simplification it brings and the explicit harmonics appearing in the spectra. As for generic cases when one or more of these parameters are large, one can always fall back on the exact solution (or the numerical scheme) for a careful check. Following @Zimmermann:1980ba and @VanDenBroeck:2004wj, we find that, practically, it is more convenient to use $$\begin{aligned} &\kappa \equiv \frac{1}{16}\frac{I_{3}}{I_{1}} \frac{I_{2}-I_{1}}{I_{3}-I_{2}}\, ,\end{aligned}$$ than $\delta$ in the expansion. Up to the leading order, $\kappa$ and $\delta$ are related by $\kappa \simeq \delta/16$. A constant of 1/16 is included for the convenience of later computation [@VanDenBroeck:2004wj].
The parameter $m$ in Eq. (\[eqn:modulus\]) is then simplified to $m = 16 \kappa \gamma^{2}$. In the series expansion, different from @VanDenBroeck:2004wj, we treat $\gamma$ and $\kappa$ independently and do not assume any hierarchy between them. The series expansions of trigonometric functions of the three Euler angles up to the second order of $\gamma$ and $\kappa$ are, $$\begin{aligned} \label{eq:series:begin} \cos \phi =& \cos \left[(\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}})t\right]\,,\\ \sin \phi =& \sin \left[(\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}})t\right]\,,\\ \cos \theta =& 1-\frac{\gamma ^2}{2} \,,\\ \sin \theta =& \gamma +8 \gamma \kappa \sin^2 \left(\Omega_{\mathrm{p}} t\right)\,,\\ \cos \psi =& \sin \left(\Omega_{\mathrm{p}} t \right) \left[1+(8\kappa + 32 \kappa^{2})\cos ^{2} \left( \Omega_{\mathrm{p}}t \right)\right]\nonumber \\ & + \sin \left(\Omega_{\mathrm{p}} t \right)\left[16 \kappa ^2 \cos ^2\left(\Omega_{\mathrm{p}} t\right) \left(3 \cos \left(2 \Omega_{\mathrm{p}} t\right)+1\right)\right]\,,\\ \sin \psi =& \cos \left(\Omega_{\mathrm{p}} t \right)\left[1-(8\kappa + 32 \kappa^{2})\sin ^{2} \left( \Omega_{\mathrm{p}}t \right)\right]\nonumber \\ & - \cos \left(\Omega_{\mathrm{p}} t \right)\left[96 \kappa ^2 \sin ^2\left(\Omega_{\mathrm{p}} t\right) \cos ^2\left( \Omega_{\mathrm{p}} t\right)\right]\,. \label{eq:series:end}\end{aligned}$$ Modulated timing and pulse signals {#sec:pulsar_radiation} ================================== If a triaxially-deformed freely-precessing NS is observed as a pulsar, the free precession will introduce characteristic modulations on the timing and pulse signals. These modulations might be revealed by radio and/or X-ray observations. In Section \[sec:phase\], we discuss the phase modulations of precessing triaxial NSs and show the residuals of spin period and spin period derivative for different initial values of the wobble angle. In Section \[sec:pulse\_width\], the pulse-width modulations for different initial values of the wobble angle are displayed. Phase modulation {#sec:phase} ---------------- ![Geometry of a freely precessing triaxial pulsar. The observer is in the $\textrm{X}$-$\textrm{Z}$ plane. The dipole moment is denoted as $\widehat{m}$, and the dashed line $\widehat{m}^{\prime}$ represents the dipole moment when it sweeps through the $\textrm{X}$-$\textrm{Z}$ plane, namely the moment when the observer can see the pulse. We denote the angle between $\widehat{e}_{\rm Z}$ and $\widehat{m}$ as $\Theta$, and the angle between $\widehat{e}_{3}$ and $\widehat{m}$ (i.e. the magnetic inclination angle) as $\chi$.[]{data-label="fig.timing"}](fig_timing.pdf){width="8cm"} Following @Jones:2000ud, we assume for simplicity that the pulsar beam is in the same direction as the magnetic dipole moment $\widehat{m}$. Once the dipole moment sweeps through the plane defined by the line of sight and the spin angular momentum, a pulse can be observed. In Fig. \[fig.timing\], we show the geometry of a freely precessing triaxial NS. We denote the polar angle between $\widehat{e}_{\textrm{Z}}$ and $\widehat{m}$ as $\Theta$. We denote the azimuthal angle between $\widehat{e}_{\textrm{X}}$ and the projection of $\widehat{m}$ on the $\textrm{X}$-$\textrm{Y}$ plane as $\Phi$. It is related to the Euler angles via [@Jones:2000ud], $$\Phi=\phi-\frac{\pi}{2}+\arctan \left(\frac{\cos \psi \sin \chi}{\sin \theta \cos \chi-\sin \psi \sin \chi \cos \theta}\right)\,,$$ where $\chi$ is the magnetic inclination angle between $\widehat{e}_{3}$ and $\widehat{m}$. 
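For concreteness, the relation above can be written as a short helper function; the Python sketch below is illustrative only (the function name `dipole_azimuth` is ours, not taken from the literature), and in practice the arctangent branch may need to be unwrapped over a full precession cycle.

```python
# Minimal sketch (not from the original derivation): azimuth Phi of the
# magnetic dipole as a function of the Euler angles and the inclination chi,
# following the relation given above.  The arctangent is evaluated on its
# principal branch; numpy.unwrap can be applied afterwards if needed.
import numpy as np

def dipole_azimuth(phi, theta, psi, chi):
    num = np.cos(psi) * np.sin(chi)
    den = np.sin(theta) * np.cos(chi) - np.sin(psi) * np.sin(chi) * np.cos(theta)
    return phi - np.pi / 2.0 + np.arctan(num / den)
```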
The time derivative of $\Phi$ is the instantaneous spin angular frequency of the NS. Its time-averaged value corresponds to the mean spin angular frequency obtained in the observation. Note that for a precessing triaxial NS, all of the three Euler angles change with time. Especially, the wobble angle varies in a range from its minimum value $\theta_{\rm{min}}$ to the maximum, $\theta_{\rm{max}}$. We discuss separately the two situations for $\theta_{\rm{min}} > \chi$ and $\theta_{\rm{max}} < \chi$ to obtain the timing residual of the precessing NS. - When $\theta_{\rm{min}} > \chi$, the time-averaged spin frequency of the NS is $\dot{\phi}$. Therefore, the precession-induced phase residual is [@Jones:2000ud] $$\begin{aligned} \label{eqn:residual1} \Delta \Phi =\Phi-\left(\phi-\frac{\pi}{2}\right) =\arctan \left(\frac{\cos \psi \sin \chi}{\sin \theta \cos \chi-\sin \psi \sin \chi \cos \theta}\right)\,. \end{aligned}$$ - When $\theta_{\rm{max}} < \chi$, the time-averaged spin frequency is $\dot \phi +\dot \psi$, and the precession-induced phase residual is [@Jones:2000ud] $$\begin{aligned} \label{eqn:residual2} \Delta \Phi &=\Phi-(\phi+\psi) \nonumber\\ &=\arctan \left[\frac{(\cos \theta-1) \sin \psi \sin \chi-\sin \theta \cos \chi}{\cos \psi \sin \chi+(\cos \theta \sin \psi \sin \chi-\sin \theta \cos \chi) \tan \psi}\right] \,. \end{aligned}$$ ![ ([*Upper*]{}) Precession-induced residuals of the spin period and the spin period derivative for a large wobble angle where $\theta_{\rm{min}} > \chi$. ([*Lower*]{}) The Fourier amplitude for the spin period residual. In this figure, we have chosen the magnetic inclination angle $\chi=\pi/6$, the oblateness $\epsilon=4.9 \times 10^{-8}$, the nonaxisymmetry parameter $\delta=0.1$, and a wobble angle in the range $\theta \in \left(0.79,\,0.84\right)$. With these parameters, we have $T_{1}=0.010\,\textrm{s}$, and a free precession period $T=3.1\times10^{5}\, \textrm{s}$. []{data-label="fig.timing_large"}](fig_timing_large.pdf){width="8.2cm"} ![ ([*Upper*]{}) Precession-induced residuals of the spin period and the spin period derivative for a small wobble angle where $\theta_{\rm{max}} \ll \chi$. ([*Lower*]{}) The Fourier amplitude for the spin period residual. In this figure, we have chosen the same $\chi$, $\epsilon$, and $\delta$ as in Fig. \[fig.timing\_large\], but a small wobble angle in the range $\theta \in \left(0.017,\,0.018\right)$. With these parameters, we have $T_{1}=0.010\,\textrm{s}$, and a free precession period $T=2.1\times10^{5}\,\textrm{s}$. []{data-label="fig.timing_small"}](fig_timing_small.pdf){width="8.2cm"} The precession-induced residuals of the spin period, $P$, and the spin period derivative, $\dot P$, can be calculated using the time derivatives of the precession-induced phase residual, $$\begin{aligned} \label{eqn:ppdot_modulation1} \Delta P &=-\frac{P_{0}^{2}}{2 \pi} \Delta \dot{\Phi}\,, \\ \label{eqn:ppdot_modulation2} \Delta \dot{P} &=-\frac{P_{0}^{2}}{2 \pi} \Delta \ddot{\Phi}\,,\end{aligned}$$ where $P_{0}$ is the mean spin period of the NS. Substituting the time-evolving Euler angles into Eqs. (\[eqn:residual1\]–\[eqn:ppdot\_modulation2\]), one can obtain the modulations of the spin period and the spin period derivative for different choices of parameters. In Fig. \[fig.timing\_large\] and Fig. \[fig.timing\_small\] we respectively present examples for cases of a large wobble angle where $\theta_{\rm{min}} > \chi$, and a small wobble angle where $\theta_{\rm{max}} \ll \chi$. 
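A small numerical sketch (ours) of this substitution step: given the time-evolving Euler angles, it evaluates the precession-induced phase residual of Eqs. (\[eqn:residual1\]–\[eqn:residual2\]) for the two regimes and converts it to period residuals via Eqs. (\[eqn:ppdot\_modulation1\]–\[eqn:ppdot\_modulation2\]) by finite differencing.

```python
# Sketch of the phase residual and the derived period residuals.
import numpy as np

def delta_Phi(theta, psi, chi, large_wobble=True):
    if large_wobble:  # theta_min > chi, Eq. (residual1)
        return np.arctan(np.cos(psi) * np.sin(chi) /
                         (np.sin(theta) * np.cos(chi)
                          - np.sin(psi) * np.sin(chi) * np.cos(theta)))
    # theta_max < chi, Eq. (residual2)
    num = (np.cos(theta) - 1.0) * np.sin(psi) * np.sin(chi) - np.sin(theta) * np.cos(chi)
    den = (np.cos(psi) * np.sin(chi)
           + (np.cos(theta) * np.sin(psi) * np.sin(chi)
              - np.sin(theta) * np.cos(chi)) * np.tan(psi))
    return np.arctan(num / den)

def period_residuals(t, dPhi, P0):
    """Delta P and Delta Pdot obtained by finite differences of the phase residual."""
    dPhi_dot = np.gradient(dPhi, t)
    dPhi_ddot = np.gradient(dPhi_dot, t)
    return -P0**2 / (2 * np.pi) * dPhi_dot, -P0**2 / (2 * np.pi) * dPhi_ddot
```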
In the calculation, we take the magnetic inclination angle $\chi=\pi/6$, the mean spin period $P_{0}=0.01\,\rm{s}$, and we make use of Eq. (\[eqn:oblateness\]) to estimate the oblateness. - The residuals of the spin period and the spin period derivative for the case of a large wobble angle are displayed in the upper panel of Fig. \[fig.timing\_large\]. The Fourier transformation of the spin period residual is shown in the lower panel of Fig. \[fig.timing\_large\]. The spectrum shows strong peaks at frequencies $n\,\Omega_{\textrm{p}} / 2\pi$, where $n$ is a positive integer number and $\Omega_{\textrm{p}}$ ($\simeq 2.0\times10^{-5}\,{\rm s}^{-1}$) is the free precession angular frequency defined in Eq. (\[eqn:precession\_angular\]). - For the small wobble angle limit where $\theta_{\rm{max}} \ll \chi$, we display the residuals of the spin period and the spin period derivative in the upper panel of Fig. \[fig.timing\_small\]. We also take the Fourier transformation of the spin period residual, whose amplitude is shown in the lower panel of Fig. \[fig.timing\_small\]. Notice that a logarithmic scale is used for the Fourier amplitude. Compared to the case of a large wobble angle, the harmonics at $n\Omega_{\textrm{p}}/2\pi$ ($n\geq 2$) are much weaker than the line at $\Omega_{\textrm{p}}/2\pi$ (now, $\Omega_{\textrm{p}} \simeq 3.0 \times 10^{-5} \, {\rm s}^{-1}$) in the case of a small wobble angle. In the small wobble angle limit, the precession-induced spin phase residual can be approximated as [@Jones:2000ud; @Link:2001zr] $$\Delta \Phi = -\sin \theta \cot \chi \cos \psi - \frac{1}{4} \sin^{2}\theta (1+2\cot^{2}\chi)\sin 2\psi \,.$$ Applying the series expansion of the Euler angles in Eqs. (\[eq:series:begin\]–\[eq:series:end\]) and using Eq. (\[eqn:residual2\]), the spin period residual is given by $$\begin{aligned} \label{eqn:delp} \Delta P \approx &\frac{P_{0}^{2}}{2\pi}\Omega_{\mathrm{p}}\gamma (8 \kappa +1) \cot \chi \cos \left(\Omega_{\mathrm{p}} t \right) \nonumber \\ &+\frac{P_{0}^{2}}{4\pi}\Omega_{\mathrm{p}} \gamma^{2} \left(1+2\cot^{2}\chi\right)\cos \left(2\Omega_{\mathrm{p}} t \right) \,.\end{aligned}$$ It shows that at the second order of the wobble angle, the modulation of the spin period includes the first and the second harmonics of the free precession angular frequency $\Omega_{\textrm{p}}$, corresponding to the first two peaks in the lower panel of Fig. \[fig.timing\_small\]. The third peak comes from higher-order terms that are not included in the approximation. The residual of the spin period derivative $\Delta \dot{P}$ can be obtained by taking the time derivative of $\Delta{P}$, which gives $$\begin{aligned} \label{eqn:delpdot} \Delta \dot{P} \approx &-\frac{P_{0}^{2}}{2\pi}\Omega_{\mathrm{p}}^{2}\gamma (8 \kappa +1) \cot \chi \sin \left(\Omega_{\mathrm{p}} t \right) \nonumber \\ &-\frac{P_{0}^{2}}{2\pi}\Omega_{\mathrm{p}}^{2} \gamma^{2} \left(1+2\cot^{2}\chi\right)\sin \left(2\Omega_{\mathrm{p}} t \right) \,.\end{aligned}$$ When $\kappa=0$, Eqs. (\[eqn:delp\]–\[eqn:delpdot\]) reduce to the corresponding results for a precessing biaxial NS [@Link:2001zr]. Pulse-width modulation {#sec:pulse_width} ---------------------- ![Geometry of the pulsar emission beam in the cone model [@Gil:1984ads; @Lorimer:2005misc]. The emission is confined in a cone with an opening angle $\rho$. We denote the impact angle as $\beta$, which corresponds to the closest approach between the line of sight and the magnetic dipole moment. 
Pulse signals can be observed once the line of sight sweeps through the cone. The purple line denotes the sweep of the line of sight, and different cuts of the line of sight through the cone result in different pulse width $W$.[]{data-label="fig.pulse_width"}](fig_pulse_width){width="7cm"} ![([*Upper*]{}) The pulse-width modulation in the case of a large wobble angle; parameters are the same as in Fig. \[fig.timing\_large\]. ([*Lower*]{}) The pulse-width modulation for a small wobble angle; parameters are the same as in Fig. \[fig.timing\_small\]. For both cases, we have chosen the inclination angle $\iota=5\pi/6$, and the angular radius of the emission cone $\rho=\pi/6$.[]{data-label="fig.timing_width"}](fig_timing_width){width="8.2cm"} ![image](fig_gw_large.pdf){width="17cm"} In order to analyse the pulse-width modulation, we adopt a simple cone model to describe the radiation of a pulsar. For a more complicated radiation geometry, our method can be extended as well. In the cone model, from the geometry in Fig. \[fig.pulse\_width\] we have [@Gil:1984ads; @Lorimer:2005misc], $$\begin{aligned} \label{eq:cone:model} \sin ^{2}\left(\frac{W}{4}\right)=\frac{\sin ^{2}(\rho / 2) -\sin ^{2}(\beta / 2)}{\sin (\Theta+\beta) \sin \Theta}\,,\end{aligned}$$ where $\Theta$ is defined in Fig. \[fig.timing\], and ${W}$, $\rho$, $\beta$ are defined in Fig. \[fig.pulse\_width\]. Equation (\[eq:cone:model\]) is not exact for the pulse width because in our case the angle $\Theta$ changes with time. However, as the spin frequency is much higher than the free precession frequency, the change in $\Theta$ during a spin period is negligible, thus this approximation is good enough for our calculation. In this cone model, the observer can observe the pulse signal once the line of sight enters into the emission cone. The variation of pulse width can be determined by the angle $\Theta$ once the inclination angle (denoted as $\iota$) between the angular momentum and the line of sight to the NS is determined. The angle $\Theta$ can be expressed through the Euler angles and the magnetic inclination angle via [@Jones:2000ud; @Link:2001zr], $$\cos \Theta=\sin \theta \sin \psi \sin \chi+\cos \theta \cos \chi \,.$$ In Fig. \[fig.timing\_width\] we present examples for the pulse-width modulation with a large wobble angle where $\theta_{\rm{min}} > \chi$ and a small wobble angle where $\theta_{\rm{max}} \ll \chi$. The choices of the oblateness, the nonaxisymmetry, and the magnetic inclination angle are the same as in Fig. \[fig.timing\_large\] and Fig. \[fig.timing\_small\] for the large and small wobble angles, respectively. The example for the large wobble angle is displayed in the upper panel of Fig. \[fig.timing\_width\]. For our (extreme) choice of the parameters, the angle between the angular momentum and the dipole moment changes significantly, $\Theta \in \left(0.26,\,1.31\right)$. As a consequence, the pulse width changes in a wide range with $W\in\left(0,2.68\right)$. The line of sight leaves out of the emission cone due to the free precession during certain time ranges, and then the pulse width diminishes to zero accordingly. The modulation of pulse width in the case of a small wobble angle is shown in the lower panel of Fig. \[fig.timing\_width\]. In this case, the angle $\Theta$ is in the range of $\Theta \in \left(0.51,\, 0.54\right)$, and the change of pulse width is much milder with $W\in\left(2.14, 2.21\right)$. 
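The pulse-width calculation above is straightforward to script. The sketch below (ours) evaluates $\Theta$ from the Euler angles and $W$ from Eq. (\[eq:cone:model\]); how the impact angle $\beta$ is obtained from the inclination $\iota$ and $\Theta$ depends on the viewing geometry of Fig. \[fig.pulse\_width\], so $\beta$ is left as a user-supplied input here.

```python
# Sketch of the cone-model pulse width (scalar inputs assumed).
import numpy as np

def Theta_angle(theta, psi, chi):
    """Angle between the angular momentum and the dipole (equation above)."""
    return np.arccos(np.sin(theta) * np.sin(psi) * np.sin(chi)
                     + np.cos(theta) * np.cos(chi))

def pulse_width(Theta, beta, rho):
    """Pulse width W from Eq. (cone:model); beta is the impact angle at closest approach.
    Returns 0 when the line of sight misses the emission cone."""
    s2 = (np.sin(rho / 2)**2 - np.sin(beta / 2)**2) / (np.sin(Theta + beta) * np.sin(Theta))
    if s2 <= 0.0:
        return 0.0
    return 4.0 * np.arcsin(np.sqrt(min(s2, 1.0)))   # clip guards round-off only
```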
In the case of a small wobble angle, applying the series expansions of $\theta$ and $\psi$ in Eqs. (\[eq:series:begin\]–\[eq:series:end\]), the angle $\Theta$ can be approximated as $$\Theta \approx \chi - \sin \theta \sin \psi \approx \chi - \gamma \cos \left(\Omega_{\mathrm{p}} t\right) \,,$$ which reduces to the corresponding result for a precessing biaxial NS [@Link:2001zr]. Continuous Gravitational Waves {#sec:gw} ============================== We discuss generic continuous GWs from a triaxially-deformed freely-precessing NS in Section \[sec:g:wf\], and the approximation to the waveform with small oblateness, a small wobble angle, and small nonaxisymmetry in Section \[sec:gw\_small\]. The results mainly follow @Zimmermann:1980ba and @VanDenBroeck:2004wj, but we make further extensions by assuming no hierarchy in the three small parameters. Generic waveform {#sec:g:wf} ---------------- We use the quadrupole approximation for the continuous GWs from freely precessing triaxial NSs. In the transverse-traceless (TT) gauge, the metric perturbation is [@Misner:1974qy] $$\label{eqn:quadrupole} h_{i j}^{\mathrm{TT}}=\frac{2G}{r c^{4}} \frac{{{\rm d}}^{2} I_{i j}}{{{\rm d}}\,t^{2}}\,,$$ where $I_{ij}$ is the trace-free part of the moment of inertia tensor, and $r$ is the luminosity distance from the source to the observer. Alternatively, the quadrupole formula in Eq. (\[eqn:quadrupole\]) can be expressed as [@Zimmermann:1980ba] $$\label{eqn:waveform1} h_{i j}^{\mathrm{TT}}=-\frac{2G}{rc^{4}} \mathcal{R}_{i k} \mathcal{R}_{j l }A_{kl} \,,$$ where $\mathcal{R}$ is the rotation matrix (\[eqn:rotation\]), and $A_{kl}$ is determined by the body-frame angular velocities and the body-frame moments of inertia. For example, we have $$\begin{aligned} &A_{11}=2\left(\Delta_{2} \omega_{2}^{2}-\Delta_{3} \omega_{3}^{2}\right)\label{eqn:a11}\,,\\ &A_{12}= \left(\Delta_{1}-\Delta_{2}+\frac{\Delta_{3}^{2}}{I_{3}}\right)\omega_{1} \omega_{2} \label{eqn:a12} \,,\end{aligned}$$ where $$\Delta_{1} \equiv I_{2}-I_{3}, \quad \Delta_{2} \equiv I_{3}-I_{1}, \quad \Delta_{3} \equiv I_{1}-I_{2}\,.$$ The other components of $A_{kl}$ can be obtained from Eqs. (\[eqn:a11\]–\[eqn:a12\]) by cyclic permutation of the indices. The waveform in Eq. (\[eqn:waveform1\]) is usually decomposed into $h_+$ and $h_\times$, $$\label{eqn:decompose} h_{i j}^{\mathrm{TT}}= h_{+} \left(\widehat{e}_{+}\right)_{ij} + h_{\times} \left(\widehat{e}_{\times}\right)_{ij} \,,$$ where $h_{+}$ and $h_{\times}$ represent the radiation of the two independent polarizations. The polarization tensors, $\widehat{e}_{+}$ and $\widehat{e}_{\times}$, are $$\begin{aligned} &\widehat{e}_{+}=\widehat{p} \otimes \widehat{p}-\widehat{q} \otimes \widehat{q}\label{eqn:plus}\,,\\ &\widehat{e}_{\times}=\widehat{p} \otimes \widehat{q}+\widehat{q} \otimes \widehat{p} \label{eqn:cross} \,,\end{aligned}$$ where $\widehat{p}$ and $\widehat{q}$ are two unit vectors with $\widehat{p}\times \widehat{q}$ in the propagation direction of the GWs. We assume that the observer lies in the plane and define the inclination angle $\iota$ as the angle between the direction of the angular momentum $\widehat{e}_{\rm Z}$ and the line of sight to the NS. In the inertial frame, the unit vectors $\widehat{p}$ and $\widehat{q}$ are $$\begin{aligned} &\widehat{p}=-\widehat{e}_{\rm Y} \cos \iota-\widehat{e}_{\rm Z} \sin \iota \label{eqn:p}\,,\\ &\widehat{q}=-\widehat{e}_{\rm X}\label{eqn:q}\,.\end{aligned}$$ Combining Eq. (\[eqn:waveform1\]) and Eqs. 
(\[eqn:decompose\]–\[eqn:q\]), the waveforms of the two polarizations are [@Zimmermann:1980ba; @VanDenBroeck:2004wj] $$\begin{aligned} &h_{+} =-\frac{G}{rc^{4}} \big[\left( \mathcal{R}_{2 k} \cos \iota + \mathcal{R}_{3 k} \sin \iota\right)\left( \mathcal{R}_{2 l} \cos \iota + \mathcal{R}_{3 l} \sin \iota\right) \nonumber \\ &\quad \quad \quad \quad -\mathcal{R}_{1 k} \mathcal{R}_{1 l}\big] A_{k l} \label{eqn:waveform_plus}\,,\\ &h_{\times} =-\frac{2G}{rc^{4}}\left( \mathcal{R}_{2 k} \cos \iota + \mathcal{R}_{3 k} \sin \iota\right) \mathcal{R}_{1 l} A_{k l}\label{eqn:waveform_cross}\,.\end{aligned}$$ In Section \[sec:free\_prec\], we have discussed the time evolution of the angular velocities in the body frame and of the three Euler angles, so we can obtain $h_{+}$ and $h_{\times}$ at any given time $t$. In Fig. \[fig.gw\_large\], we plot waveforms of $h_{+}$ and $h_{\times}$ in the time domain at different inclination angles. Note that the parameters chosen for the plot are exaggerated relative to realistic NSs for illustrative purposes. Physical parameters consistent with the estimates in Section \[sec:dynamics\_NS\] can easily be implemented, but the effects would be too small for visual inspection. We also show the Fourier transformation of the waveforms at the inclination angle of $\iota=\pi/6$ in Fig. \[fig.gw\_large\_fft\]. The peaks of the spectra are dominantly at angular frequencies $$\Omega_{\mathrm{r}}+n\,\Omega_{\mathrm{p}}\,,\quad 2\Omega_{\mathrm{r}}+n\,\Omega_{\mathrm{p}}\,,$$ where $n=0, \pm1, \pm2, \cdots$ is an integer.

![The Fourier amplitudes of $h_{+}$ and $h_{\times}$ for the waveform in the second panel of Fig. \[fig.gw\_large\] with $\iota=\pi/6$. []{data-label="fig.gw_large_fft"}](fig_gw_large_fft.pdf){width="8.4cm"}

Waveform for small oblateness, small wobble angle and small nonaxisymmetry {#sec:gw_small}
--------------------------------------------------------------------------

Following @Zimmermann:1980ba and @VanDenBroeck:2004wj, we investigate the waveforms in the limit of small oblateness $\epsilon$, small wobble angle $\theta$, and small nonaxisymmetry $\delta$. The difference between our work and the previous work is that, instead of assuming a hierarchy between $\kappa$ and $\gamma$, namely $\kappa \sim {\cal O}\big( \gamma^2 \big)$ in @VanDenBroeck:2004wj, we treat $\gamma$ and $\kappa$ as small quantities independent of each other. As discussed in Section \[sec:dynamics\_NS\], it is more plausible to assume no intrinsic hierarchy between $\kappa$ and $\gamma$, in particular when the internal structure of NSs is still rather uncertain. The procedure to derive the expansion of the waveform is as follows. First, we expand $\mathcal{R}_{i j}$ and the angular velocities $\omega_{1}$, $\omega_{2}$, and $\omega_{3}$ to the second order of $\gamma$ and $\kappa$ using the expansion of the Euler angles in Eqs. (\[eq:series:begin\]–\[eq:series:end\]). Second, we substitute the expansions of $\mathcal{R}_{i j}$ and $A_{k l}$ into Eqs. (\[eqn:waveform\_plus\]–\[eqn:waveform\_cross\]). Third, we retain the GW waveform to the second order of $\gamma$ and $\kappa$ and combine the trigonometric functions using trigonometric identities. Such an extension with independent $\kappa$ and $\gamma$ yields more spectral features than were found before.
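Before listing the expanded components, we note that the generic (unexpanded) waveform is straightforward to evaluate numerically. The sketch below (ours) builds $A_{kl}$ from Eqs. (\[eqn:a11\]–\[eqn:a12\]), with the remaining components generated by the cyclic permutation of indices stated earlier and $A_{kl}$ assumed symmetric, and then forms $h_{+}$ and $h_{\times}$ from Eqs. (\[eqn:waveform\_plus\]–\[eqn:waveform\_cross\]).

```python
# Sketch of the generic quadrupole waveform evaluation.
import numpy as np
G, c = 6.674e-8, 2.998e10   # cgs units, matching the G/(r c^4) prefactor

def A_matrix(omega, I):
    """A_kl from Eqs. (a11)-(a12); remaining entries by cyclic permutation, assumed symmetric.
    omega = (w1, w2, w3) body-frame angular velocities, I = (I1, I2, I3)."""
    D = np.array([I[1] - I[2], I[2] - I[0], I[0] - I[1]])   # Delta_1, Delta_2, Delta_3
    A = np.zeros((3, 3))
    for k in range(3):
        i, j = (k + 1) % 3, (k + 2) % 3
        A[k, k] = 2.0 * (D[i] * omega[i]**2 - D[j] * omega[j]**2)
        A[k, i] = A[i, k] = (D[k] - D[i] + D[j]**2 / I[j]) * omega[k] * omega[i]
    return A

def h_plus_cross(R, A, iota, r):
    """h_+ and h_x from Eqs. (waveform_plus)-(waveform_cross); R is the rotation matrix."""
    v = R[1, :] * np.cos(iota) + R[2, :] * np.sin(iota)   # R_{2k} cos(iota) + R_{3k} sin(iota)
    u = R[0, :]                                           # R_{1k}
    h_plus = -G / (r * c**4) * (v @ A @ v - u @ A @ u)
    h_cross = -2.0 * G / (r * c**4) * (v @ A @ u)
    return h_plus, h_cross
```

In principle, feeding in the numerical solution for the body-frame angular velocities and the Euler angles yields time-domain waveforms of the kind shown in Fig. \[fig.gw\_large\].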
With the above procedure, we obtain six components of $h_{+}$, which are distinguished by different GW frequencies, $$\begin{aligned} h_{+}^{1} = & \frac{ - G\epsilon \gamma I_{3} b^{2}\sin 2\iota }{rc^{4}}\cos \left[(\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}})t\right]\,,\\ h_{+}^{2} = & \frac{-32 G\epsilon \kappa I_{3} b^{2} (1+ \cos^{2} \iota)}{r c^{4}}\cos \left[2\Omega_{\mathrm{r}}t\right]\,,\\ h_{+}^{3} = & \frac{2 G\epsilon (64 \kappa^{2} +\gamma^{2})I_{3} b^{2}(1+ \cos^{2} \iota)}{r c^{4}}\cos \left[2(\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}})t\right]\,,\\ h_{+}^{4} = & \frac{- 14G\epsilon \gamma \kappa I_{3}b^{2}\sin 2\iota }{r c^{4}}\cos \left[(\Omega_{\mathrm{r}}-\Omega_{\mathrm{p}})t\right]\,,\\ h_{+}^{5} = & \frac{2G\epsilon \gamma \kappa I_{3} b^{2}\sin 2\iota}{rc^{4} }\cos \left[(\Omega_{\mathrm{r}}+3\Omega_{\mathrm{p}})t\right]\,,\\ h_{+}^{6} = & \frac{-128 G\epsilon \kappa ^{2} I_{3} b^{2}(1+ \cos^{2} \iota)}{r c^{4}}\cos \left[2(\Omega_{\mathrm{r}}-\Omega_{\mathrm{p}})t\right]\, .\end{aligned}$$ Similarly, we obtain six components of $h_{\times}$, $$\begin{aligned} h_{\times}^{1} = & \frac{ 2G\epsilon \gamma I_{3} b^{2}\sin \iota }{r c^{4}}\sin \left[(\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}})t\right]\,,\\ h_{\times}^{2} = & \frac{64G \epsilon \kappa I_{3} b^{2} \cos \iota}{r c^{4} }\sin \left[2\Omega_{\mathrm{r}}t\right]\,,\\ h_{\times}^{3} = &\frac{-4 G\epsilon (64 \kappa^{2} +\gamma^{2})I_{3} b^{2} \cos\iota}{r c^{4}}\sin \left[2(\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}})t\right]\\ h_{\times}^{4} = & \frac{28G\epsilon \gamma \kappa I_{3}b^{2}\sin \iota }{r c^{4}}\sin \left[(\Omega_{\mathrm{r}}-\Omega_{\mathrm{p}})t\right]\,,\\ h_{\times}^{5} = & \frac{-4G\epsilon \gamma \kappa I_{3} b^{2}\sin \iota}{r c^{4}}\sin \left[(\Omega_{\mathrm{r}}+3\Omega_{\mathrm{p}})t\right]\,,\\ h_{\times}^{6} = & \frac{256 G\epsilon \kappa ^{2} I_{3} b^{2}\cos \iota}{r c^{4}}\sin \left[2(\Omega_{\mathrm{r}}-\Omega_{\mathrm{p}})t\right]\,.\end{aligned}$$ Note that the components with frequencies $\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}}$ and $2\Omega_{\mathrm{r}}$ are the leading order contributions found in @Zimmermann:1980ba, and that the one with frequency $2(\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}})$ is the third spectral line in @VanDenBroeck:2004wj once $\kappa$ is treated as $O(\gamma^2)$. Below we briefly discuss the waveform for different choices of $\gamma$ and $\kappa$, and for presentation reasons, we leave their observational aspects to the next section, together with the possible radio/X-ray counterparts. - When $\gamma=0$ and $\kappa \neq 0$, the NS does not precess and GWs are radiated at twice of the rotation frequency. The radiation is caused by the asymmetry between $I_{1}$ and $I_{2}$. - When $\kappa=0$ and $\gamma \neq 0$, GWs are radiated at $\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}}$ and $2(\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}})$. This is the classical result of a precessing biaxial NS [@Zimmermann:1979ip]. - When $\kappa \neq 0$ and $\gamma \neq 0$, the NS is a precessing triaxial body. At the first order of $\gamma$ and $\kappa$, continuous GWs are emitted at angular frequencies of $\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}}$ and $2\Omega_{\mathrm{r}}$ [@Zimmermann:1980ba]. Previously, @VanDenBroeck:2004wj treated $\kappa$ as small as $\gamma^{2}$ and got a new line. It is at the second order of $\gamma$, but still at the first order of $\kappa$. The continuous GWs corresponding to this spectral line have an angular frequency of $2(\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}})$. 
Here in our work, we treat $\kappa$ and $\gamma$ independently and expand the waveform to the second order, including terms in $\gamma^2$, $\kappa^2$, and $\gamma \kappa$. We find three new spectral lines at angular frequencies of $\Omega_{\mathrm{r}}-\Omega_{\mathrm{p}}$, $\Omega_{\mathrm{r}}+3\Omega_{\mathrm{p}}$ and $2(\Omega_{\mathrm{r}}-\Omega_{\mathrm{p}})$. We consider it a natural extension of the results in @VanDenBroeck:2004wj.

Discussions {#sec:disc}
===========

The first detection of the coalescence of a binary NS system opened a new avenue for multimessenger astrophysics [@TheLIGOScientific:2017qsa; @GBM:2017lvd; @Monitor:2017mdv]. In this paper, we discuss another possibility for multimessenger observations with electromagnetic and GW detectors, namely the observation of precessing NSs. Multimessenger astrophysics can be greatly advanced if a precessing NS is observed as a pulsar via radio and/or X-ray telescopes while, at the same time, its continuous GW radiation is detected by the kilohertz laser-interferometric GW detectors, including LIGO [@TheLIGOScientific:2014jea], Virgo [@TheVirgo:2014hva], and KAGRA [@Akutsu:2018axf]. As we will see below, this would provide invaluable constraints on the NS structure, complementary to traditional observables, including masses, radii and tidal deformabilities of NSs. Radio/X-ray signals and continuous GWs from precessing triaxial NSs will provide valuable information about the wobble angle, the nonaxisymmetry, and the oblateness of the source. These measurements are ultimately related to the long-standing question of the equation of state of supranuclear matter inside NSs [@Lattimer:2000nx]. Below we take the case of a small wobble angle as an example to discuss the extraction of physical properties from such measurements [@VanDenBroeck:2004wj].

For pulsar signals, the amplitudes of the spin period residual in Eq. (\[eqn:delp\]) at the frequencies $\Omega_{\mathrm{p}}$ and $2\Omega_{\mathrm{p}}$ can be expressed as $$\begin{aligned} {\Delta P_{1}} =& { 1.6\times 10^{-10}} \cot\chi\left( \frac{P_{0}}{0.01\, \mathrm{s}} \right)^{2} \left(\frac{\Omega_{\mathrm{p}}}{10^{-5}\,\mathrm{rad\,s^{-1}}}\right) \left(\gamma +8\kappa \gamma\right) \,\mathrm{s} \,,\\ {\Delta P_{2}} =& {8.0\times 10^{-11}} \left(1 + 2\cot^{2}\chi\right) \left( \frac{P_{0}}{0.01\,\mathrm{s}} \right)^{2} \left( \frac{\Omega_{\mathrm{p}}}{10^{-5}\, \mathrm{rad\,s^{-1}}}\right) \gamma^{2}\,\mathrm{s} \,,\end{aligned}$$ where $\Delta P_{1}$ is the amplitude of the spin period residual at the frequency $\Omega_{\textrm{p}}$, and $\Delta P_{2}$ is the amplitude at the frequency $2\Omega_{\textrm{p}}$. The elliptic integral of the first kind $K(m)$ approaches $\pi/2$ in the small-wobble-angle and small-nonaxisymmetry limit, which leads to $\Omega_{\textrm{p}} \to \epsilon\, \Omega_{\textrm{r}}$. Therefore, in such a limiting case, the precession angular frequency $\Omega_{\textrm{p}}$ can be approximated as [@VanDenBroeck:2004wj] $$\label{eqn:get_ob} \Omega_{\mathrm{p}} \simeq \frac{\pi}{2 K(m)} \epsilon \, \Omega_{\mathrm{r}}\,.$$ The free precession period $T$ can be directly obtained from the positions of the spectral lines in the frequency domain of the timing residuals. The wobble angle $\gamma$, the nonaxisymmetry $\kappa$, and the magnetic inclination angle $\chi$ cannot be fully determined with two spectral lines. But if the nonaxisymmetry is small enough, the second-order contribution to the amplitude of the first line can be ignored.
Then the wobble angle $\gamma$ and the magnetic inclination angle $\chi$ can be determined. The pulse-width modulations will provide important information on the beam shape of pulsars. In our work, we used a simple cone model [@Gil:1984ads] to describe the modulations of pulse width. We find that up to the second order, the pulse-width modulation is the same as that in the biaxial case. From the perspective of observation, if the pulse-width variations from precessing NSs are observed, the beam shape can be inferred via different cuts by the line of sight [@Link:2001zr]. From above discussions, we find that the inclusion of the nonaxisymmetry of NS only slightly changes the timing residuals and the pulse width compared with the biaxial results. The reason is that the parameter $m=16 \kappa \gamma^{2}$ plays an important role in determining the behavior of free precession. As $m$ approaches zero, the biaxial approximation is robust. Even for a large nonaxisymmetry, if the wobble angle $\gamma \ll 1$, the dynamics of the NS still only deviates from the biaxial one slightly. In the case of large wobble angles and large nonaxisymmetries, the parameter $m$ can be of order unity. Then the amplitudes of the harmonics are correspondingly large in timing residuals. In this case, if the angle between the beam and the line of sight changes during the free precession, the observer might lose the radiating beam when the line of sight does not cut the radiating region (see the upper panel in Fig. \[fig.timing\_width\]). For active pulsars, magnetospheric processes may affect the pulse signals from precessing NSs and make the interpretation of free precession complicated. For example, the precession may itself introduce changes on the emission geometry and the activities of the magnetosphere [@Link:2001zr]. Besides, the changes of the emission height can contribute to pulse arrival time [@Link:2001zr]. Depending on the properties of the observed pulsars, these complications need to be considered. As a new observation window, GWs from precessing NSs can give complementary physical information on these triaxial NSs. Following @VanDenBroeck:2004wj, we present the procedures to extract physical parameters from continuous GWs. We take the “$\times$” mode as an example, and the discussion for the “$+$” mode is similar. For the “$\times$”-polarized GW, the amplitudes of the first-order lines at $\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}}$ and $2\Omega_{\mathrm{r}}$ are $$\begin{aligned} {A_{\times}^{1}}= & {1.0\times 10^{-28}}\,\gamma \sin \iota \left(\frac{\epsilon}{4.9\times10^{-8}} \right) \left( \frac{f_{\mathrm{r}}}{100\,\mathrm{Hz}} \right)^{2} \left(\frac{10\,\mathrm{kpc}}{r}\right) \,,\\ {A_{\times}^{2}}=& {3.3\times 10^{-27}}\,\kappa \cos\iota \left( \frac{\epsilon}{4.9\times10^{-8}} \right) \left( \frac{f_{\mathrm{r}}}{100\,\mathrm{Hz}} \right)^{2} \left( \frac{10\,\mathrm{kpc}}{r}\right) \,,\end{aligned}$$ where we have assumed that the moment of inertia of the NS is $10^{45}\,\mathrm{g\,cm^{2}}$ and applied Eq. (\[eqn:oblateness\]) to estimate the oblateness at specific rotation frequency for a NS with $M=1.4 \, M_{\odot}$ and $R=10 \,\textrm{km}$. If the first-order lines are observed, the rotation frequency and the free precession frequency can be determined. The inclination angle $\iota$ can be obtained by comparing the amplitudes of different polarizations for the two first-order lines. 
Note that the determination of the inclination angle $\iota$ is model dependent with the radio signals [@Jones:2007zza]. The derived $\iota$ from continuous GWs is less model dependent and can help to probe the pulsar geometry [@Jones:2007zza]. In our work, the inclination angle $\iota$ is needed to determine the pulse-width modulations. The oblateness, nonaxisymmetry, and wobble angle are degenerate in the first-order waveform. For the “$\times$”-polarized GW, the amplitudes of the second-order lines at $2(\Omega_{\mathrm{r}}+\Omega_{\mathrm{p}})$, $\Omega_{\mathrm{r}}-\Omega_{\mathrm{p}}$, $\Omega_{\mathrm{r}}+3\Omega_{\mathrm{p}}$, and $2(\Omega_{\mathrm{r}}-\Omega_{\mathrm{p}})$ are respectively $$\begin{aligned} {A_{\times}^{3}}=& -{2.1\times 10^{-28}}\,\left(64\kappa^{2}+\gamma^{2}\right) \nonumber\\ & \times \cos \iota \left( \frac{\epsilon}{4.9\times10^{-8}} \right) \left( \frac{f_{\mathrm{r}}}{100\,\mathrm{Hz}}\right)^{2}\left( \frac{10\,\mathrm{kpc}}{r} \right)\,,\end{aligned}$$ $$\begin{aligned} {A_{\times}^{4}}=& {1.5\times 10^{-27}}\,\gamma \kappa \sin \iota \left( \frac{\epsilon}{4.9\times10^{-8}} \right) \left( \frac{f_{\mathrm{r}}}{100\,\mathrm{Hz}} \right)^{2}\left( \frac{10\,\mathrm{kpc}}{r}\right) \,,\end{aligned}$$ $$\begin{aligned} {A_{\times}^{5}}=& - {2.1\times 10^{-28}} \,\gamma \kappa\sin \iota \left( \frac{\epsilon}{4.9\times10^{-8}} \right) \left( \frac{f_{\mathrm{r}}}{100\,\mathrm{Hz}} \right)^{2}\left( \frac{10\,\mathrm{kpc}}{r}\right) \,,\end{aligned}$$ $$\begin{aligned} {A_{\times}^{6}}= & {1.3\times 10^{-26}} \,\kappa^{2} \cos\iota \left( \frac{\epsilon}{4.9\times10^{-8}} \right) \left( \frac{f_{\mathrm{r}}}{100\,\mathrm{Hz}} \right)^{2} \left( \frac{10\,\mathrm{kpc}}{r}\right) \,.\end{aligned}$$ Theoretically, by comparing the amplitudes of the two first-order lines and one of the second-order lines, the wobble angle $\gamma$ and the nonaxisymmetry $\kappa$ can be determined. Then, one can obtain the parameter $m=16\kappa\gamma^{2}$ so that the oblateness can be determined using Eq. (\[eqn:get\_ob\]). From the observational perspective, however, these amplitudes at the second order are very small, and unlikely to be detectable with the Advanced LIGO/Virgo detectors. Besides, if the coherent time of the observation is shorter than the free precession period, the free precession angular frequency $\Omega_{\textrm{p}}$ cannot be resolved in frequency domain. However, in the optimistic situation when they are observed with the next-generation GW detectors (e.g., the Einstein Telescope and Cosmic Explorer [@Hild:2010id; @Sathyaprakash:2012jk; @Punturo:2010zz; @Evans:2016mbw]), they can be used to infer the oblateness, nonaxisymmetry, and wobble angle of the star. The distance to the NS and the moment of inertia always enter the waveform through the combination $I_{3}/r$. Therefore, we cannot obtain them independently. By inserting an educated guess of $I_{3}$, the distance to the NS can be roughly determined [@VanDenBroeck:2004wj]. Or conversely, if the distance can be determined via parallax or dispersion measure in pulsar timing data, one can get a measurement of $I_3$, thus putting new constraints on the equation of state. Detailed analysis along this line is beyond the scope of this paper, and we leave it to future study. Summary {#sec:sum} ======= To summarize, in this paper we describe both the analytical and numerical methods to calculate the dynamical evolution of precessing triaxial rigid bodies. 
We discuss the timing residuals and the pulse-width modulations for precessing triaxial NSs, and present concrete examples of both for large and small wobble angles. For the GWs from triaxial precessing NSs, after reviewing the general solution of the quadrupole waveform [@Zimmermann:1980ba] and showing examples of the waveform in both the time and frequency domains, we extend the work of @VanDenBroeck:2004wj to the second order by relaxing the hierarchy assumption on the small parameters $\gamma$ and $\kappa$. We obtain three new lines in the continuous GW spectra, which might be useful for future continuous GW analyses using the third-generation ground-based detectors [@Evans:2016mbw], depending on the distance of the sources. If the prospects for the multimessenger astrophysics discussed in this work become reality, a wealth of information on the shape of NSs and on the equation of state of supranuclear matter will be obtained, opening a new frontier for fundamental physics.

Acknowledgements {#acknowledgements .unnumbered}
================

This work was supported by the National Natural Science Foundation of China (11975027, 11991053, 11721303 and 11673002), the Young Elite Scientists Sponsorship Program by the China Association for Science and Technology (2018QNRC001), the Max Planck Partner Group Program funded by the Max Planck Society, and the High-performance Computing Platform of Peking University. It was partially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences through the Grant No. XDB23010200. L. Sun is a member of the LIGO Laboratory. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the United States National Science Foundation, and operates under cooperative agreement PHY–1764464. Advanced LIGO was built under award PHY–0823459.

Data availability {#data-availability .unnumbered}
=================

The data underlying this article will be shared on reasonable request to the corresponding author.

\[lastpage\]

[^1]: Corresponding author. E-mail: lshao@pku.edu.cn (LS)

[^2]: Note that the solution for the Euler angle $\phi$ in @landau1960course and @Zimmermann:1980ba has sign typos when the following theta function and its derivative [@whittaker1988treatise] are used as claimed. The fourth Jacobi theta function is defined as $\vartheta_{4}(u, q)=1+2 \sum_{n=1}^{\infty}(-1)^{n} q^{n^{2}} \cos (2\pi n u)$, and the derivative of $\vartheta_{4}$ with respect to $u$ is $\vartheta^{\prime}_{4}(u, q)=4\pi \sum_{n=1}^{\infty}n\,(-1)^{n+1} q^{n^{2}} \sin (2\pi n u)$.
---
abstract: 'Nearest neighbors is a successful and long-standing technique for anomaly detection. Significant progress has recently been achieved by self-supervised deep methods (e.g. RotNet). Self-supervised features, however, typically under-perform Imagenet pre-trained features. In this work, we investigate whether the recent progress can indeed outperform nearest-neighbor methods operating on an Imagenet pretrained feature space. The simple nearest-neighbor-based approach is experimentally shown to outperform self-supervised methods in accuracy, few-shot generalization, training time and noise robustness, while making fewer assumptions on image distributions.'
bibliography:
- 'example\_paper.bib'
---

Introduction {#sec:intro}
============

Agents interacting with the world are constantly exposed to a continuous stream of data. Agents can benefit from classifying particular data as anomalous, i.e. particularly interesting or unexpected. Such discrimination is helpful in allocating resources to the observations that require it. This mechanism is used by humans to discover opportunities or to alert them of dangers. Anomaly detection by artificial intelligence has many important applications such as fraud detection, cyber intrusion detection and predictive maintenance of critical industrial equipment.

In machine learning, the task of anomaly detection consists of learning a classifier that can label a data point as normal or anomalous. In supervised classification, methods attempt to perform well on normal data, whereas anomalous data is considered noise. The goal of anomaly detection methods is to specifically detect extreme cases, which are highly variable and hard to predict. This makes the task of anomaly detection challenging (and often poorly specified).

The three main settings for anomaly detection are: supervised, semi-supervised and unsupervised. In the *supervised* setting, labelled training examples exist for normal and anomalous data. It is therefore not fundamentally different from other classification tasks. This setting is also too restrictive for many anomaly detection tasks, as many anomalies of interest have never been seen before, e.g. the emergence of new diseases. In the more interesting *semi-supervised* setting, all training images are normal with no included anomalies. The task of learning a normal-anomaly classifier is now one-class classification. The most difficult setting is *unsupervised*, where an unlabelled training set of both normal and anomalous data exists. The typical assumption is that the proportion of anomalous data is significantly smaller than that of normal data. In this paper, we deal with both the semi-supervised and the unsupervised settings.

Anomaly detection methods are typically based on distance, distribution or classification. The emergence of deep neural networks has brought significant improvements to each category. In the last two years, deep classification-based methods have significantly outperformed all other methods, mainly relying on the principle that classifiers trained to perform a certain task on normal data will perform this task well on unseen normal data, but will fail on anomalous data due to poor generalization on a different data distribution. In a recent paper, @gu2019statistical demonstrated that a K nearest-neighbours (kNN) approach on the raw data is competitive with the state-of-the-art methods on tabular data. Surprisingly, kNN is not used or compared against in most current image anomaly detection papers.
In this paper, we show that although kNN on raw image data does not perform well, it outperforms the state of the art when combined with a strong off-the-shelf generic feature extractor. Specifically, we embed every (train and test) image using an Imagenet-pretrained ResNet feature extractor. We compute the K nearest neighbor (KNN) distance between the embedding of each test image and the training set, and use a simple threshold-based criterion to determine if a datum is anomalous. We evaluate this baseline extensively, both on commonly used datasets as well as datasets that are quite different from Imagenet. We find that it has significant advantages over existing methods: i) higher than state-of-the-art accuracy ii) extremely low sample complexity iii) it can utilize very strong external feature extractors, at minimal cost iv) it makes few assumptions on the images e.g. images can be rotation invariant, and of arbitrary size v) it is robust to anomalies in the training set i.e. it can handle the unsupervised case (when coupled with our two-stage approach) vi) it is plug and play, does not have a training stage. Another contribution of our paper is presenting a novel adaptation of kNN to image group anomaly detection, a task that received scant attention from the deep learning community. Although using kNN for anomaly detection is not a new method, it is not often used or compared against by most recent image anomaly detection works. Our aim is to bring awareness to this simple but highly effective and general image anomaly detection method. We believe that every new work should compare to this simple method due to its simplicity, robustness, low sample complexity and generality. Previous Work {#sec:prev} ============= *Pre-deep learning methods:* The two classical paradigms for anomaly detection are: reconstruction-based and distribution-based. Reconstruction-based methods use the training set to learn a set of basis functions, which represent the normal data in an effective way. At test time, they attempt to reconstruct a new sample using the learned basis functions. The method assumes that normal data will be reconstructed well, while anomalous data will not. By thresholding the reconstruction cost, the sample is classified as normal or anomalous. Choices of different basis functions include: sparse combinations of other samples (e.g. kNN) [@eskin2002geometric], principal components [@jolliffe2011principal; @candes2011robust], K-means [@hartigan1979algorithm]. Reconstruction metric include Euclidean, $L_1$ distance or perceptual losses such as SSIM [@wang2004image]. The main weaknesses of reconstruction-based methods are i) difficulty of learning discriminative basis functions ii) finding effective similarity measures is non-trivial. Semi-supervised distribution-based approaches, attempt to learn the probability density function (PDF) of the normal data. Given a new sample, its probability is evaluated and is designated as anomalous if the probability is lower than a certain threshold. Such methods include: parametric models e.g. mixture of Gaussians (GMM). Non-parametric methods include Kernel Density Estimation [@latecki2007outlier] and kNN [@eskin2002geometric] (which we also consider reconstruction-based) The main weakness of distributional methods is the difficulty of density estimation for high-dimensional data. Another popular approach is one-class SVM [@scholkopf2000support] and related SVDD [@tax2004support]. 
SVDD can be seen as fitting the minimal volume sphere that includes at least a certain percentage of the normal data points. As this method is very sensitive to the feature space, kernel methods were used to learn an effective feature space. *Augmenting classical methods with deep networks:* The success of deep neural networks has prompted research combining deep learned features to classical methods. PCA methods were extended to deep auto-encoders [@yang2017towards], while their reconstruction costs were extended to deep perceptual losses [@zhang2018unreasonable]. GANs were also used as a basis function for reconstruction in images. One approach [@zong2018deep] to improve distributional models is to first learn to embed data in a semantic, low dimensional space and then model its distribution using standard methods e.g. GMM. SVDD was extended by @ruff2018deep to learn deep features as a superior alternative for kernel methods. This method suffers from a “mode collapse” issue, which has been the subject of followup work. The approach investigated in this paper can be seen as belonging to this category, as classical kNN is extended with deep learned features. *Self-supervised Deep Methods:* Instead of using supervision for learning deep representations, self-supervised methods train neural networks to solve an auxiliary task for which obtaining data is free or at least very inexpensive. It should be noted that self-supervised representation typically under-perform those learned from large supervised datasets such as Imagenet. Auxiliary tasks for learning high-quality image features include: video frame prediction [@mathieu2015deep], image colorization [@zhang2016colorful; @larsson2016learning], and puzzle solving [@noroozi2016unsupervised]. Recently, @gidaris2018unsupervised used a set of image processing transformations (rotation by $0,90,180,270$ degrees around the image axis), and predicted the true image orientation. They used it to learn high-quality image features. @golan2018deep, have used similar image-processing task prediction for detecting anomalies in images. This method has shown good performance on detecting images from anomalous classes. The performance of this method was improved by @hendrycks2019using, while it was combined with openset classification and extended to tabular data by @bergman2020classification. In this work, we show that self-supervised methods underperform simpler kNN-based methods that use strong generic feature extractors on image anomaly detection tasks. Deep Nearest-Neighbors for Image Anomaly Detection {#sec:method} ================================================== We investigate a simple K nearest-neighbors (kNN) based method for image anomaly detection. We denote this method, Deep Nearest-Neighbors (DN2). Semi-supervised Anomaly Detection {#subsec:semi-supervised} --------------------------------- DN2 takes a set of input images $X_{train}=x_1,x_2..x_N$. In the semi-supervised setting we assume that all input images are normal. DN2 uses a pre-trained feature extractor $F$ to extract features from the entire training set: $$\label{eq:extract} f_i = F(x_i)$$ In this paper, we use a ResNet feature extractor that was pretrained on the Imagenet dataset. At first sight it might appear that this supervision is a strong requirement, however such feature extractors are widely available. We will later show experimentally that the normal or anomalous images do not need to be particularly closely related to Imagenet. 
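To make the pipeline concrete, the following sketch (ours, not the authors' released code) shows one way such Imagenet-pretrained ResNet embeddings can be extracted with torchvision; the specific depth (ResNet-101) and the standard Imagenet preprocessing used here are illustrative choices.

```python
# Minimal sketch: global average-pooled features from an Imagenet-pretrained ResNet.
import torch
import torchvision.models as models
import torchvision.transforms as T

resnet = models.resnet101(pretrained=True)
resnet.fc = torch.nn.Identity()          # keep the 2048-d pooled features
resnet.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(pil_images):
    """Embed a list of PIL images into a (N, 2048) feature matrix."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    return resnet(batch)
```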
The training set is now summarized as a set of embeddings, $F_{train} = f_1,f_2..f_N$. After this initial stage, the embeddings can be stored, amortizing the cost of embedding the training set. To infer if a new sample $y$ is anomalous, we first extract its feature embedding: $f_y = F(y)$. We then compute its kNN distance and use it as the anomaly score: $$\label{eq:knn} d(y) = \frac{1}{k} \sum_{f \in N_k(f_y)}{\|f - f_y\|^2}$$ $N_k(f_y)$ denotes the $k$ nearest embeddings to $f_y$ in the training set $F_{train}$. We elected to use the Euclidean distance, which often achieves strong results on features extracted by deep networks, but other distance measures can be used in a similar way. By verifying whether the distance $d(y)$ is larger than a threshold, we determine if an image $y$ is normal or anomalous.

Unsupervised Anomaly Detection {#subsec:unsupervised}
------------------------------

In the fully-unsupervised case, we can no longer assume that all input images are normal; instead, we assume that only a small proportion of the input images are anomalous. To deal with this more difficult setting (and in line with previous works on unsupervised anomaly detection), we propose to first conduct a cleaning stage on the input images. After the feature extraction stage, we compute the kNN distance between each input image and the rest of the input images. Assuming that anomalous images lie in low-density regions, we remove a fraction of the images with the largest kNN distances. This fraction should be chosen such that it is larger than the estimated proportion of anomalous input images. It will later be shown in our experiments that DN2 requires very few training images. We can therefore be very aggressive in the percentage of removed images, and keep only the images most likely to be normal (in practice we remove $50\%$ of the training images). After removal of the suspected anomalous input images, the remaining images are assumed to contain a very high proportion of normal images. We can therefore proceed exactly as in the semi-supervised case.
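A minimal sketch (ours) of the scoring rule of Eq. (\[eq:knn\]) and of the cleaning stage just described, operating on precomputed feature matrices such as those produced by the extraction sketch above; the $k=2$ and $50\%$ values follow the choices reported later in the paper.

```python
# Sketch of DN2 scoring and of the unsupervised cleaning stage.
import torch

def knn_score(train_feats, query_feats, k=2):
    """Mean squared distance to the k nearest training embeddings, Eq. (knn)."""
    d = torch.cdist(query_feats, train_feats) ** 2          # (Nq, Ntrain)
    return d.topk(k, dim=1, largest=False).values.mean(dim=1)

def clean_training_set(train_feats, keep_frac=0.5, k=2):
    """Drop the fraction of training images with the largest in-set kNN distances."""
    d = torch.cdist(train_feats, train_feats) ** 2
    d.fill_diagonal_(float("inf"))                           # exclude self-distance
    scores = d.topk(k, dim=1, largest=False).values.mean(dim=1)
    keep = scores.argsort()[: int(keep_frac * len(train_feats))]
    return train_feats[keep]

# Usage: scores = knn_score(clean_training_set(train_feats), test_feats), then threshold.
```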
Group Image Anomaly Detection {#subsec:group}
-----------------------------

Group anomaly detection tackles the setting where the input sample consists of a set of images. The particular combination is important, but not the order. It is possible that each image in the set is individually normal but the set as a whole is anomalous. As an example, let us assume normal sets consisting of $M$ images, one randomly sampled image from each class. If we train a point (per-image) anomaly detector, it will be able to detect anomalous sets containing pointwise anomalous images, e.g. images taken from classes not seen in training. An anomalous set containing multiple images from one seen class and no images from another will, however, be classified as normal, as all its images are individually normal. Previously, several deep autoencoder methods were proposed (e.g. @dgroup) to tackle group anomaly detection in images. Such methods suffer from multiple drawbacks: i) high sample complexity, ii) sensitivity to the reconstruction metric, and iii) potential lack of sensitivity to the group structure. We propose an effective kNN-based approach. The proposed method embeds the set by orderless pooling (we chose averaging) over all the features of the images in the set:

1.  Feature extraction from all images in the group $g$,\
    $f^i_g = F(x^i_g)$

2.  Orderless pooling of features across the group:\
    $f_g = \frac{\sum_i f^i_g}{number~of~images}$

Having extracted the group feature described above, we proceed to detect anomalies using DN2.

Experiments {#sec:exp}
===========

In this section, we present extensive experiments showing that the simple kNN approach described above achieves better than state-of-the-art performance. The conclusions generalize across tasks and datasets. We extend this method to be more robust to noise, making it applicable to the unsupervised setting. We further extend this method to be effective for group anomaly detection.

Unimodal Anomaly Detection {#subsec:exp:uni}
--------------------------

The most common setting for evaluating anomaly detection methods is unimodal. In this setting, a classification dataset is adapted by designating one class as normal, while the other classes serve as anomalies. The normal training set is used to train the method, and all the test data are used to evaluate the inference performance of the method. In line with previous works, we report the ROC area under the curve (ROCAUC).

        OC-SVM   Deep SVDD        GEOM             GOAD             MHRot      DN2
  ----- -------- ---------------- ---------------- ---------------- ---------- ----------
  0     70.6     61.7 $\pm$ 1.3   74.7 $\pm$ 0.4   77.2 $\pm$ 0.6   77.5       **93.9**
  1     51.3     65.9 $\pm$ 0.7   95.7 $\pm$ 0.0   96.7 $\pm$ 0.2   96.9       **97.7**
  2     69.1     50.8 $\pm$ 0.3   78.1 $\pm$ 0.4   83.3 $\pm$ 1.4   **87.3**   85.5
  3     52.4     59.1 $\pm$ 0.4   72.4 $\pm$ 0.5   77.7 $\pm$ 0.7   80.9       **85.5**
  4     77.3     60.9 $\pm$ 0.3   87.8 $\pm$ 0.2   87.8 $\pm$ 0.7   92.7       **93.6**
  5     51.2     65.7 $\pm$ 0.8   87.8 $\pm$ 0.1   87.8 $\pm$ 0.6   90.2       **91.3**
  6     74.1     67.7 $\pm$ 0.8   83.4 $\pm$ 0.5   90.0 $\pm$ 0.6   90.9       **94.3**
  7     52.6     67.3 $\pm$ 0.3   95.5 $\pm$ 0.1   96.1 $\pm$ 0.3   **96.5**   93.6
  8     70.9     75.9 $\pm$ 0.4   93.3 $\pm$ 0.0   93.8 $\pm$ 0.9   95.2       **95.1**
  9     50.6     73.1 $\pm$ 0.4   91.3 $\pm$ 0.1   92.0 $\pm$ 0.6   93.3       **95.3**
  Avg   62.0     64.8             86.0             88.2             90.1       **92.5**

  : Anomaly Detection Accuracy on CIFAR10 (ROCAUC $\%$) \[tab:exp\_cifar10\]

                 OC-SVM   GEOM   GOAD   DN2
  -------------- -------- ------ ------ ----------
  FashionMNIST   92.8     93.5   94.1   **94.4**
  CIFAR100       62.6     78.7   -      **89.3**

  : Anomaly Detection Accuracy on Fashion MNIST and CIFAR100 (ROCAUC $\%$) \[tab:exp\_small\_extra\]

We conduct experiments against state-of-the-art methods: deep-SVDD [@ruff2018deep], which combines OCSVM with deep feature learning; Geometric [@golan2018deep]; GOAD [@bergman2020classification]; and the Multi-Head RotNet (MHRot) [@hendrycks2019using]. The latter three all use variations of RotNet. For all methods except DN2, we report the results from the original papers if available. In the case of Geometric [@golan2018deep] and the multi-head RotNet (MHRot) [@hendrycks2019using], for datasets that were not reported by the authors, we run the Geometric code release for low-resolution experiments, and MHRot for high-resolution experiments (as no code was released for the low-resolution experiments).

*Cifar10:* This is the most common dataset for evaluating unimodal anomaly detection. CIFAR10 contains $32 \times 32$ color images from 10 object classes. Each class has $5000$ training images and $1000$ test images. The results are presented in Tab. \[tab:exp\_cifar10\]; note that the performance of DN2 is deterministic for a given train and test set (no variation between runs). We can observe that OC-SVM and Deep-SVDD are the weakest performers.
This is because both the raw pixels and the features learned by Deep-SVDD are not discriminative enough for the distance to the center of the normal distribution to be successful. Geometric and the later approaches GOAD and MHRot perform fairly well but do not exceed $90\%$ ROCAUC. DN2 significantly outperforms all other methods. In this paper, we choose to evaluate the performance without finetuning between the dataset and simulated anomalies (which improves performance on all methods, including DN2). Outlier Exposure is one technique for such finetuning. Although it does not achieve the top performance by itself, improvements were reported when it was combined with MHRot, achieving an average ROCAUC of $95.8\%$ on CIFAR10. This and other ensembling methods can also improve the performance of DN2, but are outside the scope of this paper.

*Fashion MNIST:* We evaluate Geometric, GOAD and DN2 on the Fashion MNIST dataset, consisting of 6000 training images per class and a test set of 1000 images per class. We present a comparison of DN2 vs. OCSVM, Deep SVDD, Geometric and GOAD. We can see that DN2 outperforms all other methods, despite the data being visually quite different from Imagenet, from which the features were extracted.

*CIFAR100:* We evaluate Geometric, GOAD and DN2 on the CIFAR100 dataset. CIFAR100 has 100 fine-grained classes with 500 train images each, or 20 coarse-grained classes with 2500 train images each. Following previous papers, we use the coarse-grained version. The protocol is the same as for CIFAR10. We present a comparison of DN2 vs. OCSVM, Deep SVDD, Geometric and GOAD. The results are in line with those obtained for CIFAR10.

**Comparisons against MHRot:** We present a further comparison between DN2 and MHRot [@hendrycks2019using] on several commonly-used datasets. The experiments give further evidence for the generality of DN2, on datasets where RotNet-based methods are not restricted by low resolution or by image invariance to rotations. We compute the ROCAUC score on each of the first $20$ categories (all categories if there are fewer than $20$), taken in alphabetical order and designated as normal for training. The standard train and test splits are used. All test images from all classes are used for inference, with the appropriate class designated normal and all the rest as anomalies. For brevity of presentation, the average ROCAUC score of the tested classes is reported.

*$102$ Category Flowers [@nilsback2008automated]:* This dataset consists of $102$ categories of flowers, with $10$ training images each. The test set consists of $30$ to over $200$ images per class.

*Caltech-UCSD Birds $200$ [@wah2011caltech]:* This dataset consists of $200$ categories of bird species. Classes typically contain between $55$ and $60$ images, split evenly between train and test.

*CatsVsDogs [@elson2007asirra]:* This dataset consists of $2$ categories, dogs and cats, with $10,000$ training images each. The test set consists of $2,500$ images for each class. Each image contains either a dog or a cat, in various scenes and taken from different angles. The data were extracted from the ASIRRA dataset; we split each class into the first $10,000$ images as train and the last $2,500$ as test.

The results are shown in Tab. \[tab:exp\_mhrot\]. DN2 significantly outperforms MHRot on all datasets.

  Dataset          MHRot   DN2
  ---------------- ------- ----------
  Oxford Flowers   65.9    **93.9**
  UCSD Birds 200   64.4    **95.2**
  CatsVsDogs       88.5    **97.5**

  : MHRot vs. DN2 on Flowers, Birds, CatsVsDogs (Average Class ROCAUC $\%$)[]{data-label="tab:exp_mhrot"}

![Network depth (number of ResNet layers) improves both Cifar10 and FashionMNIST results.[]{data-label="fig:small_depth"}](figures/unimodal/depth_cifar10.png "fig:")
![Network depth (number of ResNet layers) improves both Cifar10 and FashionMNIST results.[]{data-label="fig:small_depth"}](figures/unimodal/depth_fashion.png "fig:")

![Number of neighbors vs. ROCAUC; the optimal number of neighbors $K$ is around $2$.[]{data-label="fig:small_neighbors"}](figures/unimodal/unimodel_knn_auc.png "fig:")

**Effect of network depth:** Deeper networks trained on large datasets such as Imagenet learn features that generalize better than those of shallow networks. We investigated the performance of DN2 when using features from networks of different depths. Specifically, we plot the average ROCAUC for ResNet with 50, 101 and 152 layers in Fig. \[fig:small\_depth\]. DN2 works well with all networks, but performance improves with greater network depth.

**Effect of the number of neighbors:** The only free parameter in DN2 is the number of neighbors used in kNN. We present in Fig. \[fig:small\_neighbors\] a comparison of the average CIFAR10 and FashionMNIST ROCAUC for different numbers of nearest neighbors. The differences are not particularly large, but $2$ neighbors are usually best.

\[fig:wbc\_dior\_samples\]

![(left) A chimney image from the DIOR dataset (right) An image from the WBC Dataset.](figures/unregistered/dior.jpg "fig:")
![(left) A chimney image from the DIOR dataset (right) An image from the WBC Dataset.](figures/unregistered/cell.png "fig:")

  Dataset   MHRot   DN2
  --------- ------- ----------
  DIOR      83.2    **92.2**
  WBC       60.5    **82.9**

  : Anomaly Detection Accuracy on DIOR and WBC (ROCAUC $\%$) \[tab:exp\_inv\]

**Effect of data invariance:** Methods that rely on predicting geometric transformations, e.g.
[@golan2018deep; @hendrycks2019using; @bergman2020classification], use a strong data prior that images have a predetermined orientation (for rotation prediction) and centering (for translation prediction). This assumption is often false for real images. Two interesting cases that do not satisfy this assumption are aerial and microscope images: they do not have a preferred orientation, making rotation prediction ineffective.

*DIOR [@li2020object]:* An aerial image dataset. The images are registered but do not have a preferred orientation. The dataset consists of $19$ object categories that have more than $50$ images with resolution above $120 \times 120$ (the median number of images per class is $578$). We use the bounding boxes provided with the data, and take each object with a bounding box of at least $120$ pixels in each axis. We resize it to $256 \times 256$ pixels. We follow the same protocol as in the earlier datasets. As the images are of high resolution, we use the public code release of Hendrycks [@hendrycks2018deep] as a self-supervised baseline. The results are summarized in Tab. \[tab:exp\_inv\]. We can see that DN2 significantly outperforms MHRot. This is due both to the generally stronger performance of the feature extractor and to the lack of the rotational prior that is strongly used by RotNet-type methods. Note that the images are centered, a prior used by the MHRot translation heads.

*WBC [@zheng2018fast]:* To further investigate the performance on difficult real-world data, we performed an experiment on the WBC Image Dataset, which consists of high-resolution microscope images of different categories of white blood cells. The data do not have a preferred orientation. Additionally, the dataset is very small, with only a few tens of images per class. We use Dataset $1$, which was obtained from Jiangxi Telecom Science Corporation, China, and split it into the $4$ classes that contain more than $20$ images each. We assign the first $80\%$ of the images in each class to the train set and the last $20\%$ to the test set. The results are presented in Tab. \[tab:exp\_inv\]. As expected, DN2 outperforms MHRot by a significant margin, showing its greater applicability to real-world data.

Multimodal Anomaly Detection {#subsec:exp:multi}
----------------------------

  Dataset     Geometric        DN2
  ---------- ----------- ----------
  CIFAR10          61.7    **71.7**
  CIFAR100         57.3    **71.0**

  : Anomaly Detection Accuracy on Multimodal Normal Image Distributions (ROCAUC $\%$)

\[tab:exp\_multi\]

It has been argued (e.g. @ahmed2019detecting) that unimodal anomaly detection is less realistic, as in practice normal distributions contain multiple classes. While we believe that both settings occur in practice, we also present results on the scenario where all classes are designated as normal apart from a single class that is taken as anomalous (e.g. all CIFAR10 classes are normal apart from “Cat”). Note that we do not provide the class labels of the different classes that compose the normal class; rather, we consider them to be a single multimodal class. We believe this simulates the realistic case of having a complex normal class consisting of many different unlabelled types of data. We compared DN2 against Geometric on CIFAR10 and CIFAR100 in this setting. We provide the average ROCAUC across all the classes in Tab. \[tab:exp\_multi\]. DN2 achieves significantly stronger performance than Geometric. We believe this occurs because Geometric requires the network not to generalize on the anomalous data.
However, once the training data is sufficiently varied, the network can generalize even on unseen classes, making the method less effective. This is particularly evident on CIFAR100.

\[fig:group\]

![Number of images per group vs. detection ROCAUC. Group anomaly detection with mean pooling is better than simple feature concatenation for groups with more than $3$ images.](figures/group/group_updated.png)

![image](figures/small/cifar_small.png)
![image](figures/small/fashion_small.png)
![image](figures/unsupervised/unsupervised.png)

Generalization from Small Training Datasets {#subsec:exp:small}
-------------------------------------------

One of the advantages of DN2, which does not involve learning on the normal dataset, is its ability to generalize from very small datasets. This is not possible with self-supervised learning-based methods, which do not learn general enough features to generalize to normal test images. A comparison between DN2 and Geometric on CIFAR10 is presented in Fig. \[fig:small\_unsupervised\]. We plotted the number of training images vs. average ROCAUC. We can see that DN2 can detect anomalies very accurately even from 10 images, while Geometric deteriorates quickly with a decreasing number of training images. We also present a similar plot for FashionMNIST in Fig. \[fig:small\_unsupervised\]. Geometric is not shown, as it suffered from numerical issues for small numbers of images. DN2 again achieved strong performance from very few images.

Unsupervised Anomaly Detection {#subsec:exp:unsupervised}
------------------------------

There are settings where the training set does not consist of purely normal images, but rather a mixture of unlabelled normal and anomalous images. We assume instead that anomalous images make up only a small fraction of the normal images. The performance of DN2 as a function of the percentage of anomalies in the training set is presented in Fig. \[fig:small\_unsupervised\]. The performance degrades somewhat as the percentage of impurities in the training set increases. To improve the performance, we proposed a cleaning stage, which removes the $50\%$ of training images whose $k$ nearest neighbors within the training set are most distant. We then run DN2 as usual. The performance is also presented in Fig. \[fig:small\_unsupervised\]. Our cleaning procedure is clearly shown to significantly reduce the performance degradation caused by training set impurities.

Group Anomaly Detection {#subsec:exp:group}
-----------------------

To compare to existing baselines, we first tested our method on the task in @dgroup. The data consists of normal sets containing $10-50$ MNIST images of the same digit, and anomalous sets containing $10-50$ images of different digits. By simply computing the trace-diagonal of the covariance matrix of the per-image ResNet features in each set of images, we achieved $0.92$ ROCAUC vs. $0.81$ in the previous paper (without using the training set at all).
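The following is a minimal sketch of this per-set scoring rule (it is not the code used to produce the reported number; the array layout and function names are illustrative):

```python
import numpy as np

def group_anomaly_score(set_features: np.ndarray) -> float:
    """Score an unordered set of images by the spread of their embeddings.

    `set_features` is an (n_images, feature_dim) array of per-image ResNet
    features for one set. A large spread -- measured here by the trace of
    the covariance matrix of the features -- suggests the set mixes several
    digit classes, i.e. that it is anomalous.
    """
    centered = set_features - set_features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / max(len(set_features) - 1, 1)
    return float(np.trace(cov))

# Usage sketch: score every test set, then compute ROCAUC against the labels.
# from sklearn.metrics import roc_auc_score
# scores = [group_anomaly_score(f) for f in test_set_features]
# auc = roc_auc_score(is_anomalous, scores)
```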
As a harder task for group anomaly detection in unordered image sets, we designate the normal class as sets consisting of exactly one image from each of the $M$ CIFAR10 classes (specifically the classes with ID $0..M-1$), while each anomalous set consists of $M$ images selected randomly among the same classes (some classes have more than one image and some have zero). As a simple baseline, we report the average ROCAUC (Fig. \[fig:group\]) for anomaly detection using DN2 on the concatenated features of each individual image in the set. As expected, this baseline works well for small values of $M$, where we have enough examples of all possible permutations of the class ordering, but as $M$ grows larger ($M>3$) its performance decreases, as the number of permutations grows exponentially. We compare this method, with 1000 image sets for training, to nearest neighbours of the orderless max-pooled and average-pooled features, and see that mean-pooling significantly outperforms the baseline for large values of $M$. While we could improve the performance of the concatenated features by augmenting the dataset with all possible orderings of the training sets, the number of orderings grows exponentially for non-trivial values of $M$, making this an ineffective approach.

Implementation {#subsec:exp:imp}
--------------

In all instances of DN2, we first resize the input image to $256 \times 256$, take the center crop of size $224 \times 224$, and use an Imagenet pre-trained ResNet ($101$ layers unless otherwise specified) to extract the features just after the global pooling layer. This feature is the image embedding.

Analysis {#sec:analysis}
========

In this section, we perform an analysis of DN2, both by comparing kNN to other classification methods, as well as by comparing the features extracted by the pretrained networks vs. features learned by self-supervised methods.

kNN vs. one-class classification {#subsec:analysis:knn}
--------------------------------

In our experiments, we found that kNN achieved very strong performance for anomaly detection tasks. Let us try to gain a better understanding of the reasons for the strong performance. In Fig. \[fig:tsne\] we can observe t-SNE plots of the test set features of CIFAR10. The normal class is colored in yellow while the anomalous data is marked in blue. It is clear that the pre-trained features embed images from the same class into a fairly compact region. We therefore expect the density of normal training images to be much higher around normal test images than around anomalous test images. This is responsible for the success of kNN methods.

   C=1     C=3     C=5    C=10     kNN
  ------- ------- ------- ------- -------
  91.94   92.00   91.87   91.64   92.52

  : Accuracy on CIFAR10 using K-means approximations and full kNN (ROCAUC $\%$)[]{data-label="tab:exp_kmeans"}

![image](figures/tsne/svdd/cifar10_0_foo.png)
![image](figures/tsne/geom/cifar10_0_foo.png)
![image](figures/tsne/dn2/cifar10_0_foo.png)
![image](figures/tsne/svdd/cifar10_1_foo.png)
![image](figures/tsne/geom/cifar10_1_foo.png)
![image](figures/tsne/dn2/cifar10_1_foo.png)
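A minimal sketch of the feature-extraction and scoring pipeline described in the Implementation subsection, assuming PyTorch and torchvision (the helper names are illustrative and this is not the exact code used for the reported results):

```python
import torch
import torchvision.transforms as T
from torchvision.models import resnet101

# ImageNet-pretrained ResNet-101 with the classification head removed, so that
# a forward pass returns the 2048-dimensional globally pooled embedding.
# (Newer torchvision versions use the `weights=` argument instead of `pretrained=`.)
model = resnet101(pretrained=True).eval()
model.fc = torch.nn.Identity()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(pil_images):
    """List of PIL images -> (N, 2048) tensor of image embeddings."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return model(batch)

def dn2_score(test_feats, train_feats, k=2):
    """Anomaly score: mean distance to the k nearest normal training embeddings.

    (Summing instead of averaging gives identical rankings for a fixed k.)
    """
    dists = torch.cdist(test_feats, train_feats)            # (N_test, N_train)
    return dists.topk(k, dim=1, largest=False).values.mean(dim=1)
```

For the K-means approximation discussed below, `train_feats` would simply be replaced by the cluster centroids.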
kNN has linear complexity in the number of training data samples. Methods such as One-Class SVM or SVDD attempt to learn a single hypersphere, and use the distance to the center of the hypersphere as a measure of anomaly. In this case the inference runtime is constant in the size of the training set, rather than linear as in the kNN case. The drawback is the typically lower performance. Another popular way [@fukunaga1975branch] of decreasing the inference time is using K-means clustering of the training features. This speeds up inference by a ratio of $\frac{N}{K}$. We therefore suggest speeding up DN2 by clustering the training features into $K$ clusters and then performing kNN on the clusters rather than on the original features. Tab. \[tab:exp\_kmeans\] presents a comparison of the performance of DN2 and its K-means approximations with different numbers of means (we use the sum of the distances to the 2 nearest neighbors). We can see that for a small loss in accuracy, the retrieval time can be reduced significantly.

Pretrained vs. self-supervised features {#subsec:analysis:features}
---------------------------------------

To understand the improvement in performance brought by pretrained feature extractors, we provide t-SNE plots of normal and anomalous test features extracted by Deep-SVDD, Geometric and DN2 (Resnet50 pretrained on Imagenet). The top plots are of a normal class that achieves moderate detection accuracy, while the bottom plots are of a normal class that achieves high accuracy. We can immediately observe that the normal class in Deep-SVDD is scattered among the anomalous classes, explaining its lower performance. In Geometric the features of the normal class are a little more localized; however, the density of the normal region is still only moderately concentrated. We believe that the fairly good performance of Geometric is achieved by the massive ensembling that it performs (a combination of $72$ augmentations). We can see that Imagenet pretrained features preserve very strong locality. This explains the strong performance of DN2.

Discussion {#sec:disc}
==========

**A general paradigm for anomaly detection:** Recent papers (e.g. @golan2018deep) advocated the paradigm of self-supervision, possibly with augmentation by an external dataset, e.g. outlier exposure. The results in this paper give strong evidence for an alternative paradigm: (i) learn general features using all the available supervision on vaguely related datasets; (ii) the learned features are expected to be general enough that standard anomaly detection methods (e.g. kNN, k-means) can be used. The pretrained paradigm is much faster to deploy than self-supervised methods and has many other advantages, investigated extensively in Sec. \[sec:exp\]. We expect that for image data that has no similarity whatsoever to Imagenet, using pre-trained features may be less effective. That notwithstanding, in our experiments we found that Imagenet-pretrained features were effective on aerial images as well as microscope images, even though both settings are very different from Imagenet. We therefore expect DN2-like methods to be very broadly applicable.

**External supervision:** The key enabler for DN2’s success is the availability of a high-quality external feature extractor. The ResNet extractor that we used was previously trained on Imagenet. Using supervision is typically seen as being more expensive and laborious than self-supervised methods. In this case however, we do not see it as a disadvantage at all.
We used networks that have already been trained and are as commoditized as free open-source software libraries. They are available completely free, no new supervision at all is required for using such networks for any new dataset, as well as minimal time or storage costs for training. The whole process consists of merely a single PyTorch line, we therefore believe that in this case, the discussion of whether these methods can be considered supervised is purely philosophical. **Scaling up to very large datasets:** Nearest neighbors are famously slow for large datasets, as the runtime increases linearly with the amount of training data. The complexity is less severe for parametric classifiers such as neural networks. As this is a well known issue with nearest neighbors classification, much work was performed at circumventing it. One solution is fast kNN retrieval e.g. by kd-trees. Another solution used in Sec. \[sec:analysis\], proposed to speed up kNN by reducing the training set through computing its k-means and computing kNN on them. This is generalized further by an established technique that approximates NN by a recursive K-means algorithm [@fukunaga1975branch]. We expect that in practice, most of the runtime will be a result of the neural network inference on the test image, rather than on nearest neighbor retrieval. **Non-image data:** Our investigation established a very strong baseline for image anomaly detection. This result, however, does not necessarily mean that all anomaly detection tasks can be performed this way. Generic feature extractors are very successful on images, and are emerging in other tasks e.g. natural language processing (BERT [@devlin2018bert]). This is however not the case in some of the most important areas for anomaly detection i.e. tabular data and time series. In those cases, general feature extractors do not exist, and due to the very high variance between datasets, there is no obvious path towards creating such feature extractors. Note however that as deep methods are generally less successful on tabular data, the baseline of kNN on raw data is a very strong one. That withstanding, we believe that these data modalities present the most promising area for self-supervised anomaly detection. @bergman2020classification proposed a method along these lines. Conclusion {#sec:conc} ========== We compare a simple method, kNN on deep image features, to current approaches for semi-supervised and unsupervised anomaly detection. Despite its simplicity, the simple method was shown to outperform the state-of-the-art methods in terms of accuracy, training time, robustness to input impurities, robustness to dataset type and sample complexity. Although, we believe that more complex approaches will eventually outperform this simple approach, we think that DN2 is an excellent starting point for practitioners of anomaly detection as well as an important baseline for future research.
{ "pile_set_name": "ArXiv" }
--- abstract: | The structural, electronic and vibrational properties of atomic wires composed of the early alkali metals lithium and sodium are studied using density functional perturbation theory. The s-like electronic states near the Fermi level couple very weakly to longitudinal acoustic phonons and not at all to the transverse acoustic phonons, which results in a weak overall electron-phonon coupling. The results are compared to earlier studies on the electron-phonon coupling in metallic atomic wires and reinforces the idea that s-like states at the Fermi level give rise to weak electron-phonon coupling in one-dimension, in contrast with materials containing d-like states at the Fermi level which have correspondingly larger electron-phonon coupling due to interactions with transverse phonons. \*Correspondence to lanzin@rpi.edu. author: - 'Nicholas A. Lanzillo$^*$' - 'Saroj K. Nayak' bibliography: - 'Alkali.bib' title: 'Weak electron-phonon coupling in the early alkali atomic wires' --- Introduction ============ The interplay between electrons and phonons gives rise to many interesting and important physical properties, including transport phenomena and superconductivity[@Ashcroft1976solid]. Some of the earliest attempts at characterizing the electron-phonon interaction from a first-principles perspective using a combination of density functional theory and linear response theory have resulted in remarkably accurate descriptions of phonon frequencies and electron-phonon coupling constants for many common metals[@savrasov1994linear; @savrasov1996electron; @bauer1998electron]. The effects of reduced dimensionality and extreme quantum confinement give rise to distinct electron-phonon coupling relative to the bulk. There has been extensive work, both theoretical[@Frederiksen2004inelastic; @Picaud2003phonons; @delaVega2006universal; @Agrait2002onset] and experimental[@Agrait2002electron; @bohler2009point], on the electron-phonon interaction in short, finite-length, single-atom thick atomic wires. However, there has been comparatively little work done exploring the electron-phonon interaction using Eliashberg Theory in these types of systems. Up to this point, Eliashberg Theory has been used to comparatively study the electron-phonon interactions in Al and Pb[@verstraete2006phonon], Na[@sen2006peierls] and Al, Cu, Ag and Au[@simbeck2012aluminum]. In this paper, we explore the effects of the electron-phonon interaction in the early alkali metals, Li and Na, which have simple, nearly-spherical Fermi surfaces to paint a more complete picture of electron-phonon coupling in single-atom thick, one-dimensional metallic wires. Previous results[@verstraete2006phonon; @simbeck2012aluminum; @sen2006peierls] indicate that there exists a correlation between the electronic character at the Fermi level (i.e. s-like, p-like or d-like) and the type of phonon (longitudinal or transverse) that the electron will couple to. In Al and Ag atomic wires, it is found that the s-like and p-like electrons at the Fermi level couple to longitudinal acoustic phonons and result in an overall weak coupling, while the d-like electrons in Cu, Au and Pb atomic wires couple to imaginary frequency transverse modes, giving rise to a large electron-phonon coupling. In this paper, we extend this understanding to include the electron-phonon coupling in the early alkali metals Li and Na, which possess simple, nearly-spherical Fermi surfaces. 
Theory
======

Electronic wavefunctions and energies are calculated via density functional theory[@kohn1964inhomogeneous; @kohn1965self], while the phonon dispersion relations and electron-phonon interactions are calculated via density functional perturbation theory[@gonze1997dynamical]. The central ingredient in calculating the electron-phonon coupling is the interaction matrix element, $g_{kk'}$: $$g_{kk'}=\sqrt{\frac{\hbar}{2NM\omega_q}}\,\vec{u_q}\,\langle k'|\delta V_{SCF}|k\rangle$$ where $k$ and $k'$ represent the electronic wavefunctions before and after a phonon collision event, $M$ is the ionic mass, $N$ is the number of phonon modes, $\vec{u_q}$ is the phonon eigenvector and $\delta V_{SCF}$ is the self-consistent change in the potential energy in the presence of the phonon-distorted lattice geometry. The Eliashberg Spectral Function ($\alpha^2F(\omega)$) is then defined as an integral over the matrix elements squared: $$\alpha^2F(\omega)=g(\epsilon_F)\sum_{k,k'}|g_{kk'}|^2\delta(\omega-\omega_q)$$ where $g(\epsilon_F)$ is the electronic density of states at the Fermi level. This function can be thought of as a phonon density of states weighted according to scattering interactions with electrons near the Fermi level. In essence, we keep only the phonons from the density of states that participate in scattering events with electrons. The electron-phonon coupling constant ($\lambda$) is defined as a weighted integral over the Eliashberg Spectral Function: $$\lambda = 2 \int \frac{\alpha^2F(\omega)}{\omega}\,d\omega$$ The factor of $1/\omega$ has the effect of filtering out the contributions from high-frequency phonons to the overall electron-phonon coupling, instead placing most of the weight on the lower-frequency modes. This makes intuitive sense, since the electronic energy levels before and after a phonon event are related via the phonon frequency ($\omega$): $$\epsilon_f = \epsilon_i \pm \hbar \omega$$ where the $+$ is for phonon absorption and the $-$ is for phonon emission. In general, the change in energy between the initial and final electronic states is small, since a typical phonon energy is on the order of meV. Thus, low-frequency phonons are expected to play a greater role in scattering events with electrons than high-frequency phonons.

Computational Methods
=====================

Calculations were performed using the ABINIT[@gonze2002first; @gonze2005brief; @gonze2009abinit] software package, which is an implementation of density functional theory[@kohn1964inhomogeneous; @kohn1965self] utilizing pseudopotentials and a plane-wave basis set. Lithium and sodium atoms were treated using Troullier-Martins norm-conserving pseudopotentials[@troullier1991efficient] with plane-wave cutoff energies of 80.0 Hartree for Li and 20.0 Hartree for Na. We considered single-atom unit cells with 15.0 Bohr of vacuum in the lateral directions, which is large enough to ensure that the interaction between neighboring supercells is negligible. The interatomic separation was optimized in each case until the forces on the atoms were less than 0.05 eV/Angstrom. We chose k-point sampling of $1\times 1\times64$ for electronic structure calculations and a finer grid of $1 \times 1 \times 128$ for phonon calculations. Convergence with respect to the number of k-points was carefully checked in each case.
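As an illustration of how $\lambda$ is obtained in practice from a tabulated spectral function, the following is a minimal numerical sketch (the function name and low-frequency cutoff are illustrative assumptions, not part of the ABINIT workflow itself):

```python
import numpy as np

def lambda_from_a2f(omega, a2f, omega_min=1e-6):
    """Electron-phonon coupling constant: lambda = 2 * integral a2F(w)/w dw.

    omega : 1D array of phonon frequencies (any consistent units)
    a2f   : Eliashberg spectral function alpha^2 F sampled on `omega`
    Frequencies at or below `omega_min` are excluded to avoid the 1/w
    singularity; only real, positive-frequency modes contribute.
    """
    omega = np.asarray(omega, dtype=float)
    a2f = np.asarray(a2f, dtype=float)
    mask = omega > omega_min
    return 2.0 * np.trapz(a2f[mask] / omega[mask], omega[mask])
```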
Results and Discussion
======================

The starting point for our study was to optimize the interatomic separation for each atomic wire. This was done via two methods; first we minimized the forces on the ions until they were converged below 0.05 eV/Angstrom, and then we calculated potential energy curves within a window of 1-2 atomic units surrounding the minimum energy value to get an idea of the underlying energy landscape. The potential energy curves for our wires are shown in Figure 1. Each wire shows a well-defined potential energy minimum for the linear configuration in the neighborhood of 3 Angstrom. While we note that the potential energy minimum for the linear configuration is likely a metastable state, similar to the cases of Al, Pb and Na[@verstraete2006phonon; @sen2006peierls], searching for other stable geometries like zig-zag configurations is beyond the scope of the current work, since we only seek to elucidate the effects of electron-phonon coupling in linear wires. The minimum-energy separation for the lithium wire is 2.96 Angstrom, while for the sodium wire it is 3.30 Angstrom. As expected, the larger sodium atom prefers a larger interatomic separation relative to the smaller lithium atom. The electronic energy dispersion is calculated for each wire in order to verify that the wire retains its metallicity when confined to a single spatial dimension and also to investigate the character of energy bands near the Fermi level. The electronic structure and angular momentum resolved density of states for each atomic wire are shown in Figure 2.

![The electronic band structure and angular momentum resolved partial density of states for (a) lithium and (b) sodium atomic wires.](Figure2)

We see that each wire has a single band crossing the Fermi level and that the character of each band is predominantly s-like at the Fermi level, although the p-like orbitals make a small contribution in the case of lithium. The d-like contributions are not identified because they are unoccupied and lie far enough away from the Fermi level that they don’t contribute to the net electron-phonon coupling constant. In both cases, the electron character shifts from s-like below the Fermi level (at the Gamma point) to p-like above the Fermi level (at the X point). Next, we turn to the phonons. The phonon band structure, Eliashberg Spectral Function and electron-phonon coupling constant are shown in Figure 3.

![The phonon band structure, Eliashberg Spectral Function and electron-phonon coupling constant for atomic wires of (a) lithium and (b) sodium.](Figure3)

The phonon band structure shows the presence of one longitudinal acoustic phonon mode and two degenerate transverse acoustic modes, due to the one-dimensional symmetry of the wires. The transverse acoustic phonon modes dip into imaginary frequencies, which indicates a preference for the wires to distort into a zig-zag configuration. Imaginary frequency transverse phonon modes have been observed in atomic wires composed of Al, Cu, Ag, Au and Pb[@verstraete2006phonon; @sen2006peierls; @simbeck2012aluminum], and their degeneracy has been labelled accordingly in the figure. The spectral function shows peaks only around the longitudinal phonon modes, indicating that electrons do not couple at all to the imaginary frequency transverse modes. Since the longitudinal phonons have a much larger absolute value of frequency than the transverse phonons, the factor of $1/\omega$ results in a weak electron-phonon coupling of only $\lambda = 0.1$ for both metals.
This number is larger than the coupling constants in Al and Ag atomic wires but smaller than the coupling constants in Cu and Au atomic wires[@simbeck2012aluminum]. Comparing these numbers to the electron-phonon coupling in the bulk, we see that both metals show a reduced overall coupling in the atomic wire configuration. The bulk coupling for BCC Li is $\lambda=0.45$[@liu1991electron; @profeta2006superconductivity], while for Na it is $\lambda=0.18$ [@bauer1998electron]. The sizeable reduction in electron-phonon coupling is attributed to the strong effects of quantum confinement in one dimension. We note that while results for the electron-phonon coupling have been published[@sen2006peierls], the k-point sampling was not dense enough to attain an accurate value of the electron-phonon coupling constant. We have reproduced the published results using the reported k-point grid ($1\times 1 \times 24$, resulting in a coupling constant of $\lambda=0.01$) but note that our k-point sampling of $1\times 1 \times 128$ gives a more accurate and fully-converged value of the electron-phonon coupling constant $\lambda$ ($\lambda=0.1$). It is worth noting that the agreement (to one decimal place) of the electron-phonon coupling constants in the Li and Na atomic wires is coincidental. While the maximum phonon frequencies are larger in the Li wire (extending up to 400 cm$^{-1}$), the peak heights of the spectral function are also larger (rising to around $\alpha^2F(\omega)=0.2$). In the case of the Na atomic wire, the maximum phonon frequencies are smaller (extending only to around 150 cm$^{-1}$) but the spectral function only rises to around $\alpha^2F(\omega)$=0.15, yielding a very similar overall coupling strength.

Conclusion
==========

In conclusion, we have shown that weak electron-phonon coupling prevails in single-atom thick metallic wires composed of the early alkali metals Li and Na. The weak coupling is the result of the s-like electrons at the Fermi level coupling exclusively to the longitudinal acoustic phonon branch, which has a larger magnitude of frequency than the transverse modes. The higher frequencies are effectively damped by the factor of $1/\omega$ in the formula for the electron-phonon coupling constant, resulting in very small overall coupling. When compared to earlier results on other metals, a trend becomes evident in which s- and p-like electrons at the Fermi level result in weak electron-phonon coupling in one dimension, while the presence of d-like electrons results in stronger coupling due to the involvement of the transverse acoustic phonons. Future studies will focus on the calculation of the electron-phonon coupling for atomic wires composed of more complicated metals as well as studies of the electron-phonon coupling at contacts and interfaces. This work used computational resources provided by the Computational Center for Nanotechnology Innovations (CCNI) at Rensselaer Polytechnic Institute. The authors acknowledge support by the Army Research Lab Multiscale Multidisciplinary Modeling of Electronic Materials (MSME) Collaborative Research Alliance (CRA) and the INDO-US Forum.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We prove that the functor $X\mapsto A(\Sigma X)$ from connected pointed spaces to spectra, given by Waldhausen’s $K$-theory, splits as a product of its Goodwillie derivatives.' author: - | Crichton Ogle\ Dept. of Mathematics\ The Ohio State University\ `ogle@math.ohio-state.edu` title: '[On the homotopy type of $A(\Sigma X)$]{}' --- .5in Introduction/Statement of results ================================= .4in In [@w2], Waldhausen constructs a map $$W:A(X)\to\Omega^\infty\Sigma^\infty(X_+)$$ (where $A(X)$ denotes the Waldhausen $K$-theory of the space $X$), and shows that evaluation on the image of $M : \Omega^\infty\Sigma^\infty(X_+)\to A(X)$ induced by the inclusion of monomial matrices produces a self-map $W\circ M : \Omega^\infty\Sigma^\infty(X_+) \to \Omega^\infty\Sigma^\infty(X_+)$ homotopic to the identity by a homotopy natural in $X$. This yields a splitting of $\Omega^\infty\Sigma^\infty(X_+)$ off of $A(X)$ (as well as its stabilization $A^S(X)$), a fact which plays a key role in the proof of the Fundamental Theorem of Waldhausen $K$-theory relating $A(X)$ to pseudo-isotopy theory ([@w2], [@wm], [@w] and finally [@jrw]): $A(X) \simeq \Omega^\infty\Sigma^\infty(X_+) \times Wh^{Diff}(X)$ where $\Omega^2 Wh^{Diff}(X) \simeq \wp(X)$ = the stable pseudo-isotopy space of $X$ (as defined by Hatcher-Wagoner-Igusa). The construction of $W$ is in stages. It is first shown that the homotopy fibre $${\overline}{A}(S^n \wedge X_+) := hofibre(A(S^n \wedge X_+) \to A(*))$$ can be described through a certain range of dimensions (approximately $2n$) in terms of a type of cyclic bar construction. On this cyclic bar construction Waldhausen defines a map to $\Omega^\infty\Sigma^\infty(X_+)$ compatible with stabilization. The result is a map $A^S(X) \to \Omega^\infty\Sigma^\infty(X_+)$ natural in $X$, and precomposition with the stabilization map $A(X) \to A^S(X)$ yields $W$. In this paper we present a generalization of Waldhausen’s map $W$. Specifically let $X$ and $Y$ be pointed simplicial sets, $X$ connected. Then there exists a : $$\overline{Tr}_X(Y) : \underset{n\to\infty}{\lim}\Omega^n hofibre({\overline}{A}(\Sigma(X \wedge \Sigma^n Y)) \to \overline A(\Sigma X)) \to \Omega^\infty\Sigma^\infty(\Sigma(\underset{q\ge 1}{\vee} | X^{[q-1]} \wedge Y |)).$$ This map is natural in $X$ and $Y$. The main application is to prove .2in [****]{} [*For connected $X$ there is a weak equivalence of infinite loop spaces $$\tilde\rho = \underset{q\ge 1}{\prod} \tilde\rho_q : \Omega^\infty\Sigma^\infty(\Sigma(\underset{q\ge 1}{\vee} E\mathbb Z/q \underset{\mathbb Z/q}{\leftthreetimes} |X|^{[q]})) \overset\simeq{\longrightarrow} \overline A(\Sigma X)$$ natural in $X$, where the action of $\Bbb Z/q$ on $|X|^{[q]}$ is given by cyclic permutation.*]{} .2in It should be noted that an alternative approach to the $p$-adic completion of this result would be to use the main result of \[6A\], which determines ${\overline}{A}(X)^{\wedge}_p$ for arbitrary 1-connected spaces (not just suspension spaces). Theorem A is, however, an integral identification of ${\overline}{A}(\Sigma X)$ obtained by explicit maps in a stable range. .2in Theorem A has an unfortunate history. In 1986, this result was announced simultaneously by the author and the authors of [@ccgh], as a solution to a conjecture posed by Goodwillie a few years prior. Unfortunately both the unpublished [@o1], [@o2], as well as [@ccgh], included a number of technical errrors. 
The proof of Theorem A we give here begins with the line of argument attempted in [@ccgh], then modified along the lines of [@w2]. An outline is as follows. In section 1 we recall the necessary results from [@w2] and Goodwillie’s Calculus of Functors ([@g1] – [@g3]), and define the maps $\tilde\rho_q$ used in the proof. In section 2, we follow the arguments of [@w2] in constructing the trace map $\overline{Tr}_X(Y)$, culminating in section 2.3 where we use $\overline{Tr}_X(Y)$ to explicitly compute the first derivative of $\tilde\rho_q$ at a connected space $X$ (in the sense of Goodwillie). Appealing to Goodwillie’s convergence criteria for functor calculus [@g2], together with his computation of the first differential of $\wp (X)$ [@g1] and the Fundamental Theorem of [@jrw] cited above then completes the proof of the result. It is quite likely that an integral statement similar to the above could alternatively be established by the methods of [@dgm]. .2in This paper represents work done by the author during the period 1988 - 1992, and as such represents a classical approach to the construction of trace maps on Waldhausen $K$-theory. During that period, the author benefitted from discussions with a number of mathematicians, including R. Schwänzl and R. Vogt (who carefully critiqued [@o1] and [@o2]), and Z. Fiedorowicz (with whom we were able to resolve some of the technical issues of these earlier attempts in [@fov]). But the [*sine qua non*]{} in all of this is the work of Waldhausen and Goodwillie; Waldausen for his development of the $K$-theory which bears his name and his contribution to the proof of the Fundamental Theorem stated above, and Goodwillie for his invention of the Calculus of Functors and the computation of the derivatives of stable pseudo-isotopy. What appears below is simply a corollary of their foundational work. .2in During the initial stages of this paper, the author enjoyed the hospitality of the Max-Plank-Institut für Mathematik in Bonn. This work partially supported by a grant from the National Science Foundation. We would like to thank Tom Goodwillie for his critical reading of earlier versions of this work, and the suggested improvements that followed. We would also like to thank the referee for the final version of this paper for his helpful and illuminating critque. Background ========== Waldhausen $K$-theory --------------------- We recall the construction of $A(X)$ as given in [@w2]. Let $X$ be a pointed, connected simplicial set, $GX$ its Kan loop group. Let $H^n_k (|GX|)$ denote the total singular complex of the topological monoid $Aut_{|GX|} (\overset {k}{\vee}\ S^n \wedge |GX|_+)$ of $|GX|$-equivariant self-homotopy equivalences of the free basepointed $|GX|$-space $(\overset {k}{\vee}\ S^n \wedge |GX|_+)$. $H^n_k(|M|)$ is a mapping space (for a simplicial monoid $M$), for which we will adopt the convention that $AB = B\circ A$. 
$H^n_k (|GX|)$ identifies naturally with a set $\overline M^n_k (|GX|_+)$ of path components of $$M^n_k (|GX|_+) = Map (\overset k {\vee} \ S^n,\overset k {\vee}\ S^n \wedge|GX|_+)$$ under the inclusion $$H^n_k (|GX|) \hookrightarrow Map_{|GX|} (\overset k {\vee} S^n \wedge |GX|_+, \overset k {\vee} S^n \wedge |GX|_+) \cong Map (\overset k {\vee} S^n, \overset k {\vee} S^n\wedge |GX|_+)\; .$$ One has stabilization maps $$M^n_k (|GX|_+)\overset\imath{\longrightarrow} M^n_{k+1} (|GX|_+)$$ given by wedge product with the inclusion $S^n \hookrightarrow S^n \wedge |GX|_+$, suspension maps $M^n_k (|GX|_+)\overset\Sigma{\longrightarrow} M^{n+1}_k (|GX|_+)$ given by smash product with the identity map on $S^1$, as well as pairing maps $$M^n_k (|GX|_+)\times M_\ell^n (|GX|_+) \to M^n_{k+\ell} (|GX|_+)$$ induced by wedge-sum. Restricted to $\{H^n_k (|GX|)\}_{k\ge 0}$, this pairing gives $\underset{k\ge0}{\coprod} H^n_k (|GX|)$ the structure of a topological permutative category [@s1] for all $n\ge 0$. These operations – wedge sum, suspension, stabilization – commute up to natural isomorphism. So letting $H_k (|GX|) = \underset{\Sigma}{\underset{\to}{\lim}} H^n_k (|GX|)$ and $H(|GX|) = \underset{k}{\underset{\to}{\lim}} H_k (|GX|)$ we see that $\underset{k\ge0}{\coprod} H_k (|GX|)$ is also a topological permutative category under wedge-sum. Waldhausen’s definition of $A(X)$ is as follows. If $X$ is a pointed connected simplicial set, $A(X) = \Omega B (\underset{k\ge 0}{\coprod} BH_k (|GX|)) \simeq \Bbb Z \times BH (|GX|)^+$, where $\simeq$ means weakly equivalent. If $X$ is a connected basepointed space, $A(X)$ is defined to be $A(Sing(X))$ (here $Sing(X)$ denotes the basepointed total singular complex of $X$). Similarly, if $X$ is a connected pointed simplicial space, $A(X) \overset{\text{def}}{=} A(Sing|X|)$. $A(X)$ is a homotopy functor, in the sense that $X\simeq Y$ implies $A(X)\simeq A(Y)$.

We will use the notation $\Sigma U$ to denote the reduced suspension of $U$. If $|X| \simeq\Sigma|Z|$, where $Z$ is a simplicial space connected in each degree, then $GX$ is weakly equivalent to the simplicial James monoid $JZ$, which in degree $q$ is the free monoid on the pointed space $Z_q$. In this case we can use $JZ$ in place of the Kan loop group $GX$ in the above constructions. The result is an equivalence $A(\Sigma Z) \simeq \Omega B (\underset{k\ge0}{\coprod} BH_k (|JZ|))$. In studying $A(\Sigma Z)$, we will use constructions from §2 of [@w2]. The first, due to Segal, generalizes the bar construction which associates to a monoid its nerve. Thus, a partial monoid is a basepointed set $M$ together with a partially defined composition law $$M\times M\supset M_2 \overset\mu{\hookrightarrow} M$$ satisfying

- $M\vee M \subset M_2$, with $\mu(*,m) = \mu(m,*) = m$ and

- $\mu(\mu(m_1,m_2), m_3) = \mu(m_1,\mu(m_2,m_3))$ in the sense that if one is defined then so is the other, and they are equal.

To avoid confusion, we include as part of the data associated to a partial monoid the sets $$\{ M_p\subseteq\text{ composable }p\text{-tuples in }M\}_{p\ge 0}$$ where $M_0 = *$, $M_1 = M$, $M_2$ is as given above, and composable means that the iterated product is defined. We require that the face and degeneracy maps, defined in the usual way, induce a simplicial structure on $\{M_p\}_{p\ge 0}$. The nerve of $M$ is then the simplicial set $$\{[p]\mapsto M_p\}.$$ Let $M$ be a monoid, $S$ a set on which $M$ acts on both sides, with $(ms)m' = m(sm')$. One can then form the cyclic bar construction of $M$ with coefficients in $S$.
It is a simplicial set $N^{cy}(M,S)$ which in degree $q$ is $M^q\times S$. The face and degeneracy maps are given by the following formulae (see [@w2], §2): $$\begin{aligned} \partial_0(m_1,\dots,m_q;s) &= (m_2,m_3,\dots,m_q;sm_1)\\ \partial_i(m_1,\dots,m_q;s) &= (m_1,\dots,m_i m_{i+1},\dots, m_q;s),\qquad 1\le i \le q-1\\ \partial_q(m_1,\dots,m_q;s) &= (m_1,\dots,m_{q-1};m_q s)\\ s_i(m_1,\dots,m_q;s) &= (m_1,\dots,m_i,1,m_{i+1},\dots;s), \qquad 0\le i \le q.\end{aligned}$$ As noted in [@w2], the double bar construction is a special case of this cyclic bar construction where $S$ appears as a cartesian product of a left $M$-set and a right $M$-set. When $M$ is a grouplike monoid ($\pi_0M$ is a group) and $S=M$ with the induced $M$-action on the left and right, there is a weak equivalence between $N^{cy}(M,M)$ and $BM^{S^1} = $ the free loop-space of $BM$. The construction of $N^{cy}(M,S)$ extends in the obvious way to a simplicial monoid $M$ acting on a simplicial set $S$.

It is sometimes the case that $S$ itself is a partial monoid which admits a left and right $M$-action. In this case one wants to know that the cyclic bar construction $N^{cy}(M,S)$ can be done in such a way as to be compatible with the partial monoid structure on $S$. So let $M$ be a monoid. By a left $M$-monoid we will mean a partial monoid $E$ together with a basepointed $M$-action $M\times E\to E$ compatible with the partial monoid structure on $E$ in the sense that for each $m\in M$ the map $$\begin{gathered} E\to E\;;\qquad e\mapsto me \end{gathered}$$ is a homomorphism of partial monoids. A right $M$-monoid is similarly defined, and an $M$-bimonoid is a partial monoid equipped with compatible left and right $M$-monoid structures. Given such an $M$-bimonoid $E$ satisfying an obvious “saturation” condition \[p. 367, W2\], the semidirect product $M\ltimes E$ is the partial monoid with composition given by $(m,e) (m',e') = (mm', (em') (me'))$ whose nerve is the simplicial set $$\{[p]\mapsto M^p\times E_p\}_{p\ge 0},$$ where $E_p$ is the set of composable $p$-tuples (as defined above). When $E$ is commutative, we will write the product in $E$ additively. Clearly this construction can be done degreewise when $M$ and $E$ are simplicial. If the partial monoid structure on $E$ has not been specified, we will assume it is the trivial one, for which $E_p = \overset p{\vee} (E,*)$. The simplicial set $\{[p]\mapsto \overset p{\vee} (E,*)\}$ is a left (resp. right, resp. bi-) monoid over $M$ if $E$ is. Iteration of this construction yields an $M$-monoid structure on a space whose realization is an iterated suspension of $|E|$, and which agrees with that induced by the given action of $M$ on $E$ together with the trivial action on the suspension coordinates. A left (resp. right) $M$-space is a space equipped with a left (resp. right) action of $M$, not necessarily basepointed. Obviously, a left (resp. right) $M$-monoid is a left (resp. right) $M$-space, though not conversely. Similarly, we may define an $M$-bispace to be one equipped with compatible left and right $M$-actions.

Suppose that $A$ is a monoid, and that we are given an inclusion $A\hookrightarrow M$ of $A$ into an $A$-bispace which is a morphism of $A$-bispaces. Identifying $A$ with its image in $M$, one may define a partial monoid by $$M_p := \overset p{\vee} (M,A) = \overset p{\underset{j=1}{\cup}} A^{j-1} \times M \times A^{p-j}.$$ The nerve of the resulting partial monoid $\{[p] \mapsto M_p\}$ is referred to as a generalized wedge (this construction is due to Waldhausen).
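Concretely, in low degrees this definition gives $$\overset 1{\vee} (M,A) = M, \qquad \overset 2{\vee} (M,A) = (M\times A)\,\cup\,(A\times M)\ \subset\ M\times M,$$ so that a $p$-simplex is a $p$-tuple of elements of $M$ with at most one coordinate lying outside $A$.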
Taking $A = \{pt\}$ yields $\{[p] \mapsto \overset p{\vee} (M,*)\}$ whose realization is homeomorphic to $\Sigma|M|$ (as discussed above). It is often useful to approximate the nerve of a monoid $M$ by generalized wedges. A straightforward argument \[Lemma 2.2.1, W2\] shows that if $A\hookrightarrow M$ is an $(n-1)$-connected inclusion of monoids, the induced inclusion $$\{[p]\mapsto \overset p{\vee} (M,A)\} \hookrightarrow \{[p] \mapsto \overset p{\vee} (M,M)\} = NM$$ is $(2n-1)$-connected. As one can easily see, a fixed monoid may admit many different partial monoid structures. A key result concerning the nerve of a semidirect product is provided by \[Lemma 2.3.1, W2\]; it provides a map $$u : diag (N^{cy} (M,NE))\to N(M\ltimes E)$$ which is a weak equivalence when $\pi_0(M)$ is a group. Here $M$ is a simplicial monoid, $E$ a simplicial $M$-bimonoid, and $N^{cy}(M,NE)$ the cyclic bar construction of $M$ acting on the nerve of the partial monoid $E$. The “diagonal” structure is with respect to the simplicial coordinates coming from $N^{cy}(_-)$ and $NE$. The saturation condition referred to above, as well as the condition that $\pi_0(M)$ is a group, will always be satisfied in our case. As we will need to know $u$ explicitly later on, we recall that it is given on $n$-simplices by the formula \[p. 369, W2\]: $$\label{eqn:1.1.2} u(m_1,\dots,m_n;e_1,\dots,e_n) = (m_1,(\overset n{\underset{i=1}{\prod}} m_i)e_1 m_1; m_2,(\overset n{\underset{i=2}{\prod}} m_i) e_2(m_1 m_2);\cdots; m_n, m_n e_n (\overset n{\underset{i=1}{\prod}} m_i)).$$ Let us return to considering $JZ$ and $H^n_k(|JZ|)$ (for connected $Z$). We will be interested in the case when $Z = X\vee Y$. For a positive integer r, let $F_r(X,Y) \subset J(X\vee Y)$ denote the subset which in each degree consists of elements of word-length at most $r$ in $Y$. This is clearly a simplicial subset. There is also a natural partial monoid structure on $F_r(X,Y)$, where two elements are composable if their product in $J(X\vee Y)$ lies in $F_r(X,Y)$. We will denote $F_1(X,Y)/F_0(X,Y)$ by $\overline F_1(X,Y)$. Note that $\overline F_1(X,Y) \cong J(X)_+\wedge Y\wedge J(X)_+$ for connected $Y$. Now consider the projection maps $$\begin{gathered} F_1(X,Y)\twoheadrightarrow F_1(X,*)= JX\;\; \text{induced by}\;\;Y\twoheadrightarrow *, \label{eqn:f1}\\ F_1(X,Y) \twoheadrightarrow \overline F_1(X,Y).\label{eqn:f2}\end{gathered}$$ We observe that all spaces in sight admit compatible left and right $J(X)$-actions, and these maps commute with the action. Also both projection maps are partial monoid maps, where we take the trivial monoid structure on $\overline F_1(X,Y)$, which with the given left and right actions of $J(X)$ is a $J(X)$- bimonoid. Let $$\overline M^n_k(|F_1(X,Y)|_+) = \overline M^n_k(|J(X,Y)|_+) \cap M^n_k(|F_1(X,Y)|_+).$$ \[lemma:p1p2\] The projection maps in (\[eqn:f1\]), (\[eqn:f2\]) induce maps $$\begin{aligned} &\overline M_k^n(|F_1(X,Y)|_+) \overset{p_1}{\twoheadrightarrow} \overline M_k^n(|J(X)|_+)\cong H_k^n(|J(X)|), \\ &\overline M_k^n(|F_1(X,Y)|_+) \overset{p_2}{\twoheadrightarrow} M_k^n(|\overline F_1(X,Y)|).\end{aligned}$$ All four spaces admit compatible left and right actions of $H^n_k(|J(X)|)$, and these maps commute with the actions. 
As a result, these projection maps induce a map of generalized wedges $$\begin{aligned} \{[p] &\mapsto \overset {p}{\vee}\; (\overline M^n_k (|F_1(X,Y)|_+), H^n_k(|JX|))\}\\ \to \{[p] &\mapsto \overset {p}{\vee}\; (H^n_k(|JX|) \ltimes M^n_k (|\overline F_1(X,Y)|), H^n_k(|JX|))\}\end{aligned}$$ where the semi-direct product $H^n_k(|J(X)|) \ltimes M^n_k(|\overline F_1(X,Y)|)$ is formed using the trivial monoid structure on $M^n_k(|\overline F_1(X,Y)|)$ together with the given left and right (basepointed) $H^n_k(|J(X)|)$-actions. The left and right actions of $H^n_k(|J(X)|)$ on $H^n_k(|J(X \vee Y)|)$ induce compatible left and right actions on $\overline M^n_k(|F_1(X,Y)|_+)$. These actions are functorial in $Y$, hence natural with respect to the map $Y\twoheadrightarrow *$ inducing the projection map $p_1$. It is not hard to see that these actions are also compatible with the collapsing map which induces $p_2$ . It follows that $p_1$ and $p_2$, taken together, induce a map which on the level of sets is represented as $$\label{eqn:p1p2} \overline M_k^n(|F_1(X,Y)|_+) \overset{p_1 \times p_2}{\longrightarrow} H_k^n(|J(X)|)\times M_k^n(|\overline F_1(X,Y)|).$$ Taking the trivial monoid structure on $M^n_k(|\overline F_1(X,Y)|)$, we get an $\overline H^n_k(|J(X)|)$ - bimonoid structure on this space, so that the R.H.S. of (\[eqn:p1p2\]) is the underlying set of a semi-direct product. The partial composition law in this semi-direct product amounts to a description of a compatible left and right action of $H^n_k(|J(X)|)$. This action is given explicitly by $$\begin{aligned} x(y,a) &= (x,*)(y,a) = (xy,xa),\\ (y,a)x &= (yx,ax),\\ x,y &\in H_k^n(|J(X)|),\;\; a \in M_k^n(|\overline F_1(X,Y)|).\end{aligned}$$ The projection maps $p_1$ and $p_2$ preserve both the left and right actions of $H^n_k(|J(X)|)$, and therefore so does $p_1\times p_2$ under the actions described above. This implies that $p_1\times p_2$ induces a map of generalized wedges, as claimed. .5in Goodwillie Calculus ------------------- This section recalls results from Goodwillie’s Calculus (\[G1\] – \[G3\]) that will be used later on. We consider functors $F : \underline C \to \underline D$ , where $\underline C$ is either $U,T,U(C)$ or $T(C)$ and $\underline D$ is $T,T(C)$ or the category $Sp$ of spectra. Here $U$ is the category of (Hausdorff) topological spaces, $T$ the category of basepointed spaces in $U$. $U(C)$ denotes the corresponding category of spaces over $C\in obj(U)$, and $T(C)$ the category of basepointed objects in $U(C)$. Note that an object of $T(C)$ is a retractive space $Y$ over $C$, i.e., $r :Y\to C$ comes equipped with a right inverse $i\;(r \circ i = id)$. Each of these choices of $\underline C$ is a closed model category in the sense of Quillen, so one has the usual constructions of homotopy theory. When $\underline C$ = $U(C)$ or $T(C)$ we denote by $C_n$ the full subcategory of $n$-connected objects in $\underline C$. As in \[G2\], a map of spaces or spectra is called if its homotopy fibre is $(n-1)$-connected. In all of these categories one has a standard notion of weak equivalence, and $F$ is called a if $F$ preserves weak equivalences. We will only be concerned with homotopy functors. .2in Let $S$ be a finite set, $C(S)$ the category of subsets of $S$ with morphisms corresponding to inclusions. An $S$-cube in $C$ is a covariant functor $G : C(S) \to \underline C$. If $S = \{1,2\dots,n\} = \underline n\,, G$ is called an $n$-cube. 
Associated to an $S$-cube is the homotopy-inverse limit $h(G) = holim(G|_{C_0(S)})$ where $C_0(S)$ denotes the full subcategory of $C(S)$ on all objects except $\phi$. The natural coaugmentation map $lim(G) \to holim(G)$ induces a natural transformation $$a(G) : G(\phi) \to h(G)\; ,$$ and $G$ is $h$-cartesian if $a(G)$ is a weak equivalence. We say $F : \underline C \to\underline D$ (as above) is $n$-excisive if $F\circ G$ is $h$-cartesian for every strongly homotopy co-cartesian $S$-cube $G : S \to \underline C$, where $|S| = n+1$ \[Def.3.1, G2\]. The condition that $F$ be $n$-excisive becomes less restrictive as $n$ increases. Thus, if $F$ is $n$-excisive, it is $(n+1)$-excisive, but not conversely \[Prop. 2.3.2, G2\].

Given a homotopy functor $F$ satisfying certain conditions, there is a natural way of producing a functor $P_n F$ of degree $n$ and a natural transformation $F\to P_n F$. In fact, $P_n F$ can always be constructed. Starting with $X\in \text{ obj }(\underline C)$ one can define an $(n+1)$-cube $$X\underset C {*}(_-) : C (\underline{n+1}) \to \underline C = U(C)\text{ or }T(C);$$ this associates to $T \subset \underline{n+1}$ the space $X\underset{C}{*}T$ which is the fibrewise join over $C$ of $X$ with the set $T$. Now let $(T_n F) (X) = holim (F\circ (X\underset{C}{*}(_-))|_{C_0(\underline{n+1})})$. Then $a(F\circ(X\underset{C}{*}(_-)))$ defines a transformation $(t_n F) (X) : F(X) \to (T_n F) (X)$. One easily sees that $X\mapsto (T_n F) (X)$ is again a homotopy functor on $\underline C$ and that $(t_n F) := (t_n F) (_-)$ defines a natural transformation from $F$ to $T_n F$. Note that $X\underset{C}{*}(_-) : C(\underline{n+1}) \to \underline C$ is a (strongly) homotopy co-cartesian diagram in $\underline C$, so that $t_n F$ is an equivalence if $F$ is of degree $n$. Iteration of this construction yields $P_n F$ which is by definition the homotopy colimit of the directed system $\{T^i_n F, t_n T_n^i F\}$. The transformations $t_n T^i_n F$ induce a natural transformation $p_n F : F \to P_n F$. Choice of a distinguished element $(m+1) \in \underline{m+1}$ induces an inclusion $\underline m \to \underline{m+1}$ and hence a natural transformation $C( \underline{m})\to C(\underline{m+1})$. This in turn induces a natural transformation of directed systems $$\{T^i_n F, t_n T_n^i F\} \to \{T^i_{n-1} F,t_{n-1} T^i_{n-1} F\}$$ and hence a natural transformation $P_m F\overset{q_m F}{\longrightarrow} P_{m-1} F$. Different choices of $m$ yield naturally equivalent choices of $q_m F$. The Taylor tower of $F$ is then by definition the inverse system $\{P_n F, q_n F\}$, which is best viewed as a tower together with the natural transformations $p_n F$: $$\diagram & & & \vdots \dto^{q_3F} \\ & & & P_2F \dto^{q_2F} \\ & & & P_1F \dto^{q_1F} \\ F \xto[0,3]^{p_0F} \xto[-1,3]^{p_1F} \xto[-2,3]^{p_2F} & & & P_0 F \enddiagram$$ The closed diagrams in this tower are homotopy commutative. The $\text{n}^{\text{th}}$ homogeneous part of $F$ is by definition the homotopy fibre of $q_nF$: $$D_n F := hofibre(P_n F \overset{q_n F}{\longrightarrow} P_{n-1} F)\;.$$

(\[Definition 4.1, G2\]) $F$ is stably $n$-excisive if the following statement holds for some numbers $c$ and $\kappa$:

$\underline{E_n(c,\kappa)}$ : If $G : C(\underline{n+1}) \to \underline{C}$ is any strongly co-Cartesian $(n+1)$-cube such that for all $s\in S$ the map $G(\phi)\to G(s)$ is $k_s$-connected and $k_s\ge \kappa$, then the diagram $G(C(\underline{n+1}))$ is $(-c + \sum k_s)$-connected.
In this case $D_n F$ is of degree $n$ that is, it is stably $n$-excisive and $P_i D_n F \simeq *$ for $i<n$ \[Prop. 1.11, G3\]. We will write $\underline{P_n F}$ for $hofibre(F\overset{p_n F}{\longrightarrow} P_n F)$, and $\underline{P^m_n F}$ for $hofibre(P_n F \to P_m F)$. The functor $D_n(F)$ is referred to as the of $F$ (at $*$). One also wants to know when the connectivity of $F\overset{p_n F}{\longrightarrow} P_n F$ tend to $\infty$ as $n$ tends to $\infty$. From \[Def. 4.2, G2\], $F$ is $\rho$-analytic if there is some number $q$ such that $F$ satisfies $E_n(n\rho -q, \rho +1)$ for all $n\ge 1$ . (\[Th. 2.5.21, G3\]) The connectivity of $p_n F$ tends to $\infty$ over the category $\underline C_{\rho}$ where $\rho = \rho(F), F : \underline C \to \underline D$. In analogy with functions, $\underline C_\rho$ may be thought of as the of $F$. In applying this calculus to $F$, it is natural to restrict one’s attention to the subcategory $\underline C_{\rho(F)}$ which in general is the largest subcategory of $\underline C$ for which the Taylor series of $F|_{\underline C_{\rho(F)}}$ converges (in the homotopy-theoretical sense). Within this range it provides a powerful machinery for analyzing $F$, as well as determining the effect of a natural transformation $\eta :F_1 \to F_2$ on homotopy groups. It is clear from the above theorem that $\eta$ will induce a weak equivalence when restricted to $\underline C_\rho \;(\rho = max(\rho(F_1),\rho(F_2)), F_i : \underline C \to \underline D)$ if $\eta$ induces an equivalence on differentials: $$D_n(\eta) : D_n(F_1)\overset{\cong}{\to} D_n(F_2),$$ under the condition that $P_0(F_i) \simeq *$. However, there is another way of getting at $\eta$. Assume first that $\underline C = U(C)$ and that $F_i : \underline C \to \underline D$ have the same modulus $\rho$ for $i$ = 1,2. Let $(X,p : X\to C)$ be an object in $U(C)$. Then $(X,p : X\to C)$ defines a natural transformation $\nu_{(X,p)} : U(X) \to U(C)$ given on objects by $$\nu_{(X,p)}(Y,r : Y\to X) = (Y,p \circ r : Y \to C).$$ Analyticity is preserved by the natural transformation $\nu^*_{(X,p)} : F\to F\circ \nu_{(X,p)}$. The next result of Goodwillie’s concerns only $1^{\text{st}}$ differentials, and is is contained in Theorems 5.3 and 5.7 of \[G2\]. \[thm:conv\] If $F_1,F_2: U(C) \to \underline D$ are $\rho$-analytic, and $\eta : F_1 \to F_2$ is a natural transformation such that the square $$\diagram P_1(\nu^*_{(X,p)}F_1) \rrto^{P_1(\nu^*_{(X,p)}\eta)} \dto_{q_1(\nu^*_{(X,p)}F_1)} & & P_1(\nu^*_{(X,p)}F_2) \dto^{q_1(\nu^*_{(X,p)}F_2)} \\ P_0(\nu^*_{(X,p)}F_1) \rrto^{P_0(\nu^*_{(X,p)}\eta)} & & P_0(\nu^*_{(X,p)}F_2) \enddiagram$$ is homotopy-cartesian for every $(X,p)$ in $U(C)$, then for every $f : Y\to X$ in $U(C)_\rho$ the diagram $$\diagram F_1(Y) \rto^{\eta(Y)} \dto_{F_1(f)} & F_2(Y) \dto^{F_2(f)} \\ F_1(X) \rto^{\eta(X)} & F_2(X) \enddiagram$$ is homotopy-cartesian. In the case $C = *$ we will denote $hofibre(q_1(\nu^*_{(X,p)} F))$ by $(D_1F)_X$; $p$ in this case is unique. For $F_2 = A(\Sigma_-) = A\Sigma(_-)$, $\rho =0$ by \[Theorem 4.6, G2\]. Theorem \[thm:conv\] yields If $\eta : F_1 \to A\Sigma(_-)$ is a natural transformation which induces an equivalence $D_1(\eta)_X : (D_1 F_1)_X \overset\simeq{\longrightarrow} (D_1 A\Sigma)_X$ for all connected spaces $X$ and $F$ is $0$-analytic, then $\eta$ induces an equivalence $$\eta(f) : hofibre(F_1(Y) \to F_1(X)) \overset\simeq{\longrightarrow} hofibre(A(\Sigma Y) \to A(\Sigma X))$$ for all maps $f$ between connected spaces $Y$ and $X$. 
The result which makes these techniques applicable to the study of $A(X)$ is the computation, due to Waldhausen at $X=pt$, and Goodwillie for general $X$, of the differentials of $A(X)$; here $(Y)$ denotes the retractive object $(Y\vee X, r : Y\vee X\to X)$ thought of as an object in $T(X)$. (Waldhausen \[W2\], \[W4\]; Goodwillie \[Cor. 3.3, G1\])\[thm:WG\] For connected $X$ there is an equivalence $$(D_1A\Sigma)_X(Y)\simeq\Omega^\infty\Sigma^\infty(\Sigma(\underset{q\ge1}{\vee} |X^{[q-1]} \wedge Y|))$$ natural in $X$. Goodwillie’s computation in \[G1\] applies to the functor $A(_-)$ rather than $A\Sigma(_-)$. However, there is a natural equivalence between $A\Sigma(_-)$ and the restriction of $A(_-)$ to the subcategory of (basepointed) suspension spaces. Again, by Goodwillie, the differential $(D_1A\Sigma)_X(Y)$ may be computed as the differential (at $\Sigma X$) of the functor $$(Y) \to A(\Sigma(Y\vee X)\to \Sigma X) := hofibre(A(\Sigma Y\vee\Sigma X)\to A(\Sigma X))$$ which by \[G2\], together with the Snaith splitting of the functor $\Omega\Sigma(_-)$, yields the result. For a functor $F$ with range either $T$, $T(C)$, or $Sp$, $F$ is said to be continuous if the natural map of $Hom$-spaces $Hom(X,Y)\to Hom(F(X),F(Y))$ is continuous, and finitary if it satisfies the colimit axiom \[G3\]. Waldhausen’s functor $A(_-)$ is both continuous and finitary. If $X = \{X_k\}_{k\ge 0}$ is a simplicial space and $X_k$ is 1-connected for each $k$, then $$\Phi_A : | [k] \mapsto \overline A(X_k) | \overset\simeq{\longrightarrow} \overline A(| [k] \mapsto X_k|).$$ By \[BD\], if $F$ is a continuous finitary homotopy functor on $U(C)$ then the natural transformation $\Phi_F : |[k] \mapsto F(_-)| \to F(|[k] \mapsto (_-)|)$ induces a weak equivalence over the category of simplicial objects in $U(C)_{\rho(F)}$. .5in Elementary Expansions and Representations in $H^n_q(|JX|)$ ---------------------------------------------------------- As in the previous sections $X$ will denote a basepointed connected simplicial set. Our objective in this section will be to construct the maps $\tilde\rho_q :\tilde D_q(X) \to \tilde A(\Sigma X)$ of [@ccgh] used in the proof of Theorem A, where $$\tilde D_q(X) := \Omega^\infty\Sigma^\infty(\Sigma( E\mathbb Z/q \underset{\mathbb Z/q}{\leftthreetimes} |X|^{[q]})).$$ In addition, we provide some techniques for computing $\tilde\rho_q$ on differentials, by relating certain restrictions of $\tilde\rho_q$ to products of elementary expansions. This will be used in section 3.3 where we compute the trace of $\tilde\rho_q$. From the construction of $\tilde\rho_q$, it will be easy to extend it to a map $\tilde\rho_q(JX) : \tilde D_q(JX) \to \overline A(\Sigma X)$. We do this, and prove analogous results for $\tilde\rho_q(JX)$. Let $\imath : |X| \to |JX|$ denote the standard inclusion. Fixing an indexing of $\overset q{\vee} S^n$ and $\overset q{\vee} (S^n \wedge |JX|_+)$ we let $(S^n)_i$ resp. $(S^n \wedge |JX|_+)_i$ denote the $i\text{th}$ term in the appropriate wedge for $1\le i \le q$. Beginning with $(x_1,\cdots,x_q)\in |X|^q$, $\rho_q(x_1,\cdots,x_q)$ is the map given on $(S^n)_i$ by the composition $$\label{eqn:elem1} (S^n)_i = S^n \xrightarrow{\text{pinch}} S^n \vee S^n \overset{id\vee f_i}{\longrightarrow} S^n\vee (S^n\wedge|JX|)\overset{inc}{\longrightarrow} (S^n\wedge |JX|_+)_i \vee (S^n\wedge |JX|_+)_{i+1}.$$ Here subscripts are taken mod $q$; thus $i+1$ is read as $1$ if $i = q$, and as $i+1$ otherwise.
The basepointed cofibration sequence $$S^0 \overset i{\rightarrow} |J(X)|_+ \twoheadrightarrow |J(X)|$$ splits up to homotopy after a single suspension. Fixing $j_1 : \Sigma|JX| \to \Sigma (|JX|_+)$ with $\Sigma p \circ j_1 \simeq id$ and letting $j : \Sigma^n|JX| \to \Sigma^n(|JX|_+)$ be $\Sigma^{n-1}(j_1)$, $inc$ is the map induced by the inclusions $$\begin{aligned} &S^n = S^n \wedge S^0\hookrightarrow S^n\wedge |JX|_+\; , \\ &S^n \wedge |JX| \overset j{\hookrightarrow} S^n \wedge |JX|_+.\end{aligned}$$ Then $f_i(s) = [s,\imath(x_i)]\in S^n \wedge |JX|$ for $s\in S^n$. “pinch” denotes the pinch map determined by the standard embedding $S^{n-1} \to S^n$, together with a fixed choice of homeomorphism from the cofibre to $S^n\vee S^n$ . Clearly $\rho_q$ is continuous and defines a map of spaces $$\rho_q : |X|^q \to |H^n_q(|JX|)|\cong| \overline M^n_q(|JX|_+)|.$$ $\rho_q$ is also equivariant with respect to the action of $\Bbb Z/q$, which acts on $|X|^q$ by cyclically permuting the coordinates and on $H^n_q(|JX|)$ via the standard embedding $\Bbb Z/q \to \Sigma_q$ and the usual conjugation action of $\Sigma_q$ on $H^q_n(|JX|)$. \[prop:extend\] $\rho_q$ extends to a map $\overline\rho_q : E\Bbb Z/q \underset{\Bbb Z/q}{\times} |X|^q \to \Omega\overline A(\Sigma X)$, which in turn induces a map $\tilde\rho_q : \Omega^\infty\Sigma^\infty(\Sigma (E\Bbb Z/q \underset{\Bbb Z/q}{\leftthreetimes} |X|^{[q]})) \to \overline A(\Sigma X)$. Taking the direct limit under suspension and stabilization yields a map $|X|^q \to |H(|JX|)|$ which we also denote by $\rho_q$. This map is still $\Bbb Z/q$-equivariant, where $\Bbb Z/q$ acts on the second space via the embedding $\Bbb Z/q \to \Sigma_q \to \Sigma_\infty$. It suffices to know now that the plus construction $|H(|JX|)| \to \Omega A(\Sigma X)$ can be done so as to be equivariant with respect to the action of $\Sigma_\infty$ and that the action of $\Sigma_\infty$ on $\Omega A(\Sigma X)$ is trivial up to homotopy. This follows from [@fo]. The result is that $\Omega A(\Sigma X) \overset i{\rightarrow} \underset{\Sigma_\infty\phantom{xx}}{E\Sigma_\infty \times\Omega A(\Sigma X)}$ admits a left homotopy inverse $$p : \underset{\Sigma_\infty\phantom{xx}}{E\Sigma_\infty \times\Omega A(\Sigma X)} \to \Omega A(\Sigma X)$$ $(p\circ i\simeq id)$ and we can take $\overline\rho_q$ to be the composition $$E\Bbb Z/q \underset{\Bbb Z/q}{\times} |X|^q \overset{(1\times\rho_q)}{\longrightarrow} \underset{\Sigma_\infty\phantom{xxx}}{E\Sigma_\infty{\times} |H(|JX|)|} \to \underset{\Sigma_\infty\phantom{xx}}{E\Sigma_\infty \times\Omega A(\Sigma X)} \overset p{\rightarrow} \Omega A(\Sigma X).$$ Taking the infinite-loop extension of the adjoint of $\overline\rho_q$ yields a map $$\Omega^\infty\Sigma^\infty(\Sigma(E\Bbb Z/q\underset{\Bbb Z/q}{\times} |X|^q)) \to A(\Sigma X).$$ Fix a stable section $s$ of the projection $$E\Bbb Z/q\underset{\Bbb Z/q}{\times} |X|^q \to E\Bbb Z/q \underset{\Bbb Z/q}{\leftthreetimes}|X|^{[q]} = (E\Bbb Z/q)_+\underset{\Bbb Z/q}{\wedge}|X|^{[q]}.$$ Then $\tilde\rho_q$ is the composition $$\Omega^\infty\Sigma^\infty (\Sigma(E\Bbb Z/q\underset{\Bbb Z/q}{\leftthreetimes}|X|^{[q]})) \overset s{\rightarrow} \Omega^\infty\Sigma^\infty (\Sigma(E\Bbb Z/q\underset{\Bbb Z/q}{\times}|X|^q)) \to A(\Sigma X).$$ Finally we note that all of the constructions are natural in $X$, and hence factor through $\overline A(\Sigma X)$. The space $\Omega^\infty\Sigma^\infty(\Sigma(E\Bbb Z/q\underset{\Bbb Z/q}{\leftthreetimes}|X|^{[q]}))$ will be denoted by $\tilde D_q(X)$. 
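Heuristically, and only as an informal guide for what follows (this dictionary is not needed for any of the proofs): under the analogy between $H^n_q(|JX|)$ and $q\times q$ matrices, in which the component of a map carrying the $i^{\text{th}}$ wedge summand to the $j^{\text{th}}$ slot plays the role of the $(i,j)$ entry, the representation $\rho_q$ of (\[eqn:elem1\]) corresponds to the “cyclic band matrix” with $1$’s on the diagonal and $\imath(x_i)$ in the $(i,i+1)$ slot (indices mod $q$); for instance, for $q = 3$, $$\rho_3(x_1,x_2,x_3)\ \longleftrightarrow\ \begin{pmatrix} 1 & \imath(x_1) & 0 \\ 0 & 1 & \imath(x_2) \\ \imath(x_3) & 0 & 1 \end{pmatrix}.$$ This is the picture that motivates the comparison with products of elementary expansions in the propositions below.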
$\tilde D_q(_-)$ can alternatively be thought of as a functor on connected spaces. The following is more or less contained in \[§3, CCGH\]. Here $X$ and $Y$ denote basepointed simplicial sets. \[prop:rep1\] 1. $(D_1\tilde D_q)_X(Y) \simeq \Omega^\infty\Sigma^\infty(\Sigma|X^{[q-1]} \wedge Y|)$. 2. $(D_1 F_q)_X(Y) =\Omega^\infty\Sigma^\infty(\Sigma(\overset q{\underset{i=1}{\vee}} |X^{[i-1]} \wedge Y \wedge X^{[q-i]}|))$, where $F_q(Z)=\Omega^\infty\Sigma^\infty(\Sigma|Z^{[q]}|)$. The natural transformation $F_q(_-) \to\tilde D_q(_-)$ induces the fold map on $1^{\text{st}}$ differentials which is the infinite loop extension of the map $$\begin{gathered} \overset{q-1}{\underset{i=0}{\vee}} X^{[q-i-1]} \wedge Y \wedge X^{[i]} \to X^{[q-1]} \wedge Y,\\ (x_1, \cdots, x_{q-i-1}, y,x'_1,\cdots,x'_i)\mapsto (x'_1,\cdots,x'_i,x_1,\cdots, x_{q-i-1}, y).\end{gathered}$$ 3. The inclusion $i_q(X,Y) : X^{[q-1]} \wedge Y \to (X\vee Y)^{[q]} \to E\Bbb Z/q\underset{\Bbb Z/q}{\leftthreetimes} (X\vee Y)^{[q]}$ induces an equivalence $$\Omega^\infty\Sigma^\infty(\Sigma|X^{[q-1]} \wedge Y|) \to \underset{\underset m{\longrightarrow}}{\lim}\; \Omega^m hofibre(\tilde D_q(X\vee\Sigma^m Y)\to \tilde D_q(X)) = (D_1\tilde D_q)_X(Y)$$ 1\) and 2) appear in [@ccgh]; the easiest way to verify them is to first compute $(D_1 F_q)_X(Y)$, which is straightforward, and then observe that inclusion of the term $(E\Bbb Z/q\underset{\Bbb Z/q}{\leftthreetimes} (_-))$ has the effect of “dividing by $q$” (in Goodwillie’s words) via the fold map. Finally 3) follows from 1) and 2), as the inclusion $X^{[q-1]} \wedge Y\to (X\vee Y)^{[q]}$ induces a map $\Omega^\infty\Sigma^\infty(\Sigma(|X^{[q-1]} \wedge Y|)) \to (D_1F_q)_X(Y)$ which agrees up to homotopy with the infinite loop extension of the inclusion of $X^{[q-1]} \wedge Y$ into the last term in the wedge $\overset q{\underset{i=1}{\vee}} X^{[i-1]} \wedge Y\wedge X^{[q-i]}$. Recall that for a ring $R$ and $r\in R$ the elementary matrix $e_{ij}(r)$ is the matrix $id + \overline e_{ij}(r)$ where $\overline e_{ij}(r)_{k,\ell} = r$ if $(k,\ell) = (i,j)$, and $0$ otherwise. One should not try to push the analogy between $H^n_q(|JX|)$ and the group $GL_q(\Bbb Z[JX])$ too far, especially for finite $n$. However one can construct elements of $H^n_q(|JX|)$ which behave enough like elementary matrices to be useful. We call these elementary expansions, as they correspond to the elementary expansions of classical Whitehead simple homotopy theory. \[def:expand\] Let $X$ be a connected simplicial set, and $\imath : |X| \to |JX|$ the standard inclusion. For $x\in |X|$, $e_{ij}(\imath(x)) \in |H^n_q(|JX|)|$ is given on $(S^n)_\ell \subset \overset q{\underset{k=1}{\vee}} (S^n)_k$ by $$\begin{aligned} &\underline{\ell\ne i}\qquad (S^n)_\ell \overset{ inc}{\longrightarrow} (S^n \wedge |JX|_+)_\ell\\ &\underline{\ell = i} \qquad (S^n)_i = S^n \overset{\text{pinch}}{\longrightarrow} S^n \vee S^n \overset{id\; \vee f}{\longrightarrow} S^n \vee (S^n\wedge |JX|)\overset{inc}{\longrightarrow} (S^n\wedge |JX|_+)_i \vee(S^n\wedge |JX|_+)_j\end{aligned}$$ where (as before) we have identified $H^n_q(|JX|)$ with $\overline M^n_q(|JX|_+)$. The sequence for $\ell = i$ is exactly as in (\[eqn:elem1\]) with $f(s) = [s,\imath(x)] \in S^n \wedge |JX|$; the only difference is the indexing of the last term. $e_{ij} (-\imath(x))$ is defined the same way, but with $id\vee f$ replaced by $id\vee (-f)$, where $-f$ is $f$ composed with a fixed choice of $S^n\overset{(-1)}{\longrightarrow} S^n$ representing loop inverse.
The reduced elementary expansion $\overline e_{ij}(\imath(x))$ is given by $$\begin{aligned} &\underline{\ell \ne i}\qquad\qquad\qquad (S^n)_\ell \longrightarrow * \\ &\underline{\ell = i}\qquad(S^n)_i \overset f{\rightarrow} (S^n \wedge |JX|) \overset {inc}{\longrightarrow} (S^n \wedge |JX|_+)_j.\end{aligned}$$ Similarly one can define $\overline e_{ij}(-\imath(x))$. \[rem:pm\] When $i= j$, one could define $e_{ii}(\pm\imath(x))$ (loop inverse) in a similar fashion. Note that the definition of $e_{ij} (\pm(x))$ depends on a choice of parameters: choice of pinch map, choice of $j : S^n \wedge |JX| \to S^n \wedge |JX|_+$, and choice of $S^n\overset{(-1)}{\longrightarrow} S^n$ representing $-1$. These, however, can be fixed so as to be compatible under suspension in the $n$ coordinate and depending in a continuous and natural way on $x\in |X|$ as well as $X$. We assume this has been done. In this way, all of the manipulations we will do with these elements will be natural in $X$ and $x\in |X|$. In a similar vein we will sometimes want to know that two maps depending on $x\in |X|$ (or diagrams depending on $X,Y,\dots$) are homotopic by a natural homotopy which depends continuously on $x\in |X|$ (resp. naturally homotopy-commutative by a homotopy which depends continuously on the spaces $X,Y,\dots$). When this can be done, we will say the two maps are homotopic (or that the diagram is $h$-commutative). \[prop:inv\] Suppose $f = e_{i_1 j_1}(\imath(x_1))\cdot\dots\cdot e_{i_n j_n} (\imath(x_n))$ for $x_i \in |X|$. Then there is a canonical homotopy $f\cdot f^{-1} \simeq *$, where $f^{-1} = e_{i_n j_n} (-\imath(x_n))\cdot\dots\cdot e_{i_1 j_1} (-\imath(x_1))$. There is certainly a homotopy. It can be made canonical by concentrating the homotopy in the spherical coordinates. This involves choosing a homotopy between $$\begin{aligned} &S^n \overset{\text{pinch}}{\longrightarrow} S^n \vee S^n \xrightarrow{\text{pinch }\vee\text{ id}} S^n \vee S^n \vee S^n \\ \qquad\qquad\text{and}\\ &S^n \xrightarrow{\text{pinch}} S^n \vee S^n \xrightarrow{\text{id }\vee\text{ pinch}} S^n \vee S^n \vee S^n\end{aligned}$$ as well as a homotopy between $S^n \overset{\text{pinch}}{\longrightarrow} S^n \vee S^n \overset{\text{id }\vee\text{ (-1)}} {\longrightarrow} S^n \vee S^n \overset{\text{fold}}{\longrightarrow} S^n$ and the trivial map $S^n \longrightarrow *$. We are not making any claims that this homotopy is unique, even up to homotopy. We will also need \[prop:canon1\] For $x_1,\dots,x_{q-1} \in |X|$, $y\in |Y|$, there is a canonical homotopy between $$\begin{gathered} e_{12}(-\imath(x_1))\cdot e_{23} (-\imath(x_2))\cdot\dots\cdot e_{(q-1)q}(-\imath(x_{q-1}))\overline e_{q1}(\imath(y)) \text{ and }\\ \overline e_{11}((\overset{q-1}{\underset{i=1}{\prod}} -\imath(x_i))\imath(y)) + \overline e_{21}((\overset{q-1}{\underset{i=2}{\prod}} -\imath(x_i))\imath(y)) + \dots + \overline e_{q1}(\imath(y))\end{gathered}$$ where “+” denotes loop sum. On the level of matrices this is clear; the product here is taking place in $|J(X\vee Y)|$. Properly speaking, we should write $\overset{q-1}{\underset{i=j}{\prod}} -\imath(x_i)$ as $(-1)^{q-1-j} \overset{q-1}{\underset{i=j}{\prod}} \imath(x_i)$, given that $|J(X\vee Y)|$ is a monoid without any strict inverses.
To realize that the obvious homotopy is canonical, we note that it involves i) reparametrization to pass between the sequence of pinch maps used to evaluate the compositions, and ii) reparametrization to reposition the iterated power of $(-1)$ appearing in the expression $(-1)^{q-1-j} \overset{q-1}{\underset{i=j}{\prod}} \imath(x_i)$. Both of these can be done in a natural and continuous way with respect to the parameters $x_1,\dots,x_{q-1},y$ involved. The next result relates the representations $\rho_q$ of (\[eqn:elem1\]) to products of elementary expansions. This will be needed for the computation of the trace on $\tilde\rho_q$ given in §3.3. We define representations $\overline\rho^1_q, \overline\rho^2_q$ as follows: $$\begin{aligned} \label{eqn:row12} &\overline\rho^1_q(x_1,\dots,x_{q-1}) = \rho_q(x_1,\dots,x_{q-1},*)\quad x_i\in X\\ &\overline\rho^2_q(y) = p_2\rho_q(*,*,\dots,*,y) \quad y\in Y\end{aligned}$$ where $p_2 : H^n_q(|J(X\vee Y)|) \to M^n_q(|\overline F_1(X,Y)|)$ is as in Lemma \[lemma:p1p2\], with $X$ and $Y$ connected. \[prop:canon2\] As continuous maps $\overline \rho^1_q$ and $\overline\rho^2_q$ are canonically homotopic to the following products of elementary expansions: $$\begin{gathered} \overline\rho^1_q(x_1,\dots,x_{q-1}) \simeq e_{(q-1)q}(\imath(x_{q-1})) e_{(q-2)(q-1)}(\imath(x_{q-2}))\cdot\dots\cdot e_{12}(\imath(x_1));\\ \overline\rho^2_q(y) \simeq \overline e_{q1}(y).\end{gathered}$$ This again only involves a reparametrization in the spherical coordinate independent of $X$ and $Y$, in the case of $\overline\rho^1_q$. In the case of $\overline\rho^2_q$ we needn’t do anything, as the projection map $p_2$ kills the identity maps along the diagonal and we are left with a single non-zero entry. The above canonical homotopies arise from Steinberg identities, which hold in $H^n_k(|GX|)$ up to canonical homotopies. Most types of identities among elementary expansions which hold up to homotopy do not hold up to canonical homotopy. For example, it is not true that the entire representation $\rho_q$ is homotopic to a product of elementary expansions. This type of issue typically arises whenever one tries to analyze such cyclic representations in terms of products of elementary expansions. We have stated the above results using elementary expansions with entries in $\imath(|X|) \subset J|X|$, which is all we will need for section 3. However the above constructions apply to the more general case where one allows arbitrary entries in $J|X|$ (or even $|GX|$ when $|X|$ is not a suspension). Thus for $y\in J|X| \cong |JX|$, one defines $e_{ij}(y) \in |H^n_q(|JX|)|$ exactly as in Definition \[def:expand\], where $f : S^n \to S^n \wedge |JX|$ is the map $f(s) = [s,y] \in S^n \wedge |JX|$. Similarly for the reduced elementary expansion $\overline e_{ij}(y)$. Remark \[rem:pm\] and Propositions \[prop:inv\], \[prop:canon1\], and \[prop:canon2\] apply in this more general context.
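To illustrate Propositions \[prop:canon1\] and \[prop:canon2\] at the level of the matrix heuristic (a routine check over an arbitrary, possibly non-commutative, ring; it is of course not a substitute for the canonical homotopies themselves): writing $e_{ij}(r) = id + rE_{ij}$ and $\overline e_{ij}(r) = rE_{ij}$ and multiplying in the order written, one has for $q = 3$ $$\begin{pmatrix} 1 & -a_1 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix} \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & -a_2\\ 0 & 0 & 1\end{pmatrix} \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ b & 0 & 0\end{pmatrix} = \begin{pmatrix} a_1a_2b & 0 & 0\\ -a_2b & 0 & 0\\ b & 0 & 0\end{pmatrix},$$ i.e. $e_{12}(-a_1)e_{23}(-a_2)\overline e_{31}(b) = \overline e_{11}(a_1a_2b) + \overline e_{21}(-a_2b) + \overline e_{31}(b)$, which is the shape of Proposition \[prop:canon1\]; similarly $$e_{23}(x_2)\,e_{12}(x_1) = id + x_1E_{12} + x_2E_{23}$$ (the cross term vanishes since $E_{23}E_{12} = 0$), which is the matrix with $1$’s on the diagonal and $x_i$ in the $(i,i+1)$ slot, i.e. the matrix picture of $\rho_3(x_1,x_2,*)$, in accordance with Proposition \[prop:canon2\].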
\[prop:canon3\] For $a_1,\dots,a_{q-1} \in |JX|,\; b \in |\overline F_1(X,Y)|$ there is a canonical homotopy between $$\begin{aligned} &e_{12}(-a_1) \cdot e_{23}(-a_2)\cdot\dots\cdot e_{q-1q}(-a_{q-1})\cdot\overline e_{q1}(b)\;\;\text{ and } \\ &\overline e_{11}((-1)^{q-1}(\overset{q-1}{\underset{i=1}{\prod}} a_i)b) + \overline e_{21}((-1)^{q-2}(\overset{q-1}{\underset{i=2}{\prod}} a_i)b) + \dots + \overline e_{q-11} ((-1)a_{q-1}b) + \overline e_{q1} (b).\end{aligned}$$ The representations $\rho_q$ also extend in a natural way to yield a continuous map $\rho_q : |JX|^q \to|H^n_q(|JX|)|$, which on a $q$-tuple $(a_1,\dots,a_q)\in |JX|^q$ is given exactly as in (\[eqn:elem1\]), where $f_i$ is now the map $f_i(s) = [s,a_i] \in S^n \wedge |JX|$. Proposition \[prop:extend\] applies with $|JX|$ in place of $|X|$ for the domain of $\tilde\rho_q$; in fact it is easy to see that the map defined in that proposition factors by this extension. For $a_1,\dots,a_{q-1} \in |JX|, b \in |\overline F_1(X,Y)|$, let $$\begin{gathered} \overline\rho^1_q (a_1,\dots,a_{q-1}) = \rho_q(a_1,a_2,\dots,a_{q-1},*)\\ \overline\rho^2_q(b) = p_2\rho_q(*,*,\dots,*,b)\end{gathered}$$ as in (\[eqn:row12\]). Then as continuous maps $\overline\rho^1_q$ and $\overline\rho^2_q$ are canonically homotopic to the following product of elementary expansions: $$\begin{gathered} \overline\rho^1_q(a_1,a_2,\dots,a_{q-1})\simeq e_{(q-1)q}(a_{q-1})e_{(q-2)(q-1)}(a_{q-2})\cdot\dots\cdot e_{12}(a_1) \\ \overline\rho^2_q(b)\simeq \overline e_{q1}(b)\;.\end{gathered}$$ The proofs of these two propositions is exactly as before. .2in Observe that when $a_1 = a_2 = \dots = a_{q-1} = *$, the map $\overline\rho^1_q(a_1,\dots,a_{q-1})=\overline\rho^1_q(*,\dots,*)$ is not the standard inclusion $\overset{q}{\vee}(S^n) \hookrightarrow \overset{q}{\vee} (S^n\wedge|JX|_+)$, only homotopic to it. This homotopy, which we will need later on, is a wedge of homotopies between $$\begin{aligned} S^n \overset{\text{pinch}}{\longrightarrow} S^n\vee S^n &\overset{\text{id }\vee *}{\longrightarrow} S^n\vee S^n \overset{\text{fold}}{\longrightarrow} S^n\nonumber \\ &\text{and}\label{eqn:1.3.16}\\ S^n &\overset{\text{id}}{\longrightarrow} S^n\nonumber\end{aligned}$$ This wedge produces a path between the basepoint of $H^n_q(|J(X)|)$ and $\overline\rho^1_q(*,\dots,*)$. .2in As a final remark, we note that in the above propositions involving minus signs, we are not requiring any type of coherence conditions to apply for this minus sign with respect to composition product (which in the limiting case $n\to\infty$ will involve the product structure on the generalized ring $\Omega^\infty\Sigma^\infty(|GX|_+)$). We only require that certain homotopies can be made canonical. The restrictions on the “ring” under consideration that must be made in order for such a coherent $(-1)$ to exist are substantial, as shown by Schwänzl and Vogt in [@sv]. The trace ========= Manipulation in the stable range -------------------------------- We follow closely the argument of Waldhausen \[W2\] in proving \[thm:2.1.1\] Let $X$ and $Y$ be pointed simplicial sets, with $X$ connected and $Y$ $m$-connected. Then the two spaces $$NH^n_k(|J(X\vee Y)|) \qquad \text{ and }\qquad N^{cy}(H^n_k(|JX|),M^n_k(\Sigma|\overline F_1(X,Y)|))$$ are $q$-equivalent, where $q =\min(n-2, 2m+1)$ and $n\ge 1$. The notation is that of §2.1. 
Here the monoid structure on $H^n_k(|J(X\vee Y)|)$ and $H^n_k(|JX|)$ is the usual one, while the partial monoid structure on the $H^n_k(|JX|)$-bimonoid $M^n_k(|\overline F_1(X,Y)|)$ is the trivial one. The equivalence follows as in \[Theorem 3.1, W2\] by the construction of five maps, each of which is suitably connected. .2in $H^n_k(|J(X\vee Y)|)$ admits a partial monoid structure where two elements are composable iff at most one of them lies outside the submonoid $H^n_k(|JX|)$. The nerve of this partial monoid is by definition the generalized wedge $$\{[p] \mapsto \overset p{\vee} (H^n_k(|J(X\vee Y)|), H^n_k(|JX|))\}.$$ As $Y$ is $m$-connected, the inclusion $H^n_k(|JX|) \to H^n_k(|J(X\vee Y)|)$ is also $m$-connected. It follows \[Lemma 2.2.1, W2\] that the inclusion $$\{[p] \mapsto \overset p{\vee} (H^n_k(|J(X\vee Y)|), H^n_k(|JX|))\} \hookrightarrow NH^n_k(|J(X\vee Y)|)$$ is $(2m+1)$-connected. .2in The inclusion $F_1(X,Y) \hookrightarrow J(X,Y)$ is $(2m+1)$-connected, hence induces a $(2m+1)$-connected map $\overline M^n_k(|F_1(X,Y)|_+) \to \overline M^n_k(|J(X\vee Y)|_+)$ of $H^n_k(|JX|)$-bimonoids. This in turn induces an inclusion of generalized wedges $$\begin{aligned} \{[p] &\mapsto \overset p{\vee} (\overline M^n_k(|F_1(X,Y)|_+), H^n_k(|JX|))\}\\ \hookrightarrow\{[p] &\mapsto \overset p{\vee} (\overline M^n_k(J(X\vee Y)|_+), H^n_k(|JX|))\}\end{aligned}$$ which is $(2m+1)$-connected in each degree by the gluing Lemma \[Lemma 2.1.2, W2\] and induction on $p$. It follows that the inclusion of simplicial objects is also $(2m+1)$-connected. .2in We consider the restriction to the path components corresponding to $H^n_k(|J(X\vee Y)|)$ of the inclusion $$\begin{aligned} M^n_k(|F_1(X,Y)|_+) &= Map( \overset k{\vee} S^n, \overset k{\vee} S^n \wedge |F_1(X,Y)|_+)\\ &\hookrightarrow Map ( \overset k{\vee} S^n,\overset k{\prod} S^n \wedge |F_1(X,Y)|_+)\\ &\simeq \overset k{\prod}\,\overset k{\prod}\, \Omega^n\Sigma^n (|F_1(X,Y)|_+).\end{aligned}$$ This is an $(n-1)$-equivalence. \[Lemma 1, W2\] yields an $(n-2)$-equivalence $$\Omega^n\Sigma^n(|F_1(X,Y)|_+) \simeq \Omega^n \Sigma^n (|JX_+\,\vee\, \overline F_1(X,Y)|) \to \Omega^n\Sigma^n (|JX|_+) \times \Omega^n \Sigma^n(|{\overline}{F}_1(X,Y)|).$$ The gluing lemma now applies to show that the map on nerves of partial monoids defined in Lemma \[lemma:p1p2\] $$\begin{aligned} \{[p] &\mapsto \overset p{\vee}(\overline M^n_k (|F_1(X,Y)|_+), H^n_k(|JX|))\}\\ \to \{[p] &\mapsto \overset p{\vee}(H^n_k(|JX|) \ltimes M^n_k (|{\overline}{F}_1(X,Y)|), H^n_k(|JX|))\}\end{aligned}$$ is $(n-2)$-connected. .2in Taking the trivial monoid structure on $M^n_k((|{\overline}{F}_1(X,Y)|)$ and forming its nerve, \[Lemma 2.3, W2\] provides an equivalence $$\begin{aligned} \text{diag} (&N^{cy} (H^n_k(|JX|),\Sigma . M^n_k (|{\overline}{F}_1(X,Y)|)))\\ \overset{u}{\underset{\simeq}{\longrightarrow}} &N(H^n_k(|JX|)\ltimes M^n_k(|{\overline}{F}_1(X,Y)|))\\ = &\{[p] \mapsto \overset p{\vee} (H^n_k(|JX|) \ltimes M^n_k (|{\overline}{F}_1(X,Y)|), H^n_k(|JX|))\}.\end{aligned}$$ Here $\Sigma . A$ denotes the simplicial space $\{[p] \mapsto \overset p{\vee} (A,*)\}$ which arises on taking the nerve of a trivial partial monoid. .2in Partial geometric realization produces a map from $\Sigma. M^n_k(|{\overline}{F}_1(X,Y)|)$ to $ S^1 \wedge M^n_k(|{\overline}{F}_1(X,Y)|)$. The pairing map $$S^1 \wedge M^n_k(|{\overline}{F}_1(X,Y)|) \to M^n_k(S^1 \wedge |{\overline}{F}_1(X,Y)|)$$ together with partial geometric realization produces a map $$N^{cy}(H^n_k(|JX|), \Sigma . 
M^n_k (|{\overline}{F}_1(X,Y)|)) \to N^{cy}(H^n_k(|JX|), M^n_k(S^1 \wedge |{\overline}{F}_1(X,Y)|)) \; .$$ By the realization lemma, this map is $(2m+1)$-connected. .2in These 5 maps taken together yield the required sequence connecting the spaces $N(H^n_k(|J(X\vee Y)|))$ and $N^{cy} (H^n_k(|JX|), M^n_k (\Sigma |{\overline}{F}_1(X,Y)|))$. Each of the maps is $\min (n-2, 2m+1)$-connected and the theorem follows. The maps constructed in the above theorem are compatible with suspension in the $n$-coordinate as well as pairing under block sum, by which we will always mean the wedge-sum of section 2.1 for the appropriate monoid in question. Taking the limit as $n$ goes to $\infty$ yields a sequence of maps connecting $$\underset{k\ge 0}{\coprod} N(H_k(|J(X\vee Y)|)) \qquad\text{and}\qquad \underset{k\ge 0}{\coprod} N^{cy} (H_k(|JX|), M_k (\Sigma |{\overline}{F}_1(X,Y)|));$$ each of these maps preserves block-sum and is $(2m-1)$-connected for $(m-1)$-connected $Y$. We thus get a sequence of maps between their group completions which is also $(2m-1)$-connected. .2in Denote $\Omega B(\underset{k\ge 0}{\coprod} N^{cy}(H_k(|JX|), M_k(\Sigma |{\overline}{F}_1(X,Y)|)))$ by $C(X,Y)$, and $C(X,_-)$ by $C_X(_-)$; $C_X(_-)$ is a homotopy functor on the category $T(*)$. (compare \[Lemma 4.2, W2\])\[lemma:4.2\] There is an equivalence of 1st differentials $$\begin{aligned} (D_1 A\Sigma)_X(Y) &= \underset {n}{\underset{\longrightarrow}{\lim}}\; \Omega^n(hofibre(A(\Sigma(X\vee (S^n \wedge Y))) \to A(\Sigma X))) \\ \simeq (D_1 C_X)_*(Y) &= \underset {n}{\underset{\longrightarrow}{\lim}}\; \Omega^n(hofibre(C(X,S^n \wedge Y)\to C(X,*)))\;.\end{aligned}$$ This is an immediate consequence of the above theorem; for each $n$, we have an equivalence $A(\Sigma X)\simeq C(X,*)$ as well as a $(2n-1)$-equivalence between $A(\Sigma(X\vee (S^n \wedge Y)))$ and $C(X,S^n \wedge Y)$. This gives a $(2n-1)$-equivalence between $hofibre (A(\Sigma(X\vee (S^n \wedge Y))) \to A(\Sigma X))$ and $hofibre (C(X,S^n \wedge Y) \to C(X,*))$, which in the above limit yields a weak equivalence. .5in The Generalized Waldhausen Trace -------------------------------- In this section we construct a trace map, generalizing the construction of Waldhausen in [@w2]. The techniques are essentially those of \[§4, W2\]. .2in Let $F,F'$ be basepointed spaces, with $F$ $(m-1)$-connected. \[lemma:connected\] For all integers $k,n,m\ge0$ the map of spaces $$\begin{gathered} Map(\overset k{\vee} S^n, S^{n+m} \wedge F')\wedge Map (S^{n+m}, S^{n+m} \wedge F) \\ \overset\lambda{\longrightarrow} Map (\overset k{\vee} S^n, S^{n+m} \wedge F' \wedge F)\;,\end{gathered}$$ given by $$\lambda(f\wedge g) : \overset k{\vee} S^n \overset f{\rightarrow} S^{n+m}\wedge F' \overset{g\wedge id}{\longrightarrow} S^{n+m} \wedge F\wedge F' \xrightarrow[\cong]{id\wedge\text{switch}} S^{n+m} \wedge F'\wedge F\;,$$ is $(3m-1)$-connected. This is a slight generalization of \[Lemma 4.3, W2\], and the proof is the same.
Namely, there is a commutative diagram .1in $$\diagram Map(S^n,S^{n+m} \wedge F')^k \wedge Map (S^{n+m}, S^{n+m} \wedge F) \drto & \\ & Map(S^n, S^{n+m} \wedge F' \wedge F)^k \\ \overset k{\vee}(S^0 \wedge S^m\wedge F') \wedge Map (S^{n+m}, S^{n+m} \wedge F) \uuto & \\ \overset k{\vee}(S^0\wedge S^m\wedge F') \wedge Map(S^m, S^m\wedge F) \uto & \overset k{\vee} S^0\wedge(S^m\wedge F'\wedge F) \xto[-2,0] \\ \overset k{\vee}(S^0\wedge S^m\wedge F') \wedge F \uto \urto_{\cong}|<{\rotate\tip} & \\ \enddiagram$$ .1in where the top horizontal map corresponds to the map given above, the right vertical map is $(4m-1)$-connected and each of the left vertical maps is $(3m-1)$-connected. For connected $Y'$, we have a homeomorphism of $|JX|$-bimonoids $$|\overline F_1 (X,Y')|\cong |JX|_+ \wedge |Y'| \wedge |JX|_+.$$ If $Y' = S^m \wedge Y_+$ then the above lemma applies with $F = \overset k{\vee}|Y'|\wedge |JX|_+$ and $F' = |JX|_+$. This yields a sequence of maps $$\begin{gathered} H^n_k(|JX|)^p \times Map (\overset k{\vee} S^n, \overset k{\vee} S^{n+m} \wedge |\overline F_1(X,Y')|)\nonumber\\ \cong \updownarrow\varphi^p_{m,n,k}\nonumber\\ H^n_k(|JX|)^p \times Map (\overset k{\vee} S^n, \overset k{\vee} S^{n+m} \wedge |JX|_+ \wedge |Y'| \wedge |JX|_+)\label{eqn:2.2.2}\\ \uparrow f^p_{m,n,k}\nonumber\\ H^n_k(|JX|)^p \times (Map(\overset k{\vee} S^n,S^{n+m} \wedge |JX|_+) \wedge Map (S^{n+m},\,\overset k{\vee} S^{n+m} \wedge |Y'|\wedge |JX|_+))\nonumber\\ \downarrow g^p_{m,n,k}\nonumber\\ Map(S^{n+m}, \, S^{n+3m} \wedge |JX|_+ \wedge |Y_+|).\nonumber\end{gathered}$$ Here $f^p_{m,n,k}$ is induced by the pairing in Lemma \[lemma:connected\], and is $(3m-1)$-connected. The map $g^p_{m,n,k}$ associates to the $(p+2)$-tuple $(\alpha_1,\dots,\alpha_p;\beta_1\wedge \beta_2)$ the composition $$\begin{aligned} \label{eqn:2.2.3} S^{n+m} \overset{\beta_2}{\longrightarrow} &\overset k{\vee} S^{n+m} \wedge |Y'|\wedge |JX|_+\nonumber\\ \overset\cong{\longleftrightarrow} &\overset k{\vee} (S^n\wedge |JX|_+)\wedge (S^m\wedge |Y'|)\nonumber\\ \xrightarrow{(\alpha_1\alpha_2\cdot\dots\cdot\alpha_p)\wedge id} &\overset k{\vee} (S^n\wedge |JX|_+)\wedge (S^m \wedge |Y'|)\nonumber\\ \overset\cong{\longleftrightarrow} (&\overset k{\vee} S^n) \wedge (|JX|_+\wedge S^m \wedge |Y'|) \\ \xrightarrow{\beta_1\wedge id}(&S^{n+m}\wedge |JX|_+)\wedge (|JX|_+ \wedge S^m \wedge |Y'|)\nonumber\\ \overset\cong{\longleftrightarrow} &S^{n+m}\wedge (|JX|_+ \wedge |JX|_+)\wedge (S^m \wedge |Y'|)\nonumber\\ \xrightarrow{id\wedge\mu\wedge id} &S^{n+m} \wedge |JX|_+ \wedge S^m \wedge |Y'|\nonumber\\ \overset\cong{\longleftrightarrow} &S^{n+3m} \wedge |JX|_+ \wedge |Y_+|.\nonumber\end{aligned}$$ The equivalence of trivial partial monoids $$Map (\overset k{\vee} S^n, \overset k{\vee} S^{n+m} \wedge |JX|_+ \wedge |Y'| \wedge |JX|_+) \cong Map (\overset k{\vee} S^n, \overset k{\vee} S^{n+m} \wedge |\overline F_1(X,Y')|)$$ which induces the equivalence $\varphi^p_{m,n,k}$, commutes with the natural left and right actions of $H^n_k(|JX|)$, and therefore is an $H^n_k(|JX|)$-bimonoid equivalence. 
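Loosely speaking (this gloss is only meant to orient the reader and plays no role in the arguments): in the matrix analogy, $\beta_2$ plays the role of a vector, the $\alpha_i$ of $k\times k$ matrices over the “ring” $|JX|_+$, and $\beta_1$ of a covector, and the composition (\[eqn:2.2.3\]) is the scalar obtained by multiplying around the cycle, $$(\alpha_1,\dots,\alpha_p;\,\beta_1\wedge\beta_2)\ \longmapsto\ \beta_2\cdot(\alpha_1\alpha_2\cdots\alpha_p)\cdot\beta_1\,,$$ with the product read in the order the maps are composed and the two $|JX|_+$-coordinates multiplied by $\mu$. This is the familiar shape of a trace-type pairing on the cyclic bar construction, which is what the maps $g^p_{m,n,k}$ are designed to realize.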
Starting with maps $f\in H^n_k(|JX|), g\in Map(\overset k{\vee} S^n, S^{n+m} \wedge |JX|_+), h \in Map(S^{n+m}, \overset k{\vee} S^{n+m} \wedge |Y'|\wedge |JX|_+)$ the pairings $$\begin{gathered} (f,g) \mapsto f\cdot g,\nonumber \\ f\cdot g : \overset k{\vee} S^n \overset\cong{\longleftrightarrow} \overset k{\vee} S^n \wedge S^0 \overset{id\wedge \iota_*}{\hookrightarrow} \overset k{\vee} (S^n \wedge |JX|_+) \overset f{\rightarrow} \overset k{\vee} (S^n \wedge |JX| _+)\label{eqn:2.2.4.a} \\ \overset\cong{\underset{\beta}{\longrightarrow}} (\overset k{\vee} S^n)\wedge |JX|_+ \overset{g\wedge id}{\longrightarrow} (S^{n+m} \wedge |JX|_+)\wedge |JX|_+\overset\cong{\longleftrightarrow}\nonumber\\ S^{n+m} \wedge (|JX|_+\wedge |JX|_+)\overset{id\wedge\mu}{\longrightarrow} S^{n+m} \wedge |JX|_+\nonumber\end{gathered}$$ and $$\begin{gathered} (h,f)\mapsto h\cdot f, \nonumber\\ h\cdot f : S^{n+m} \overset h{\rightarrow} \overset k{\vee} S^{n+m} \wedge |Y'| \wedge |JX|_+ \overset\cong{\underset{\alpha}\longrightarrow} S^m \wedge |Y'|\wedge (\overset k{\vee} S^n\wedge |JX|_+)\label{eqn:2.2.4.b} \\ \overset{id\wedge f}{\longrightarrow} S^m \wedge |Y'|\wedge (\overset k{\vee} S^n\wedge |JX|_+) \overset\cong{\underset{\alpha^{-1}}\longrightarrow} \overset k{\vee} S^{n+m} \wedge |Y'| \wedge |JX|_+ \nonumber\end{gathered}$$ induce a left $H^n_k(|JX|)$-monoid structure on $Map(\overset k{\vee} S^n, S^{n+m}\wedge |JX|_+)$ and a right $H^n_k(|JX|)$-monoid structure on $Map(S^{n+m}, \overset k{\vee} S^{n+m} \wedge |Y'|\wedge |JX|_+)$ for appropriate choices of homeomorphisms $\alpha,\beta$ and inclusion $\iota_*$. Precisely, - $\iota_*:S^0\to |JX|_+$ is induced by the inclusion of the basepoint of $|JX|$; - $\beta$ is the homeomorphism $\overset k{\vee} (S^n \wedge |JX|_+)\cong (\overset k{\vee} S^n)\wedge |JX|_+$ given as $\beta = \underset{j=1}{\overset k{\vee}} (\iota_j\wedge id)$, with $\iota_j$ denoting the inclusion of $S^n$ as the $j^{\text{th}}$ term in the wedge $\overset k{\vee} S^n$; - $\alpha$ is induced by fixed choice of homeomorphism $\overset k{\vee} S^{n+m}\cong S^m\wedge(\overset k{\vee} S^n)$, extended in the obvious way to include the $|Y'|$ and $|JX|_+$-factors. Taken together, these actions define an $H^n_k(|JX|)$-bimonoid structure on the space $Map(\overset k{\vee} S^n,S^{n+m}\wedge |JX|_+) \wedge Map(S^{n+m}, \overset k{\vee} S^{n+m} \wedge |Y'|\wedge |JX|_+)$. With this structure, the pairing map of Lemma \[lemma:connected\] (for $F=\overset k{\vee} S^m \wedge |JX|_+$ and $F' = |JX|_+ \wedge |Y'|$) becomes an $H^n_k(|JX|)$-bimonoid map. .2in Although the maps $\alpha$ and $\beta$ above depend on $k$ and $n$, they clearly can be chosen so as to be compatible with any particular fixed choice of stabilization in the $k$-coordinate and suspension in the $n$-coordinate. Consequently, all of the above maps in (\[eqn:2.2.4.a\]) and (\[eqn:2.2.4.b\]) can be done compatibly with respect to both stabilization in the $k$-coordinate and suspension in the $n$-coordinate. The equivalence $|\overline F_1 (X,S^m\wedge Y_+)\, |\overset\cong{\to} |JX|_+ \wedge S^m \wedge |Y_+| \wedge |JX|_+$ is compatible with suspension in the $m$-coordinate and so $\varphi^p_{m,n,k}$ in (\[eqn:2.2.2\]) is compatible with suspension in both the $n$- and $m$-coordinates. By much the same reasoning, the homeomorphisms appearing in (\[eqn:2.2.3\]) involve a choice of natural equivalences, which can be chosen so as to be compatible with respect to suspension in these coordinates. 
And with $\alpha_1, \beta_1$ and $\beta_2$ as in (\[eqn:2.2.3\]) it follows directly from (\[eqn:2.2.3\]) and (\[eqn:2.2.4.a\]) that $g^1_{m,n,k}(\alpha_1 ; \beta_1 \wedge \beta_2) = g^0_{m,n,k}((\alpha_1\beta_1)\wedge \beta_2) = g^0_{m,n,k}(\beta_1\wedge(\beta_2\alpha_1))$. Finally, by construction $\underset{k\ge 0}{\coprod} g^p_{m,n,k}$ maps wedge-sum to loop sum. Putting this all together, we get \[thm:2.2.5\] For each $n,m,k \ge 1, \{\varphi^p_{m,n,k}\}_{p\ge 0}, \{f^p_{m,n,k}\}_{p\ge 0}$ and $\{g^p_{m,n,k}\}_{p\ge 0}$ induce well-defined maps of simplicial spaces: $$\begin{gathered} N^{cy}(H^n_k(|JX|) , M^n_k(S^m\wedge |\overline F_1 (X,Y')|)) \\ \cong\updownarrow\varphi^\cdot_{m,n,k} \\ N^{cy}(H^n_k(|JX|) , M^n_k (S^m\wedge |JX|_+ \wedge|Y'|\wedge |JX|_+)) \\ \uparrow f^\cdot_{m,n,k} \\ N^{cy}(H^n_k(|JX|) , Map (\overset k{\vee} S^n, S^{n+m} \wedge |JX|_+) \wedge Map (S^{n+m},\,\overset k{\vee} S^{n+m} \wedge |Y'|\wedge |JX|_+))\\ \downarrow g^\cdot_{m,n,k} \\ Map (S^{n+m}, S^{n+3m} \wedge |JX|_+ \wedge |Y_+|)), \end{gathered}$$ where the simplicial structure on the range of $(g^\cdot_{m,n,k})$ is trivial, $Y' = S^m \wedge Y_+$, and $f^\cdot_{m,n,k}$ is $(3m-1)$-connected. These maps are compatible with suspension in the $m$ and $n$ coordinates, and stabilization in the $k$-coordinate. They are also natural with respect to $X$ and $Y$ (where $X$ is connected). The only point that has not already been covered is the statement concerning the connectivity of $f^\cdot_{m,n,k}$. But this follows by the realization lemma, as $f^p_{m,n,k}$ is $(3m-1)$-connected for each $p$. Now $g^\cdot$ takes wedge-sum to loop sum, as we have already noted, and thus factors via group completion with respect to wedge-sum. So passing to the limit in $m$ yields a map $T$: $$\begin{gathered} (D_1C_X)_*(Y_+) \\ =\underset{m}{\underset{\longrightarrow}{\lim}}\;\Omega^m(hofibre (C(X, S^m \wedge Y_+) \to C(X,*))) \\ \overset T{\rightarrow}\; \Omega^\infty\Sigma^\infty(\Sigma(\underset{q\ge 1}{\vee}\; |X^{[q-1]} \wedge Y_+ |)).\end{gathered}$$ Precomposing by the equivalence of Lemma \[lemma:4.2\] we get $$Tr_X(Y) :(D_1 A\Sigma)_X(Y_+) \to \Omega^\infty\Sigma^\infty(\Sigma(\underset{q\ge 1}{\vee} |X^{[q-1]} \wedge Y_+ |)).$$ (in the case $X = \{pt\}$ we recover the map constructed in [@w2]). This map is natural in both $X$ and $Y$. Taking the fibre with respect to the map $Y_+ \to \{pt\}$ yields (for basepointed $Y$) the (reduced) $$\label{eqn:gentrace} \overline{Tr}_X(Y) : (D_1 A\Sigma)_X(Y) \to \Omega^\infty\Sigma^\infty(\Sigma(\underset{q\ge 1}{\vee} |X^{[q-1]} \wedge Y|))$$ where on the right we have, for $q\ge 1$, composed with the (basepointed) projection $Y_+ \to Y$ . Finally, we can follow by projection to the $q$th factor $\Omega^\infty\Sigma^\infty(\Sigma|X^{[q-1]} \wedge Y|)$; this yields a map $$\overline{Tr}_X(Y)_q : (D_1 A\Sigma)_X(Y) \to \Omega^\infty\Sigma^\infty(\Sigma |X^{[q-1]} \wedge Y|).$$ For connected $X$, $Tr_X(Y) \cong \underset{q\ge 1}{\prod} Tr_X(Y)_q$ . .5in Computing the trace on ${\overline}\rho_q$ ------------------------------------------ By the results of section 1.3, there is a map $$\tilde\rho = \underset{q\ge 1}{\prod} \tilde\rho_q : \tilde D(X) = \underset{q\ge 1}{\prod} \tilde D_q(X) \to \overline A(\Sigma X)$$ defined for any connected simplicial set $X$. This map is natural in $X$, and is induced by the representations $\rho_q : |X|^q \to |H^n_k(|JX|)|$. The product that appears on the L.H.S. 
is the weak product; note, however, that as $X$ is connected, the weak product in this case is weakly equivalent to the strong product. Replacing $X$ by $X\vee Y$, we define $\rho_q(X,Y)$ as the restriction of $\rho_q$ to $|X|^{q-1} \times |Y| \subset |X\vee Y|^q$. This inclusion induces the inclusion $i_q(X,Y)$ of Proposition \[prop:rep1\] after passing to smash products. Let $\tilde i_q(X,Y) = \Omega^\infty\Sigma^\infty(\Sigma i_q(X,Y))$. Then the composition $$\begin{gathered} \tilde\rho_q(X,Y) : \Omega^\infty\Sigma^\infty(\Sigma |X^{[q-1]} \wedge Y|) \xrightarrow{\tilde i_q(X,Y)} \Omega^\infty\Sigma^\infty (\Sigma |X\vee Y|^{[q]}) \to \\ \Omega^\infty\Sigma^\infty (\Sigma (E\Bbb Z/q\underset{\Bbb Z/q}{\leftthreetimes} |(X\vee Y)^{[q]}|)) = \tilde D_q(X\vee Y) \overset{\tilde\rho_q}{\longrightarrow} \overline A(\Sigma(X\vee Y))\end{gathered}$$ can alternatively be described as the precomposition of $\Omega^\infty\Sigma^\infty(\Sigma\rho_q(X,Y))$ with the stable section $s : \Omega^\infty\Sigma^\infty (\Sigma |X^{[q-1]} \wedge Y|) \to \Omega^\infty\Sigma^\infty (\Sigma |X|^{q-1} \times |Y|)$, followed by the map into $ (\overline A\Sigma)(X\vee Y)$. Proposition \[prop:rep1\] tells us that the map $\tilde i_q(X,Y)$ induces an equivalence $\Omega^\infty\Sigma^\infty(\Sigma X^{[q-1]} \wedge Y|) \overset\simeq{\rightarrow} (D_1\tilde D_q)_X(Y)$, and Goodwillie’s results tell us that $\tilde\rho$ is an equivalence for connected spaces iff $(D_1\tilde\rho)_X(Y)$ is an equivalence for all connected $X$. They also tell us that $(D_1 A\Sigma)_X(Y)$ and $(D_1\tilde D)_X(Y)$ are the same for connected $X$. Thus the primary task is to show that $\overline {Tr}_X(Y) \circ (D_1\tilde\rho)_X(Y)$ is an equivalence for all connected $X$. For $p\ne q, \overline {Tr}_X (Y)_p \circ \tilde\rho_q (X,Y) \simeq *$. When $p = q$, $ \overline Tr_X (Y)_q \circ \tilde\rho_q (X,Y) \simeq (-1)^{q-1}$. These homotopies are canonical in $X$ and $Y$, and hold for all connected $X$ and $q\ge 1$. Thus $\overline Tr_X (Y) \circ (\underset{q\ge 1}{\prod} \tilde\rho_q (X,Y))$ is an equivalence for connected $X$, which implies $\overline {Tr}_X (Y) \circ (D_1\tilde\rho)_X(Y)$ is an equivalence for connected $X$. The last implication follows by Proposition \[prop:rep1\]. Our main objective is the evaluation of the trace map $\overline {Tr}_X(Y)$ on $\tilde\rho_q(X,Y)$, which we will do in stages. First, we determine what happens to the image of the representation $\rho_q(X,Y)$ under the maps constructed in Theorem \[thm:2.1.1\]. This will bring us into the cyclic bar construction. The maps provided by (\[eqn:2.2.2\]) – (\[eqn:2.2.4.b\]) will then determine the composition $\overline Tr_X(Y)$. .2in We will assume $Z' = X\vee Y$ where $X$ is connected and $Y$ is $m'$-connected. $\rho_q$ (resp.  its restriction $\rho_q (X,Y)$) is induced by a simplicial representation ${Z'}^q$ (resp. $ X^{q-1} \times Y$) $\to H^n_q(|JZ'|)$ which we also denote by $\rho_q$ . This map can be represented simplicially by a map of partial monoids: $$\{[p] \mapsto \overset p{\vee} ({Z'}^q,*)\} \overset{\{\overset p{\vee} \rho_q\}}{\longrightarrow} NH^n_q (|JZ'|).$$ We will construct five diagrams, one for each of the maps in the proof of Theorem \[thm:2.1.1\]. 
.2in The first map in Theorem \[thm:2.1.1\] was induced by the $(2m'+1)$-connected inclusion of partial monoids: $$\{[p] \mapsto \overset p{\vee} (H^n_q (|JZ'|), H^n_q (|JX|))\} \overset{\imath_1}{\longrightarrow} NH^n_q (|JZ'|)\;.$$ The generalized wedge on the left contains the image of $\{\overset p{\vee}\rho_q\}$ and hence $\{\overset p{\vee} \rho_q(X,Y)\}$. Thus $\{\overset p{\vee}\rho_q (X,Y)\} = \imath_1 \circ \overline\rho_{q,1}$, were $\overline\rho_{q,1}$ is a map of generalized wedges, induced in each degree by the representation $\rho_q(X,Y)$, and fits into the commutative diagram: $$\diagram \{[p] \mapsto \overset p{\vee} (X^{q-1} \times Y,*)\} \rto^(.56){\overset p{\vee}\rho_q(X,Y)} \ddouble & NH^n_q (|JZ'|) \\ \{[p] \mapsto \overset p{\vee} (X^{q-1} \times Y,*)\} \rto^(.44){\overline\rho_{q,1}} & \{[p] \mapsto \overset p{\vee} (H^n_q(|JZ'|), H^n_q (|JX|))\} \uto_{\imath_1} \\ \enddiagram$$ .2in The second map in Theorem \[thm:2.1.1\] is the $(2m'+1)$-connected map of generalized wedges induced by the $(2m'+1)$-connected inclusion $$\overline M^n_q (|F_1(X,Y)|_+) \to \overline M^n_q (|JZ'|_+) \cong H^n_q (|JZ'|).$$ As the image of $\rho_q$ is contained in $\overline M^n_q (|F_1(X,Y)|_+)$, we can further factor $\rho_q(X,Y)$ as $\imath_2 \circ \overline\rho_{q,2}$. $\overline\rho_{q,2}$ is defined exactly as $\overline\rho_{q,1}$ – it is the (unique) map of generalized wedges induced by $\rho_q(X,Y)$ which makes the following diagram commute: $$\diagram \{[p] \mapsto \overset p{\vee} (X^{q-1} \times Y,*)\} \rto^(.44){\overline\rho_{q,1}} \ddouble & \{[p] \mapsto \overset p{\vee}(\overline M^n_q (|JZ'|_+), H^n_q (|JX|))\} \\ \{[p] \mapsto \overset p{\vee} (X^{q-1} \times Y,*)\} \rto^(.40){\overline\rho_{q,2}} & \{[p] \mapsto \overset p{\vee}(\overline M^n_q (|F_1(X,Y)|_+), H^n_q (|JX|))\} \uto_{\imath_2} \\ \enddiagram$$ .2in The $(n-2)$-connected map $$\overline M^n_q (|F_1(X,Y)|_+)\overset {p_1 \times p_2}{\longrightarrow} H^n_q (|JX|)) \times M^n_q (|\overline F_1(X,Y)|)$$ induces the third map in Theorem \[thm:2.1.1\], where the projections $p_1,p_2$ are induced by the projections of $F_1(X,Y)$ to $JX$ and $\overline F_1(X,Y)$ respectively. Let $\overline\rho^i_q = p_i \circ \rho_q(X,Y)$ for $i = 1,2$. Then we have a commuting square $$\diagram \{[p] \mapsto \overset p{\vee} (X^{q-1} \times Y,*)\} \rto^(.41){\overline\rho_{q,2}} \ddouble & \{[p] \mapsto \overset p{\vee}(\overline M^n_q (|F_1(X,Y)|_+), H^n_q (|JX|))\} \dto \\ \{[p] \mapsto \overset p{\vee} (X^{q-1} \times Y,*)\} \rto^(.32){\overline\rho_{q,3}} & \{[p] \mapsto \overset p{\vee}(\overline M^n_q(|JX|_+) \ltimes M^n_q (|\overline F_1(X,Y)|), H^n_q(|JX|))\} \enddiagram$$ where $\overline\rho_{q,3}$ is induced in each degree by the product $\overline\rho^1_q \times \overline\rho^2_q$. .2in This is the first place where one encounters complications in computing the trace map on arbitrary representations. From equation (\[eqn:1.1.2\]) we can see the problem – when $M$ is not a group but only grouplike there may be no simple way to choose $f^{-1}$ for $f \in M$, which one needs to do in order to formally invert the equivalence $u : diag(N^{cy}(M,NE)) \overset\simeq{\longrightarrow} N(M\ltimes E)$. In our case by first reducing the representation under consideration to $\rho_q(X,Y)$ we are able to circumvent this difficulty. 
For by Proposition \[prop:canon2\], $\overline\rho^1_q$ and $\overline\rho^2_q$ are canonically homotopic to a product of elementary expansions: $$\begin{gathered} \overline\rho^1_q(x_1,\dots,x_{q-1})\cong e_{q-1q}(\imath(x_{q-1})) e_{q-2q-1}(\imath(x_{q-2}))\cdot \dots \cdot e_{12}(\imath(x_1))\label{eqn:2.3.5.1} \\ \overline\rho^2_q(y) \cong \overline e_{q1}(\imath(y)) \text{ (the \underbar{reduced} expansion with $(q,1)$ entry $\imath(y)$)}\label{eqn:2.3.5.2}\end{gathered}$$ where $\imath(X)$ denotes the image of $x \in |X|$ in $|JX|$ under the natural inclusion $X \to JX$, and similarly for $Y$ (for notational simplicity, we have used $|\overline\rho^1_q|$ and $|\overline\rho^2_q|$ and $|Y|$ respectively. To recover $\overline\rho^1_q$ and $\overline\rho^2_q$ as above one applies Sing $(_-)$ and precomposes with the map $A \to Sing(|A|)$). The notation is explained in section 2.3. For such a product of elementary expansions Proposition \[prop:inv\] yields a canonical homotopy between $f^{-1}f, *$ and $ff^{-1}$ where $f^{-1} = e_{12}(-\imath(x_1)) e_{23}(-\imath(x_2))\cdot \dots \cdot e_{q-1q}(-\imath(x_{q-1}))$ for $f =\overline\rho^1_q(x_1,\dots,x_{q-1})$ as above. We can define a map $$\begin{aligned} |\overline\rho^1_{q,4}| : |X|^{q-1} \times |Y| \to &|N^{cy}_1 (H^n_q(|JX|), M^n_q(|\overline F_1 (X,Y)|))| \\ &= H^n_q(|JX|) \times M^n_q(|\overline F_1 (X,Y)|)|\end{aligned}$$ by $(x_1,\dots, x_{q-1},y) \mapsto (f,f^{-1} ef^{-1})$ where $f = \overline\rho^1_q(x_1,\dots, x_{q-1}),e = \overline e_{q1}(y)$ as given above in (\[eqn:2.3.5.1\]), (\[eqn:2.3.5.2\]). Extending degreewise yields a map $\overline\rho_{q,4}$ and a canonically homotopy-commutative diagram $$\diagram \{[p] \mapsto \overset p{\vee} (X^{q-1} \times Y,*)\} \rto^(.32){\overline\rho_{q,3}} \ddouble & \{[p] \mapsto \overset p{\vee}(\overline M^n_q(|JX|_+) \ltimes M^n_q (|\overline F_1(X,Y)|), H^n_q(|JX|))\} \\ \{[p] \mapsto \overset p{\vee} (X^{q-1} \times Y,*)\} \rto^(.37){\overline\rho_{q,4}} & diag(N^{cy} (H^n_q(|JX|),\Sigma . M^n_q (|\overline F_1(X,Y)|))) \uto_{\simeq} \enddiagram$$ .1in where the equivalence on the right hand side is the map $u$ defined in (\[eqn:1.1.2\]), and the bottom map has been modified by the homotopy of (\[eqn:1.3.16\]) applied to the map in (\[eqn:2.3.5.1\]) to make it basepoint-preserving. Recall that $\Sigma. A$ is shorthand notation for $\{[p] \mapsto \overset p{\vee}(A,*)\}$. The fact that the diagram is canonically homotopy-commutative is important. Note also that $\overline\rho_{q,4}$ is given on one-simplices by $\overline\rho^1_{q,4}$. .2in In Theorem \[thm:2.1.1\] the fifth map is induced by partial geometric realization $$r: \Sigma . M^n_q (|\overline F_1(X,Y)|)\to S^1 \wedge M^n_q (|\overline F_1(X,Y)|)$$ and the pairing $$p : S^1 \wedge M^n_q (|\overline F_1(X,Y)|) \to M^n_q (S^1 \wedge |\overline F_1(X,Y)|) \overset\cong{\longrightarrow} M^n_q (|\overline F_1(X,S^1 \wedge Y)|).$$ Let $M^n_q (|\overline F_1(X,\Sigma. Y)|)$ denote the simplicial object $\{[p] \mapsto M^n_q (|\overline F_1(X,\overset p{\vee} (Y,*))|)\}$ where the face and degeneracy maps are induced by those of $\Sigma . Y$. There is an obvious map of simplicial objects $$\Sigma . M^n_q(|\overline F_1(X,Y)|) \hookrightarrow M^n_q(|\overline F_1(X,\Sigma . Y)|)$$ which in degree $p$ is given by the inclusion $$\overset p{\vee} M^n_q(|\overline F_1(X,Y)|) \hookrightarrow M^n_q(|\overline F_1(X,\overset p{\vee} Y)|).$$ The partial realization map r sends $M^n_q(|\overline F_1(X,\Sigma . 
Y)|)$ to $M^n_q(|\overline F(X,S^1 \wedge Y)|)$ and the composition $$\Sigma .M^n_q(|\overline F_1(X,Y)|) \to M^n_q(|\overline F_1(X,\Sigma . Y)|) \overset r{\rightarrow} M^n_q(|\overline F_1(X,S^1 \wedge Y)|)$$ is equivalent to the previous composition of partial realization followed by the pairing $p$. Note that the partial realization map above is $(n-2)$-connected by the same type of argument used in the construction of the third map in Theorem \[thm:2.1.1\]. Now the map $$\Sigma . M^n_q(|\overline F_1(X,Y)|) \overset\alpha{\rightarrow} M^n_q(|\overline F_1(X,\Sigma . Y)|)$$ is an $H^n_q(|JX|)$-bimonoid map, and so induces a bisimplicial map: $$N^{cy} (H^n_q(|JX|),\Sigma . M^n_q (|\overline F_1(X,Y)|))\overset\beta{\rightarrow} N^{cy}(H^n_q(|JX|), M^n_q (|\overline F_1(X,\Sigma . Y)|)).$$ Let $N^{cy}_p (M,NE)$ denote the simplicial object $$\{[k] \mapsto N^{cy}_{p,k} (M,NE) = M^p \times (NE)_k\}.$$ Then the representation $$\begin{gathered} \rho^1_{q,4} : X^{q-1} \times Y \to H^n_q(|JX|) \times M^n_q(|\overline F_1 (X,Y)|) = N^{cy}_{1,1} (H^n_q(|JX|), \Sigma . M^n_q(|\overline F_1 (X,Y)|)) \\ \overset\beta{\rightarrow} N^{cy}_{1,1} (H^n_q(|JX|), M^n_q (|\overline F_1(X,\Sigma . Y)|))\end{gathered}$$ extends uniquely to a map of simplicial objects: $$\overline\rho_{q,5} : X^{q-1} \times \Sigma . Y \to N^{cy}_1 (H^n_q(|JX|), M^n_q(|\overline F_1(X,\Sigma . Y)|)).$$ It is not true that there is a map $\Sigma . (X^{q-1} \times Y) \to X^{q-1} \times \Sigma . Y$ of simplicial objects which makes the appropriate diagram commute (here $X^{q-1} \times \Sigma . Y$ is the simplicial object $\{[p] \mapsto X^{q-1} \times (\overset p{\vee}(Y,*))\}$). However there is after passing to smash products. Specifically, we may choose stable splittings $i_1,i_2$ for which $p_1 \circ i_1 \simeq p_2 \circ i_2 \simeq id$ by a homotopy functorial in $X$ and $Y$. These produce the square $$\diagram \Omega^{\infty} \Sigma^{\infty} (\Sigma |X^{[q-1]} \wedge Y|) \rto<1ex>^{i_1} \ddto_{\cong}|<{\rotate\tip} & \Omega^{\infty} \Sigma^{\infty} (\Sigma(|X|^{q-1}\times Y|)) \lto<1ex>^{p_1} \dto^{\tilde\rho_{q,4}} \\ & \Omega^{\infty} \Sigma^{\infty}(|diagN^{cy} (H^n_q(|JX|),\Sigma . M^n_q(|\overline F_1 (X,Y)|))|) \ddtor^{\tilde\beta} \\ \Omega^{\infty} \Sigma^{\infty} (|X|^{[q-1]} \wedge \Sigma |Y|) \rto<1ex>^{i_2} & \Omega^{\infty} \Sigma^{\infty} (|X^{q-1} \times \Sigma . Y|) \lto<1ex>^{p_2}\dto^{\tilde\rho_{q,5}} \\ & \Omega^{\infty} \Sigma^{\infty} (|N^{cy}(H^n_q(|JX|), M^n_q(|\overline F_1 (X,\Sigma . Y)|))|) \enddiagram$$ where $\tilde\rho_{q,j} = \Omega^\infty\Sigma^\infty |\overline\rho_{q,j}|$ for $j= 4,5$ and $\overline\beta$ is induced by $(\beta)$. By the construction of $\tilde\rho_{q,4}$ and $\tilde\rho_{q,5}$ it is straightforward to see that the diagram is canonically homotopy-commutative. Note that the space appearing in the lower right-hand corner is $(n-2)$-equivalent to $\Omega^\infty\Sigma^\infty(|N^{cy} (H^n_q(|JX|),M^n_q(|\overline F_1(X, S^1 \wedge Y)|)|)$ . This is our fifth diagram. .2in Before evaluating the trace we make a useful simplification. In order to be consistent with notation, we will assume $Y = \Sigma^{2m-1}Z_+$ and use $\Sigma^{2m} F$ to denote $\Sigma|\overline F_1(X,Y)|$. There is no loss of generality here, because computation of $\overline {Tr}_X(Y)$ involves passing through a direct limit in which $Y$ becomes more and more highly suspended. Now we know that the partial realization map $$r : N^{cy}(H^n_q(|JX|), M^n_q(|\overline F_1(X,\Sigma . 
Y)|)) \to N^{cy} (H^n_q(|JX|), M^n_q(|\overline F_1(X, S^1\wedge Y)|))$$ commutes with the simplicial structure in the first coordinate (given by the face and degeneracy maps of the cyclic bar construction), and that it maps the simplicial space $$\{[k] \mapsto N^{cy}_{p}(H^n_q(|JX|), M^n_q(|\overline F_1(X,\{\Sigma. Y)\}_k|))\}$$ to the space $N^{cy}_p (H^n_q(|JX|), M^n_q(|\overline F_1(X, S^1 \wedge Y)|))$. It follows by Theorem \[thm:2.2.5\] that, upon restricting to the $q$th component of the trace map $\overline {Tr}_X(Y)$, we have a canonically homotopy commutative diagram $$\spreaddiagramrows{-1.0pc} \spreaddiagramcolumns{-1.0pc} \def\objectstyle{\ssize} \def\labelstyle{\sssize} \diagram & & X^{q-1}\times (\Sigma . Y) \ddto^{\rho_{q,5}} \\ M^n_q(|\overline F_1(X,\Sigma . Y)|) \ddto_{r} \rto^(.37){\cong} & N^{cy}_0 (H^n_q(|JX|),M^n_q(|\overline F_1(X, \Sigma . Y)|)) \ddto_{r} & \\ & & N^{cy}_1(H^n_q(|JX|),M^n_q(|\overline F_1(X, \Sigma . Y)|)) \ulto_{\partial_0} \ddto^{r}\\ M^n_q(\overline F_1(X,S^1 \wedge Y)|) \rto^(.37){\cong} & N^{cy}_0(H^n_q(|JX|),M^n_q(\overline F_1(X,S^1 \wedge Y)|)) & \\ & & N^{cy}_1 (H^n_q(|JX|),M^n_q(|\overline F_1(X, S^1\wedge Y)|)) \ulto_{\partial_0} \\ N^{cy}_0 (H^n_q(|JX|),M^n_{q,1} \wedge M^n_{q,2}) \uuto^{\varphi^0_{m,n,q}\circ f^0_{m,n,q}} \ddrto_{\overline\pi_p \circ g^0_{m,n,q}} & & \\ & & N^{cy}_1 (H^n_q(|JX|),M^n_{q,1} \wedge M^n_{q,2}) \uuto_{\varphi^1_{m,n,q}\circ f^1_{m,n,q}} \ullto_{\partial_0} \dlto_{\overline\pi_p \circ g^1_{m,n,q}} \\ & \Omega^{n+m}\Sigma^{n+3m} (|X^{[p-1]} \wedge Z_+|) & \enddiagram$$ where $$\begin{gathered} M^n_{q,1} = Map (\overset q{\vee} S^n,S^{n+m}\wedge |JX|_+\wedge |Z_+|) \\ M^n_{q,2} = Map (S^{n+m}, \overset q{\vee} S^{n+2m}\wedge |JX|_+)\;, \end{gathered}$$ and $\overline\pi_p$ is the obvious reduced projection onto the $p$th component $$\Omega^{n+m}\Sigma^{n+3m} (|X^{[p-1]} \wedge Z_+|).$$ Our object now is to find a map $\rho_{q,6}$ defined on $X^{q-1}\times (\Sigma . Y)$ or its realization, whose range is $N^{cy}_0 (H^n_q(|JX|),M^n_{q,1} \wedge M^n_{q,2})$ such that $\varphi^0_{m,n,q}\circ f^0_{m,n,q} \circ \rho_{q,6}$ is canonically homotopic to $r\circ \partial_0 \circ \overline\rho_{q,5}$ (of course, it would suffice to lift $r\circ \overline\rho_{q,5}$ directly without using $\partial_0$, and in fact such a lifting can be written down explicitly. However, it is much simpler to do this after mapping first by $\partial_0$; this makes the computation of $\overline\pi_p \circ g^0_{m,n,q}$ easier as well). Now $\overline\rho_{q,5}$ is the unique extension to $X^{q-1} \times (\Sigma . Y)$ of the representation $\overline\rho^1_{q,4}$ on 1-simplices $X^{q-1} \times Y$ given by $$\begin{gathered} (x_1,\dots,x_{q-1},y) \mapsto (f,f^{-1}ef^{-1});\\ f =\overline\rho^1_q(x_1,\dots,x_{q-1}),\, e=\overline\rho^2_q(y)\;. \end{gathered}$$ These are in turn expressed as a product of elementary expansions by (\[eqn:2.3.5.1\]), (\[eqn:2.3.5.2\]). Under $\partial_0$ this element maps to $(f^{-1}ef^{-1}\cdot f)$ which is canonically homotopic to $(f^{-1}e)$. It follows that we can describe $r \circ \partial_0 \circ \overline\rho_{q,5}$ on the realization of $X^{q-1} \times (\Sigma . Y)$ as the map of spaces given by the representation $$\begin{gathered} \rho^1_{q,6} : |X^{q-1} \times (\Sigma . Y)| \to |M^n_q(|\overline F_1(X, \Sigma . 
Y)|)| \\ (x_1,\dots,x_{q-1},\tilde y) \mapsto (f^{-1}\tilde e);\quad f =\overline\rho^1_q(x_1,\dots,x_{q-1}), \tilde e = \overline e_{q1}(\imath (\tilde y))\end{gathered}$$ where $f$ is now viewed as a product of elementary expansions whose range is $|M^n_q(|\overline F_1(X, \Sigma . Y)|)|$. Note that $\tilde y$ denotes an element of $|\Sigma . Y|$. Writing $f^{-1}$ as $e_{12}(-\imath(x_1)) e_{23}(-\imath(x_2))\cdot\dots\cdot e_{(q-1)q}(-\imath(x_{q-1}))$ and applying Proposition \[prop:canon1\] yields a canonical homotopy between $\overline\rho^1_{q,6}$ and the representation $$\begin{gathered} \overline\rho^2_{q,6} : |X^{q-1} \times (\Sigma . Y)| \to |M^n_q(|\overline F_1(X, \Sigma . Y)|)| \\ (x_1,\dots,x_{q-1},\tilde y) \mapsto \overline e_{11}(z_1) + \overline e_{21}(z_2) + \dots + \overline e_{q1}(z_q) \end{gathered}$$ where $z_i = (\overset{q-1}{\underset{j=i}{\prod}} - \imath(x_j)) \imath(\tilde y) \in \pm (|\overline F_1(X, \Sigma .Y)|) = \pm \Sigma^{2m} F$ and “-” denotes the inverse under loop sum. We can write $\tilde y \in \Sigma|Y| \cong S^m \wedge |Z|_+ \wedge S^m$ as $\tilde y = (s_1,z,s_2)$. Define $\overline\rho^3_{q,6}$ by $$\begin{gathered} \overline\rho^3_{q,6} : |X^{q-1} \times (S^m \wedge |Z|_+ \wedge S^m)| \to |M^n_{q,1} \wedge M^n_{q,2}|\nonumber\\ (x_1,\dots,x_{q-1},s_1,z,s_2) \mapsto (\overline e_{11}(z'_1) + \dots + \overline e_{q1}(z'_q)) \wedge \imath^1(s_2).\label{eqn:2.3.11}\end{gathered}$$ Here $M^n_{q,i}$ is as in the last diagram. The map $\imath^1(s_2)$ is represented by the composition $$\begin{gathered} S^{n+m} \to S^{n+m} \wedge S^m \cong S^{n+2m}\wedge S^0 \overset{\text{inc}}{\hookrightarrow} (S^{n+2m} \wedge |JX|_+)_1 \\ \overset{\text{inc}}{\hookrightarrow} \overset q{\vee} S^{n+2m} \wedge |JX|_+ \\ s \mapsto (s,s_2)\end{gathered}$$ $z'_i = (\overset{q-1}{\underset{j=i}{\prod}} - \imath(x_j)) \imath(s_1,z) \in \pm|JX|_+ \wedge S^{n+m} \wedge |Z|_+ \cong \pm S^{n+m} \wedge |JX|_+ \wedge |Z|_+$, and the product of reduced elementary expansions in (\[eqn:2.3.11\]) is viewed as an element of $M^n_{q,1}$. It is straightforward to verify that the diagram $$\diagram |X^{q-1} \times (S^m \wedge |Z|_+ \wedge S^m)|\rto^(.62){\overline\rho^3_{q,6}} \dto_{\simeq} & |M^n_{q,1} \wedge M^n_{q,2}| \dto^{|\varphi^0_{m,n,q}\circ f^0_{m,n,q}|} \\ |X^{q-1} \times \Sigma . Y|\rto^(.45){\overline\rho^2_{q,6}} & |M^n_q(|\overline F_1(X, \Sigma . Y)|)| \enddiagram$$ is canonically homotopy-commutative. So taking $\overline\rho_{q,6}$ to be $\overline\rho^3_{q,6}$ provides the necessary lift in order to evaluate $\overline{Tr}_X(Y)$. This evaluation is achieved, according to Theorem \[thm:2.2.5\], by switching the terms in (\[eqn:2.3.11\]) and composing.
Since $\imath^1$ just involves the standard inclusion to the first factor in the wedge, we get $$(\overline e_{11}(z'_1)\cdot\dots\cdot \overline e_{q1}(z'_q)) \circ \imath^1(s_2) = \overline e_{11}(z'_1)\circ \imath^1(s_2)$$ which implies that $\overline\pi_p \circ g^0_{m,n,q} \circ \overline\rho^3_{q,6}$ is canonically null-homotopic for $p > q$, and that $\overline\pi_q\circ g^0_{m,n,q} \circ \overline\rho^3_{q,6}$ is the map $$(|X^{q-1}| \wedge S^m \wedge |Z|_+ \wedge S^m) \to \Omega^{n+m} \Sigma^{n+3m} (|X|^{[q-1]} \wedge |Z|_+)$$ given by $$(x_1,\dots,x_{q-1},s_1,z,s_2) \mapsto (\overset{q-1}{\underset{j=1}{\prod}} - \imath (x_i), s_1,\imath(z),s_2).$$ Up to reparametrization independent of $X$ and $Z$ this composition is canonically homotopic to $(-1)^{q-1} j_{2m}$, where $j_{2m}$ is the standard inclusion $$\Sigma^{2m}|X|^{[q-1]}\wedge |Z|_+ \to \Omega^{n+m} \Sigma^{n+3m} (|X|^{[q-1]} \wedge |Z|_+).$$ To complete the proof, note that $\overline {Tr}_X (Y)_p\circ\tilde\rho_q (X,Y)$ is induced by a natural transformation of a homogeneous functor of degree $q$ to a homogeneous functor of degree $p$ (evaluated at $X$), which must be canonically null-homotopic for $q > p$ as it factors through the $\text{p}^{\text{th}}$ differential of a $q$-homogeneous functor. We may now complete the proof of Theorem A. By the previous theorem, the map $$\tilde \rho : \tilde D(X) \to \overline A(\Sigma X)$$ induces a map on first differentials $$\label{eqn:2.3.14} (D_1 \tilde \rho)_X (Y) : (D_1 \tilde D)_X (Y) \to (D_1 \overline A\Sigma)_X (Y)$$ which is split-injective on homotopy groups for all $X$ and $Y$. In the case that both $X$ and $Y$ are finite complexes, the homotopy groups of both sides of (\[eqn:2.3.14\]) are finitely generated; for the right hand side this follows by Theorem \[thm:WG\]. This implies that $(D_1 \tilde \rho)_X (Y)$ is an equivalence for all finite $X$ and $Y$. As both functors are also homotopy functors (hence commute up to homotopy with filtered colimits), this implies $(D_1 \tilde \rho)_X (Y)$ is an equivalence for all $Y$ and connected $X$. Finally, applying Goodwillie’s Theorem \[thm:conv\] at $X = *$ implies that $\tilde \rho$ is an equivalence.

The equivalence $\tilde D(X) \overset\rho{\rightarrow} \overline A(\Sigma X)$ is natural with respect to $X$, so that if $f : X\to Y$ is a map of connected simplicial sets there is a homotopy-commutative diagram $$\diagram \tilde D(X)\rto^{\tilde\rho_X}\dto_{\tilde D(f)} &\overline A(\Sigma X)\dto^{\overline A(\Sigma f)} \\ \tilde D(Y)\rto^{\tilde\rho_Y} & \overline A(\Sigma Y) \enddiagram$$ It also follows that $\rho$ restricts to yield equivalences $$\overset n{\underset{q=m+1}{\prod}} \tilde D_q(X) = p^m_n\tilde D(X) \overset{p^m_n\tilde\rho(X)}{\longrightarrow} p^m_n\overline A(\Sigma X)$$ natural in $X$ for all $0\le m < n \le \infty$, because $\tilde\rho$ is a natural transformation of homotopy functors and hence commutes with Goodwillie Calculus. However, it is not true that $\tilde\rho$ or $p^m_n\tilde\rho$ are natural with respect to maps $\Sigma X \overset g{\rightarrow} \Sigma Y$ which do not desuspend up to homotopy.
---
abstract: 'We analyze Faraday rotation and depolarization of extragalactic radio point sources in the direction of the inner Galactic plane to determine the outer scale and amplitude of the rotation measure power spectrum. Structure functions of rotation measure show lower amplitudes than expected when extrapolating electron density fluctuations to large scales assuming a Kolmogorov spectral index. This implies an outer scale of those fluctuations on the order of a parsec, much smaller than commonly assumed. Analysis of partial depolarization of point sources independently indicates a small outer scale of a Kolmogorov power spectrum. In the Galaxy’s spiral arms, no rotation measure fluctuations on scales above a few parsecs are measured. In the interarm regions fluctuations on larger scales than in spiral arms are present, and show power law behavior with a shallow spectrum. These results suggest that in the spiral arms stellar sources such as stellar winds or protostellar outflows dominate the energy injection for the turbulent energy cascade on parsec scales, while in the interarm regions supernova and superbubble explosions are the main sources of energy on scales on the order of 100 parsecs.'
author:
- 'M. Haverkorn[^1] [^2], J. C. Brown[^3], B. M. Gaensler[^4] [^5], N. M. McClure-Griffiths[^6]'
title: 'The outer scale of turbulence in the magneto-ionized Galactic interstellar medium'
---

Introduction
============

Turbulence in the ionized phase of the interstellar medium (ISM) of the Milky Way is well described on small scales, while its properties on larger scales are more uncertain. On scales smaller than $\sim10^{11}$ m ($\sim 10^{-5}$ pc), the turbulence in the ionized medium is well characterized from diffractive and dispersive processes in the ISM influencing pulsar signals. The comprehensive study by @ars95 showed that on scales of $10^5$ to $10^{13}$ m ($\sim 10^{-11}$ pc to $10^{-3}$ pc) the power spectrum of electron density $n_e$ is well described by a power law with a spectral index consistent with the Kolmogorov spectral index $\alpha=5/3$ [@k41]. Most observed electron density power spectra are compatible with a Kolmogorov power spectrum [e.g. @sg90; @ssh00; @wmj05; @yhc07], although other spectral indices have been reported [e.g. @lkm01; @sss03].

On larger scales the slope and extent of the electron density power spectrum are much more uncertain. @ars95 include measurements of other tracers of interstellar density and of Faraday rotation measures, for which uncertain assumptions about the correlation between those tracers and $n_e$, and about magnetic fields, respectively, needed to be made. Although these results suggest a power law that connects to the Kolmogorov power spectrum on small scales, these assumptions make the behavior of the power spectrum at larger scales somewhat speculative.

Fluctuations in the magneto-ionized medium on parsec scales have been measured using structure functions[^7] of rotation measures (RMs) of extragalactic sources. @sc86 showed that the structure function of high-latitude sources is flat, indicating that fluctuations in RM only exist on size scales smaller than the scales they probe, viz. about 3$^{\circ}$. This means that there is no contribution from large-scale fluctuations from the Milky Way in the RM (although a constant RM from the Milky Way is not ruled out), and that it is most likely the RM contribution intrinsic to the sources that dominates. This was confirmed by @l87.
RM structure functions at lower latitudes, however, are consistent with a power law, although the slope tends to be shallower than a Kolmogorov slope [@sc86; @ccs92]. Other observations in or near the Galactic plane [@sh04; @hkb03; @hgb06] also find shallow slopes for RM structure functions. Measurements of the outer scale of fluctuations, i.e. the scale at which a structure function saturates, differ considerably across the Galaxy. @ls90 used the autocorrelation function of synchrotron radiation to derive a typical scale of 90 pc at a distance of 1 kpc in a region near the north Galactic pole. @hgb06 studied sources in the Galactic plane and found that the outer scale of fluctuations is smaller than about 10 pc for the spiral arms, while it is roughly 100 pc for the interarm regions. A similar outer scale of $\sim90$ pc in the magneto-ionized medium is found in the Large Magellanic Cloud using Faraday rotation [@ghs05], whereas H [i]{} measurements indicate a much larger outer scale of a few kpc in the Large Magellanic Cloud [@eks01], Small Magellanic Cloud [@ssd99], and in external galaxies [@wca99; @eel03]. @hfm04 estimated a magnetic energy spectrum with a slope of $-0.37$ up to scales of 15 kpc. They suggest that a magnetic energy spectrum which is flatter than Kolmogorov on scales larger than the injection scale of $10-100$ pc is dictated by magnetic helicity inversely cascading up from the injection scale to larger scales. However, the pulsar rotation measure and dispersion measure data they use for the power law fit has a scatter of several orders of magnitude, making the resulting spectrum uncertain.

Stellar sources of energy input are expected to dominate the turbulent driving in the Milky Way, except in the outskirts of the Galaxy where star formation is low and gravitational sources and instabilities such as the magneto-rotational instability [@bh91; @hb91] come into play [@sb99]. The stellar sources include supernovae, superbubbles, stellar winds, protostellar outflows and H [ii]{} regions. @nf96 calculate a broadband source function mostly dominated by supernovae, which is confirmed by @mk04. The @nf96 source function shows a contribution of superbubbles to the turbulent driving on scales above $\sim$ 100 pc to about a kpc. Turbulent driving on these scales is not observed, possibly because of the finite extent of the Galaxy. So a maximum driving scale of about 100 pc due to supernovae is often implicitly assumed, although it is not unlikely that these sources inject energy into the medium (also) on smaller scales.

What, if any, is the relation of the electron density and magnetic field fluctuations at these large scales to the Kolmogorov-like spectrum below $10^{13}$ m? Can both sets of observations be reconciled with one spectrum from kilometer to parsec scales? What are the characteristics of the fluctuations in the ionized ISM on larger scales? We address these questions in this paper, using RM structure functions from extragalactic sources behind the inner Galactic plane. These data are discussed in Section \[s:data\]. We determine the outer scale of fluctuations using two independent methods: (1) the amplitude and slope of the structure function indicate an uncommonly small outer scale of Kolmogorov turbulence in the magneto-ionized ISM, as we will explain in Section \[s:sf\], and (2) the same conclusions can be drawn from analysis of depolarization of extragalactic point sources by the Galactic ISM, as shown in Section \[s:depol\].
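Throughout the analysis, the second-order structure function defined in the footnote is estimated by averaging squared RM differences of source pairs in bins of angular separation. The routine below is only an illustrative sketch of that estimator (array names and bin edges are arbitrary, and a small-angle approximation is used for the separations); it is not the survey pipeline.

```python
import numpy as np

def rm_structure_function(lon, lat, rm, bin_edges):
    """Binned second-order structure function D_RM(dtheta) from
    source positions (degrees) and rotation measures (rad m^-2)."""
    n = len(rm)
    i, j = np.triu_indices(n, k=1)                # all source pairs
    # small-angle separation in degrees (adequate for a few-degree field)
    dlon = (lon[i] - lon[j]) * np.cos(np.radians(0.5 * (lat[i] + lat[j])))
    dtheta = np.hypot(dlon, lat[i] - lat[j])
    drm2 = (rm[i] - rm[j]) ** 2
    which = np.digitize(dtheta, bin_edges) - 1
    d_rm = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        sel = which == b
        if sel.any():
            d_rm[b] = drm2[sel].mean()            # <[RM(t) - RM(t+dt)]^2>
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    return centers, d_rm
```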
Section \[s:steep\] gives arguments for a steeper (Kolmogorov) power spectrum on small scales, and a discussion of the results can be found in Section \[s:disc\]. Section \[s:sum\] provides a summary and conclusions.

Data analysis of polarized extragalactic point sources {#s:data}
======================================================

The data used are from the Southern Galactic Plane Survey [SGPS, @mdg05; @hgm06], a neutral hydrogen and full-polarization 1.4 GHz continuum survey of the Galactic plane. The continuum part spans an area of $253^{\circ} < l < 357^{\circ}$ and $|b| < 1.5^{\circ}$ and contains 148 polarized sources of which the rotation measure is measured unambiguously [@bhg07]. The data were obtained with the Australia Telescope Compact Array (ATCA) and are publicly available[^8]. For more details on the data reduction see @bhg07.

Two corrections have been applied to our sample of extragalactic sources. Firstly, structure functions are sensitive to large-scale gradients in electron density across the field of view and include a geometrical component due to the change in direction of the regular magnetic field [see e.g. @bhg07]. As a first order correction, we approximate this with a 2D linear gradient in RM, and subtract this from the region over which a structure function is computed. Furthermore, lines of sight through discrete structures like H [ii]{} regions and supernova remnants can have deceptively large RMs due to an increased electron density and possibly magnetic field within these localized structures [@mwk03]. Therefore, we have used the total intensity 1.4 GHz radio data from the ATCA combined with Parkes single-dish data as well as H$\alpha$ maps [@f03] to discard 27 extragalactic sources with a sight line passing through a visible supernova remnant or H [ii]{} region. These sources are listed in Table \[t:flag\]. We recognize that omitting RMs through discrete high density regions may introduce a bias in the structure function, viz. decrease the structure function amplitude on large scales. However, the results are very similar to results without discarding extreme RMs [cf. @hgb06], indicating that the bias, if present, is low.

| $l$ [$^{\circ}$] | $b$ [$^{\circ}$] | RM [rad m$^{-2}$] | map | $l$ [$^{\circ}$] | $b$ [$^{\circ}$] | RM [rad m$^{-2}$] | map | $l$ [$^{\circ}$] | $b$ [$^{\circ}$] | RM [rad m$^{-2}$] | map |
|--------|-------|---------|-----------|--------|-------|---------|-----------|--------|-------|---------|-----------|
| 355.42 | -0.81 | 600.53  | H$\alpha$ | 308.93 | 0.40  | -752.10 | I         | 267.03 | 0.04  | 298.38  | I         |
| 351.82 | 0.17  | 134.43  | I         | 308.73 | 0.07  | -661.47 | I         | 263.22 | 1.08  | 826.49  | H$\alpha$ |
| 337.19 | 0.02  | 56.25   | I         | 299.42 | -0.23 | 534.50  | I         | 263.20 | 1.07  | 739.48  | H$\alpha$ |
| 337.06 | 0.85  | -738.88 | I         | 295.29 | -1.23 | -43.36  | H$\alpha$ | 260.69 | -0.23 | 203.78  | H$\alpha$ |
| 333.72 | -0.27 | 204.08  | I         | 295.23 | -1.05 | -206.71 | H$\alpha$ | 260.52 | -0.55 | 247.36  | H$\alpha$ |
| 332.14 | 1.03  | -754.43 | I         | 294.38 | -0.75 | 470.04  | H$\alpha$ | 260.41 | -0.43 | 221.22  | H$\alpha$ |
| 329.48 | 0.22  | -100.23 | I         | 294.29 | -0.90 | 449.20  | H$\alpha$ | 259.78 | 1.22  | 250.40  | H$\alpha$ |
| 312.37 | -0.03 | -438.49 | I         | 288.27 | -0.70 | 491.08  | H$\alpha$ | 254.16 | -0.34 | -337.84 | H$\alpha$ |
| 309.06 | 0.84  | -504.17 | I         | 282.07 | -0.78 | 861.69  | I         | 253.68 | -0.60 | -348.90 | H$\alpha$ |

  : Sources discarded from the sample (Table \[t:flag\]): Galactic coordinates, RM, and the map (total intensity I or H$\alpha$) in which the intervening structure was identified.

The SGPS data probe the inner Galaxy, which includes a number of spiral arms. Consequently, these data are well-suited to study differences in the structure in the ISM in spiral arms and in interarm regions. We constructed second-order structure functions of RM, $D_{RM}(\delta\theta) = \left< [\mbox{RM}(\theta) - \mbox{RM}(\theta+\delta\theta)]^2 \right>_{\theta}$, for different lines of sight in ’spiral arms’, i.e.
lines of sight primarily through spiral arms, and ’interarm regions’, sight lines mostly through interarm regions, estimated from the spiral arm positions in @cl02. The lines of sight used to separate spiral arms and interarm regions are shown in Fig. \[f:gal\]. The error bars denote errors propagated from uncertainties in the RM values. For lines of sight at high longitudes close to the Galactic center it is not possible to distinguish between ‘spiral arm lines of sight’ and ’interarm lines of sight’, because spiral arms start running perpendicular to the line of sight. Therefore, we do not use data at $l > 326^{\circ}$ in this analysis. Figure \[f:sf\] shows $D_{RM}$ for spiral arms and interarm regions. The figure is similar to Fig. 1 in @hgm06 except here we have used the finalized RM list in @bhg07, and discarded the 27 sources in Table \[t:flag\]. The spiral arm structure functions are flat, while in interarm regions the structure functions rise, and in two out of three cases show a turnover from a power law at small scales to flat at the larger scales. The location of the turnover is interpreted as the largest angular scale of structure in the interarm regions. Using the argument that the largest angular scales in RM are probably coming from nearby, assuming a distance of 2 kpc yields an outer scale at a spatial scale of about $100$ pc. For the spiral arms we can only say that the outer scale of structure, i.e. the smallest scale we probe, is smaller than about 10 pc [@hgb06]. The amount of sources used to estimate the structure functions are 50, 20 and 18 for the respective interarm regions, and 8 sources for each of the spiral arms. Given the low source density in the spiral arms, we assess the reliability of the results here. The bin size is somewhat restricted due to the paucity of sources, but bin sizes between 0.5$^{\circ}$ and 1$^{\circ}$ are reasonable and yield comparable results. Figure \[f:sfarm\] shows the structure functions of RMs towards the Carina and Crux arms without any binning of sources. The solid lines give linear fits of the data, confirming our statement that the structure function in the spiral arms is flat. In the Carina arm, the seven uppermost points (all at $\log(D_{RM}) > 5.3$) are the combination of one source with extreme RM and all other sources in the region. The presence of this extreme source makes clear why the two data points on the largest scales are lower than the other points: on these scales the extreme RM source does not contribute. Furthermore, this explains why the amplitude of the structure function is higher in the Carina arm than in the Crux arm: omitting this source yields comparable amplitudes for both arms. However, we are hesitant to discard sources on the basis of extreme RM alone, as there is no reason per se why these sources would not be part of the spectrum. Therefore, we only omit sources visibly behind a discrete structure, as discussed above, and leave this extreme RM source in the dataset, while commenting on changes when this source is omitted. We can estimate the outer scale from modeling of the amplitude and slope of the structure functions, or from the amount of depolarization of the point sources by the Galaxy. These two methods will be discussed in Sections \[s:sf\] and \[s:depol\], respectively. Outer scale from rotation measure structure functions {#s:sf} ===================================================== The structure function slopes in Fig. 
\[f:sf\] are $0.32\pm0.05$, $1.09\pm0.18$ and $0.71\pm0.17$ for interarm regions 1, 2, and 3, respectively, much shallower than those expected from a Kolmogorov structure function, which has a slope $m=5/3$. However, we argue in this section that on smaller scales the RM structure function has to turn over to a steeper slope to be consistent with observations of electron density fluctuations on smaller scales [e.g., @ars95]; additional arguments for a steeper RM power spectrum on smaller scales are given in Section \[s:steep\]. Minter & Spangler (1996, hereafter MS96) developed a formalism to describe the structure function of RM assuming power spectra in magnetic field fluctuations and in electron density fluctuations which are zero-mean, isotropic, Gaussian, and independent. Assuming Kolmogorov turbulence the RM structure function, $D_{RM}$, can be described as: $$\begin{aligned} D_{\mbox{RM}}\!\!\!\!&=&\!\!\!\!\! \left\{\!\right. 251.226\left[\right. \left({\small \frac{n_{e0}}{0.1~\mbox{cm}^{-3}}}\right)^2\! \left({\small \frac{C_B^2}{10^{-13}\mbox{ m}^{-2/3}\mu\mbox{G}^2}}\right) \nonumber\\ &&+\left({\small \frac{B_{0\parallel}}{\mu\mbox{G}}}\right)^2\! \left({\small\frac{C_n^2}{10^{-3} \mbox{ m}^{-20/3}}}\right)\left.\right] \nonumber\\ &&+\,23.043\left({\small\frac{C_n^2}{10^{-3} \mbox{ m}^{-20/3}}}\right )\nonumber\\ &&\times\left({\small \frac{C_B^2}{10^{-13}\mbox{ m}^{-2/3}\mu\mbox{G}^2}}\right)\! \left({\small \frac{l_0^K}{\mbox{pc}}}\right)^{2/3} \!\!\left.\right\} \nonumber\\ &&\times\left({\small \frac{L}{\mbox{kpc}}}\right)^{8/3}\! \left({\small \frac{\delta\theta}{\mbox{deg}}}\right)^{5/3} \label{e:ms96}\end{aligned}$$ where $n_{e0}$ is the mean electron density, $B_{0\parallel}$ is the mean magnetic field strength along the line of sight, $l_0^K$ the outer scale of the Kolmogorov turbulence, and $L$ the length of the line of sight. The coefficients $C_B^2$ and $C_n^2$ are defined in the description of the magnetic field and density fluctuations as power laws with the same outer scale $l_0^K$ and spectral index $\alpha$ such that $$\left<\delta B_i(\mathbf{r_0})\delta B_i(\mathbf{r_0+r}) \right> = \int d^3q \frac{C_B^2 \mbox{e}^{-i\mathbf{q}\cdot\mathbf{r}}}{(q_0^2+q^2)^{\alpha/2}}$$ where wave number $q = 2\pi/r$ and $q_0 = 2\pi/l_0^K$. A similar expression applies for $\left<\delta n(\mathbf{r_0})\delta n(\mathbf{r_0+r}) \right>$. The spectral index of the power spectrum is $\alpha=11/3$ for Kolmogorov turbulence, and is related to the slope of the structure function $D_{RM}\propto r^m$ as $m=\alpha-2$ for $2<\alpha<4$ [@r77]. Equation \[e:ms96\] is only valid if $\delta\theta L/l_0^K<1$, which is only true on scales smaller than what we observe. However, arguing that the slope turns over to Kolmogorov on smaller scales, we use a Kolmogorov dependence according to equation (\[e:ms96\]) on scales $l < l_0^K$, which flattens to the shallow or flat observed spectra for $l>l_0^K$. Most of the input parameters in equation (\[e:ms96\]) are known, so that by joining this equation with RM structure functions in Figure \[f:sf\] the outer scale of Kolmogorov turbulence $l_0^K$ can be computed. The amplitude of the electron density fluctuations $C_n^2$ is taken to be $C_n^2= 10^{-3}$ m$^{-20/3}$ [@ars95], and the magnetic field fluctuations $C_B^2$ can be derived from the observed value from MS96 as $C_B^2 = 5.2 B_{ran}^2 (l_0^K)^{-2/3} \mu$G$^2$ m$^{-2/3}$, where $B_{ran}$ is the strength of the random component of $B$ in $\mu$G and $l_0^K$ is in parsecs. 
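For reference, equation (\[e:ms96\]) can be transcribed directly into a short routine. The inputs follow the normalizations written in the equation (density in cm$^{-3}$, field in $\mu$G, $C_n^2$ in units of $10^{-3}$ m$^{-20/3}$, $C_B^2$ in units of $10^{-13}$ m$^{-2/3}\,\mu$G$^2$, $l_0^K$ in pc, $L$ in kpc, $\delta\theta$ in degrees); it is a sketch for evaluating the model, not part of the analysis code.

```python
def d_rm_kolmogorov(ne0, B0_par, Cn2, CB2, l0K, L, dtheta):
    """Equation (e:ms96): RM structure function for Kolmogorov turbulence.
    ne0    : mean electron density        [cm^-3]
    B0_par : mean line-of-sight field     [uG]
    Cn2    : density spectral amplitude   [units of 1e-3  m^-20/3]
    CB2    : field spectral amplitude     [units of 1e-13 m^-2/3 uG^2]
    l0K    : outer scale                  [pc]
    L      : path length                  [kpc]
    dtheta : angular separation           [deg]
    Returns D_RM in (rad m^-2)^2."""
    bracket = 251.226 * ((ne0 / 0.1) ** 2 * CB2 + B0_par ** 2 * Cn2)
    bracket += 23.043 * Cn2 * CB2 * l0K ** (2.0 / 3.0)
    return bracket * L ** (8.0 / 3.0) * dtheta ** (5.0 / 3.0)
```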
Values for the mean electron density $n_{e0}$, mean magnetic field $B_0$, random magnetic field $B_{ran}$, and distance to the emission will be derived in the following subsections.

Magnetic field
--------------

A number of estimates of the total magnetic field strength in the Galaxy, based on a number of observations, indicate that the total magnetic field strength is around $6~\mu$G at the Solar radius. @h95 gives an extended discussion about the different ways to determine Galactic magnetic field strengths, i.e. using the synchrotron emissivity under the assumption of minimum energy or minimum pressure, or with the cosmic ray density measured in the solar neighborhood, and using Faraday rotation from pulsars. His estimates for the total magnetic field in the solar neighborhood range from 4 to $7.4~\mu$G for different assumptions, while he estimates $B_{tot}\sim 7.6 - 11.2~\mu$G at Galactocentric radius $R_{Gal}=4$ kpc. More recently, @smr00 modeled cosmic ray evolution and propagation in the Milky Way, using constraints from synchrotron and $\gamma$-ray emission. Their results indicate a total magnetic field strength of 6.1 $\mu$G at the solar circle, increasing exponentially towards the inner Galaxy with a scale length of 10 kpc. These measurements are in good agreement with each other, with results from synchrotron radiation [@bbm96] and from pulsars [@hfm04].

We can estimate the relative strengths of the magnetic field in the spiral arms and interarm regions from Fig. 7b in @bkb85, which shows that the mean volume emissivity in the spiral arms is twice as high as the mean volume emissivity in the interarm regions, i.e. $\epsilon_{arms} = 2\epsilon_{interarms} = 1.5\langle\epsilon\rangle$, where the average volume emissivity is $\langle\epsilon\rangle=0.5(\epsilon_{arms}+\epsilon_{interarms})$. Therefore, the total magnetic field strength in the spiral arms must be $(3/2)^{2/7}$ that in the interarm regions[^9], where the exponent $2/7$ occurs because in the minimum energy approximation $\langle B \rangle \propto \epsilon^{2/7}$. We adopt the dependence of the magnetic field strength on Galactocentric radius in @smr00, and correct for the relative strengths in the arms and interarm regions as given above, leading to the total field strengths given in Table \[t:out\].

The regular magnetic field component $B_{reg}$ is estimated by @hml06 from pulsar dispersion and rotation measures as $B_{reg}=2.1\pm0.3~\mu$G, increasing exponentially inwards with a scale length of 8.5 kpc. We adopt these values for the regular magnetic field strength, so that $B_{ran} = \sqrt{B_{tot}^2-B_{reg}^2} \approx 0.94B_{tot}$, independent of Galactocentric radius. The component of the regular magnetic field parallel to the line of sight $B_{0\parallel}$ in equation (\[e:ms96\]) is evaluated through $B_{0\parallel} = B_0\cos\beta$, where $\beta$ is the angle between the line of sight and the regular magnetic field ${\mathbf B_0}$. A pitch angle of $-12^{\circ}$ [@v04] is assumed, but a pitch angle of 0$^{\circ}$ gives no significant changes.

Electron density
----------------

The average electron density used for each line of sight is derived from the NE2001 electron density model [@cl02]. The electron density was evaluated from the model for a particular line of sight centrally through each arm and each interarm region. The adopted average electron density is the average density over the adopted line of sight.
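The magnetic field scalings adopted above can be collected in a small helper, sketched below for reference. The solar Galactocentric radius of 8.5 kpc is our own assumption (it is not quoted in the text); the routine only encodes the exponential radial scalings and the $B_{ran}=\sqrt{B_{tot}^2-B_{reg}^2}$ relation given above, with the arm/interarm correction factor $(3/2)^{2/7}$ listed separately. Electron densities are taken directly from NE2001 via the values listed in Table \[t:out\].

```python
import numpy as np

R_SUN = 8.5  # kpc; assumed solar Galactocentric radius (not quoted in the text)

def field_strengths(R_gal_kpc, beta_deg):
    """Adopted field model: total field 6.1 uG at the solar circle with a
    10 kpc exponential scale length (Strong et al.), regular field 2.1 uG
    with an 8.5 kpc scale length (Han et al.).  Returns
    (B_tot, B_reg, B_ran, B0_parallel) in uG; beta is the angle between
    the line of sight and the regular field."""
    B_tot = 6.1 * np.exp((R_SUN - R_gal_kpc) / 10.0)
    B_reg = 2.1 * np.exp((R_SUN - R_gal_kpc) / 8.5)
    B_ran = np.sqrt(B_tot**2 - B_reg**2)        # ~0.94 B_tot near the Sun
    B0_par = B_reg * np.cos(np.radians(beta_deg))
    return B_tot, B_reg, B_ran, B0_par

# spiral-arm fields are a factor (3/2)**(2/7) ~ 1.12 stronger than interarm fields
ARM_OVER_INTERARM = 1.5 ** (2.0 / 7.0)
```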
Most of the structure in NE2001 is on such large scales that a change in direction of the line of sight of a degree or so does not influence the results significantly in any of the lines of sight. The adopted values for the mean electron density along each line of sight can be found in Table \[t:out\]. Distances {#s:dist} --------- Two different distances for modeling are needed: $L$, the total path length of the Faraday rotating material, and $L^*$, the distance used to convert the outer scale of fluctuations from angular to spatial scales, both of which are shown in Fig. \[f:gal\]. The distance $L$ is chosen as the distance to the point for which 90% of the electron density is contained along the path, and is given in Table \[t:out\]. Subsequently, the distance from the sun at which $B_{tot}(r)$ is computed for each sight line is $0.5L$. If the statistical properties of the medium do not change along a given line of sight, the distance at which a certain angular scale is expected to correspond to the largest spatial scale is a small distance. Consequently, $L^*$, which corresponds to the largest spatial scale $l_0^K$, should be different from $L$, and is estimated to be 2 kpc. These distances are very rough estimates. However,we show in Section \[s:sens\] that the sensitivity of our conclusions to the anticipated error in distances is low. Results from RM structure functions {#s:results} ----------------------------------- We extrapolate the observed slopes to smaller scales, with a steepening to a Kolmogorov spectrum at scale $l_0^K$, which is the outer scale of the Kolmogorov part of the spectrum. The constraint of equal amplitudes between the Kolmogorov structure function on the small scales and the shallower structure function on the larger scales at turnover scale $l_0^K$ yields $$D_{RM}(\delta\theta) \propto \left\{ \begin{array}{ll} \delta\theta^{5/3} & \mbox{for}~\delta\theta \le l_0^K/L^* \\ (l_0^K/L^*)^{5/3-m}\delta\theta^{m} & \mbox{for}~\delta\theta \ge l_0^K/L^* \end{array} \right.\label{e:mssf}$$ where $m$ is the spectral index of the shallower structure function. In reality the structure function will not make a sharp break as described here and instead will show a gradual turnover over a range of scales [e.g., @hgm04], but we use this parameterization as a good first approximation. Combining equations (\[e:ms96\]) and (\[e:mssf\]) for $\delta\theta \ge l_0^K/L^*$ yields $$\begin{aligned} D_{RM}(\delta\theta) &=& \left[\right. C_1 n_{e0}^2 B_{tot}^2 + C_2 B_{tot}^2(l_0^K)^{2/3} \nonumber \\ && +C_3 B_{tot} \cos\beta \left.\right]\nonumber \\ && \times L^{8/3}\left(\frac{l_0^K}{L^*}\right)^{5/3-m}\delta\theta^m,\label{e:ms2} \label{e:ms2}\end{aligned}$$ where $C_1=488.37$, $C_2=0.04$ and $C_3=0.08$ are constants derived from the known variables in equation (\[e:ms96\]), and $m$ is the spectral index of the structure function. We fit equation (\[e:ms2\]) to the observed structure functions to obtain estimates for the outer scale of the Kolmogorov part of the power spectrum $l_0^K$(SF), with results given in Table \[t:out\]. To compare our results with the Kolmogorov spectra found in electron density, we evaluate the structure function of RM caused by electron density fluctuations only ($B_{ran}=0$), shown as the dotted lines in Fig. \[f:sfex\] for the input parameters of the Carina and the Crux arm. This is a lower limit for the structure function of RM if the power spectrum of electron density on small scales is extrapolated to the large (parsec) scales discussed here. 
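The turnover scale itself follows from intersecting the extrapolated Kolmogorov branch with the observed shallow branch of equation (\[e:mssf\]), i.e. the point where the dotted and dashed curves meet. A minimal sketch is given below; $A_{\rm kol}$ and $A_{\rm obs}$ are the prefactors of the two power laws per degree (for instance $A_{\rm kol}$ evaluated from equation (\[e:ms96\]) and $A_{\rm obs}$ from a fit to the data), and for the flat spiral-arm structure functions one would set $m=0$ with $A_{\rm obs}$ equal to the saturation level.

```python
import numpy as np

def break_angle_deg(A_kol, A_obs, m):
    """Angle theta0 [deg] where A_kol * theta**(5/3) meets A_obs * theta**m
    (the two branches of eq. e:mssf)."""
    return (A_obs / A_kol) ** (1.0 / (5.0 / 3.0 - m))

# spatial outer scale at the distance L* (2 kpc in the text):
# l0K_pc = 1e3 * L_star_kpc * np.tan(np.radians(theta0_deg))
```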
The spiral arm data, plotted as asterisks in Fig. \[f:sfex\], fall [*below*]{} the extrapolation of $D_{RM}$ to large scales, indicating that $D_{RM}$ has to turn over at smaller scales where the dotted and dashed lines in Fig. \[f:sfex\] meet. The same argument holds for the interarm data, which are not shown in the Figure for clarity. Although the input parameters are uncertain, this result is remarkably stable against variations of the input parameters (Section \[s:sens\]). For fluctuations in electron density only, this turnover scale is around 3 pc (1 pc) for the Carina (Crux) arm. If magnetic field fluctuations are present as well, the turn over scale $l_0^K$ is a little smaller, as noted in Table \[t:out\]. Therefore, the Kolmogorov spectrum in electron density on small scales, if extrapolated towards larger scales, does not extend all the way up to scales of $\sim100$ pc as previously assumed, but displays a break to a shallower slope (interarms) or a constant value (spiral arms). If it did, equation (\[e:ms2\]) demonstrates that at a scale $r = 100$ pc, $D_{RM}$ would be 2.3 orders of magnitude higher than the observed values. Sensitivity of results to input parameters {#s:sens} ------------------------------------------ Due to the non-straightforward dependence of $D_{RM}$ on $l_0^K$ in equations (\[e:mssf\]) and (\[e:ms2\]), we tested numerically how sensitive the results are to variations in the input parameters $L$, $L^*$ and $n_e$. If the path length $L$ is increased or decreased by 30%, this will decrease or increase the resulting outer scale $l_0^K$ by a factor of 50% . The same effect is seen for an increase or decrease in $L^*$ or $n_e$ by a factor two. Although it makes sense to assume that the largest angular scales $\delta\theta_0$ correspond to the largest spatial scales $l_0^K$ at some nearby position along the line of sight, as we have done, even if we assume that the largest spatial scales are at the midway distance along the line of sight, $l_0^K = 0.5~L \tan(\delta\theta_0)$, the obtained outer scale $l_0^K \approx 1-5$ pc for both arms and interarm regions. Also, if the amplitude of electron density $C_n^2$ were a factor 10 different in our data from the data in @ars95, the outer scale would change by less than 50%. As the results are fairly robust against reasonable changes in the input parameters, we feel confident in asserting that the outer scale of Kolmogorov turbulence $l_0^K$ must be on the order of a few parsecs. We note, however, that due to the assumptions made and uncertainties in input parameters, a relatively large uncertainty in the determined outer scale has to be taken into account. Outer scale from depolarization of point sources {#s:depol} ================================================ An independent estimate of the outer scales of magneto-ionic structure can be made from depolarization of extragalactic point sources, if caused by structure in the foreground ISM on angular scales smaller than the size of the (unresolved) source. Intrinsic variations in polarization angle causing partial depolarization are expected to arise within any polarized extragalactic source. Indeed, no source in our sample exhibits the theoretical maximum degree of polarization of around 70% [@p70] but instead observed degrees of polarization lie typically below 10%. Depolarization by foreground components can be caused by beam depolarization due to magnetic field and/or electron density fluctuations on scales smaller than the source size [see e.g.  
@ghs05] or by bandwidth depolarization [see e.g. @st07]. Bandwidth depolarization is given by $p = p_0 \sin(\Delta\theta)/\Delta\theta$, where $\Delta\theta = 2\mbox{RM}c^2\Delta\nu/\nu^3$ [@gw66]. However, significant bandwidth depolarization across our frequency band $\Delta\nu$ (8 MHz) can only be achieved by $RM \ga 5700$ rad m$^{-2}$, much higher than observed RMs. Therefore, any foreground depolarization in our data is caused by beam depolarization across the face of the source rather than bandwidth depolarization.

Figure \[f:depol\] shows RMs in the upper panel and degree of polarization, $p$, in the bottom panel. A clear anti-correlation between $|$RM$|$ and $p$ is visible especially at the lower longitudes. As the scale of the structure in RM and $p$ is several degrees, this cannot be intrinsic to the sources but instead must be caused by the Galactic ISM. This agrees with the results of @ghs05 who found an anticorrelation between degree of polarization and H$\alpha$, which is correlated with $|$RM$|$. Due to the power law nature of RM as a function of scale, a high RM also indicates large fluctuations in RM. This is shown in Figure \[f:rm\_p\], which shows the standard deviation in RM as a function of fractional polarization.

The Galactic component of depolarization can be estimated from the power law spectrum of RM fluctuations. The angular size of the outer scale of structure $\theta_0^K$ is much larger than the angular source size $\theta_{src}$. In the approximation $\theta_0^K \gg \theta_{src}$, the depolarization by a power spectrum of RM fluctuations is given by the degree of polarization $p$, adapted from @t91 as: $$\langle (\frac{p(\lambda)}{p_0})^2\rangle \approx 1 - 4 \sigma^2 \lambda^4 2^{m/2} \left(\frac{r_{src}}{l_0^K}\right)^m \Gamma(1+\frac{m}{2}) \label{e:tribble}$$ where $p_0$ is the degree of polarization of the extragalactic source when its radiation arrives at the Milky Way, and $r_{src}$ is the size of the source. The RM standard deviation $\sigma$, structure function slope $m$ and outer scale $l_0^K$ are related via the structure function $ D_{\mbox{RM}}(r) = 2 \sigma^2 (r/l_0^K)^m$ for $r < l_0^K$. A decrease in degree of polarization $p$ with increasing standard deviation in RM $\sigma$, as predicted by equation \[e:tribble\], is visible in Figure \[f:rm\_p\] for all data together and for the spiral arms and interarm regions separately. As expected, the data in the spiral arms show a higher standard deviation in RM and a lower fractional polarization than in the interarm regions.

With an estimate of $p_0$ and $r_{src}$, we can fit $\sigma$ as a function of $p$ to the data points, and obtain a best-fit outer scale $l_0^K$. The amount of intrinsic depolarization resulting in polarization degree $p_0$ can be estimated from the extragalactic sources observed around the LMC [@ghs05] to be 10.4%, which we assume is the average degree of polarization of unresolved point sources at 1.4 GHz for which all depolarization is intrinsic. With these assumptions, the depolarization below 10.4% is then due to the variations in Galactic RM across the face of the source with a median size of 6 arcsec for this flux density range [@ghs05].
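Equation (\[e:tribble\]) itself is straightforward to evaluate; a direct transcription is sketched below, with $r_{src}$ and $l_0^K$ in the same angular units and $\sigma$ in rad m$^{-2}$.

```python
import numpy as np
from scipy.special import gamma

def depolarization(sigma_rm, lam_m, m, r_src, l0K):
    """Equation (e:tribble): p/p0 for an unresolved source of size r_src
    seen through foreground RM fluctuations with standard deviation
    sigma_rm [rad m^-2], structure-function slope m and outer scale l0K
    (r_src and l0K in the same units, e.g. degrees)."""
    p2 = 1.0 - 4.0 * sigma_rm**2 * lam_m**4 * 2**(m / 2.0) \
         * (r_src / l0K)**m * gamma(1.0 + m / 2.0)
    # the expression is a weak-depolarization approximation; clip at zero
    return np.sqrt(np.clip(p2, 0.0, None))
```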
Note that the percentage of intrinsic depolarization is higher than the actual average degree of polarization due to a selection of strong, highly-polarized sources over weak, weakly polarized ones, and due to selection of sources with linear $\phi(\lambda^2)$ behavior. However, since we are interested in the relative depolarization only, this selection effect does not influence our conclusions. In the spiral arms it is straightforward to use equation (\[e:tribble\]) to determine the outer scale $l_0^K$ needed to obtain the observed depolarization. Assuming Kolmogorov turbulence ($m = 5/3$), we determine the value of the RM standard deviation $\sigma$ from the structure function saturation level. For the interarm regions we observe a spectrum which is considerably shallower than a Kolmogorov spectrum. Assuming that this spectrum turns over to a steeper Kolmogorov spectrum towards small scales, the Kolmogorov slope on small scales will dominate the depolarization of the point sources. Therefore $\sigma$ can be assumed to be the value of the structure function at the scale $l_0^K$, which is the outer scale of the Kolmogorov turbulence, i.e. the scale at which the Kolmogorov slope turns over into a shallower slope. The data are best represented by equation (\[e:tribble\]) for $l_0^K=0.2^{\circ}, 0.25^{\circ}$, and $0.1^{\circ}$ for all data, spiral arms and interarm regions, respectively (shown in Figure \[f:rm\_p\] as a solid line, with outer scales twice higher and lower indicated by dashed lines), so that the outer scales of Kolmogorov turbulence are approximately 8.7 pc in the spiral arms and 3.5 pc in the interarm regions. The data in the spiral arms are fairly sparse - confirming additional depolarization in the spiral arms - so that the standard deviations and the fit to the model are uncertain. However, in the interarm regions the depolarization model is a good fit to the data, and the probability that the standard deviations are constant with fractional polarization is $<0.1\%$. The estimates of $l_0^K$ from depolarization are somewhat larger than $l_0^K$ from the structure function analysis. However, if we consider that the errors in the distances are large, in conjunction with assumptions such as the lack of correlation between magnetic field and electron density, the difference in estimates of $l_0^K$ from the two methods is not necessarily significant. Certainly both methods indicate that the outer scale of the Kolmogorov spectrum is likely a few parsecs, much smaller than the previously assumed value of $\sim 100$ pc. Expected steep spectrum on small scales {#s:steep} ======================================= As argued in the previous section, the RM structure function has to turn over to steeper slopes on smaller scales to be consistent with the electron density fluctuation data on smaller scales. There are two additional reasons why the RM structure functions cannot continue to have the same shallow or flat slope on smaller scales. Firstly, a steep Kolmogorov-like magnetic field power spectrum is indicated by cosmic ray data. As cosmic rays are most effectively scattered by magnetic field fluctuations on the same scale as their ion gyro radius, cosmic ray losses as a function of energy are closely related to the magnetic field power spectrum. The cosmic ray distribution as calculated from a leaky box model can explain cosmic ray observational data if the Galactic magnetic field has a power spectrum with a Kolmogorov spectral index [@j88]. 
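The gyro-radius argument can be checked with a one-line estimate, $r_g \simeq E/(qcB)$ for an ultrarelativistic, singly charged particle; the field value used below is only a representative few-$\mu$G number.

```python
QE, C = 1.602e-19, 2.998e8   # SI constants

def gyro_radius_cm(E_eV, B_microG=3.0):
    """Relativistic gyroradius r ~ E/(q c B) of a singly charged cosmic ray
    (assumes E >> rest-mass energy; the field strength is illustrative)."""
    B_T = B_microG * 1e-10            # 1 uG = 1e-10 T
    r_m = (E_eV * QE) / (QE * C * B_T)
    return r_m * 100.0                # cm

# for microgauss fields this gives ~1e12 cm at 1e9 eV
# and ~1e20-1e21 cm at 1e18 eV
```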
Furthermore, the cosmic ray power spectrum is remarkably smooth[^10] over energies of $10^9$ to $10^{18}$ eV, corresponding to gyro radii of $10^{12}$ to $10^{20}$ cm. Therefore, the magnetic field power spectrum is also expected to be smooth on these scales.

However, recent numerical simulations show that the magnetic field power spectrum does not necessarily follow the electron density spectrum. In fact, magnetohydrodynamic simulations in the limit of a weak homogeneous magnetic field show that the magnetic field fluctuations are all concentrated on scales much smaller than those under discussion here [@scm02]. This would indicate a flat structure function of magnetic field on larger scales. At first sight, this theory agrees with the observations shown in Fig. \[f:sf\], where the weak-mean-field approximation could be applicable to the spiral arms, and structure functions would be expected to be flat. However, this would indicate that magnetic field fluctuations of a few microgauss would be present on scales as small as a fraction of a parsec. In this case the degree of polarization, $p$, as a function of the intrinsic degree of polarization, $p_0$, is given by $$p = p_0 \mbox{e}^{-2\sigma^2\lambda^4} \approx \mbox{e}^{-309}$$ [@b66; @sbs98] in the approximation that the scale of the fluctuations is much smaller than the telescope beam. The standard deviation of RM, $\sigma$, is derived from the flat structure functions in the spiral arms. Therefore, magnetic field fluctuations on these scales would completely depolarize the synchrotron radiation in the Galaxy at 1.4 GHz. Since we observe polarized radiation at this frequency coming from all over the Galactic plane [e.g. @tgp03; @hgm06], magnetic field fluctuations cannot remain at this magnitude towards smaller scales. Instead, we showed in Section \[s:depol\] that the observed amount of depolarization is consistent with a RM structure function with a Kolmogorov slope and an outer scale of a few parsecs or smaller.

| region | longitude [$^{\circ}$] | $B_{tot}$ [$\mu$G] | $n_{e0}$ [cm$^{-3}$] | $L$ [kpc] | $L^*$ [kpc] | $l_0^K$(SF) [pc] | $l_0^K$(depol) [pc] |
|------------|-----------|-----|-------|------|---|---------|-----|
| Interarm 1 | 255 - 280 | 4.5 | 0.03  | 11.5 | 2 | 2.3     | 3.5 |
| Interarm 2 | 290 - 305 | 4.6 | 0.045 | 16   | 2 | 0.8     | 8.7 |
| Interarm 3 | 315 - 326 | 5.2 | 0.075 | 18   | 2 | 0.3     | 3.5 |
| Carina arm | 280 - 290 | 8.3 | 0.06  | 14.5 | 2 | 2.4$^*$ | 8.7 |
| Crux arm   | 305 - 315 | 9.0 | 0.07  | 17   | 2 | 1.0     | 3.5 |

  : Adopted line-of-sight parameters and derived outer scales (Table \[t:out\]).

Discussion {#s:disc}
==========

The conclusion that the outer scale of turbulence in the spiral arms is observed to be on the order of a few parsecs is not expected or straightforward. Based on evidence discussed in the introduction, outer scales of turbulence are expected to be on the order of 100 pc, which is one or two orders of magnitude higher than those observed here. Earlier work noted the small outer scale of fluctuations in the spiral arms, but attributed that outer scale likely to H [ii]{} regions along the line of sight [@hgb06]. H [ii]{} regions are ubiquitous enough in the spiral arms to dominate fluctuations in RM, and this solution would reconcile a larger outer scale of turbulence with a smaller observed outer scale of RM fluctuations. However, if this were the case, the amplitude of the RM structure functions would lie above the amplitude of the turbulence structure function.
Our estimate of the amplitude of the RM structure functions indicates that the observed structure functions lie [*far below*]{} the lower limit for the RM structure function if extrapolated from electron density fluctuations on small scales. If the fluctuations in RM associated with Kolmogorov turbulence were to continue up to scales of a hundred parsecs, the amplitude of the structure function of RM would be much higher than that observed. In the interarm regions, a plausible option is multiple scales of energy input: for supernova-driven turbulence, the outer scale is believed to be about 100 pc (as observed). However, if energy sources such as stellar winds or outflows, interstellar shocks or H [ii]{} regions input a significant amount of energy into the interstellar turbulence on smaller scales (typically parsecs, @mk04), this may flatten the structure function on scales of $\sim 1$ pc to scales of $\sim 100$ pc, consistent with our observations.

This does not contradict earlier studies that reported outer scales of the order of 100 pc. @sc86 found larger outer scales, but their data included large parts of the sky, most or all of which were located at higher latitudes. @ccs92 present some data in the Galactic plane, but the outer scale of fluctuations in those data is not well determined due to the paucity of data and the included geometrical component of the magnetic field. Leahy’s (1987) structure functions of RMs of sources in the Galactic plane are consistent with an outer scale of a parsec, as is the analysis of DM variations of close pulsar pairs in globular clusters [@ss02; @r07]. So the picture arises of a smaller energy input scale of turbulence or fluctuations in the magneto-ionized ISM in the Galactic plane, while larger-scale structure exists in the Galactic thick disk or halo. Indications of increasing correlation lengths of the magnetic field with height above the galactic plane have been found in a number of external galaxies [@dkw95].

Summary and conclusions {#s:sum}
=======================

Faraday rotation measurements of polarized extragalactic sources behind the inner Galactic plane have been used to study the characteristics of the magnetized, ionized interstellar medium in the plane, in particular in the spiral arms and in interarm regions. Rotation measure structure functions show a shallow slope in the interarm regions and saturation on a scale of $\sim 100$ pc, i.e. there are no fluctuations on scales larger than the saturation scale. Flat structure functions in the spiral arms indicate that the outer scale of RM fluctuations in the spiral arms is smaller than $\sim 10$ pc, the smallest scale observed.

These shallow and flat structure functions must turn over to steeper slopes towards smaller scales for three reasons: (1) to match up with the electron density power spectrum on subparsec scales, assuming the large and small scale datasets are part of the same power spectrum; (2) a shallow RM structure function on smaller scales would give more depolarization than observed; and (3) cosmic ray distribution data and the smooth cosmic ray power spectrum indicate a smooth magnetic field power spectrum with a slope similar to the Kolmogorov slope. The scale of the break in the structure function is the outer scale of the Kolmogorov power spectrum $l_0^K$, and is estimated using two independent methods: the analysis of RM structure functions, and by modeling the depolarization of the extragalactic sources.
Given the large uncertainties in input parameters, both methods agree reasonably well and imply an outer scale of the Kolmogorov slope of a few parsecs. This estimate is almost two orders of magnitude smaller than the generally assumed outer scale of ISM turbulence of $\sim 100$ pc. However, extrapolating the observed electron density fluctuations on small scales to parsec scales shows that the amplitude of the structure function would be orders of magnitude higher if the Kolmogorov spectrum did in fact extend out to 100 pc. Instead, the outer scale of Kolmogorov turbulence $l_0^K$ that we obtained from our observations indicates that energy in ISM turbulence is injected on scales of a parsec rather than 100 pc. This is the main energy injection scale in the spiral arms, which show flat structure functions on scales larger than that. In the interarm regions, the structure functions keep rising although not as steep as Kolmogorov turbulence, indicating an additional source of structure. We propose that in the spiral arms stellar energy sources such as stellar winds and protostellar outflows are the predominant sources of turbulence, whereas in the interarm regions there is evidence of energy injection on larger scales, most likely caused by supernova remnant and superbubble expansion. The authors thank Anne Green and John Dickey for helpful comments to the manuscript, and Katia Ferrière for enlightening discussions. The ATCA is part of the Australia Telescope, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. [*Facilities:*]{} . Armstrong, J. W., Rickett, B. J., & Spangler, S. R. 1995, ApJ, 443, 209 Balbus, S. A., & Hawley, J. F. 1991, ApJ, 376, 214 Beck, R., Brandenburg, A., Moss, D., et al. 1996, ARA&A, 34, 155 Beuermann, K., Kanbach, G., & Berkhuijsen, E. M. 1985, A&A, 153, 17 Brown, J. C., Haverkorn, M., Gaensler, B. M., Taylor, A. R., Bizunok, N. S., McClure-Griffiths, N. M., Dickey, J. M., & Green, A. J. 2007, ApJ, 663, 258 Burn, B. J. 1966, MNRAS, 133, 67 Clegg, A. W., Cordes, J. M., Simonetti, J. M., & Kulkarni, S. R. 1992, ApJ, 386, 143 Cordes, J. M., & Lazio, T. J. W. 2002, preprint (astro-ph/0207156) Dumke, M., Krause, M., Wielebinski, R., & Klein, U. 1995, A&A, 302, 691 Elmegreen, B. G., Elmegreen, D. M., & Leitner, S. N. 2003, ApJ, 590, 271 Elmegreen, B. G., Kim, S., & Staveley-Smith, L. 2001, ApJ, 548, 749 Finkbeiner, D. P. 2003, ApJS, 146, 407 Gaensler, B. M., Haverkorn, M., Staveley-Smith, L., Dickey, J. M., McClure-Griffiths, N. M., Dickel, J. R., Wolleben, M., 2005, Science, 307, 1610 Gardner, F. F., & Whiteoak, J. B. 1966, ARAA, 4, 245 Han, J. L., Manchester, R. N., Lyne, A. G., Qiao, G. J., & Van Straten, W. 2006, ApJ, 642, 868 Han, J. L., Ferrière, K., & Manchester, R. N. 2004, ApJ, 610, 820 Haverkorn, M., Gaensler, B. M., Brown, J. C., Bizunok, N. S., McClure-Griffiths, N. M., Dickey, J. M., & Green, A. J. 2006a, ApJL, 637, 33 Haverkorn, M., Gaensler, B. M., McClure-Griffiths, N. M., Dickey, J. M., & Green, A. J. 2006b, ApJS, 167, 230 Haverkorn, M., Gaensler, B. M., McClure-Griffiths, N. M., Dickey, J. M., & Green, A. J. 2004, ApJ, 609, 776 Haverkorn, M., Katgert, P., de Bruyn, A. G. 2003, A&A, 403, 1045 Hawley, J. F., & Balbus, S. A. 1991, ApJ, 376, 223 Heiles, C. 1995, in [*The Physics of the Interstellar Medium and Intergalactic Medium*]{}, ed. by A. 
Ferrara, C. F. McKee, C. Heiles, & P. R. Shapiro, p. 507 Jokipii, J. R. 1988, in proceedings of the AIP Conference Radio wave scattering in the interstellar medium, ed. J. M. Cordes, B. J. Rickett & D. G. Backer, p. 48 Kolmogorov, A. N., 1941, Dokl. Akad. Nauk SSSR, 30, 301 Landecker, T. L., Reid, R. I., Wolleben, M., Reich, W., Kothes, R., Del Rizzo, D., Uyaniker, B., Gray, A. D., & Taylor, A. R. 2006, AAS, 208, 4909 Lazaryan, A. L., & Shutenkov, V. P. 1990, SvAL, 16, 297L Leahy, J. P. 1987, MNRAS, 226, 433 Löhmer, O., Kramer, M., Mitra, D., Lorimer, D. R., & Lyne, A. G. 2001, ApJ, 562, L157 Mac Low, M.-M., & Klessen, R. S. 2004, Rev. Mod. Phys., 76, 1, 125 McClure-Griffiths, N. M., Dickey, J. M., Gaensler, B. M., Green, A. J., Haverkorn, M., Strasser, S. 2005, ApJS, 158, 178 Minter, A. H., & Spangler, S. R. 1996, ApJ, 458, 194 Mitra, D., Wielebinski, R., Kramer, M., & Jessner, A. 2003, A&A, 398, 993 Norman, C. A., & Ferrara, A. 1996, ApJ, 467, 280 Pacholczyk, A. B. 1970, “Radio Astrophysics”, W. H. Freeman and Co. Ransom, S. M. 2007, in SINS - Small Ionized and Neutral Structures in the Diffuse Interstellar Medium, eds. M. Haverkorn and W. M. Goss, San Francisco: Astronomical Society of the Pacific, p. 265 Rickett, B. J. 1977, ARA&A, 15, 479 Schekochihin, A., Cowley, S., Maron, J., & Malyshkin, L. 2002, PhRvE, 65, 016305 Sellwood, J. A., & Balbus, S. A. 1999, ApJ, 511, 660 Shishov, V. I., Smirnova, T. V., Sieber, W., Malofeev, V. M., Potapov, V. A., Stinebring, D., Kramer, M., Jessner, A., & Wielebinski, R. 2003, A&A, 404, 557 Simonetti, J. H., & Cordes, J. M. 1986, ApJ, 310, 160 Smirnova, T. V., & Shishov, V. I. 2002, Astron. Astrophys. Transac., 21, 45 Sokoloff, D. D., Bykov, A. A., Shukurov, A., Berkhuijsen, E. M., Beck, R., & Poezd, A. D. 1998, MNRAS, 299, 189 Spangler, S. R., & Gwinn, C. R. 1990, ApJ, 353L, 29 Stanimirović, S., Staveley-Smith, L., Dickey, J. M., Sault, R. J., Snowden, S. L. 1999, , 302, 417 Stil, J. M., & Taylor, A. R., 2007, ApJL, 663, 21 Stinebring, D. R., Smirnova, T. V., Hankins, T. H., Hovis, J. S., Kaspi, V. M., Kempner, J. C., Myers, E., & Nice, D. J. 2000, ApJ, 539, 300 Strong, A. W., Moskalenko, I. V., & Reimer, O. 2000, ApJ, 537, 763 Sun, X. H, & Han, J. L. 2004, in The Magnetized Interstellar Medium, ed. B. Uyaniker, W. Reich, R. Wielebinski (Katlenburg-Lindau: Copernicus GmbH), 25 Taylor, A. R., Gibson, S. J., Peracaula, M., Martin, P. G., Landecker, T. L., Brunt, C. M., Dewdney, P. E., Dougherty, S. M., Gray, A. D., Higgs, L. A., Kerton, C. R., Knee, L. B. G., Kothes, R., Purton, C. R., Uyaniker, B., Wallace, B. J., Willis, A. G., & Durand, D. 2003, AJ, 125, 3145 Tribble, P. C. 1991, MNRAS, 250, 726 Vallée, J. P. 2004, NewAR, 48, 763 Wang, N., Manchester, R. N., Johnston, S., Rickett, B., Zhang, J., Yusup, A., & Chen, M. 2005, MNRAS, 358, 270 Westpfahl, D. J., Coleman, P. H., Alexander, J., & Tongue, T. 1999, AJ, 117, 868 Wolleben, M., Landecker, T. L., Reich, W., & Wielebinski, R. 2006, A&A, 448, 411 You, X. P., Hobbs, G. B., Coles, W. A., Manchester, R. N., & Han, J. L. 
2007, astro-ph/0709.0135 [^1]: Marijke Haverkorn is a Jansky fellow of the National Radio Astronomy Observatory [^2]: Astronomy Department University of California at Berkeley, 601 Campbell Hall, Berkeley CA 94720, USA, marijke@astro.berkeley.edu [^3]: Centre for Radio Astronomy, University of Calgary, 2500 University Drive N.W., Calgary, AB, Canada; jocat@ras.ucalgary.ca [^4]: School of Physics A29, The University of Sydney, NSW 2006, Australia [^5]: Australian Research Council Federation Fellow [^6]: Australia Telescope National Facility, CSIRO, PO Box 76, Epping, NSW 1710, Australia; naomi.mcclure-griffiths@csiro.au [^7]: A structure function measures the amount of fluctuations in a quantity as a function of the scale of the fluctuations. The second order structure function of a function $f$ is defined as $D_f(\delta\theta) = \langle (f(\theta)-f(\theta+\delta\theta))^2\rangle_{\theta}$, where $\theta$ is the position of a source in angular coordinates, $\delta\theta$ is the separation between sources, i.e. the scale of the measured fluctuation, and $\langle\rangle_{\theta}$ means the averaging over all positions $\theta$. [^8]: http://www.atnf.csiro.au/research/cont/sgps/queryForm.html [^9]: We follow the argument put forth in @h95, although he uses $\epsilon_{arms} \approx 2\langle \epsilon \rangle = 2(\epsilon_{arms} + \epsilon_{interarms})$. His approximation $\epsilon_{interarms} \approx 0$ is reasonable in the outer Galaxy. However, the @bkb85 synchrotron maps indicate that $\epsilon_{arms} = 2\epsilon_{interarms}$ is more appropriate in the inner Galaxy. [^10]: The steepening of the spectrum around $3\times10^{15}$ eV (the “knee”) and flattening around $3\times10^{18}$ eV (the “ankle”) are very slight and not relevant for this argument.
---
abstract: 'In this paper we present the complete derivation of the effective contour model for electrical discharges which appears as the asymptotic limit of the minimal streamer model for the propagation of electric discharges, when the electron diffusion is small. It consists of two integro-differential equations defined at the boundary of the plasma region: one for the motion and a second equation for the net charge density at the interface. We have computed explicit solutions with cylindrical symmetry and found the dispersion relation for small symmetry-breaking perturbations in the case of finite resistivity. We implement a numerical procedure to solve our model in general situations. As a result we compute the dispersion relation for the cylindrical case and compare it with the analytical predictions. Comparisons with experimental data for a 2-D positive streamer discharge are provided and the predictions are confirmed.'
author:
- 'M. Arrayás$^{1}$ and M. A. Fontelos$^{2}$'
title: 'Electric discharge contour dynamics model: the effects of curvature and finite conductivity.'
---

Introduction
============

The appearance and propagation of ionization waves is the prelude to the electrical breakdown of various media. In the case of a gas, the specific features of the breakdown waves are determined by the type of the gas, the value of the pressure, the geometry of the discharge cell and the value and variation rate of the voltage at the electrodes. The geometry determines the spatial distribution of the electric field and hence the dynamics of the ionization fronts. In the case where there is no initial ionization in the discharge gap, the ionization wave may originate from one or several overlapping electron avalanches. After attenuation of the electric field in the avalanche body, a conducting channel or streamer develops: a fully ionized plasma region with a positive side expanding towards the cathode and a negative side expanding towards the anode.

One of the approaches used to model the development of the avalanche-streamer transition and the streamer propagation is a nonlinear system of balance equations with a diffusion-drift approximation for the currents, together with the Poisson equation [@Lagarkov]. Some progress in the understanding of the propagation mechanism has been achieved using that model. We can mention: the study of stationary plane ionization waves [@r1; @vanS], self-similar solutions for ionization waves in cylindrical and spherical geometries [@aft; @r3], the effect of photoionization [@mmt] and a branching mechanism as the result of the instability of planar ionization fronts [@ME; @aftprl; @abft]. In this hydrodynamic approximation, the fronts are subject to both stabilizing forces due to diffusion, which tend to dampen out any disturbances, and destabilizing forces due to the electric field, which promote them. The solution of the model, even in the simplest cases, poses a challenging problem, both numerically and analytically. Early numerical simulations can be found in [@Dhali; @Vitello]. Recently, a contour dynamics model has been deduced in the limit of small electron diffusion [@Arrayas10], which resembles the Taylor-Melcher leaky dielectric model for electrolyte solutions [@S], but adapted to the context of electric (plasma) discharges. This contour dynamics model allows the study of more general situations in two-dimensional and three-dimensional cases.

![The schematic of the contour dynamics model.
The case displayed corresponds to a negative streamer discharge, so $\sigma$ represents the negative surface charge density. The electric field points towards the plasma region in this case.[]{data-label="fig1a"}](fig1.eps){width="50.00000%"}

The contour dynamics model consists of an interface separating a plasma region from a neutral gas region as it is shown in Fig. \[fig1a\]. The separating surface has a net charge $\sigma$ and the thickness goes to zero as $\sqrt{D}$, with $D$ the charge diffusion coefficient. The case displayed in the figure corresponds to a negative discharge, so the electric field is pointing towards the plasma region and $\sigma$ is the negative charge density at the surface. The front will evolve following the equation $$v_{N}=-\mu_e\mbox{E}_{\nu }^++2\sqrt{\frac{D_e}{l_0}\mu_e|\mbox{E}_{\nu}^+|\exp\left(-\frac{\mbox{E}_0}{|\mbox{E}_{\nu}^+|}\right)}-D_e\kappa, \label{cm1}$$ where $\mbox{E}_{\nu }^{+}$ is the normal component of the electric field at the interface when approaching it from outside the plasma region, $\mu_e$ the electron mobility, $D_e$ is the electron diffusion coefficient, E$_0$ is a characteristic ionization electric field and $\kappa$ the curvature of the interface. The parameter $l_0$ is the microscopic ionization characteristic length. At the interface, the total negative surface charge density will change according to $$\frac{\partial \sigma }{\partial t}+\kappa v_{N}\sigma =-\frac{\mbox{E}_{\nu }^{-}}{\varrho_e}-j_\nu^{-}\,, \label{cm2}$$ where now $\mbox{E}_{\nu }^-$ is the electric field at the interface coming from inside the plasma, $\varrho_e$ is a parameter proportional to the resistivity of the electrons in the created plasma and $j_\nu^{-}$ is the current contribution of any electromotive force, if present.

Although equations (\[cm1\]) and (\[cm2\]) are written for the case of the negative front plotted in Fig. \[fig1a\], and we will present the derivation of the model for this case, the same model could in principle be used for a positive front, with the sign of the electric field reversed and $\sigma$ representing the positive surface charge density. Although the moving carriers in the model are the electrons, one may think of a front made of [*holes*]{}, with a positive surface charge density, and characterized by the corresponding parameters for the mobility, diffusion and so on.

In this paper, using the contour dynamics model, we will study cylindrical discharges when the plasma has finite conductivity. The dispersion curve for transversal instabilities will be obtained for these finite conductivity streamers. The results will be compared with the limiting cases of perfect conductivity, which is the Lozansky-Firsov model [@LF] with a correction due to electron diffusion, and with the case of a perfect insulator, i.e. the limit of very small conductivity. Finally, we compare the results with an actual experiment for a positive streamer discharge.

We start by introducing the model. Taking a minimal set of balance equations to describe the discharge in a fully deterministic manner (see for example [@ME]), we will derive the contour dynamics equations for the evolution of the interface between the plasma region and the gas region free of charge (or with a very small density of charge). The outline of this derivation has already been reported [@Arrayas10], but here we present it in full detail.
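For orientation, equation (\[cm1\]) can be evaluated directly. In the sketch below the mobility, ionization field and ionization length are the nitrogen values quoted in the next section; the dimensional electron diffusion coefficient is a placeholder, since no value for $D_e$ is given in the text.

```python
import numpy as np

def front_velocity(E_nu, kappa, mu_e=0.038, D_e=0.1, E0=2.0e5, l0=2.3e-6):
    """Normal front velocity from eq. (cm1), in SI units.
    E_nu  : outer normal electric field at the interface [V/m]
            (negative for the negative front of the figure)
    kappa : interface curvature [1/m]
    mu_e, D_e, E0, l0 : mobility, diffusion coefficient (placeholder value),
            ionization field and ionization length."""
    growth = (D_e / l0) * mu_e * abs(E_nu) * np.exp(-E0 / abs(E_nu))
    return -mu_e * E_nu + 2.0 * np.sqrt(growth) - D_e * kappa

# e.g. front_velocity(-5e6, 0.0) gives a few 1e5 m/s for these illustrative values
```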
Then, we proceed by studying a cylindrical discharge in the case of finite conductivity, and the analytical limits of infinite resistivity and ideal conductivity. With the model at hand we will predict some features of the stability of the fronts. Numerical simulations are made to calculate the dispersion curves and test some of the analytical predictions. We briefly describe the numerical methods employed in the corresponding section. We end with an analysis of the results, the comparison with an experiment for a positive 2-D streamer discharge, and an overview of the possibilities that the model opens for more complicated geometries and fully 3D cases. The dynamical contour model =========================== In this section we obtain our model as a limit of a set of balance equations describing a streamer discharge. We will first recall the minimal description of a streamer discharge and some of the properties of the traveling planar fronts, and then make use of the asymptotic behaviour of those planar fronts in the limit of small diffusion to give a correction to the velocity of propagation of curved fronts. After finding the dynamics of the effective interface, a balance of the charge transport along the interface will be provided in order to complete the model. The minimal model ----------------- For simulating the dynamical development of streamers out of a macroscopic initial ionization seed, in a non-attaching gas like argon or nitrogen, the model of a streamer discharge [@contphys] can be simplified. As a first approach, the processes with smaller probabilities or cross sections can be ignored. Attachment and recombination processes can be neglected on that basis in comparison with the ionization process for non-attaching gases. We also ignore photoionization processes in this work. With these considerations in mind, the resulting balance equations are $$\begin{aligned} \frac{\partial N_{e}}{\partial t} &=& \nabla \cdot \left( \mu_e N_e \mbox{\bf E} + D_e\nabla N_e \right) + \nu_{i} N_e, \label{model01} \\ \frac{\partial N_{p}}{\partial t} &=& \nu_{i} N_e, \label{model02}\end{aligned}$$ where $N_{e}$ is the electron density, $N_{p}$ is the positive ion density, $\mu_e$ is the electron mobility and $D_e$ the diffusion coefficient. The ionization coefficient $\nu_{i}$ can be modeled following the phenomenological approximation suggested by Townsend, which leads to $$\nu_{i} = \mu_e l_0^{-1}|\mbox{\bf E}|\exp \left(- \frac{\mbox{E}_0}{|\mbox{\bf E}|} \right), \label{townsend}$$ where $l_{0}$ is the ionization length, and $\mbox{E}_{0}$ is the characteristic impact ionization electric field. The fitting of experimental data can be done using those parameters [@Rai]. Note also that it is assumed that the positive ions do not move and that $\mu_e{\mbox{\bf E}}$ is the drift velocity of the electrons. These are valid approximations at the initial stages of the streamer development, but they may not hold afterwards. To close the model, we consider Gauss’s law $$\nabla \cdot {\mbox{\bf E}} = \frac{e(N_p-N_e)}{\varepsilon_0}. \label{gauss}$$ For convenience the equations are reduced to dimensionless form. The Townsend approximation provides physical scales and intrinsic parameters of the model if only impact ionization is present in the gas [@vanS]. The units are given by the ionization length $l_0$, the characteristic impact ionization field $\mbox{E}_0$, and the electron mobility $\mu_e$. The velocity scale is then $U_0=\mu_e \mbox{E}_0$, and the time scale $\tau_0=l_0/U_0$.
Typical values of these quantities for nitrogen at normal conditions are $l_0\approx 2.3\,\mu\mathrm m$, $\mbox{E}_0 \approx 200$ kV/m, and $\mu_e \approx 380\,\mathrm {cm^2/Vs}$. We introduce the dimensionless variables ${\bf r}_d={\bf r}/l_0$, $t_d=t/\tau_0$, the dimensionless field ${\bf E}_d={\mbox{\bf E}}/\mbox{E}_0$, the dimensionless electron and positive ion densities $n_e=N_e/N_0$ and $n_p=N_p/N_0$ with $N_0=\varepsilon_0 \mbox{E}_0/(e l_0)$, and the dimensionless diffusion constant $D=D_e/(l_0 U_0)$. From now on, all the quantities will be dimensionless unless otherwise stated. Note however that we will not write the subindex $d$. Just for reference, the dimensionless model reads $$\begin{aligned} \label{1} \frac{\partial n_e}{\partial t} &=& \nabla\cdot (n_e {\bf E} + D\; \nabla n_e) + n_e \alpha(|{\bf E}|),\\ \label{2} \frac{\partial n_p}{\partial t} &=& n_e \alpha(|{\bf E}|), \\ \label{3} \nabla\cdot{\bf E}&=&n_p - n_e, \\ \label{ft} \alpha(|{\bf E}|) &=& |{\bf E}| \exp(-1/|{\bf E}|).\end{aligned}$$ Planar fronts and boundary layer -------------------------------- Using the minimal streamer model, we can compute traveling wave solutions in the planar case. We will assume that the plasma region is on the left and the front is moving toward the right. The traveling waves are solutions such that $n_e$ and $n_p$ decay exponentially at infinity. This means that we can take $$\begin{aligned} n_e &=& A e^{-\lambda(x-vt)},\nonumber\\ n_p &=& B e^{-\lambda(x-vt)},\nonumber \\ {\bf E}&=& (\mbox{E}^+ + Ce^{-\lambda(x-vt)})\,\hat{\bf x}\nonumber,\end{aligned}$$ asymptotically far ahead of the planar wave in the $\hat{\bf x}$ direction, $\mbox{E}^+$ being the value of the electric field at infinity. Introducing these expressions into the minimal model equations we get the relation $$D\lambda^2 - (\mbox{E}^+ +v)\lambda + \alpha(|\mbox{E}^+|)=0, \label{lambda}$$ which has real solutions if and only if $$v \ge - \mbox{E}^+ + 2 \sqrt{D \alpha(|\mbox{E}^+|)}.$$ All initial data decaying at infinity faster than $A e^{-\lambda^*x}$, with $\lambda^*=\sqrt{\alpha(|\mbox{E}^+|)/D}$, will develop traveling waves with velocity $v^* =- \mbox{E}^+ + 2 \sqrt{D \alpha(|\mbox{E}^+|)}$. Clearly, from the assumption that the plasma state is on the left, negative velocity solutions are unphysical. So in the case of a negative front, when E$^+$ is negative, the front will move at least with the drift velocity in the case that $D=0$. For positive fronts, the motion will be possible only if the creation of charge, given by the Townsend factor, and its diffusion can compensate the drift. A detailed discussion about the propagation mechanism can be found in [@vanS]. If $D\ll1$ the profiles for $n_p$ and ${\bf E}$ will vary very little from the profiles with $D=0$ and $n_e$ will develop a boundary layer at the front. This boundary layer has a width of $O(\sqrt{D})$ as shown in Fig. \[fig2a\]. The main results for the structure of the boundary layer which we are going to make use of are $$\begin{aligned} \label{nebl} n_e &=& f(\chi),\\ \label{npbl} n_p &=& -\sqrt{D}\int_{\chi}^{\infty}f(z)\,dz,\\ \label{Ebl} \mbox{E}&=&\mbox{E}^++O(\sqrt{D}),\end{aligned}$$ with $\chi = (x-v^*t)/\sqrt{D}$, and $\mbox{E}$ the electric field in the $\hat{\bf x}$ direction.
The function $f(\chi)$, also appearing in , is the solution of the equation $$\frac{\partial ^{2}f}{\partial \chi ^{2}}+2\sqrt{\alpha(|\mbox{E}^+|)}\frac{\partial f}{\partial \chi} = f(f-1), \label{blf}$$ which becomes the solution of a Fisher equation under the additional assumption that the Townsend factor $\alpha(|{\bf E}|) \approx 1$. So, as plotted in Fig. \[fig2a\], when imposing the two matching conditions $f(-\infty)=1$ and $f(\infty)=0$, the function $f$ changes between these constant values over a region of width $\sqrt{D}$, thus separating the plasma region from the gas. The complete mathematical details can be found in [@aftprl] and [@abft]. ![Derivation of the contour dynamics model. We take a surface of constant $n_e$ at the boundary which has an effective width of order $\sqrt{D}$. The local coordinates tangent and normal to the surface, $\tau$ and $\nu$, together with a pillbox are also shown schematically.[]{data-label="fig2a"}](fig2.eps){width="45.00000%"} The correction due to the curvature ----------------------------------- Next we will add the correction to the propagation velocity due to the curvature of the front. We take a level surface of $n_e$ representing the interface, and introduce local coordinates $\tau $ (along the level surfaces of $n_{e}$) and $\nu$ (orthogonal to the level surfaces of $n_{e}$). The schematic can be seen in Fig. \[fig2a\]. We scale the normal coordinate with the boundary layer thickness $\nu=\chi\sqrt{D}$, and expand the Laplacian times $D$ as $$D\,\Delta =\frac{\partial ^{2}}{\partial \chi ^{2}}+\sqrt{D}\kappa \frac{\partial }{\partial \chi }+D\left( \Delta _{\perp}-\kappa ^{2}\chi \frac{\partial }{\partial \chi }\right) +O(D^{\frac{3}{2}}),$$where $\Delta\equiv\nabla^2$ is the Laplacian operator, $\Delta _{\perp}$ is the transverse Laplacian and $\kappa$ is twice the mean curvature in 3-D or just the curvature in 2-D (details of this expansion can be found in [@Pismen]). We write in local coordinates, and using , we find $$\begin{split} \frac{\partial n_{e}}{\partial t}-\mbox{E}_{\tau }\frac{\partial n_{e}}{\partial \tau }-\left( \frac{\mbox{E}_{\nu }}{\sqrt{D}}+\sqrt{D}\kappa \right) \frac{\partial n_{e}}{\partial \chi }=\\ =\frac{\partial ^{2}n_{e}}{\partial \chi ^{2}}+n_{e}\alpha(|\mathbf{E}|)+n_e\left(n_{p}-n_{e}\right) +O(D). \end{split}$$Finally we use so that $$\frac{\partial n_{e}}{\partial t}-\mbox{E}_{\tau }\frac{\partial n_{e}}{\partial \tau }-\left( \frac{\mbox{E}_{\nu }}{\sqrt{D}}-2\sqrt{\alpha(|\mbox{E}_{\nu }|)}+\sqrt{D}\kappa \right) \frac{\partial n_{e}}{\partial \chi }=O(D^{\frac{1}{2}}). \label{transne}$$Note that the curvature term correction will be relevant provided $ 1\ll \kappa \ll D^{-\frac{1}{2}}$. Thus we have obtained a transport equation for the electron density with velocity $$\mathbf{v}=-\mathbf{E}+(2\sqrt{D\alpha(|\mathbf{E}|)}-D\kappa )\mathbf{n}. \label{vkurvature}$$ The level line of $n_e$ which we have taken as representative of the interface evolution will move with a normal velocity $$v_{N}=-\mbox{E}_{\nu }+2\sqrt{D\alpha(|\mathbf{E}|)}-D\kappa. \label{bb1}$$Notice that the level lines concentrate in a small region where $n_e$ presents a jump from its bulk value to zero, so most level lines follow . The tangential component of the velocity will not change the geometry of the interface during its evolution, although tangential exchanges of charge affect the evolution through the dependence of $v_{N}$ on $E_{\nu }$. The mathematical description of this effect will be the subject of the next section.
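To make the curvature-corrected front law concrete, the following minimal sketch (in Python; it is our illustration and not part of the original derivation) evaluates the dimensionless Townsend factor and the normal velocity of Eq. (\[bb1\]), using the normal field at the interface as the argument of the Townsend factor. The sample values of the field, of $D$ and of $\kappa$ are arbitrary illustrative choices.

```python
import math

def townsend_alpha(E):
    """Dimensionless Townsend factor alpha(|E|) = |E| exp(-1/|E|)."""
    E = abs(E)
    return E * math.exp(-1.0 / E) if E > 0 else 0.0

def normal_velocity(E_nu, D, kappa):
    """Dimensionless normal front velocity v_N = -E_nu + 2*sqrt(D*alpha(|E_nu|)) - D*kappa."""
    return -E_nu + 2.0 * math.sqrt(D * townsend_alpha(E_nu)) - D * kappa

# Illustrative (assumed) values: negative front, weak diffusion, curved interface.
for kappa in (0.0, 1.0, 5.0):
    print(kappa, normal_velocity(E_nu=-1.0, D=0.1, kappa=kappa))
```

Varying $\kappa$ in this way simply displays how the diffusive correction $-D\kappa$ reduces the speed of convex portions of a negative front, as stated by the formula.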
Charge transport along the interface ------------------------------------ In order to describe the charge transport along the interface we trace a small “pillbox” $\mathcal{D}$ around a portion of the interface having the top and bottom areas bigger than the lateral area, i.e. $\Delta \tau \gg \Delta \nu$, as we can see in Fig. \[fig2a\]. On the other hand $\mathcal{D}$ will be big enough to contain the diffusive layer and hence the portion where the total negative charge density $n_{e}-n_{p}$ is significantly different from zero. We subtract from and integrate over the pillbox volume $\mathcal{D} $, assume that $n_{e}\rightarrow 0$ for $\chi =\nu /\sqrt{D}\gg 1$, $\left\vert \nabla n_{e}\right\vert \rightarrow 0$ for $\left\vert \chi \right\vert \gg 1$ and get $$\frac{\partial}{\partial t}\int_{\mathcal{D} }(n_{e}-n_{p})\,dV = \left. n_{e}\mbox{E}_{\nu }\Delta \tau \right|_{\chi = -\infty }^{\infty }+O(D^{\frac{1}{2}}), \label{s1}$$ where the contribution of the transport of charge through the lateral surface is neglected in comparison with the exchange of charge in the normal direction. Note that in the Taylor-Melcher model this assumption is also made. As explained in [@Arrayas10], the left hand side of equation can be written as the time partial derivative of the product of the negative surface charge density $\sigma$ times the normal area $\Delta\tau$, and the change of a surface element can be related to the curvature times the normal velocity, so that $$\frac{\partial \sigma }{\partial t}+\kappa v_{N}\sigma= -\left. n_{e}\mbox{E}_{\nu }\right|_{\chi = -\infty } \label{s2}$$ If a charge source $I(t)$ is present in the plasma, for instance at $x_0$, this source will create a current density inside the plasma and we will have at the interior of $\Omega$ $$\nabla \cdot \mathbf{j} = I(t)\delta(\mathbf{x}-\mathbf{x}_0). \label{nablaj}$$ By adding this contribution to we can finally write $$\frac{\partial \sigma }{\partial t}+\kappa v_{N}\sigma =-\frac{\mbox{E}_{\nu }^{-}}{\varrho}-j_\nu^{-}\,, \label{sigmaeq}$$where $j_\nu^{-}$ is the current density coming from the ionized region $\Omega$ to its boundary $\partial \Omega$ in the normal direction $\nu$, $\mbox{E}_{\nu }^{-}$ is the normal component of the electric field when approaching the interface from inside, and $\varrho^{-1}=\lim_{\chi =-\infty }n_{e}$ is the effective mobility of the electrons inside the plasma. Note that the quasineutrality of the plasma far away from the interface is not changed by the current, but there is a jump in the normal component of the electric field across the interface given by $$\mbox{E}_{\nu }^{+}-\mbox{E}_{\nu }^{-}=-\sigma, \label{jump}$$with $\mbox{E}_{\nu }^{+}$ the normal component of the electric field when approaching the interface from outside the plasma region. The effective contour model --------------------------- Eqs. and together constitute the dynamical model able to describe the evolution of an interface separating a plasma region from a neutral region. Notice that in the case $\varrho^{-1}\gg 1$, we arrive at the Lozansky-Firsov model [@LF] with a correction due to electron diffusion, while in the limit $D = 0$ we arrive at the classical Hele-Shaw model.
Such a model is known to possess solutions that develop singularities in the form of cusps in finite time [@PK] but, when regularized by surface tension corrections, the interface may develop various patterns including some of fractal-type (see [@Low] for a recent development and references therein). Eq. (\[sigmaeq\]) will provide the surface charge density $\sigma $ as a function of time. From it, we can compute the electric field and move the interface with (\[bb1\]). Two limits can be easily identified in the case that there is no charge injection inside the plasma, i.e. $\mbox{j}_{\nu}^-\approx 0$: a) the limit of large conductivity$$\varrho^{-1}\gg 1, \ \mbox{E}_{\nu }^{-}=0,$$so that the interface is equipotential and b) the limit of small conductivity$$\varrho^{-1}\ll 1,\ \frac{\partial }{\partial t}\left( \sigma \Delta \tau \right) =0\Rightarrow \sigma \Delta \tau =\mathrm{const.},$$where the charge contained by a surface element is constant and the density only changes through deformation (with change of area) of the interface. In the next sections we will study the intermediate case of finite resistivity. The case of finite resistivity in 2-D geometries ================================================ As an application we will solve the 2-D case for different conductivities. In order to grasp some features of the model, we will first consider how fronts with radial symmetry evolve. Then we will study the stability of those fronts under small perturbations and finally solve the model numerically in order to test some of the analytical predictions. Solutions with radial symmetry ------------------------------ The electric potential created by a surface charge distribution with radial symmetry at the distance $r$ is found by solving the equation $$\Delta V=\sigma \delta(r).$$ The fundamental solution turns out to be in polar coordinates $$V(\mathbf{x})=\begin{cases} C\log |\mathbf{x}|,& |\mathbf{x}| > r\\ C\log r, & |\mathbf{x}| \le r \end{cases}$$ where $C$ will be determined by the condition of the electric field jump at the surface. From the potential solution we can compute the electric field which has a discontinuity at the surface $$E_{\nu }^{-}=0, \qquad E_{\nu }^{+}=-\frac{C}{r}.$$ For the current density, the solution of gives $$\mathbf{j}=\frac{I(t)}{2\pi r^2}\mathbf{r},$$ and finally using and the fact that $v_N = dr/dt$ and $\kappa = 1/r$, we get $$\frac{\partial \sigma }{\partial t}+\frac{1}{r}\frac{\partial r}{\partial t}\sigma = -\frac{I(t)}{2\pi r}. \label{sigma}$$ This equation can be easily solved. We can write it as $$\frac{\partial (r\sigma) }{\partial t} = -\frac{I(t)}{2\pi},$$ to get $$\sigma = -\frac{Q(t)}{2\pi r}, \quad \text{with} \quad Q(t)= \int_0^t I(t)\,dt,$$ where we have assumed that $\sigma(0)=0$. Now we can see from the condition that $C = -Q(t)/2\pi$, so $$E_{\nu}^{+}=\frac{Q(t)}{2\pi r}.$$ Then, defining $\varepsilon\equiv D$, the interface evolves according to as $$\frac{dr}{dt}=-\left(\frac{Q(t)}{2\pi}+\varepsilon \right)\frac{1}{r}+2\sqrt{\varepsilon\alpha(|Q(t)/2\pi r|)}. \label{r0}$$ We shall next analyze two limiting cases. First the case where $$r \ll \frac{|Q(t)|}{4\pi \varepsilon^{\frac{1}{2}}\sqrt{\alpha(|Q(t)/2\pi r|)}}, \quad \text{and} \quad \varepsilon \ll 1.$$ Then expression reduces to $$\frac{dr}{dt}\approx-\frac{Q(t)}{2\pi r},$$ so $$r(t)\approx \sqrt{r(0)^2 - \int_0^t Q(t')/\pi \,dt'}.$$ For the particular case where $Q(t)=Q$ is constant, $$r(t)\approx \sqrt{r(0)^2 - t Q/\pi}.$$ The second case is the opposite one.
If $$r \gg \frac{|Q(t)|}{4\pi \varepsilon^{\frac{1}{2}}\sqrt{\alpha(|Q(t)/2\pi r|)}}, \quad \text{and} \quad \varepsilon \ll 1,$$ we have now $$\frac{dr}{dt}\approx 2\varepsilon^{\frac{1}{2}}\sqrt{\alpha(|Q(t)/2\pi r|)}.$$ For the particular case $Q(t)= Q$, by standard asymptotic calculations, when $t\gg 1$ we deduce $$\quad r(t)\approx \frac{|Q|}{\pi}\log{t}.$$ Stability analysis ------------------ We will now study the stability of the fronts under small perturbations. We change by a small amount the position of the front as well as the charge density. The perturbed position and surface charge density on the interface will be parametrized using the polar angle as $$\begin{aligned} \label{pert1} r(\theta,t)=r(t)+\delta S(\theta,t),\\ \label{pert2} \sigma(\theta,t)=-\frac{Q(t)}{2\pi r(\theta,t)}+\delta \Sigma(\theta,t),\end{aligned}$$ where $r(t)$ is the solution of the equations for the radially symmetric front, $Q(t)= \int_0^t I(t)\,dt$ and $\delta$ a small parameter. The electric potential will change by $\delta V_p(\mathbf{x})$ after adding a geometrical perturbation of the interface and some extra charge on it. This term satisfies the equation $\Delta V_p = O(\delta)$. Changing coordinates to $$\mathbf{x} \longrightarrow \mathbf{\tilde{x}}=\mathbf{x}\,\frac{r(t)}{r(\theta,t)},$$ the perturbed surface becomes a disk of radius $r(t)$ again, and solving for it yields $$\begin{aligned} V_p(\tilde{r},\theta)=\sum_1^{\infty}\psi_n \cos(n\theta)\left(\frac{r}{\tilde{r}}\right)^n, \,\,\tilde{r} > r\label{cvv1}\\ V_p(\tilde{r},\theta)=\sum_1^{\infty}\varphi_n \cos(n\theta)\left(\frac{\tilde{r}}{r}\right)^n, \,\,\tilde{r} \le r \label{cv1}\end{aligned}$$ where it is imposed that $V_p$ remains finite at the origin and vanishes at very large distances. Now taking the condition of continuity for the potential, we have at the interface $\mathbf{x}_s$ (in the original coordinate system) $$V_p(\mathbf{x}_s^+)= V_p(\mathbf{x}_s^-) + S\,\frac{Q(t)}{2\pi r(t)},$$ and writing the surface perturbation as $$S = \sum_{n=1}^{\infty}s_n(t)\cos(n\theta), \label{sn}$$ the coefficients of the series in and can be related by $$\psi_n = \varphi_n + \frac{Q(t)}{2\pi r} s_n. \label{coeff}$$ Making use of the expressions –, one can calculate the electric field to order $\delta$. We will need the normal components of the electric field at both sides of the surface, together with the jump condition, to find the charge perturbation. The normal components of the electric field at the interface are $$\begin{aligned} E_{\nu}^+&=&\frac{Q(t)}{2\pi (r+\delta S)}+\delta\sum_1^{\infty}\left(\varphi_n + \frac{Q(t)}{2\pi r}s_n\right)\frac{n}{r}\cos(n\theta),\nonumber \\ E_{\nu}^-&=&-\delta\sum_1^{\infty}\varphi_n \frac{n}{r}\cos(n\theta), \label{Enormal}\end{aligned}$$ thus $$\Sigma = -\sum_{n=1}^\infty \left(2\varphi_n + \frac{Q(t)}{2\pi r} s_n \right)\frac{n}{r}\cos(n\theta). \label{sigman}$$ The dynamics of the front will be changed by the perturbation introduced.
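As a brief aside before carrying the perturbation analysis further, the unperturbed radial law (\[r0\]) can be integrated numerically and compared with the drift-dominated asymptotic estimate derived above. The following Python sketch (our illustration, not part of the original text; the values of $Q$, $\varepsilon$ and the time step are arbitrary) uses a simple explicit Euler step.

```python
import math

def alpha(E):
    """Dimensionless Townsend factor alpha(|E|)."""
    E = abs(E)
    return E * math.exp(-1.0 / E) if E > 0 else 0.0

def drdt(r, Q, eps):
    """Right-hand side of the radial law (Eq. r0) for a constant total charge Q."""
    E_plus = Q / (2.0 * math.pi * r)            # unperturbed field at radius r
    return -(Q / (2.0 * math.pi) + eps) / r + 2.0 * math.sqrt(eps * alpha(E_plus))

# Illustrative run: negative charge Q drives the expansion; eps = D is small.
r, t, dt = 1.0, 0.0, 1e-3
Q, eps = -10.0, 0.05
for _ in range(5000):
    r += dt * drdt(r, Q, eps)
    t += dt
print(t, r, math.sqrt(1.0 - t * Q / math.pi))   # compare with the small-r estimate
```

For these parameters the drift term dominates, so the integrated radius stays close to the $\sqrt{r(0)^2 - tQ/\pi}$ estimate; the logarithmic regime would only appear at much larger radii.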
The curvature correction turns out to be $$\kappa = \frac{r^2+2rS\delta-rS_{\theta\theta}\delta+O(\delta^2)}{\left(r^2+2rS_{\theta}\delta+O(\delta^2)\right)^{\frac{3}{2}}} = \frac{1}{r}-\frac{S+S_{\theta\theta}}{r^2}\delta+O(\delta^2), \label{kappap}$$ (the subindex $\theta$ means the partial derivative with respect this variable) and the normal component of the velocity $$v_N = \frac{d r(t)}{d t} +\delta \frac{\partial S(\theta,t)}{\partial t}, \label{vp}$$ so the contour model equation , to first order gives $$\begin{split} \frac{d r(t)}{d t} + \delta\frac{\partial S(\theta,t)}{\partial t} &= -\frac{Q(t)}{2\pi r}+\delta S\frac{Q(t)}{2\pi r^2} -\delta\sum_1^{\infty}\left(\varphi_n + \frac{Q(t)}{2\pi r}s_n\right)\frac{n}{r}\cos(n\theta) + 2\varepsilon ^{\frac{1}{2}}\sqrt{\alpha_0+\delta \alpha_1}-\\&-\varepsilon \left(\frac{1}{r}-\delta\,\frac{S+S_{\theta\theta}}{r^2}\right). \label{cmp1} \end{split}$$ where we have written the Townsend function up to first order as $\alpha = \alpha_0 +\delta\alpha_1+ O(\delta^2)$. Now, we have $$|E_0+\delta E_1|e^{-\frac{1}{|E_0+E_1\delta|}} \approx|E_0|e^{-\frac{1}{|E_0|}}+\delta\,\mathrm{sign}(E_0) E_1\biggl(1+\frac{1}{|E_0|}\biggr)e^{-\frac{1}{|E_0|}}=\alpha_0 +\delta\alpha_1,$$ where, using , $$\begin{aligned} E_0&=&\frac{Q(t)}{2\pi r},\label{elec0}\\ E_1&=&\sum_{n=1}^{\infty}\left(n\varphi_n + (n-1)\frac{Q(t)}{2\pi r}s_n\right)\frac{1}{r}\cos(n\theta),\end{aligned}$$ so that $$\sqrt{\alpha}=\sqrt{\alpha_0}+\delta\frac{\alpha_1}{2\sqrt{\alpha_0}}=\sqrt{\alpha_0}\biggl[1+\delta\,\mathrm{sign}(Q(t))\frac{E_1}{2|E_0|}\biggl(1+\frac{1}{|E_0|}\biggr)\biggr].$$ Taking into account for the zero order term, we get from $$\frac{\partial S}{\partial t} = S\frac{Q(t)}{2\pi r^2} -\sum_1^{\infty}\left(\varphi_n + \frac{Q(t)}{2\pi r}s_n\right)\frac{n}{r}\cos(n\theta) + \varepsilon \left(\frac{S+S_{\theta\theta}}{r^2}\right)+\varepsilon^\frac{1}{2}\frac{\alpha_1}{\sqrt{\alpha_0}}, \label{eqS}$$ and finally making use of the expansion for the perturbation $S$ yields $$\begin{split} \label{eqSn} \frac{ds_n}{dt}& = \Biggl[-1+\varepsilon^\frac{1}{2}\frac{2\pi r\sqrt{\alpha_0}\,\mathrm{sign}(Q(t))}{|Q(t)|}\biggl(1+\frac{2\pi r}{|Q(t)|}\biggr)\Biggr]\frac{n}{r}\varphi_n-\\&-\Biggl[\frac{Q(t)}{2\pi r^2}(n-1)+\frac{\varepsilon}{r^2}(n^2-1)+\varepsilon^\frac{1}{2}\frac{(n-1)\sqrt{\alpha_0}}{r}\left(1+\frac{2\pi r}{|Q(t)|}\right)\Biggr]s_n. \end{split}$$ In order to find the correction to the charge density we take Eq. and multiply it by $r(\theta,t)$. Then we use the curvature expansion written as $$\kappa =\frac{1}{r(\theta,t)}-\frac{S_{\theta \theta}}{r^2}\delta,$$ (being $r=r(t)$ is the zero order term in the position), and the fact that $$v_N=\frac{dr(\theta,t)}{dt}.$$ Hence $$\frac{\partial (r(\theta,t) \sigma(\theta,t)) }{\partial t} - r(\theta,t)\frac{S_{\theta \theta}}{r^2}v_N \sigma(\theta,t)\,\delta = -\frac{r(\theta,t)}{\varrho}E_\nu^--\frac{I(t)}{2\pi},$$ so that, at $O(\delta)$, $$\frac{\partial (r \Sigma)}{\partial t} + \frac{S_{\theta \theta}}{r}\frac{Q(t)}{2\pi r}\frac{dr}{dt}= \frac{1}{\varrho}\sum_1^{\infty}n\varphi_n \cos(n\theta). 
\label{eqSig}$$ Making use of the , and , we get $$-\frac{d}{dt}\left( 2n\varphi_n+n\frac{Q(t)}{2\pi r}s_n\right) =\frac{Q(t)}{2\pi r^2}\frac{dr}{dt}n^2 s_n+\frac{n}{\varrho}\varphi_n,$$ or after simplifying $$2\frac{d\varphi_n}{dt}+ \frac{Q(t)}{2\pi r}\frac{ds_n}{dt}=-\frac{Q(t)}{2\pi r^2}\frac{dr}{dt}(n-1) s_n-\frac{I(t)}{2\pi r}s_n-\frac{1}{\varrho}\varphi_n.$$ Finally, using and $$\begin{split} 2\frac{d\varphi_n}{dt}+\frac{Q(t)}{2\pi r}\Biggl[-\frac{n}{r}\varphi_n+\varepsilon^\frac{1}{2}\frac{2\pi r\sqrt{\alpha_0}\,\mathrm{sign}(Q(t))}{|Q(t)|}\biggl(1+\frac{2\pi r}{|Q(t)|}\biggr)\frac{n}{r}\varphi_n-\\-\frac{Q(t)}{2\pi r^2}(n-1)s_n-\frac{\varepsilon}{r^2}(n^2-1)s_n-\varepsilon^\frac{1}{2}\frac{(n-1)\sqrt{\alpha_0}}{r}\biggl(1+\frac{2\pi r}{|Q(t)|}\biggr)s_n\Biggr]=\\=-\frac{Q(t)}{2\pi r^2}\biggl(-\frac{Q(t)}{2\pi r}-\frac{\varepsilon}{r}+2\varepsilon^{\frac{1}{2}}\sqrt{\alpha_0}\biggr)(n-1) s_n -\frac{I(t)}{2\pi r}s_n-\frac{1}{\varrho}\varphi_n \end{split},$$ and after rearranging the terms $$\begin{split} \frac{d\varphi_n}{dt}& =\frac{1}{2}\Biggl[\frac{Q(t)}{2\pi r^2}n-\varepsilon^\frac{1}{2}\frac{n\sqrt{\alpha_0}}{r}\left(1+\frac{2\pi r}{|Q(t)|}\right)-\frac{1}{\varrho}\Biggr]\varphi_n+\\&+\Biggr\{\frac{Q(t)}{2\pi r^2}\Biggl[\frac{Q(t)}{2\pi r}+\frac{(n+2)\varepsilon}{2r}+\varepsilon^{\frac{1}{2}}\sqrt{\alpha_0}\biggl(\frac{\pi r}{|Q(t)|}-\frac{1}{2}\biggr)\Biggr](n-1)-\frac{I(t)}{4\pi r}\Biggr\}s_n. \end{split} \label{eqPhin}$$ Thus the time evolution of each particular mode has been obtained and it is governed by and . Special limits -------------- First we study the limit of ideal conductivity. It corresponds to $\varrho \to 0$, and hence, from , we can conclude that $\varphi_n \to 0$. Physically this means that in the limit of very high conductivity, the electric field inside goes to zero ($E_\nu^- \to 0$), as we approach to the behavior of a perfect conductor. If we consider that $Q(t)=Q_0$ is constant or its variation in time is small compared with the evolution of the modes (which also implies $I(t)\to 0$), and the same for the radius of the front $r(t)=r_0$, we can try a solution $s_n = \exp(\omega_n t),\, \varphi_n =0$, to , and get a discrete dispersion relation of the form $$\omega_n=-\frac{Q_0}{2\pi r_0^2}(n-1)-\frac{\varepsilon}{r_0^2}(n^2-1)-\varepsilon^\frac{1}{2}\frac{(n-1)\sqrt{\alpha_0}}{r_0}\left(1+\frac{2\pi r_0}{|Q_0|}\right). \label{disp1}$$ Next we consider the limit of finite resistivity, but such that the total charge is constant at the surface, or varies very slowly. Writing as $$\frac{d\varphi_n}{dt}=-\frac{d}{dt}\left(\frac{Q(t)}{4\pi r}s_n\right)-\frac{Q(t)}{4\pi r^2}\frac{dr}{dt}n s_n-\frac{1}{2\varrho}\varphi_n,$$ we have now $$\frac{d\varphi_n}{dt}=-\frac{Q_0}{4\pi r_0}\frac{ds_n}{dt}-\frac{1}{2\varrho}\varphi_n. \label{dvar}$$ For a small enough conductivity, $\varrho \to \infty $ so no extra charge reaches the surface, we find $\varphi_n =-\frac{Q_0}{4\pi r_0}s_n$, and with $s_n = \exp(\omega_n t)$, yields $$\omega_n=-\frac{Q_0}{2\pi r_0^2}\left(\frac{n}{2}-1\right)-\frac{\varepsilon}{r_0^2}(n^2-1)-\frac{\varepsilon^\frac{1}{2}\sqrt{\alpha_0}}{r_0}\left(1+\frac{2\pi r_0}{|Q_0|}\right)\left(\frac{3n}{2}-1\right). \label{disp2}$$ In a curved geometry we can see that the modes are discrete. However, if we compare and , for small $n$ and vanishingly small $\alpha_0$ there is a $1/2$ factor discrepancy in the dispersion curve between the two limits. 
The origin of this prefactor was discussed for planar fronts in [@CPRL], and the dispersion relation for planar fronts was obtained in the case of constant charge in [@abft]. We get in this 2-D curved case the same factor $1/2$ that we got for the planar case. On the other hand, imposing constant potential at the surface gives a factor of $1$. The intermediate situations can be studied by solving the system and . Another important consequence is that in both cases the maximum growth corresponds to a perturbation with $$n \propto|Q_0|/D, \label{disp3}$$ provided that the $\varepsilon^\frac{1}{2}$ term can be neglected and $Q_0$ is negative, implying that the number of fingers increases with the net charge and decreases with electron diffusion. Numerical simulations --------------------- In order to test the analytical predictions, we have calculated numerically the dispersion relation curves for the cases studied previously, when $\varrho \to 0$, so that we have a perfectly conducting plasma, and when $\varrho$ remains finite. We will outline the numerical algorithms and present the results here. We start with the case of finite resistivity. The 2-D solution for the potential problem can be written as $$\Phi(\mathbf{x})=\int_{\partial\Omega}\frac{1}{2\pi }\log \left\vert \mathbf{x}-\mathbf{x}^{\prime}\right\vert \sigma (\mathbf{x}^{\prime })ds'.$$ Note that the integration domain $\partial\Omega$ is the interface curve. The electric field is then $$\mathbf{E}=-\int_{\partial \Omega}\frac{1}{2\pi }\frac{\mathbf{x}-\mathbf{x}^{\prime}}{\left\vert \mathbf{x}-\mathbf{x}^{\prime}\right\vert^2} \sigma (\mathbf{x}^{\prime})ds^{\prime}.$$ In order to obtain the component in the normal direction $E_{\nu}$, we multiply it by the normal pointing outwards from the plasma region, i.e. $$\mathbf{n}=\frac{(y_{\beta},-x_{\beta})}{\sqrt{x_{\beta}^{2}+ y_{\beta}^{2}}},$$ where the subindex denotes the derivative with respect to the curve parameter $\beta$. So we can write $$E_{\nu} = -\int_{\partial\Omega}\frac{1}{2\pi }\frac{(x-x^{\prime},y-y^{\prime})}{(x-x^{\prime})^{2}+(y-y^{\prime})^{2}} \frac{(y_{\beta},-x_{\beta})}{\sqrt{x_{\beta}^{2}+ y_{\beta}^{2}}}\sigma (x^{\prime},y^{\prime})\sqrt{x^{\prime\, 2}_{\beta}+ y^{\prime\, 2}_{\beta}}d{\beta}^{\prime}.$$ Some care must be taken when approximating the integral by a discrete sum on the interface in order to obtain the limit $E_{\nu}^{+}$: when $\mathbf{x}$ coincides with $\mathbf{x}^{\prime}$ there is an extra contribution of half the pole, which is $\sigma(\mathbf{x})/2$. The $E_{\nu}^{-}$ can be obtained from the boundary condition , and the curvature must be expressed in the appropriate coordinate system. The case of constant potential, which corresponds to $\varrho=0$, is treated numerically as follows. We have to fulfill the condition $$\int_{\partial\Omega}\frac{1}{2\pi }\log \left\vert \mathbf{x}-\mathbf{x}^{\prime }\right\vert \sigma (\mathbf{x}^{\prime })ds=V_{0},$$ $V_{0}$ being a constant for any $\mathbf{x}$ belonging to $\partial\Omega$.
Discretizing the domain in small segments $A_{i}$ between points $\mathbf{x}_{i}$ and $\mathbf{x}_{i+1}$ we can approximate the integral as $$\sigma (\overline{\mathbf{x}}_{i})\int_{A_{i}}\frac{1}{2\pi }\log \left\vert \overline{\mathbf{x}}_{i}-\mathbf{x}^{\prime }\right\vert ds+\sum_{\substack{ j \\ i\neq j}}\frac{1}{2\pi }\log \left\vert \overline{\mathbf{x}}_{i}-\overline{\mathbf{x}}_{j}\right\vert \sigma (\overline{\mathbf{x}}_{j})\left\vert \mathbf{x}_{j+1}-\mathbf{x}_{j}\right\vert =V_{0},$$ where $\overline{\mathbf{x}}_{i}$ is the mean point of the $A_{i}$ segment. The self contribution of the segment to the integral is taken as $$\int_{A_{i}}\frac{1}{2\pi }\log \left\vert \overline{\mathbf{x}}_{i}-\mathbf{x}^{\prime }\right\vert ds=\int_{-\frac{h_{i}}{2}}^{\frac{h_{i}}{2}}\frac{1}{2\pi }\log \left\vert x\right\vert dx=\frac{1}{2\pi }h_{i}\left( \log \frac{h_{i}}{2}-1\right),$$ $h_{i}$ being the length of $A_{i}$. So we end up with the equation $$M_{ij}\sigma _{j}=V_{0}\mathbf{1},$$where $\mathbf{1}$ is the vector with all its components equal to one, $\sigma _{j}=\sigma (\overline{\mathbf{x}}_{j})$, and$$M_{ij}=\left\{ \begin{array}{c} \frac{1}{2\pi }h_{j}\log \left\vert \overline{\mathbf{x}}_{i}-\overline{\mathbf{x}}_{j}\right\vert,\ \ \ \text{for} \ i\neq j \\ \frac{1}{2\pi }h_{i}\left( \log \frac{h_{i}}{2}-1\right), \ \text{for} \ i=j \end{array} \right.$$Due to the linearity of the problem, we can solve $M_{ij}\sigma _{j}=\mathbf{1}$ and subsequently rescale the solution in order to fulfill $\sum \sigma_{j}h_{j}=Q$. In the numerical simulations presented here, we will follow the evolution of a total initial dimensionless charge $Q=-10$ distributed uniformly along the curve given by $$\begin{aligned} x(\theta) &=& [1+ 0.05\cos(n\theta)]\cos(\theta),\nonumber\\ y(\theta) &=& [1+ 0.05\cos(n\theta)]\sin(\theta), \label{icharge}\end{aligned}$$ where $n$ gives the mode of the perturbation and $\theta$ is the curve parameter. We assume that there is no input current, so $j_\nu^{-}=0$ in , and then compute the exponential growth of each mode for a small period of time in order to get the dispersion curve. In Fig. \[fig3\], for different values of the inverse of the resistivity coefficient $1/\varrho$ (or effective conductivity), we plot the corresponding dispersion curves. Note that the slope increases with the conductivity of the plasma, the maxima move to higher modes, and for larger $n$’s the dispersion curves become negative as predicted by and . The slope around the origin $n=1$ is larger for the case of ideal conductivity, i.e. when the interface is equipotential. ![Dispersion relation for the discrete modes of a perturbation with initial value $Q=-10$ for different inverse resistivities $1/\varrho$. The $\blacktriangle$ are for 0, $+$ for 5, $\ast$ for 10, $\blacksquare$ for 15, $\blacktriangledown$ for 25. The case of zero resistivity corresponds to $\blacklozenge$.[]{data-label="fig3"}](fig3.eps){width="50.00000%"} Comparisons with 2-D positive discharge experiment ================================================== In this section we will make some estimations in order to test the validity of the assumptions made in our contour dynamics model. We will use the experimental data presented in reference [@Japos]. The experiment reported there consists in measuring the potential and electric field distribution of a surface streamer discharge on a dielectric material.
For that, a technique based on Pockels crystals has been applied in order to obtain some temporal and spatial resolution of the discharge (see the reference for details). However, a note of warning is in order: a surface discharge is not a truly 2-D discharge, due to the fact that there is a vertical contribution of the electric field, and the discharge has two different interfaces, the air and the substrate, so the boundary conditions are not the same as the ones presented so far in this paper. Nevertheless, and keeping that in mind, we may try a quantitative estimation à la Fermi from our results and compare it with the actual experiment. Here is a brief account of the experiment. A discharge is created on a dielectric surface using a positive tip and branching is observed. Then the potential is measured using Pockels crystals, laser pulses and a CCD camera. The temporal resolution is 3.2 ns and the electric field close to the tip reaches values of 3 kV/mm, leaving behind a potential gradient of 0.5 kV/mm. At position $r=8$ mm the front moves with an estimated velocity from the pictures of 0.18 mm/ns (from the charge density data, the front has a radius of 4 mm at 3 ns, 8 mm at 15 ns and 9 mm at 28 ns). The pictures show a sharp interface for the charge distribution, so our model should be able to give some quantitative predictions. Unfortunately, there is only one discharge reported, so the estimations we are going to make are very rough. The experimental data gives a characteristic front speed $U_0\approx 0.1$ mm/ns, and $\mbox{E}_0 \approx 200$ kV/m. In order to get an estimation of the diffusion coefficient $D$ we can make use of the expression . We take $$\mbox{E}_{\nu }^+\approx\frac{3\, \mathrm{kV/mm}}{\mbox{E}_0} \approx 1.5,\,\,\mathrm{and}\,\,\, v_{N}\approx\frac{0.185\,\mathrm{mm/ns}}{U_0} \approx 1.9,$$ so that $D\approx 0.05$ is the number that we get. Note that from the expression $U_0=\mu_e \mbox{E}_0$, we could find the experimental value of the mobility $\mu_e$ for this discharge. Now we can make a prediction. The maximum of the dispersion relation will tell us the number of fingers one may find in such experiments. We have calculated the dispersion relation for two limiting cases: the limit of ideal conductivity and the limit of infinite resistivity . Those limits give lower and upper estimates for the actual dispersion relation. We expect that the experiment will lie in between and be closer to the predictions given by the limit of infinite resistivity, as the discharge is on a dielectric plate. But before using those dispersion relations we need a further estimation of the surface charge density. We can get the surface density from the jump of the electric field across the interface. So the dimensionless expression reads $$\sigma_0 \approx \frac{(3-0.5)\,\mathrm{kV/mm}}{\mbox{E}_0}, \,\,\,\mathrm{at}\,\, r_0=8 \mathrm{mm}.$$ In the dispersion relation expressions and we have to make the substitution $Q_0/2\pi r_0 =\sigma_0$ and find the maximum over $n$. The ideal conductivity case yields a maximum at $n \approx 76$, and the case of infinite resistivity turns out to give $n \approx 14$. Counting the number of real fingers in the experimental pictures at 15 ns, the number is around 20 (one has to extrapolate the number as the pictures do not show the whole discharge).
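The maximisation over the mode number used above can be automated. The following Python sketch (our illustration, not part of the original analysis) simply scans Eqs. (\[disp1\]) and (\[disp2\]) over integer modes for user-supplied values of $Q_0$, $r_0$ and $\varepsilon$; the sample numbers at the bottom are placeholders and are not the experimental values.

```python
import math

def alpha0(Q0, r0):
    """Townsend factor evaluated at the unperturbed interface field |Q0|/(2 pi r0)."""
    E0 = abs(Q0) / (2.0 * math.pi * r0)
    return E0 * math.exp(-1.0 / E0) if E0 > 0 else 0.0

def omega_ideal(n, Q0, r0, eps):
    """Eq. (disp1): growth rate in the limit of ideal conductivity."""
    a0 = alpha0(Q0, r0)
    return (-Q0 / (2 * math.pi * r0**2) * (n - 1)
            - eps / r0**2 * (n**2 - 1)
            - math.sqrt(eps) * (n - 1) * math.sqrt(a0) / r0
              * (1 + 2 * math.pi * r0 / abs(Q0)))

def omega_insulating(n, Q0, r0, eps):
    """Eq. (disp2): growth rate in the limit of very small conductivity."""
    a0 = alpha0(Q0, r0)
    return (-Q0 / (2 * math.pi * r0**2) * (n / 2 - 1)
            - eps / r0**2 * (n**2 - 1)
            - math.sqrt(eps) * math.sqrt(a0) / r0
              * (1 + 2 * math.pi * r0 / abs(Q0)) * (3 * n / 2 - 1))

def most_unstable(omega, Q0, r0, eps, n_max=500):
    return max(range(1, n_max + 1), key=lambda n: omega(n, Q0, r0, eps))

Q0, r0, eps = -10.0, 1.0, 0.05                  # placeholder values
print(most_unstable(omega_ideal, Q0, r0, eps),
      most_unstable(omega_insulating, Q0, r0, eps))
```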
This number is much closer to the lower limit, as we anticipated, indicating that the electrons on the dielectric surface, when moving through the plasma, feel a much higher resistivity than in a conductor. Although we do not expect to capture the whole physics of the discharge with the contour model, some essential ingredients for the early development of the front seem to be well accounted for by it. The theoretical prediction made in this section is a rather good one, despite all the approximations made, and gives some insight into the parameters involved, such as the mobility of the carriers, the diffusion coefficient, the number of fingers, and so on. Conclusions =========== We have presented the complete derivation of the contour dynamics model for electric discharges introduced in [@Arrayas10]. The model appears as the leading asymptotic description for the minimal streamer model when the electron diffusion coefficient is very small. It consists of two integro-differential equations defined at the boundary of the plasma region: one for the motion of the points of the boundary, where the velocity in the normal direction is given in terms of the electric field created by the net charge there, and a second equation for the evolution of the charge density at the boundary. This second equation is very similar to the Taylor-Melcher model in electrohydrodynamics [@S]. In the model the electric field is determined by solving the Poisson equation with a given surface charge density, leading to a singular integral of the density. Once our model has been deduced, we have computed explicit solutions with cylindrical symmetry and investigated their stability. The resulting dispersion relation is such that perturbations with small mode numbers can grow exponentially fast. In fact, both the number of unstable modes and the most unstable mode (the one corresponding to the maximum of the dispersion relation) depend critically on the electric resistivity of the medium. We have computed analytically the dispersion relation and found that the number of unstable modes grows with the inverse of the resistivity (the conductivity) and the most unstable mode also increases with it. In the limit of vanishing resistivity one can consider the medium as a perfect conductor and therefore impose that the potential is constant at the boundary. The dispersion relation for the model with finite resistivity converges to this limit when resistivity tends to zero. We have implemented a numerical procedure to solve our model in general situations. In order to develop the numerical method, we needed to evaluate certain singular integrals that appear when computing the electric field. As one result of the numerical method, the dispersion relations have been computed and compared with the analytical results. In contrast to our previous communication [@Arrayas10], we have paid special attention to the effects of the Townsend expression for impact ionization on the dispersion relation and to the cases of intermediate resistivities. Finally, we have taken some experimental data from a positive surface streamer discharge and compared them with our model predictions. The number of fingers calculated from our model is of the same order as the one observed in the actual experiment. We have also been able to estimate the diffusion coefficient from the data.
We have shown that the behaviour of the carriers inside the plasma is closer to the limit of high resistivity, so the importance of taking into account the plasma resistivity is made clear. Thus, it is proved that our contour model is able to capture essential parts of the physics involved in the early development of the streamer discharge, with an extra bonus: we can study more complex geometries and general situations both analytically and numerically. We are now in the process of completing the fully 3-D case and extending these results. The authors acknowledge support from the Spanish Ministerio de Educación y Ciencia under projects AYA2009-14027-C05-04 and MTM2008-0325. [99]{} A. N. Lagarkov and I. M. Rutkevich, [*Ionization waves in electric breakdown of gases*]{} (Springer-Verlag, New York, 1994). I. M. Rutkevich, Sov. J. Plasma Phys. [**15**]{}, 844 (1989). U. Ebert, W. van Saarloos, and C. Caroli, Phys. Rev. Lett. [**77**]{}, 4178 (1996); Phys. Rev. E **55**, 1530 (1997). M. Arrayás, M. A. Fontelos, and J. L. Trueba, Phys. Rev. E **71**, 037401 (2005); J. Phys. A **39**, 7561 (2006). A. S. Kyuregyan, Phys. Rev. Lett. **101**, 174505 (2008). M. Arrayás, M. A. Fontelos and J. L. Trueba, J. Phys. D: Appl. Phys. **39**, 5176–5182 (2006). M. Arrayás, U. Ebert, W. Hundsdorfer, Phys. Rev. Lett. **88**, 174502 (2002). M. Arrayás, M. A. Fontelos, and J. L. Trueba, Phys. Rev. Lett. [**95**]{}, 165001 (2005). M. Arrayás, S. Betelú, M. A. Fontelos, and J. L. Trueba, SIAM J. Appl. Math. [**68**]{}, 1122 (2008). S. K. Dhali and P. F. Williams, Phys. Rev. A [**31**]{}, 1219 (1985); J. Appl. Phys. [**62**]{}, 4696 (1987). P. A. Vitello, B. M. Penetrante, and J. N. Bardsley, Phys. Rev. E **49**, 5574 (1994). M. Arrayás, M. A. Fontelos, C. Jiménez, Phys. Rev. E [**81**]{}, 035401(R) (2010). D. A. Saville, Annu. Rev. Fluid Mech. [**29**]{}, 27–64 (1997). E. D. Lozansky and O. B. Firsov, J. Phys. D: Appl. Phys. [**6**]{}, 976–981 (1973). M. Arrayás, J. L. Trueba, Cont. Phys. **46**, 265–276 (2005). Y. P. Raizer, [*Gas Discharge Physics*]{} (Springer, Berlin, 1991). P. Ya. Polubarinova-Kochina, Dokl. Akad. Nauk USSR [**47**]{}, no. 4, 254–257 (1945) (in Russian). S. Li, J. S. Lowengrub, J. Fontana, P. Palffy-Muhoray, Phys. Rev. Lett. [**102**]{}, 174501 (2009). L. M. Pismen, [*Patterns and Interfaces in Dissipative Dynamics*]{} (Springer, Berlin, 2006). M. Arrayás, M. A. Fontelos and J. L. Trueba, Phys. Rev. Lett. [**101**]{}, 139502 (2008). D. Tanaka, S. Matsuoka, A. Kumada and K. Hidaka, J. Phys. D: Appl. Phys. **42**, 075204 (2009).
--- abstract: | Energy-parity objectives combine $\omega$-regular with quantitative objectives of reward MDPs. The controller needs to avoid running out of energy while satisfying a parity objective. We refute the common belief that, if an energy-parity objective holds almost-surely, then this can be realised by some finite memory strategy. We provide a surprisingly simple counterexample that only uses coBüchi conditions. We introduce the new class of bounded (energy) storage objectives that, when combined with parity objectives, preserve the finite memory property. Based on these, we show that almost-sure and limit-sure energy-parity objectives, as well as almost-sure and limit-sure storage parity objectives, are in $\NP\cap\coNP$ and can be solved in pseudo-polynomial time for energy-parity MDPs. author: - bibliography: - 'bibliography.bib' title: 'MDPs with Energy-Parity Objectives' --- Introduction ============ [**Context.**]{} Markov decision processes (MDPs) are a standard model for dynamic systems that exhibit both stochastic and controlled behaviour [@Puterman:book]. Such a process starts in an initial state and makes a sequence of transitions between states. Depending on the type of the current state, either the controller gets to choose an enabled transition (or a distribution over transitions), or the next transition is chosen randomly according to a predefined distribution. By fixing a strategy for the controller, one obtains a Markov chain. The goal of the controller is to optimise the (expected) value of some objective function on runs of such an induced Markov chain. [**Our Focus and Motivation.**]{} In this paper we study MDPs with a finite number of states, where numeric rewards (which can be negative) are assigned to transitions. We consider quantitative objectives, e.g. the total expected reward or the limit-average expected reward [@Puterman:book; @CD2011]. Note that the total reward is not bounded *a priori*. We also consider $\omega$-regular objectives that can be expressed by parity conditions on the sequence of visited states (subsuming many simpler objectives like Büchi and coBüchi). When reasoning about controllers for mechanical and electrical systems, one may need to consider quantitative objectives such as the remaining stored energy of the system (which must not fall below zero, or else the system fails), and, at the same time, parity objectives that describe the correct behaviour based on the temporal specification. Thus one needs to study the combined *energy-parity* objective. [**Status Quo.**]{} Previous work in [@CD2011] (Sec. 3) considered the decidability and complexity of the question whether the energy-parity objective can be satisfied *almost-surely*, i.e. whether there exists a strategy (or: a controller) that satisfies the objective with probability $1$. They first show that in the restricted case of energy-Büchi objectives, finite memory optimal strategies exist, and that almost-sure satisfiability is in $\NP\cap\coNP$ and can be solved in pseudo-polynomial time. They then describe a direct reduction from almost-sure energy-parity to almost-sure energy-Büchi. This reduction relied on the claim that the winning strategy could be chosen from a certain subclass of strategies, which we call *colour-committing*. Such a strategy eventually commits to a particular winning even colour, where this colour must be seen infinitely often almost-surely and no smaller colour may ever be seen after committing.
However, this reduction from almost-sure energy-parity to almost-sure energy-Büchi in [@CD2011] (Sec. 3) contains a subtle error (which also appears in the survey in [@chatterjee2011games] (Theorem 4)). In fact, we show that strategies for almost-sure energy-parity may require infinite memory. [**Our contributions**]{} can be summarised as follows. *1)* We provide a simple counterexample that shows that, even for almost-sure energy-coBüchi objectives, the winning strategy requires infinite memory and cannot be chosen among the colour-committing strategies. *2)* We introduce an energy *storage* objective, which requires that the energy objective is met using a finite energy store. The size of the store can be fixed by the controller, but it cannot be changed. We argue that the almost-sure winning sets for energy-Büchi and storage-Büchi objectives coincide. Moreover, we show that the reduction in [@CD2011] actually works for storage-parity instead of for energy-parity conditions. I.e. [@CD2011] shows that almost-sure storage parity objectives require just finite memory, are in $\NP\cap\coNP$, and can be solved in pseudo-polynomial time. *3)* We develop a solution for the original almost-sure energy-parity objective. It requires a more involved argument and infinite-memory strategies that are obtained by composing three other strategies. We show that almost-sure energy-parity objectives are in $\NP\cap\coNP$ and can be solved in pseudo-polynomial time. *4)* We then study the *limit-sure problem*. Here one asks whether, for every $\epsilon >0$, there exists a strategy that satisfies the objective with probability $\ge 1-\epsilon$. This is a question about the existence of a family of $\epsilon$-optimal strategies, not about a single strategy as in the almost-sure problem. The limit-sure problem is equivalent to the question whether the *value* of a given state and initial energy level (w.r.t. the objective) is $1$. For the *storage-parity* objective, the limit-sure condition coincides with the almost-sure condition, and thus the complexity results from [@CD2011] apply. In contrast, for the *energy-parity* objective the limit-sure condition does *not* coincide with the almost-sure condition. While almost-sure energy-parity implies limit-sure energy-parity, we give examples that show that the reverse implication does *not* hold. We develop an algorithm to decide the limit-sure energy-parity objective and show that the problem is in $\NP \cap \coNP$ and can be solved in pseudo-polynomial time. Moreover, each member in the family of $\epsilon$-optimal strategies that witnesses the limit-sure energy-parity condition can be chosen as a finite-memory strategy (unlike winning strategies for almost-sure energy-parity that may require infinite memory). [**Related work.**]{} Energy games were introduced in [@chakrabarti2003resource] to reason about systems with multiple components and bounded resources. Energy objectives were later also considered in the context of timed systems [@BFLMS2008], synthesis of robust systems [@bloem2009better], and gambling [@BergerKSV08] (where they translate to “not going bankrupt”). The first analysis of a combined qualitative–quantitative objective was done in [@chatterjee2005mean] for mean-payoff parity games. Almost-sure winning in energy-parity MDPs was considered in [@CD2011] (cf. [@chatterjee2011games] as a survey). However, it was shown in [@CD2011] that almost-sure winning in energy-parity MDPs is at least as hard as two player energy-parity games [@CD2012]. 
A recent paper [@BKN:ATVA2016] considers a different combined objective: maximising the expected mean-payoff while satisfying the energy objective. The proof of Lemma 3 in [@BKN:ATVA2016] uses a reduction to the (generally incorrect) result on energy-parity MDPs of [@CD2011], but their result still holds because it only uses the correct part about energy-Büchi MDPs. Closely related to energy MDPs and games are one-counter MDPs and games, where the counter value can be seen as the current energy level. One-counter MDPs and games with a termination objective (i.e. reaching counter value $0$) were studied in [@BBEKW10] and [@BBE10], respectively. [**Outline of the Paper.**]{} The following section introduces the necessary notations. discusses combined energy-parity objectives and formally states our results. In \[sec:bug\] we explain the error in the construction of [@CD2011] (Sec. 3), define the bounded energy storage condition and derive the results on combined storage-parity objectives. discusses the almost-sure problem for energy-parity conditions and provides our new proof of their decidability. The limit-sure problem for energy-parity is discussed in \[sec:limit,sec:ls-EP\]. discusses lower bounds and the relation between almost/limit-sure parity MDPs and mean-payoff games. Due to the space constraints some details had to be omitted and can be found in the full version [@DBLP:journals/corr/MayrSTW17]. Notations {#sec:preliminaries} ========= A probability distribution over a set $X$ is a function $f:X\to[0,1]$ such that $\sum_{x\in X} f(x) = 1$. We write $\Dist{X}$ for the set of distributions over $X$. [**Markov Chains.**]{} A *Markov chain* is an edge-labeled, directed graph $\sys{C}\eqdef (V,E,\prob)$, where the elements of $V$ are called *states*, such that the labelling $\prob:E\to[0,1]$ provides a probability distribution over the set of outgoing transitions of any state $s\in V$. A *path* is a finite or infinite sequence $\rho\eqdef s_1s_2\ldots$ of states such that $(s_i,s_{i+1})\in E$ holds for all indices $i$; an infinite path is called a *run*. We use $w\in V^*$ to denote a finite path. We write $s\step{x}t$ instead of $(s,t)\in E \land \prob(s,t)=x$ and omit superscripts whenever clear from the context. We write $\Runs[\sys{C}]{w}$ for the cone set $wV^\omega$, i.e., the set of runs with finite prefix $w\in V^*$, and assign to it the probability space $(\Runs[\sys{C}]{w},\mathcal{F}_{w}^\sys{C},\Prob[\sys{C}]{w})$, where $\mathcal{F}_{w}^\sys{C}$ is the $\sigma$-algebra generated by all cone sets $\Runs[\sys{C}]{wx}\subseteq \Runs[\sys{C}]{w}$, for $x=x_1x_2\dots x_l\in V^*$. The probability measure $\Prob[\sys{C}]{w}:\mathcal{F}^{\sys{C}}_w\to[0,1]$ is defined as $\Prob[\sys{C}]{w}(\Runs{wx}) \eqdef \Pi_{i=1}^{l-1} \lambda(x_i,x_{i+1})$ for cone sets. By Carathéodory’s extension theorem [@billingsley-1995-probability], this defines a unique probability measure on all measurable subsets of runs. [**Markov Decision Processes.**]{} A *Markov Decision Process (MDP)* is a sinkless directed graph $\sys{M}\eqdef(\VC,\VP, E, \prob)$, where $V$ is a set of states, partitioned as $V\eqdef \VC\uplus\VP$ into *controlled* ($\VC$) and *probabilistic* states ($\VP$). The set of *edges* is $E\subseteq V\x V$ and $\prob:\VP\to\Dist{E}$ assigns each probabilistic state a probability distribution over its outgoing edges. 
A *strategy* is a function $\sigma:V^*\VC\to\Dist{E}$ that assigns to each word $ws\in V^*\VC$ a probability distribution over the outgoing edges of $s$; that is, $\sigma(ws)(e)>0$ implies $e=(s,t)\in E$ for some $t\in V$. A strategy is called *memoryless* if $\sigma(xs)=\sigma(ys)$ for all $x,y\in V^*$ and $s\in \VC$, and *deterministic* if $\sigma(w)$ is Dirac for all $w\in V^*\VC$. Each strategy induces a Markov chain $\sys{M}(\sigma)$ with states $V^*$ and where $ws\step{x}wst$ if $(s,t)\in E$ and either $s\in\VP \land x=\prob(s)(s,t)$ or $s\in\VC\land x=\sigma(ws)(s,t)$. We write $\Runs[\sys{M}]{w}$ for the set of runs in $\sys{M}$ (with prefix $w$), consisting of all runs in $\Runs[\sys{M}(\sigma)]{w}$ for some strategy $\sigma$, and $\Runs[\sys{M}]{}$ for the set of all such paths. [**Objective Functions.**]{} An *objective* is a subset ${\mathsf{Obj}}\subseteq \Runs[\sys{M}]{}$. We write $\overline{{\mathsf{Obj}}}\eqdef\Runs[\sys{M}]{}\setminus{\mathsf{Obj}}$ for its complement. It is satisfied *surely* if there is a strategy $\sigma$ such that $\Runs[\sys{M}(\sigma)]{}\subseteq {\mathsf{Obj}}$, *almost-surely* if there exists a $\sigma$ such that $\Prob[\sys{M}(\sigma)]{}({\mathsf{Obj}}) = 1$ and *limit-surely* if $\sup_\sigma\Prob[\sys{M}(\sigma)]{}({\mathsf{Obj}}) = 1$. In other words, the limit-sure condition asks that there exists some infinite sequence $\sigma_1,\sigma_2,\ldots$ of strategies such that $\lim_{n\to\infty} \Prob[\sys{M}(\sigma_n)]{}({\mathsf{Obj}}) = 1$. We call a strategy *$\eps$-optimal* if $\Prob[\sys{M}(\sigma)]{}({\mathsf{Obj}}) \ge 1-\eps$. Relative to a given MDP $\sys{M}$ and some finite path $w$, we define the *value* of ${\mathsf{Obj}}$ as $\Val[\sys{M}]{w}{{\mathsf{Obj}}} \eqdef \sup_\sigma \Prob[\sys{M}(\sigma)]{w}({\mathsf{Obj}})$. We use the following objectives, defined by conditions on individual runs. A *reachability condition* is defined by a set of target states $T \subseteq V$. A run $s_0s_1\ldots$ satisfies the reachability condition iff there exists an $i \in \N$ s.t. $s_i \in T$. We write $\eventually T \subseteq \Runs{}$ for the set of runs that satisfy the reachability condition. A *parity condition* is given by a function $\parity:V\to\N$, that assigns a priority (non-negative integer) to each state. A run $\rho \in \Runs{}$ satisfies the parity condition if the minimal priority that appears infinitely often on the run is even. The *parity objective* is the subset $\Parity[]{} \subseteq \Runs{}$ of runs that satisfy the parity condition. *Energy conditions* are given by a function $\cost{}:E\to\Z$, that assigns a *cost* value to each edge. For a given initial energy value $k\in\N$, a run $s_0s_1\ldots$ satisfies the $k$-energy condition if, for every finite prefix, the energy level $k+\sum_{i=0}^l\cost(s_i,s_{i+1})$ stays greater than or equal to $0$. Let $\EN{k} \subseteq \Runs{}$ denote the $k$-energy objective, consisting of those runs that satisfy the $k$-energy condition. *Mean-payoff conditions* are defined w.r.t. the same cost function $\cost{}:E\to\Z$ as the energy conditions. A run $s_0s_1\ldots$ satisfies the *positive mean-payoff condition* iff $\liminf_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}\cost(s_i,s_{i+1}) > 0$. We write $\PosMP\subseteq \Runs{}$ for the positive mean-payoff objective, consisting of those runs that satisfy the positive mean-payoff condition.
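As a concrete (and purely illustrative) reading of these definitions, the following Python sketch fixes one possible encoding of an MDP with cost and priority labels and checks the $k$-energy condition along a finite run prefix. The class and function names are our own choices and do not come from the paper; the parity condition itself refers to priorities seen infinitely often, so only a finite-prefix proxy is shown.

```python
from dataclasses import dataclass

@dataclass
class MDP:
    controlled: set            # V_C
    probabilistic: dict        # V_P -> {successor: probability}
    cost: dict                 # (s, t) -> integer cost of the edge
    priority: dict             # s -> non-negative priority

    def energy_levels(self, run, k):
        """Energy levels along a finite run prefix, starting from initial credit k."""
        levels, e = [k], k
        for s, t in zip(run, run[1:]):
            e += self.cost[(s, t)]
            levels.append(e)
        return levels

    def satisfies_k_energy(self, run, k):
        """k-energy condition on the prefix: the energy level never drops below 0."""
        return min(self.energy_levels(run, k)) >= 0

    def min_priority_seen(self, run):
        """Smallest priority occurring on the prefix (finite-prefix proxy only)."""
        return min(self.priority[s] for s in run)

# Tiny illustrative instance (assumed, not taken from the paper).
m = MDP(controlled={"s"},
        probabilistic={"p": {"s": 1.0}},
        cost={("s", "p"): -1, ("p", "s"): +1},
        priority={"s": 1, "p": 2})
print(m.satisfies_k_energy(["s", "p", "s", "p"], k=1), m.min_priority_seen(["s", "p"]))
```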
Parity Conditions under Energy Constraints {#sec:claims} ========================================== We study the combination of energy and parity objectives for finite MDPs. That is, given an MDP and both cost and parity functions, we consider objectives of the form $\EN{k}\cap\Parity{}$ for integers $k\in \N$. We are interested in identifying those control states and values $k\in\N$ for which the combined $k$-energy-parity objective is satisfied almost-surely and limit-surely, respectively. \[ex:lval\] Consider a controlled state $s$ that can go left or right with cost $0$, or stay with cost $1$. The probabilistic state on the left increases or decreases energy with equal chance, whereas the probabilistic state on the right has a positive energy updrift. State $s$ has priority $1$, all other states have priority $0$. (Figure: the MDP of Example \[ex:lval\]. The controlled state $s$ has a cost-$1$ self-loop and cost-$0$ edges to $l$ and $r$; the two-step return loop at $l$ changes the energy by $+1$ or $-1$ with probability $\frac{1}{2}$ each, while the one at $r$ changes it by $+1$ with probability $\frac{2}{3}$ and by $-1$ with probability $\frac{1}{3}$.) From states other than $s$ there is only one strategy. It holds that $\Val{l}{\Parity{}}=1$ but $\Val{l}{\EN{k}}=0$ for any $k\in\N$ and so $\Val{l}{\EN{k}\cap\Parity{}}=0$. For state $r$ we have that $\Val{r}{\EN{k}\cap\Parity{}}=\Val{r}{\EN{k}}=1-(1/2)^k$, due to the positive drift. For all $k\in\N$ the state $s$ does not satisfy the $k$-energy-parity objective almost-surely but limit-surely: $\Val{s}{\EN{k}\cap \Parity{}}=1$ (by going ever higher and then right). Notice that these energy-parity objectives are trivially monotone in the parameter $k$ because $\EN{k}\subseteq\EN{k+1}$ holds for all $k\in\N$. Consequently, for every fixed state $p$, if there exists some $k\in\N$ such that the $k$-energy-parity objective holds almost-surely (resp. limit-surely), then there is a minimal such value $k$. By *solving* the almost-sure/limit-sure problems for these monotone objectives we mean to compute these minimal sufficient values for all initial states. We now state our two main technical results. We fix a finite MDP $\sys{M}\eqdef(\VC,\VP, E, \prob)$, a parity function $\parity:V\to\N$ with maximal colour $d\in\N$ and a cost-function $\cost:E\to\Z$ with maximal absolute value $W \eqdef \max_{e\in E} |\cost(e)|$. Let $|\lambda|$ and $|\cost{}|$ be the size of the transition table $\lambda$ and the cost function $\cost{}$, written as tables with valuations in binary. We use $\widetilde {\mathcal O}(f(n))$ as a shorthand for ${\mathcal O}(f(n)\log^k f(n))$ for some constant $k$. \[thm:correction\] \[thm:as-energy-parity\] (1) Almost-sure optimal strategies for $k$-energy-parity objectives may require infinite memory. (2) The almost-sure problem for $k$-energy-parity objectives is in $\NP\cap\coNP$ and can be solved in pseudo-polynomial time $\widetilde {\mathcal O}(d\cdot|V|^{4.5} \cdot (|\lambda| + |\cost{}|)^2 + \card{E}\cdot d\cdot \card{V}^5\cdot W)$. \[thm:ls-energy-parity\] \[thm:main\] (1) The limit-sure problem for $k$-energy-parity objectives is in $\NP\cap\coNP$ and can be solved in pseudo-polynomial time $\widetilde {\mathcal O}(d\cdot|V|^{4.5} \cdot (|\lambda| + |\cost{}|)^2 + \card{E}\cdot d\cdot \card{V}^5\cdot W)$.
(2) If the $k$-energy-parity objective holds limit-surely then, for each $\eps>0$, there exists a *finite memory* $\eps$-optimal strategy.

The claimed algorithms are *pseudo polynomial* in the sense that they depend (linearly) on the value $W$. If the cost-deltas are $-1,0,$ or $1$ only, and not arbitrary binary encoded numbers, this provides a polynomial time algorithm.

Part (2) of \[thm:as-energy-parity\] was already claimed in [@CD2011], Theorem 1. However, the proof there relies on a particular finiteness assumption that is not true in general. In the next section we discuss this subtle error and describe the class of (bounded) *storage* objectives, for which this assumption holds and the original proof goes through. Our new proof of \[thm:as-energy-parity\] is presented in \[sec:as-EP\].

The proof of \[thm:ls-energy-parity\] is deferred to \[sec:limit,sec:ls-EP\]. It is based on a reduction to checking almost-sure satisfiability of storage-parity objectives, which can be done in pseudo polynomial time (cf. Theorem \[thm:storage\]). We first establish in \[sec:limit\] that certain *limit values* are computable for each state. In \[sec:extensions\] we then provide the actual reduction, which is based on precomputing these limit values and produces an MDP which is only linearly larger and has no new priorities.

Energy Storage Constraints {#sec:storage}
==========================

\[sec:bug\] The argument of [@CD2011] to show computability of almost-sure energy-parity objectives relies on the claim that the controller, if it has a winning strategy, can eventually commit to visiting an even colour infinitely often and *never* visiting smaller colours. We show that this claim already fails for coBüchi conditions (i.e. for MDPs that only use colours $1$ and $2$). We then identify a stronger kind of energy condition—the storage energy condition we introduce below—that satisfies the above claim and for which the original proof of [@CD2011] goes through.

Let us call a strategy *colour-committing* if, for some colour $2i$, almost all runs eventually visit a position such that *almost all* possible continuations visit colour $2i$ infinitely often and *no continuation* (as this is a safety constraint) visits a colour smaller than $2i$.

\[claim\] If there exists some strategy that almost-surely satisfies $\EN{k}\cap\Parity$ then there is also a colour-committing strategy that does.

Consider the following example, where the controller owns states $A,B,C$ and tries to avoid visiting state $B$ infinitely often while maintaining the energy condition. This can be expressed as an energy-parity condition where $\parity(A)= \parity(C)= \parity(D)= 2$ and $\parity(B)=1$.

*(Figure omitted: controlled states $A,B,C$ and probabilistic state $D$, with edges $A\step{+1}D$; from $D$, to $A$ with probability $2/3$ and cost $0$ or to $C$ with probability $1/3$ and cost $0$; the controlled choices $C\step{-1}D$ and $C\step{0}B$; and $B\step{0}A$.)*

First notice that all states almost-surely satisfy the $0$-energy-coBüchi condition $\EN{0}\cap\Parity$. One winning strategy chooses the edge $C\step{-1}D$ over $C\step{0} B$, unless the energy level is $0$, in which case $C\step{0} B$ is favoured over $C\step{-1}D$. This strategy is not colour-committing but clearly energy safe: the only decreasing step is avoided if the energy level is $0$.
To see why this strategy also almost-surely satisfies the parity (in this case coBüchi) objective, first observe that it guarantees a positive updrift: from state $D$ with positive energy level, the play returns to $D$ in two steps with expected energy gain $+1/3$; from state $D$ with energy level $0$, the play returns to $D$ in either two or three steps, in both cases with energy gain $+1$. The chance to visit state $C$ with energy level $0$ when starting at state $D$ with energy level $k\in\N$ is $(1/2)^{k+1}$. This is the same likelihood with which state $B$ is eventually visited. However, every time state $B$ is visited, the system restarts from state $D$ with energy level $1$. Therefore, the chance of re-visiting $B$ from $B$ is merely $1/4$. More generally, the chance of seeing state $B$ at least $n$ further times is $(1/4)^{n}$. The chance of visiting $B$ infinitely often is therefore $\lim_{n \to \infty} (1/4)^n = 0$. This strategy thus satisfies the parity—in this case coBüchi—objective almost-surely. Consequently, the combined $0$-energy-parity objective is almost-surely met from all states.

To contradict \[claim\], we contradict the existence of an initial state and a colour-committing strategy that almost-surely satisfies the $0$-energy-parity objective. By definition, such a strategy will, on almost all runs, eventually avoid state $B$ completely. As every run will surely visit state $D$ infinitely often, we can w.l.o.g. pick a finite possible prefix $s_1s_2\ldots s_j$ (i.e. a prefix that can occur with a likelihood $\delta > 0$) of a run that ends in state $s_j=D$ and assume that none (or only a $0$ set, but these two conditions coincide for safety objectives) of its continuations visits state $B$ again. Let $l\eqdef\sum_{i=1}^{j-1}\cost(s_i,s_{i+1})$ denote the accumulated cost of this prefix. Note that there is a $(1/3)^{l+1}>0$ chance that some continuation alternates between states $D$ and $C$ for $l+1$ times and thus violates the $l$-energy condition. Consequently, the chance of violating the $0$-energy-parity condition from the initial state is at least $\delta\cdot(1/3)^{l+1}>0$.

Notice that every finite memory winning strategy for the $\Parity$ objective must also be colour-committing. The system above therefore also proves part (1) of \[thm:as-energy-parity\], that infinite memory is required for $k$-energy-parity objectives.

In the rest of this section we consider a stronger kind of energy condition, for which \[claim\] does hold and the original proof of [@CD2011] goes through. The requirement is that the strategy achieves the energy condition without being able to store an infinite amount of energy. Instead, it has a finite energy store, say $s$, and cannot store more energy than the size of this storage. Thus, when a transition would lead to an energy level $s' > s$, it results in an available energy of $s$. These are typical behaviours of realistic energy stores, e.g. a rechargeable battery or a storage lake. An alternative view (and a consequence) is that the representation of the system becomes finite-state once the bound $s$ is fixed, and only finite memory is needed to remember the current energy level.

For the definition of a storage objective, we keep the infinite storage capacity, but instead require that no subsequence loses more than $s$ energy units. The definitions are interchangeable, and we chose this one in order not to change the transitions of the system.
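The quantities computed above are easy to check empirically. The following Python sketch is only a Monte-Carlo sanity check and not part of the formal development; the state names, costs and probabilities are those of the example above, and the strategy it implements is the energy-dependent (hence infinite-memory) one just described.

```python
import random

# Monte-Carlo sanity check for the counterexample MDP above:
# controlled states A, B, C and probabilistic state D, with edges
# A -(+1)-> D;  D -> A (prob 2/3, cost 0) or D -> C (prob 1/3, cost 0);
# C -(-1)-> D or C -(0)-> B (controlled choice);  B -(0)-> A.
# Strategy under test: at C, go to D unless the energy level is 0,
# in which case go to B.

def simulate(start="A", energy=0, steps=100_000, rng=None):
    rng = rng or random.Random(0)
    state, visits_to_b, min_energy = start, 0, energy
    for _ in range(steps):
        if state == "A":
            state, energy = "D", energy + 1
        elif state == "D":
            state = "A" if rng.random() < 2 / 3 else "C"
        elif state == "C":
            if energy > 0:                      # the energy-dependent decision
                state, energy = "D", energy - 1
            else:
                state, visits_to_b = "B", visits_to_b + 1
        else:                                   # state == "B"
            state = "A"
        min_energy = min(min_energy, energy)
    return visits_to_b, min_energy

if __name__ == "__main__":
    for seed in range(5):
        b, lo = simulate(rng=random.Random(seed))
        # The minimal energy level never drops below 0, and B is typically
        # visited only a handful of times even over 100,000 steps.
        print(f"seed {seed}: visits to B = {b}, minimal energy level = {lo}")
```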
\[def:storage\] For a finite MDP with associated cost function, a run $s_0s_1\ldots$ satisfies the *$s$-storage condition* if, for every infix $s_ls_{l+1}\ldots s_u$, it holds that $s+\sum_{i=l}^{u-1}\cost(s_i,s_{i+1})\ge 0$. Let $\ES{k,s} \subseteq \Runs{}$ denote the $k$-energy $s$-storage objective, consisting of those runs that satisfy both the $k$-energy and the $s$-storage condition.

\[example:ks\_tradeoff\] The two parameters can sometimes be traded against each other, as shown in the following example.

*(Figure omitted: a controlled state $q$ in the middle, with a left neighbour reached via an edge of cost $+3$ and returning with cost $-2$, and a right neighbour reached via an edge of cost $-1$ and returning with cost $+2$.)*

From state $q$ in the middle, one can win with an initial energy level $0$ by always going left, provided that one has an energy store of size at least $2$. With an energy store of size $1$, however, going left is not an option, as one would not be able to return from the state on the left. But with an initial energy level of $1$, one can follow the strategy to always turn to the right. So the $\ES{0,2}$ and $\ES{1,1}$ objectives hold almost-surely but the $\ES{0,1}$ objective does not.

We sometimes want to leave the size of the energy store open. For this purpose, we define $\ES{k}$ as the objective that says “there is an $s$, such that $\ES{k,s}$ holds” and $\mathsf{ST}$ for “there is an $s$ such that $\ES{s,s}$ holds”. Note that this is not a path property; we rather require that the $s$ is fixed globally. In order to meet an $\ES{k}$ property almost-surely, there must be a strategy $\sigma$ and an $s\in \N$ such that almost all runs satisfy $\ES{k,s}$: $\exists \sigma,s$ s.t. $\Prob[\sys{M}(\sigma)]{}(\ES{k,s}) = 1$. Likewise, for limit-sure satisfaction of $\mathsf{ST}$, we require $\exists s\ \forall \varepsilon >0\ \exists \sigma$ s.t. $\Prob[\sys{M}(\sigma)]{}(\ES{s,s}) \ge 1-\eps$.

We now look at combined storage-parity and storage-mean-payoff objectives.

\[thm:storage\] \[thm:as-storage-parity\] For finite MDPs with states $V$, edges $E$ and associated cost and parity functions, with maximal absolute cost $W$ and maximal colour $d\in\N$:

- The almost-sure problem for storage-parity objectives is in $\NP\cap\coNP$, and there is an algorithm to solve it in $\mathcal{O}(\card{E}\cdot d\cdot \card{V}^4\cdot W)$ deterministic time.

- Memory of size $\mathcal{O}(\card{V}\cdot W)$ is sufficient for almost-sure winning strategies. This also bounds the minimal values $k,s\in\N$ such that $\ES{k,s}\cap\Parity$ holds almost-surely.

The proof is provided by Chatterjee and Doyen [@CD2011]: they first show the claim for energy-Büchi objectives $\EN{k}\cap\Parity$ (where $d=1$) by reduction to two-player energy-Büchi *games* ([@CD2011], Lemma 2). Therefore, almost-sure winning strategies come from first-cycle games and operate in a bounded energy radius. As a result, almost-sure satisfiability for energy-Büchi and storage-Büchi coincide. They then ([@CD2011], Lemma 3) provide a reduction for general parity conditions to the Büchi case, assuming \[claim\]. Although this fails for energy-parity objectives, as we have shown above, almost-sure winning strategies for storage-parity can be assumed to be finite memory and therefore colour-committing. The construction of [@CD2011] then goes through without alteration. The complexity bound follows from improvements for energy parity games [@CD2012].
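To make the two conditions concrete, here is a small, self-contained Python sketch (illustrative only; the objectives themselves are defined over infinite runs, whereas this checks a finite sequence of edge costs) that tests the $k$-energy and $s$-storage conditions and reproduces the trade-off of the example above.

```python
def satisfies_energy(costs, k):
    """k-energy condition: k plus every prefix sum of costs stays non-negative."""
    level = k
    for c in costs:
        level += c
        if level < 0:
            return False
    return True

def satisfies_storage(costs, s):
    """s-storage condition: no infix of the cost sequence sums to less than -s."""
    worst = running = 0
    for c in costs:
        # most negative sum of an infix ending at the current position
        running = min(c, running + c)
        worst = min(worst, running)
    return s + worst >= 0

if __name__ == "__main__":
    left = [+3, -2] * 4    # always going left from q: +3, -2, +3, -2, ...
    right = [-1, +2] * 4   # always going right from q: -1, +2, -1, +2, ...
    print(satisfies_energy(left, 0), satisfies_storage(left, 2))    # True True  (ES{0,2})
    print(satisfies_energy(right, 1), satisfies_storage(right, 1))  # True True  (ES{1,1})
    print(satisfies_energy(right, 0), satisfies_storage(left, 1))   # False False (no ES{0,1})
```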
\[thm:as-ES-PosMP\] \[thm:as-storage-PosMP\] \[lem:bailout-to-parity\] For finite MDPs with combined storage and positive mean-payoff objectives: - The almost-sure problem is in $\NP\cap\coNP$ and can be solved in $\mathcal{O}(\card{E}\cdot d\cdot \card{V}^4\cdot W)$ deterministic time. - Memory of size $\mathcal{O}(\card{V}\cdot W)$ is sufficient for almost-sure winning strategies. This also bounds the minimal value $k,s\in\N$ such that $\ES{k,s}\cap\PosMP$ holds almost-surely. We show that, for every MDP $\sys{M}$ with associated $\cost$ function, there is a linearly larger system $\sys{M'}$ with associated $\cost'$ and $\parity$ function —where the parity function is Büchi, i.e. has image $\{0,1\}$—that, for every $k\in\N$, $\PosMP\cap\ES{k}$ holds almost-surely in $\sys{M}$ iff $\Parity\cap\ES{k}$ holds almost-surely in $\sys{M}'$. For every state $q$ of $\sys{M}$, the new system $\sys{M'}$ contains two new states, $q'$ and $q''$, edges $(q,q')$ and $(q,q'')$ with costs $0$ and $-1$, respectively. Each original edge $(q,r)$ is replaced by two edges, $(q',r)$ and $(q'',r)$. All original states become controlled, and the primed and double primed copies of a state $q$ are controlled if, and only if, $q$ was controlled in $\sys{M}$. The double primed states have colour $0$, while all original and primed states have colour $1$. See \[fig:storage-PosMP\] (on the left) for an illustration. To give the idea of this construction in a nutshell, the Büchi condition in $\sys{M'}$ intuitively sells one energy unit for visiting an accepting state (or: for visiting a state with colour $0$, the double primed copy). $\ES{k}$ implies that, as soon as $s+1$ energy is available, one can sell off one energy unit for a visit of an intermediate accepting state. $\PosMP$ implies that this can almost-surely be done infinitely often. Vice-versa, $\ES{k}$ implies non-negative mean payoff. $\ES{k}$ plus Büchi can always be realised with finite memory by \[thm:as-storage-parity\] (2), and such a strategy then implies that $\PosMP\cap\ES{k}$ holds almost-surely in $\sys{M}$. Now the claim holds by \[thm:as-storage-parity\]. Note that the order of quantification in the limit-sure problems for storage objectives ($\exists s. \forall \eps \ldots$) means that limit-sure and almost-sure winning coincides for storage-parity objectives: if there is an $s$ such that $\ES{s,s} \cap \Parity{}$ holds limit-surely then one can get rid of the storage condition by hardcoding energy-values up to $s$ into the states. The same is true for mean-payoff-storage objectives. The claims in \[thm:as-storage-parity,thm:as-ES-PosMP\] thus also hold for the limit-sure problems. Finally, we restate the result from [@CD2011], Theorem 2 (1) on positive mean-payoff-parity objectives and add to it an explicit computational complexity bound that we will need later. \[thm:as-Par-PosMP\] For finite MDPs with combined positive mean-payoff and parity objectives, - The almost-sure problem is in [P]{} and can be solved in $\widetilde {\mathcal O}(d\cdot|V|^{3.5} \cdot (|\lambda| + |\cost{}|)^2)$ time. - Finite memory suffices. The computation complexity bound follows from the analysis of Algorithm 1 in [@CD2011]. It executes $d/2$ iterations of a loop, in which Step 3.3 of computing the mean-payoff of maximal end components dominates the cost. This can be formulated as a linear program (LP) that uses two variables, called [*gain*]{} and [*bias*]{}, for each state[@Puterman:book]. 
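For concreteness, one standard way to set up such a gain/bias program, following the textbook treatment in [@Puterman:book] and adapted to our convention of controlled and probabilistic states, is sketched below; the formulation used inside Algorithm 1 of [@CD2011] may differ in presentation. Here a *choice* $\delta_v$ at a controlled state $v$ is the Dirac distribution of a single outgoing edge, while at a probabilistic state it is the unique distribution $\lambda(v)$:

$$\begin{aligned}
\text{minimise}\quad & \sum_{v} g(v) \qquad\text{subject to}\\
g(v) \;&\ge\; \sum_{v'} \delta_v(v')\, g(v') && \text{for every state $v$ and choice $\delta_v$ at $v$,}\\
g(v) + b(v) \;&\ge\; \sum_{v'} \delta_v(v')\,\big(\cost(v,v') + b(v')\big) && \text{for every state $v$ and choice $\delta_v$ at $v$.}\end{aligned}$$

In an optimal solution, $g(v)$ is the maximal expected mean-payoff achievable from $v$ and $b$ acts as a relative bias; only the gain values are needed for Step 3.3.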
This LP can be solved using Karmarkar’s algorithm [@Karmarkar/84/Karmarkar] in time $\widetilde {\mathcal O}(|V|^{3.5} \cdot (|\lambda| + |\cost{}|)^2)$. Note that the complexity refers to *all* (not each) maximal end-components. As we do not need to obtain a maximal payoff ${\tau}>0$ but can use any smaller value, like ${\tau}/2$, finite memory suffices.

Almost-Sure Energy-Parity {#sec:as-EP}
=========================

\[sec:correction\] In this section we prove \[thm:correction\].

*(Figure omitted, \[fig:combinations\]: a diagram of the three basic objectives $\Parity{}$, $\mathsf{ST}$ and $\PosMP$ and their pairwise combinations; $\mathsf{ST}\cap\Parity{}$ is in $\NP\cap\coNP$ and (pseudo) P by \[thm:storage\], $\mathsf{ST}\cap\PosMP$ is in $\NP\cap\coNP$ and (pseudo) P by \[thm:as-ES-PosMP\], and $\Parity{}\cap\PosMP$ is decidable in polynomial time.)*

Our proof can be explained in terms of the three basic objectives: storage ($\mathsf{ST}$), positive mean-payoff ($\PosMP$), and parity ($\Parity$). It is based on the intuition provided by the counterexample in the previous section. Namely, in order to almost-surely satisfy the energy-parity condition one needs to combine two strategies:

1. One that guarantees the parity condition and, at the same time, a positive expected mean-payoff. Using this strategy one can achieve the energy-parity objective with some non-zero chance.

2. A *bailout* strategy that guarantees positive expected mean-payoff together with a storage condition. This allows the controller to (almost-surely) set the accumulated energy level to some arbitrarily high value.

We show that, unless there exists some safe strategy that satisfies storage-parity, it is sufficient (and necessary) that two such strategies exist and that the controller can freely switch between them. I.e. they do not leave the *combined* almost-sure winning set unless a state that satisfies storage-parity is reached.

Recall that the combined positive mean-payoff-parity objective (for case 1 above) is independent of an initial energy level and its almost-sure problem is decidable in polynomial time due to \[thm:as-Par-PosMP\]. The mean-payoff-storage objective $\ES{k}\cap\PosMP$ (for case 2 above), as well as the storage-parity objective, are computable by \[thm:as-ES-PosMP,thm:storage\], respectively. See \[fig:combinations\].

To establish \[thm:correction\], we encode the almost-sure winning sets of the storage-parity objective directly into the system (\[def:encode-sc,lem:ff-ext-claims\]), in order to focus on the two interesting conditions from above. We then show (\[def:fix:refinement,lem:safe-computable\]) that the existence of the two strategies for bailout and $\mathsf{ST} \cap \PosMP$, and the minimal safe energy levels, can be computed in the claimed bounds. In \[lem:mainfix\] we show that these values coincide with the minimal energy levels of the energy-parity objective for the original system, which concludes the proof.

\[def:encode-sc\] For a given MDP $\sys{M}$ and associated $\cost$ and $\parity$ functions, we define an MDP $\sys{M}'\eqdef(\VC',\VP', E', \prob')$ with states $V'\eqdef \VC'\uplus\VP'$ as follows. For every state $q$ of $\sys{M}$ there are two states, $q$ and $q'$ in $V'$, such that both have the same colour as $q$ in $\sys{M}$, every original incoming edge now only goes to $q'$, and every original outgoing edge now only goes from $q$.
Moreover, $q'$ is controlled and has an edge to $q$ with $\cost(q',q)=0$. Finally, $\sys{M}'$ contains a single winning sink state $w$ with colour $0$ and a positive self-loop, and every state $q'$ gets an edge to $w$ with cost $-k_q$, where $k_q\in\N$ is the minimal value such that, for some $s\in\N$, the storage-parity objective $\ES{k_q,s}\cap\Parity$ holds almost-surely. See \[fig:encode-sc\] (on the right) for an illustration.

*(Figure omitted: on the right, the construction of \[def:encode-sc\], where each state $q$ is split into $q'$, which receives all incoming edges and has a $0$-cost edge to $q$ as well as a $-k_q$-cost edge to the winning sink $w$ with its $+1$ self-loop, and $q$, which keeps all outgoing edges; on the left, the gadget from the proof of \[thm:as-ES-PosMP\], where each state $q$ is followed by a primed copy $q'$ reached with cost $0$ and a double-primed copy $q''$ reached with cost $-1$, both of which duplicate the outgoing edges of $q$.)*

The next lemma summarises the relevant properties of $\sys{M}'$. It follows directly from \[def:encode-sc\] and the observation that $\ES{k,s}\subseteq \EN{k}$ holds for all $k,s\in\N$.

\[lem:ff-ext-claims\] For every MDP $\sys{M}$, state $q$ and $k \leq s\in\N$,

1. $\EN{k}\cap\Parity$ holds almost-surely in $\sys{M}$ if, and only if, it holds almost-surely in $\sys{M'}$.

2. If $\ES{k,s}\cap\Parity{}$ holds almost-surely in $\sys{M}$ then $\ES{k,s}\cap\Parity{}\cap\PosMP$ holds almost-surely in $\sys{M}'$.

For a set $S\subseteq V'$ of states, we write $\sys{M}'|S$ for the restriction of $\sys{M}'$ to states in $S$, i.e. the result of removing all states not in $S$ and their respective edges.

\[def:fix:refinement\] We define $R\subseteq V'$ as the largest set of states such that, in $\sys{M}'|R$, every state

1. almost-surely satisfies the $\PosMP\cap\Parity$ objective, and

2. almost-surely satisfies the $\PosMP\cap\mathsf{ST}$ objective.

For every state $q\in V'$ let ${\text{safe}(q)}\in\N$ be the minimal number $k$ such that $\PosMP\cap\ES{k}$ holds almost-surely from $q$ in $\sys{M}'|R$, and ${\text{safe}(q)}\eqdef\infty$ if no such number exists (in case $q\notin R$).

The relevance of these numbers for us is, intuitively, that if ${\text{safe}(q)}$ is finite, then there exists a pair of strategies, one for the $\PosMP\cap\Parity$ and one for the ${\PosMP\cap\ES{k}}$ objective, between which the controller can switch as often as she wants.

\[lem:safe-computable\] For a given $\sys{M}'$, the values ${\text{safe}(q)}$ are either $\infty$ or bounded by ${\mathcal O}(\card{V}\cdot W)$, computable in pseudo-polynomial time $\widetilde {\mathcal O}(d\cdot|V|^{4.5} \cdot (|\lambda| + |\cost{}|)^2 + \card{E}\cdot d\cdot \card{V}^5\cdot W)$, and verifiable in $\NP\cap\coNP$.

Finite values ${\text{safe}(q)}\in \N$ are clearly bounded by minimal sufficient values for almost-sure satisfiability of $\ES{k}\cap\PosMP$ in $\sys{M'}$. Therefore, the claimed bound holds by definition of $\sys{M}'$ and \[thm:as-storage-parity,thm:as-storage-PosMP\].

The set $R$ is in fact the result of a refinement procedure that starts with all states of $\sys{M}'$. In each round, it removes states that fail either of the two conditions.
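A schematic sketch of this refinement loop is given below; it is only an illustration of \[def:fix:refinement\], and it assumes that the two almost-sure oracles (provided, e.g., by the algorithms behind \[thm:as-Par-PosMP\] and \[thm:as-ES-PosMP\]) and the restriction operation are supplied as black boxes.

```python
def refine(states, restrict, as_posmp_parity, as_posmp_storage):
    """Largest R such that in M'|R every state almost-surely satisfies
    both PosMP ∩ Parity and PosMP ∩ ST.

    restrict(R)          -> the sub-MDP M'|R
    as_posmp_parity(N)   -> states of N that a.s. satisfy PosMP ∩ Parity
    as_posmp_storage(N)  -> states of N that a.s. satisfy PosMP ∩ ST
    """
    R = set(states)
    while True:
        sub = restrict(R)
        keep = as_posmp_parity(sub) & as_posmp_storage(sub)
        if keep == R:      # fixed point: no state fails either condition
            return R
        R = keep           # drop the failing states and repeat

def safe_values(states, restrict, as_posmp_parity, as_posmp_storage, min_k_posmp):
    """safe(q): minimal k with PosMP ∩ ES{k} a.s. from q in M'|R, or None for infinity."""
    R = refine(states, restrict, as_posmp_parity, as_posmp_storage)
    sub = restrict(R)
    return {q: (min_k_posmp(sub, q) if q in R else None) for q in states}
```

Since $R$ only shrinks, the loop terminates after at most $\card{V'}$ rounds.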
For every projection $\sys{M}'|S$, checking Condition 1 takes $\widetilde {\mathcal O}(d\cdot|V|^{3.5} \cdot (|\lambda| + |\cost{}|)^2)$ time by \[thm:as-Par-PosMP\] and Condition 2 can be checked in $\mathcal{O}(\card{E}\cdot d\cdot \card{V}^4\cdot W)$ time by \[thm:as-ES-PosMP\]. All in all, this provides a pseudo-polynomial time algorithm to compute $R$. By another application of \[lem:bailout-to-parity\], we can compute the (pseudo-polynomially bounded) values ${\text{safe}(q)}$. In order to verify candidates for values ${\text{safe}(q)}$ in $\NP$, and also $\coNP$, one can guess a witness, the sequence of sets $R_0\supset R_1 \supset \ldots \supset R_j=R$, together with certificates for all $i\le j$ that $R_{i+1}$ is the correct set following $R_i$ in the refinement procedure. This can be checked all at once by considering the disjoint union of all $\sys{M}'|R_i$. \[lem:mainfix\] For every $k\in\N$ and state $q$ in $\sys{M}'$, the energy-parity objective $\EN{k}\cap\Parity$ holds almost-surely from $q$ if, and only if, ${\text{safe}(q)}\le k$. [**($\implies$)**]{}. First observe that the winning sink $w$ in $\sys{M}'$ is contained in $R$, and has ${\text{safe}(w)}=0$ since the only strategy from that state satisfies $\ES{0,0}\cap\Parity{}\cap\PosMP$. For all other states there are two cases: either there is an $s\in\N$ such that $\ES{k,s}\cap\Parity{}$ holds almost-surely, or there is no such $s$. If there is, then the strategy that goes to the sink guarantees the objective $\ES{k,s}\cap\Parity{}\cap\PosMP$, which implies the claim. For the second case (there is no $s$ such that $\ES{k,s}\cap\Parity{}$ holds almost-surely) we see that every almost-surely winning strategy for $\EN{k}\cap\Parity{}$ must also almost-surely satisfy $\PosMP$. To see this, note that the energy condition implies a non-negative expected mean-payoff, and that an expected mean-payoff of $0$ would imply that the storage condition $\ES{k,s}$ is satisfied for some $s$, which contradicts our assumption. Consequently the $\PosMP\cap\Parity{}$ objective holds almost-surely. We now show that the $\ES{k,s}\cap\PosMP$ objective holds almost-surely in state $q$, where $s>{\text{safe}(r)}$ for all states $r$ with ${\text{safe}(r)} < \infty$. We now define a strategy that achieves $\ES{k,s}\cap\PosMP$. For this, we first fix a strategy $\sigma_q$ that achieves $\EN{h_q} \cap \Parity{}$ with $h_q={\text{safe}(q)}$ for each state $q$ with ${\text{safe}(q)} < \infty$. When starting in $q$, we follow $\sigma_q$ until one of the following three events happen. We have (1) sufficient energy to move to the winning sink $w$. In this case we do so. Otherwise, if we (2) have reached a state $r$ and since starting to follow $\sigma_q$, the energy balance is strictly greater than [^1] $h_r - h_q$. Then we abandon $\sigma_q$ and follow $\sigma_r$ as if we were starting the game. Before we turn to the third event, we observe that, for each strategy $\sigma_q$, there is a minimal distance [^2] $d_q \in \mathbb N$ to (1) or (2) and a positive probability $p_q > 0$ that either event is reached in $d_q$ steps. The third event is now simply that (3) $d_q$ steps have lapsed. When in state $r$ we then also continue with $\sigma_r$ as if we were starting the game. It is obvious that no path has negative mean payoff. Moreover, as long as the game does not proceed to the winning sink, a partial run starting at a state $q$ and ending at a state $r$ has energy balance $\geq h_r - h_q$, such that the resulting strategy surely satisfies $\mathsf{ST}$. 
The expected mean payoff is $\ge p_q/d_q$, and $\PosMP$ is obviously satisfied almost-surely. Consequently, $\ES{h_q,s}\cap\PosMP$ holds almost-surely from $q$. We conclude that every state for which the $\EN{k}\cap\Parity{}$ objective holds almost-surely must satisfy both criteria of \[def:fix:refinement\] and thus be a member of $R$. Since almost-sure winning strategies cannot leave the respective winning sets, this means that every winning strategy for the above objective also applies in $\sys{M'}|R$ and thus justifies that ${\text{safe}(q)}\le k$.

[**($\impliedby$)**]{}. By definition of $R$, there are two finite memory strategies $\sigma$ and $\beta$ which almost-surely satisfy the $\PosMP\cap\Parity$, and the bailout objective $\PosMP\cap\ES{k}$, respectively, from every state $q$ with ${\text{safe}(q)}\le k$. Moreover, those strategies will never visit any state outside of $R$.

We start with the bailout strategy $\beta$ and run it until the energy level is high enough (see below). We then turn to $\sigma$ and follow it until (if ever) it *could* happen in the next step that a state $q$ is reached while the energy level falls below ${\text{safe}(q)}$. We then switch back to $\beta$.

The “high enough” can be achieved by collecting enough energy that there is a positive probability that one does not change back from $\sigma$ to $\beta$. For this, we can start with a sufficient energy level $e$ such that, with positive probability, $\sigma$ never hits an energy level $\leq 0$ [^3]. The sum $e+s+W$, consisting of this energy, the sufficient storage level for $\PosMP \cap \ES{k}$, and the maximal change $W$ of the energy level obtained in a single step, suffices.

The constructed strategy then almost-surely satisfies the $\EN{k_q}\cap\PosMP\cap\Parity$ objective from every state $q$, where $k_q\eqdef{\text{safe}(q)}$. In particular, this ensures that the $k$-energy-parity objective holds almost-surely from $q$ in $\sys{M}'|R$ and therefore also in $\sys{M}'$.

\(1) The fact that infinite memory is necessary follows from our counterexample to \[claim\], and the observation that every finite memory winning strategy for the $\Parity$ objective must also be colour-committing. For part (2), it suffices, by \[lem:ff-ext-claims\](1) and \[lem:mainfix\], to construct $\sys{M}'$ and compute the values ${\text{safe}(q)}$ for every state $q$ of $\sys{M}'$. The claims then follow from \[lem:safe-computable\].

Limit Values {#sec:limit}
============

Since $\EN{k}\subseteq\EN{k+1}$ holds for all $k\in\N$, the chance of satisfying the $k$-energy-parity objective depends (in a monotone fashion) on the initial energy level: for every state $p$ we have that $ \Val[\sys{M}]{p}{\EN{k}\cap\Parity{}} \le \Val[\sys{M}]{p}{\EN{k+1}\cap\Parity{}}. $ We can therefore consider the respective *limit values* as the limits of these values for growing $k$: $$\LValP[\sys{M}]{p} \quad \eqdef \quad \sup_k \Val[\sys{M}]{p}{\EN{k}\cap\Parity}.$$

Note that this is *not* the same as the value of $\Parity$ alone. For instance, the state $l$ from \[ex:lval\] has limit value $\LValP{l}=0 \neq \Val{l}{\Parity{}}=1$. The states $r$ and $s$ from \[ex:lval\] have $\LValP{r}{=}1$ and $\LValP{s}{=}1$. In fact, for any $\sys{M}$, state $p$, $k\in\N$ and parity objective $\Parity$ it holds that $\Val[\sys{M}]{p}{\EN{k}\cap\Parity} ~\le~ \LValP[\sys{M}]{p} ~\le~ \Val[\sys{M}]{p}{\Parity}$.

Limit values are an important ingredient in our proof of \[thm:main\]. This is due to the following property, which directly follows from the definition.
\[lem:value-one-states\] Let $\sys{M}$ be an MDP and $p$ be a state with $\LValP[\sys{M}]{p} = 1$. Then, for all $\eps>0$, there exist a $k\in\N$ and a strategy $\sigma$ such that $\Prob[\sys{M}(\sigma)]{p}(\EN{k}\cap\Parity{}) \ge 1- \eps$. We now show how to compute limit values, based on the following two sets. $$\begin{aligned} A &\quad\eqdef\quad \{p\in Q\mid \exists k \exists \sigma~\Prob[\sys{M}(\sigma)]{p}(\EN{k}\cap\Parity{})=1\}\\ B &\quad\eqdef\quad \{p\in Q\mid \exists \sigma~\Prob[\sys{M}(\sigma)]{p}(\PosMP\cap\Parity{})=1\}\end{aligned}$$ The first set, $A$, contains those states that satisfy the $k$-energy-parity condition almost-surely for some energy level $k\in\N$. The second set, $B$, contains those states that almost-surely satisfy the combined positive mean-payoff-parity objective. Our argument for computability of the limit values is based on the following theorem, which claims that limit values correspond to the values of a reachability objective with target $A\cup B$. Formally, \[thm:LVAL-reach\] For every MDP $\sys{M}$ and state $p$, $\LValP[\sys{M}]{p} = \sup_{\sigma} \Prob[\sys{M}(\sigma)]{p}(\eventually (A\cup B))$. Before we prove this claim by \[lem:LVAL-reach-up,lem:LVAL-reach-low\] in the remainder of this section, we remark that we can compute $A \cup B$ without constructing $A$. Let us consider the set $$A' \quad\eqdef\quad \{p\in Q\mid \exists k \exists \sigma~\Prob[\sys{M}(\sigma)]{p}(\ES{k}\cap\Parity{})=1\}\ ,$$ and observe that $A' \subseteq A$ holds by definition and that the construction of $A$ from \[thm:as-energy-parity\] establishes $A \subseteq A' \cup B$. Thus, $A \cup B = A' \cup B$ holds, and it suffices to construct $A'$ and $B$, which is cheaper than constructing $A$ and $B$. We now start with some notation. For an MDP $\sys{M}\eqdef(\VC,\VP, E, \prob)$, the *attractor* of a set $X\subseteq V$ of states is the set ${\mathit{Att}(X)} \eqdef \{q \mid \exists \sigma~\Prob[\sys{M}(\sigma)]{q}(\eventually X)=1\}$ of states that almost-surely satisfy the reachability objective with target $X$. For an MDP $\sys{M}\eqdef(\VC,\VP, E, \prob)$ an *end-component* is a strongly connected set of states $C \subseteq V$ with the following closure properties: - for all controlled states $v \in C \cap V_C$, some successor $v'$ of $v$ is in $C$, and - for all probabilistic states $v \in C \cap V_P$, all successors $v'$ of $v$ are in $C$. Given $\cost{}$ and $\parity{}$ functions and $i\in\N$, we call an end-component - *$i$ dominated*, if they contain a state $p$ with $\parity(p)=i$, but no state $q$ with $\parity(q)<i$, - *$i$ maximal*, if it is a maximal (w.r.t. set inclusion) $i$ dominated end-component, and - *positive*, if its expected mean-payoff is strictly greater than $0$ (recall that the mean-payoff of all states in a strongly connected set of states of an MDP is equal). [lemma]{}[LemMPge]{} \[lem:MPge0\] The states of each positive $2i$-maximal end-component $C$ are contained in $B$. (sketch) We consider a strategy $\sigma$ that follows the optimal (w.r.t. the mean-payoff) strategy most of the time and “moves to” a fixed state p with the minimal even parity 2i only sparsely. Such a strategy keeps the mean-payoff value positive while satisfying the parity condition. We show that $\sigma$ can be defined to use finite memory or no memory, but randomisation. 
Either way, $\sigma$ induces a probabilistic one-counter automata [@etessami2010quasi], whose probability of ever decreasing the counter by some finite $k$ can be analysed, based on the mean-payoff value, using the results in [@BKK/14]. \[lem:LVAL-reach-up\] For every MDP $\sys{M}$ and state $p$, $\LValP[\sys{M}]{p} \ge \sup_{\sigma} \Prob[\sys{M}(\sigma)]{p}(\eventually (A\cup B))$. Assume w.l.o.g. that ${\tau}\eqdef \sup_{\sigma} \Prob[\sys{M}(\sigma)]{p}(\eventually (A\cup B)) > 0$. We show that $\LValP[\sys{M}]{p}$ is at least ${\tau}- 2\varepsilon$ for all $\varepsilon>0$ as follows. We start by choosing $k\in\N$ big enough so that for every state $q\in A\cup B$, some strategy satisfies the $k$-energy-parity objective with probability $>1-\eps$. We then consider a memoryless strategy (e.g. from solving the associated linear program), which guarantees that the set $A\cup B$ is reached with likelihood ${\tau}$, and then determine a natural number $l$ such that it is reached within $l$ steps with probability $>{\tau}- \eps$. This reachability strategy $\sigma$ can now be combined with an $\eps$-optimal strategy for states in $A\cup B$: until a state in $A\cup B$ is reached, the controller plays according to $\sigma$ and then swaps to a strategy that guarantees the $k$-energy-parity objective with likelihood $>(1-\eps)$. Such a strategy exists by our assumption on $k$. This combined strategy will satisfy the $\EN{k+l}$-energy-parity objective with probability $> ({\tau}-\eps)(1-\eps) \ge {\tau}- 2\eps$. \[def:non-losing\] \[Non-losing end-component\] We call an end-component non-losing, iff the smallest priority of states in the end-component is even and there is a strategy that allows to 1. almost-surely stay within this end-component, 2. almost-surely visit all states in the end-component, and 3. satisfy the energy condition from some energy level with non-zero probability. \[lem:non-losing\] Every non-losing end-component $I$ is contained in ${\mathit{Att}(A\cup B)}$. We start with a case distinction of the mean-payoff value of $I$. (Recall that, as an end-component in an MDP, all states of $I$ have the same mean-payoff values.) If this value is positive and $2i$ is the lowest priority in $I$, then $I$ is contained in some $2i$ maximal end-component and by \[lem:MPge0\], also in $B\subseteq{\mathit{Att}(A\cup B)}$. If this value is negative, then the third condition of \[def:non-losing\] cannot be satisfied together with the first two. This leaves the case where the mean-payoff value is $0$. If the mean-payoff value is $0$, then there exists a bias function $b:I\to \Z$ that satisfies the following constraints: - $b(v) = \min\Big\{\mathsf{cost}\big((v,v')\big) + b(v') \mid v' \in I \wedge (v,v') \in E\Big\}$ holds for all controlled states $v \in I \cap V_C$, - $b(v) = \sum\limits_{v' \in \{w \in I \mid (v,w) \in E\}} \lambda(v)\big((v,v')\big) \cdot \Big(\mathsf{cost}\big((v,v')\big) + b(v') \Big)$ for all probabilistic states $v \in I \cap V_P$. When adjusting $b$ to $b'$ by adding the same constant to all valuations, $b'$ is obviously a bias function, too. We call a transition $(v,v')$ *invariant* iff $b(v) = \mathsf{cost}\big((v,v')\big) + b(v')$ holds. A set $G\subseteq V$ of states invariant if it is strongly connected and contains only controlled states with an invariant transition into $G$ and only probabilistic states with only invariant outgoing transitions, which all go to $G$. We now make the following case distinction. 
Case 1: there is a nonempty, invariant set $G \subseteq I$, such that the state $p$ of $G$ with minimal priority has even priority. First notice that $G\subseteq A$: if the minimal value of the bias function is $b_{\min}$, then the bias of a state in $p$ minus $b_{\min}$ serves as sufficient energy when starting in $p$: it then holds that $\Prob[\sys{M}(\sigma)]{p}(\EN{k}\cap\Parity{})=1$, where $k\eqdef b(p) - b_{\min}$, and $\sigma$ is a memoryless randomised strategy that assigns a positive probability to all transitions into $G$. Since $I$ is an end-component, it is contained in the attractor of $G$, which implies the claim, as ${\mathit{Att}(G)} \subseteq {\mathit{Att}(A)} \subseteq {\mathit{Att}(A\cup B)}$. Case 2: there is no non-empty invariant set $G \subseteq I$ with even minimal priority. We show that this is a contradiction with the assumption that $I$ is a non-losing set, in particular with condition 3 of \[def:non-losing\]. We assume for contradiction that there is a strategy $\sigma$ and an energy level $k$ such that we can satisfy the energy parity condition with a positive probability while staying in $I$ and starting at some state $p \in I$. We also assume w.l.o.g. that all bias values are non-negative, and $m$ is the maximal value among them. We set $k' = k + m$. The ‘interesting’ events that can happen during a run are selecting a non-invariant transition from a controlled state or reaching a probabilistic state (and making a random decision from this state), where at least one outgoing transition is non-invariant. We capture both by random variables, where random variables that refer to taking non-invariant transition from controlled states (are deterministic and) have a negative expected value, while random variables that refer to taking a transition from a probabilistic state where at least one outgoing transition is non-invariant refers to a random variable drawn from a finite weight function with expected value $0$ and positive variation. Note that random variables corresponding to probabilistic non-invariant transitions are independent and drawn from a *finite* set of distributions. Let $\alpha$ be any infinite sequence of such random variables. From the results on finitely inhomogeneous controlled random walks [@durrett1991making], we can show that almost-surely the sum of some prefix of $\alpha$ will be lower than $-k'$ (and in fact lower than any finite number). The proof follows the same reasoning as in Proposition 4.1 of [@BBEKW10], where a sufficient and necessary condition was given for not going bankrupt with a positive probability in *solvency games* [@BergerKSV08]. We now consider the set of runs induced by $\sigma$. As we just showed, almost all runs that have infinitely many interesting events (as described above) will not satisfy the $k'$-energy condition. Almost all runs that have finitely many interesting events will have an odd dominating priority, and therefore will not satisfy the parity condition. Thus, the probability that the energy parity condition is satisfied by $\sigma$ is $0$. \[lem:LVAL-reach-low\] For every MDP $\sys{M}$ and state $p$, $\LValP[\sys{M}]{p} \le \sup_{\sigma} \Prob[\sys{M}(\sigma)]{p}(\eventually (A\cup B))$. Fix $p$ and $\sigma$. Every run from $p$ will, with probability $1$, eventually reach an end-component and visit all states of the end-component infinitely often [@de1997formal]. Let $C$ be an end-component such that $C$ forms the infinity set of the runs from $p$ under $\sigma$ with a positive probability ${\tau}>0$. 
If $C$ does not satisfy the conditions of non-losing end-components, then the probability $\Prob[\sys{M}(\sigma)]{q}(\EN{k}\cap\Parity{})$ that the $k$-energy-parity objective is satisfied from some state $q\in C$ is $0$, independent of the value $k$. Thus, the probability of satisfying the $k$-energy-parity objective from an initial state $p$ is bounded by the chance of reaching a state in some non-losing end-component. These observations hold for every strategy $\sigma$ and therefore we can bound $$\begin{aligned}
\LValP[\sys{M}]{p} &= \sup_{k}\sup_{\sigma}\Prob[\sys{M}(\sigma)]{p}(\EN{k}\cap\Parity)\\
&\le\sup_{\sigma} \Prob[\sys{M}(\sigma)]{p}(\eventually (\mathit{NLE})),
\end{aligned}$$ where $\mathit{NLE}\subseteq V$ denotes the union of all non-losing end-components. Now \[lem:non-losing\] implies that $\sup_{\sigma} \Prob[\sys{M}(\sigma)]{p}(\eventually (\mathit{NLE})) \leq \sup_{\sigma} \Prob[\sys{M}(\sigma)]{p}(\eventually (A\cup B))$, which completes the proof.

\[lem:checking-LVAL-1\] The limit value of a state $p$ can be determined in $\widetilde {\mathcal O}(|E|\cdot d\cdot \card{V}^4\cdot W + d \cdot |V|^{3.5} \cdot (|\lambda| + |\cost{}|)^2)$ deterministic time. The limit values can also be determined in $\NP$ and in $\coNP$ in the input size when $W$ is given in binary.

Recall that $\LValP[\sys{M}]{p} = \sup_{\sigma} \Prob[\sys{M}(\sigma)]{p}(\eventually (A\cup B))$ by Theorem \[thm:LVAL-reach\], that $A \cup B = A' \cup B$, and that $A'$ and $B$ are the sets of control states that almost-surely satisfy the storage-parity and mean-payoff-parity objective, respectively. Using the results of Section \[sec:limit\], the algorithm proceeds as follows.

1. Compute $A'$, which can be done in time $\mathcal O(|E|\cdot d \cdot |V|^4 \cdot W)$ by \[thm:as-storage-parity\].

2. Compute, for each occurring even priority $2i$, the following:

    1. the set of $2i$ maximal end-components, which can be computed in $\mathcal O(|E|)$; and

    2. the mean-payoff values of the $2i$ maximal end-components, which can be computed using Karmarkar’s algorithm [@Karmarkar/84/Karmarkar] for linear programming in time $\widetilde {\mathcal O}(|V|^{3.5} \cdot (|\lambda| + |\cost{}|)^2)$; note that this complexity refers to *all* (not each) $2i$ maximal end-components.

3. Consider the union of $A'$ with all the $2i$ maximal end-components with positive mean payoff computed in Step 2, and compute the maximal achievable probability of reaching this set. (By the results of Section \[sec:limit\], this yields the probability $\sup_{\sigma} \Prob[\sys{M}(\sigma)]{p}(\eventually (A\cup B))$.)

The last step costs $\widetilde {\mathcal O}(|V|^{3.5} \cdot |\lambda|^2)$ [@Karmarkar/84/Karmarkar] for solving the respective linear program [@Puterman:book], which is dominated by the estimated cost of solving the linear programs from (2b). Likewise, the cost of Step (2a) is dominated by the cost of Step (2b). This leaves us with the complexity of (1) once, plus $d$ times the complexity of (2b), resulting in the claimed complexity. Note that it depends on the size of the representation of $\lambda$ (in binary) and of $W$ (in unary), and the bigger of these values dominates the complexities. Finally, all steps are in $\NP$ and in $\coNP$.

Limit-Sure Energy-Parity {#sec:ls-EP}
========================

\[sec:extensions\] In this section we provide the reduction from checking if an energy-parity objective holds limit-surely, to checking if such an objective holds almost-surely.
The reduction basically extends the MDP so that the controller may “buy” a visit to a good priority (at the expense of energy) if currently in a state $p$ with limit value $\LValP{p}=1$.

The *extension* of a finite MDP $\sys{M}$ for given cost and parity functions is the MDP $\sys{M'}\supseteq\sys{M}$ where, additionally, for every controlled state $s\in\VC$ with $\LValP[\sys{M}]{s}=1$, there is a new state $s'$ with parity $0$ and edges $(s,s'), (s',s)$ with $\cost(s,s') = -1$ and $\cost(s',s) = 0$. We write $V'$ for the set of states of the extension.

*(Figure omitted: the extension of the MDP from \[ex:lval\]; the controlled state $s$ additionally gets a new state $s'$ of parity $0$, reached by an edge of cost $-1$ and returning to $s$ with cost $0$.)*

Note that the extension only incurs an $\mathcal O(\card{V_C})$ blow-up. The following theorem relates satisfaction of the $\EN{k}\cap\Parity{}$ objective in $\sys{M}$ to its almost-sure satisfaction in the extension $\sys{M'}$.

\[thm:extension\] Let $\sys{M}$ be an MDP with extension $\sys{M'}$, $p$ be a state and $k\in\N$. Then, $\Val[\sys{M}]{p}{\EN{k}\cap\Parity{}} = 1$ if, and only if, $\Prob[\sys{M'}(\sigma)]{p}(\EN{k}\cap\Parity{}) = 1$ for some strategy $\sigma$.

In the remainder of this section we prove this claim. For brevity, let us write $\effect{w}$ for the cumulative cost $\sum_{i=1}^{k-1}\cost(s_i,s_{i+1})\in\Z$ of all steps in a finite path $w=s_1s_2\dots s_k\in V^*$.

\[lem:extension:onlyif\] Let $\sys{M}$ be an MDP with extension $\sys{M'}$, $p$ be a state of $\sys{M}$, $k\in\N$ and $\sigma'$ a strategy for $\sys{M'}$ such that $\Prob[\sys{M'}(\sigma')]{p}(\EN{k}\cap\Parity{}) = 1$. Then $\Val[\sys{M}]{p}{\EN{k}\cap\Parity{}} = 1$.

Recall \[lem:value-one-states\], that states with $\LValP[\sys{M}]{s}=1$ have the property that, for every $\eps>0$, there exist $n_{s,\eps}\in\N$ and a strategy $\sigma_{s,\eps}$ such that $$\Prob[\sys{M}(\sigma_{s,\eps})]{s}(\EN{n_{s,\eps}}\cap\Parity{}) \ge 1-\eps.$$ Consider now a fixed $\eps>0$ and let $n_\eps\eqdef\max\{n_{s,\eps}\mid \LValP{s}=1\}$. We show the existence of a strategy $\sigma$ for $\sys{M}$ that satisfies ${\Prob[\sys{M}(\sigma)]{p}(\EN{k}\cap\Parity{})} \ge 1-\eps$.

We propose the strategy $\sigma$ which proceeds in $\sys{M}$ just as $\sigma'$ does in $\sys{M'}$ but skips over “buying” loops $(s,s')$ followed by $(s',s)$ in $\sys{M'}$. This goes on indefinitely unless the observed path $\rho=s_0s_1\ldots s_l$ reaches a *tipping point*: the last state $s_l$ has $\LValP[\sys{M}]{s_l}=1$ and the accumulated cost is $\effect{\rho}\ge n_{\eps}$. At this point $\sigma$ continues as $\sigma_{s_l,\eps}$.

We claim that $\Prob[\sys{M}(\sigma)]{p}(\EN{k}\cap\Parity{}) \ge 1-\eps$. Indeed, first notice that for any prefix ${\tau}\in V^*$ of a run $\rho\in \Runs[\sys{M}(\sigma)]{p}$ until the tipping point, there is a unique corresponding path ${\tau}'=s'_1s'_2\dots s'_i\in V'^*$ in $\sys{M'}$, which is a prefix of some run $\rho'\in\Runs[\sys{M'}(\sigma')]{p}$.
Moreover, the strategy $\sigma$ maintains the invariant that the accumulated cost of such prefix ${\tau}$ is $$\effect{{\tau}} = \effect{{\tau}'} + \card{\{j\mid s'_j\in V'\setminus V\}},$$ the accumulated cost of the corresponding path ${\tau}'$ plus the number of times ${\tau}'$ visited a new state in $V'\setminus V$. In particular this means that the path ${\tau}$ can only violate the energy condition if also ${\tau}'$ does. To show the claim, first notice that the error introduced by the runs in $\Runs[\sys{M}(\sigma)]{p}$ that eventually reach a tipping point cannot exceed $\eps$. This is because from the tipping point onwards, $\sigma$ proceeds as some $\sigma_{s,\eps}$ and thus achieves the energy-parity condition with chance $\ge 1-\eps$. So the error introduced by the runs in $\Runs[\sys{M}(\sigma)]{p}$ is a weighted average of values $\le \eps$, and thus itself at most $\eps$. Now suppose a run $\rho\in\Runs[\sys{M}(\sigma)]{p}$ never reaches a tipping point. Then the corresponding run $\rho'\in \Runs[\sys{M}'(\sigma')]{p}$ cannot visit new states in $V'\setminus V$ more than $n_\eps$ times. Since with chance $1$, $\rho'$ and therefore also $\rho$ satisfies the $k$-energy condition it remains to show that $\rho$ also satisfies the parity condition. To see this, just notice that $\rho'$ satisfies this condition almost-surely and since it visits new states only finitely often, $\rho$ and $\rho'$ share an infinite suffix. The “only if” direction of \[thm:extension\] is slightly more complicated. We go via an intermediate finite system $\sys{B}_k$ defined below. The idea is that if $\EN{k}\cap\Parity{}$ holds limit-surely in $\sys{M}$ then $\Parity{}$ holds limit-surely in $\sys{B}_k$ and since $\sys{B}_k$ is finite this means that $\Parity{}$ also holds almost-surely in $\sys{B}_k$. Based on an optimal strategy in $\sys{B}_k$ we then derive a strategy in the extension $\sys{M'}$ which satisfies $\EN{k}\cap\Parity{}$ a.s. The two steps of the argument are shown individually as \[lem:M-to-B,lem:B-to-M’\]. Together with \[lem:extension:onlyif\] these complete the proof of \[thm:extension\]. Let $\sys{B}_k$ be the finite MDP that mimics $\sys{M}$ but hardcodes the accumulated costs as long as they remain between $-k$ and $\card{V}$. That is, the states of $\sys{B}_k$ are pairs $(s,n)$ where $s\in V$ and $-k\le n \le \card{V}$. Moreover, a state $(s,n)$ - is a (losing) sink with maximal odd parity if $n=-k$ or ${\LValP[\sys{M}]{s}}<1$, - is a (winning) sink with parity $0$ if $n=\card{V}$. We reuse strategies for $\sys{M}$ in $\sys{B}_k$ and write $\sys{B}_k(\sigma)$ for the Markov chain that is the result of basing decisions on $\sigma$ until a sink is reached. \[lem:M-to-B\] If $\Val[\sys{M}]{s}{\EN{k}\cap\Parity{}}=1$ then $\Val[\sys{B}_k]{(s,0)}{\Parity{}}=1$. We show that, for every $\eps>0$, there is a strategy $\sigma$ such that $\Prob[\sys{B}_k(\sigma)]{s}(\Parity{})\ge 1-\eps$. This would be trivial (by re-using strategies from $\sys{M}$) if not for the extra sinks for states with $\LValP[\sys{M}]{s}< 1$. Let’s call these states *small* here and let $S$ be the set of all small states. We aim to show that the $k$-energy-parity condition can be satisfied and at the same time, the chance of visiting a small state with accumulated cost below $\card{V}$ can be made arbitrary small. 
More precisely, define $D\subseteq~\Runs[\sys{M}]{}$ as the set of runs which never visit a small state with accumulated cost below $\card{V}$: $$D\eqdef\{s_0s_1\dots \mid \forall i\in\N.~s_i\in S \implies \effect{s_0\dots s_i}\ge\card{V}\}.$$ We claim that $$\begin{aligned} \label{eq:M-to-B} &&\Val[\sys{M}]{s}{\EN{k}\cap\Parity{}\cap D} = 1 \end{aligned}$$ holds. We show this by contradicting the converse that, for $$\gamma\eqdef\Val[\sys{M}]{s}{\overline{\EN{k}\cap\Parity{}\cap D}} = \Val[\sys{M}]{s}{\overline{\EN{k}\cap\Parity{}}\cup \overline{D}},$$ $\gamma >0$. Equivalently, we contradict that, for every strategy $\sigma$, $$\begin{aligned} \label{eq:M-to-B'} \Prob[\sys{M}(\sigma)]{s}(\overline{\EN{k}\cap\Parity{}})<\gamma/2 &\;\,\Rightarrow\;\, \Prob[\sys{M}(\sigma)]{s}(\overline{D})>\gamma/2. \end{aligned}$$ To do this, we define $\delta<1$ as the maximum of $$\{~\Val[\sys{M}]{s}{\EN{n}\cap\Parity{}} < 1 ~\mid~ s\in S,~n\le k+\card{V}~ \}\cup\{0\},$$ that is, the maximal value $\Val[\sys{M}]{s}{\EN{n}\cap\Parity{}} < 1$ for any $s\in S$ and $n\le k+\card{V}$, and $0$ if no such value exists. Notice that this is well defined due to the finiteness of $V$. This value $\delta$ estimates the chance that a run that is not in $D$ fails the $k$-energy-parity condition. In other words, for any strategy $\sigma$ and value $0\le \beta\le 1$, $$\Prob[\sys{M}(\sigma)]{s}(\overline{D})>\beta \text{ implies } \Prob[\sys{M}(\sigma)]{s}(\overline{\EN{k}\cap\Parity{}})\ge \beta\cdot (1-\delta).$$ This is because $\Prob[\sys{M}(\sigma)]{s}(\overline{D})$ is the chance of a run reaching a state $s$ with accumulated cost $n<\card{V}$ and because $\Val[\sys{M}(\sigma)]{s}{\EN{n}\cap\Parity{}} \le \delta$. We pick an $\eps'>0$ that is smaller than $(\gamma/2)\cdot (1-\delta)$. By assumption of the lemma, there is some strategy $\sigma$ such that $\Prob[\sys{M}(\sigma)]{s}(\overline{\EN{k}\cap\Parity{}})<\eps'<\gamma/2$. Then by \[eq:M-to-B’\], we get $\Prob[\sys{M}(\sigma)]{s}(\overline{D})>\gamma/2$ and thus $\Prob[\sys{M}(\sigma)]{s}(\overline{\EN{k}\cap\Parity{}})\ge (\gamma/2)\cdot (1-\delta)>\eps'$, which is a contradiction. We conclude that \[eq:M-to-B\] holds. To get the conclusion of the lemma just observe that for any strategy $\sigma$ it holds that $\Prob[\sys{M}(\sigma)]{s}(\EN{k}\cap\Parity{}\cap D) ~\le~ \Prob[\sys{B}_k(\sigma)]{s}(\Parity{}).$ \[lem:B-to-M’\] If $\Val[\sys{B}_k]{s}{\Parity{}}=1$ then $\Prob[\sys{M'}(\sigma')]{s}(\EN{k}\cap\Parity{}) =1$ for some $\sigma'$. Finite MDPs have pure optimal strategies for the $\Parity{}$ objective [@Chatterjee:2004:QSP:982792.982808]. Thus by assumption and because $\sys{B}_k$ is finite, we can pick an optimal strategy $\sigma$ satisfying $\Prob[\sys{B}_k(\sigma)]{s}(\Parity{}) =1$. Notice that all runs in $\Runs[\sys{B}_k(\sigma)]{s}$ according to this optimal strategy must never see a small state (one with $\LValP[\sys{M}]{p}<1$). Based on $\sigma$, we construct the strategy $\sigma'$ for $\sys{M'}$ as follows. The new strategy just mimics $\sigma$ until the observed path $s_1s_2\ldots s_n$ visits the first controlled state after a cycle with positive cost: it holds that $s_n\in\VC$ and there are $i,j\le n$ with $s_i=s_j$ and $\effect{s_i\ldots s_j}>0$. When this happens, $\sigma'$ uses the new edges to visit a $0$-parity state, forgets about the cycle and continues just as from $s_1s_2\ldots s_is_{j+1}\dots s_n$. We claim that $\Prob[\sys{M'}(\sigma')]{s}(\EN{k}\cap\Parity{}) =1$. 
To see this, just observe that a run of $\sys{M'}(\sigma')$ that infinitely often uses new states in $V'\setminus V$ must satisfy the $\Parity{}$ objective as those states have parity $0$. Those runs which visit new states only finitely often have a suffix that directly corresponds to a run of $\sys{B}_k(\sigma)$, and therefore also satisfy the parity objective. Finally, almost all runs in $\Runs[\sys{M'}(\sigma')]{s}$ satisfy the $\EN{k}$ objective because all runs in $\Runs[\sys{B}_k(\sigma)]{s}$ do, and a negative cost due to visiting a new state in $V'\setminus V$ is always balanced by the strictly positive cost of a cycle.

This concludes the proof of \[thm:extension\]. The proof of \[thm:ls-energy-parity\] now follows by \[thm:LVAL-reach,thm:extension\] and the fact that almost-sure reachability, positive mean-payoff, $k$-energy-parity and $k$-storage-parity objectives are (pseudo) polynomial time computable (\[thm:as-energy-parity,thm:as-storage-parity\]).

Fix an MDP $\sys{M}\eqdef(\VC,\VP, E, \prob)$ with cost and parity functions. For part (1) we can, by \[lem:checking-LVAL-1\], compute the set of control states $p$ with limit value $\LValP[\sys{M}]{p} = 1$. Based on this, we can (in logarithmic space) construct the extension $\sys{M'}\eqdef(\VC',\VP', E', \prob')$ where $\card{\VC'} = 2\cdot\card{\VC}$, $\card{E'} = \card{E} + 2\cdot\card{\VC}$ and the rest is as in $\sys{M}$. By \[thm:extension\], a state $p\in \VC\cup\VP$ satisfies the $k$-energy-parity objective limit-surely in $\sys{M}$ iff it satisfies it almost-surely in $\sys{M}'$. The claim then follows from \[thm:as-energy-parity\].

\(2) To see that there are finite memory $\eps$-optimal strategies, we observe that the strategies we have constructed in \[lem:extension:onlyif\] work in phases and in each phase follow some finite memory strategy. In the first phase, these strategies follow some almost-surely optimal strategy in the extension $\sys{M}'$, but only as long as the energy level remains below some threshold that depends on $\eps$. If this level is exceeded, it means that a “tipping point” is reached and the strategy switches to a second phase. The second phase starts from a state with limit value $1$, and our strategy just tries to reach a control state in the set $A'\cup B$ from \[sec:limit\]. For almost-sure reachability, memoryless deterministic strategies suffice. Finally, when ending up in a state of $A'$, the strategy follows an almost-sure optimal strategy for storage-parity (with finite memory by \[thm:storage\]). Similarly, when ending up in a state of $B$, the strategy follows an almost-sure optimal strategy for the combined positive mean-payoff-parity objective (with finite memory by [@CD2011]).

Lower Bounds {#sec:complexity}
============

Polynomial time hardness of all our problems follows, e.g., by reduction from <span style="font-variant:small-caps;">Reachability in AND-OR graphs</span> [@immerman1981number], where non-target leaf nodes are energy decreasing sinks. This works even if the energy deltas are encoded in unary. If we allow binary encoded energy deltas, i.e. $W \gg 1$, then solving two-player energy games is logspace equivalent to solving two-player mean-payoff games ([@BFLMS2008], Prop. 12), a well-studied problem in $\NP\cap\coNP$ that is not known to be polynomial [@zwick1996complexity].
Two-player energy games reduce directly to both almost-sure and limit-sure energy objectives for MDPs, where adversarial states are replaced by (uniformly distributed) probabilistic ones: a player max strategy that avoids ruin in the game directly provides a strategy for the controller in the MDP, which means that the energy objective holds almost-surely (thus also limit-surely). Conversely, a winning strategy for the opponent ensures ruin after a fixed number $r$ of game rounds. Therefore the error introduced by any controller strategy in the MDP is at least $(1/d)^r$, where $d$ is the maximal out-degree of the probabilistic states, which means that the energy objective cannot be satisfied even limit-surely (thus not almost-surely). It follows that almost-sure and limit-sure energy objectives for MDPs are at least as hard as mean-payoff games. The same holds for almost-sure and limit-sure storage objectives for MDPs, since in the absence of parity conditions, storage objectives coincide with energy objectives. Finally we obtain that all the more general almost-sure and limit-sure energy-parity and storage-parity objectives for MDPs are at least as hard as mean-payoff games. Conclusions and Future Work {#sec:conclusion} =========================== We have shown that even though strategies for almost-sure energy parity objectives in MDPs require infinite memory, the problem is still in $\NP\cap\coNP$. Moreover, we have shown that the limit-sure problem (i.e. the problem of checking whether a given configuration (state and energy level) in energy-parity MDPs has value $1$) is also in $\NP\cap\coNP$. However, the fact that a state has value $1$ can always be witnessed by a family of strategies attaining values $1-\epsilon$ (for every $\epsilon >0$) where each member of this family uses only finite memory. We leave open the decidability status of quantitative questions, e.g. whether $\Val[\sys{M}]{p}{\EN{k}\cap\Parity{}} \ge 0.5$ holds. Energy-parity objectives on finite MDPs correspond to parity objectives on certain types of infinite MDPs where the current energy value is part of the state. More exactly, these infinite MDPs can be described by single-sided vector addition systems with states [@AMSS:CONCUR2013; @ACMSS:FOSSACS2016], where the probabilistic transitions cannot change the counter values but only the control-states (thus yielding an upward-closed winning set). I.e. single-sidedness corresponds to energy objectives. For those systems, almost-sure Büchi objectives are decidable (even for multiple energy dimensions) [@ACMSS:FOSSACS2016], but the decidability of the limit-sure problem was left open. This problem is solved here, even for parity objectives, but only for dimension one. However, decidability for multiple energy dimensions remains open. If one considers the more general case of MDPs induced by counter *machines*, i.e. with zero-testing transitions, then even for single-sided systems as described above all problems become undecidable from dimension 2 onwards. However, decidability of almost-sure and limit-sure parity conditions for MDPs induced by one-counter machines (with only one dimension of energy) remains open. [**Acknowledgements. **]{} This work was partially supported by the EPSRC through grants EP/M027287/1 and EP/M027651/1 (Energy Efficient Control), and EP/P020909/1 (Solving Parity Games in Theory and Practice). 
Appendix {#appendix .unnumbered} ======== Missing Proof from Section \[sec:limit\] {#sec:appendix} ======================================== If the expected mean-payoff value of $C$ is positive, then we can assume w.l.o.g. a pure memoryless strategy $\sigma$ that achieves this value for all states in $C$. This is because finite MDPs allow pure and memoryless optimal strategies for the mean-payoff objective (see e.g. [@LL:SIAM1969], Thm. 1). This strategy does not necessarily satisfy the parity objective. However, we can mix it with a pure memoryless reachability strategy $\rho$ that moves to a fixed state $p$ with the minimal even parity $2i$ among the states in $C$. Broadly speaking, if we follow the optimal (w.r.t. the mean-payoff) strategy most of the time and “move to $p$” only sparsely, the mean-payoff of such combined strategy would be affected only slightly. This can be done by using memory to ensure that the mean-payoff value remains positive (resulting in a pure finite memory strategy), or it can be done by always following $\rho$ with a tiny likelihood $\varepsilon >0$, while following $\sigma$ with a likelihood of $1-\varepsilon$ (resulting in a randomised memoryless strategy). For the pure finite memory strategy, we can simply follow $\rho$ for $|C|$ steps (or until $p$ is reached, whatever happens earlier) followed by $n$ steps of following $\sigma$. When $n$ goes to infinity, the expected mean payoff converges to the mean payoff of $\sigma$. Since the mean-payoff of $\sigma$ is strictly positive, the combined strategy achieves a strictly positive mean-payoff already for some fixed finite $n$, and thus finite memory suffices. Note that using either of the just defined strategies would result in a finite-state Markov chain with integer costs on the transitions. We can simulate such a model using [*probabilistic one-counter automata*]{} [@etessami2010quasi], where the energy level is allowed to change by at most $1$ in each step, just by modelling an increase of $k$ by $k$ increases of one. Now we can use a result by Br[á]{}zdil, Kiefer, and Kučera [@BKK/14] for such a model for the case where it consists of a single SCC (which is the case here, because of the way $\sigma$ is defined). In particular, Lemma 5.13 in [@BKK/14] established an upper bound on the probability of termination (i.e. reaching energy level $0$) in a probabilistic one-counter automaton with a positive mean-payoff (referred to as ‘trend’ there) where starting with energy level $k$. This upper bound can be explicitly computed for any given probabilistic one-counter automaton and energy level $k$. However, for our purposes, it suffices to note that this bound converges to $0$ as $k$ increases. This shows that the probability of winning can be made arbitrarily close to $1$ by choosing a sufficiently high initial energy level and using the strategy defined in the previous paragraph. Thus the states in $C$ indeed have limit value $1$. Memory Requirements for $\eps$-optimal Strategies {#app:finite} ================================================= In this appendix, we discuss the complexity of the strategies needed. First, we show that the strategies for determining the limit values of states are quite simple: they can either be chosen to be finite memory and pure, or randomised and memoryless. For winning limit-surely from a state energy pair, finite memory pure strategies suffice, but not necessarily memoryless ones, not even if we allow for randomisation. We start by showing the negative results on examples. 
The first of the two examples depicted below is an MDP where it is quite easy to see that both states have limit value $1$. However, when looking at the two memoryless pure strategies, it is equally clear that either (if the choice is to move from $s$ to $r$) the energy condition is violated almost-surely, or (if the choice is to remain in $s$), the parity condition is violated on the only run. Nevertheless, the state $s$ satisfies the $k$-energy-parity objective limit-surely, but not almost-surely, for any fixed initial energy level $k$.

[Figure: MDP with a controlled state $s$ and a probabilistic state $r$; $s$ has a $+1$ self-loop and an edge to $r$ of cost $-1$, while from $r$ the run either returns to $s$ or loops at $r$, each with probability $\frac{1}{2}$ and cost $-1$.]

[Figure: energy-parity MDP with controlled states $p$ and $s$ and a probabilistic state $r$; $p$ and $s$ are joined by $+1$ edges in both directions, $s$ has an edge to $r$ of cost $-1$, and from $r$ the run returns to $s$ with probability $\frac{1}{4}$ and cost $-1$ or loops at $r$ with probability $\frac{3}{4}$ and cost $+1$.]

The second example shows an energy-parity MDP where all states have limit value $1$, and the two left states have limit value $1$ even from zero energy. (They can simply boost their energy level long enough.) Only in the middle state do we need to make choices. For all memoryless randomised strategies that move to the left with a probability $>0$, the minimal priority on all runs is almost-surely $1$, such that these strategies are almost-surely losing from all states and energy levels. The only remaining candidate strategy is to always move to the right. But, for all energy levels and starting states, there is a positive probability that the energy objective is violated. (E.g. when starting with energy $k$ in the middle state, it will violate the energy condition in $k+1$ steps with a chance $4^{-\lceil k/2\rceil}$.) To see that finite memory always suffices, we can simply note that the strategies we have constructed work in stages. The ‘energy boost’ part from \[sec:extensions\] does not require memory on the extended arena (and thus finite memory on the original arena). Further memory can be used to determine when there is sufficient energy to progress to the strategy from \[sec:limit\]. The strategy for \[sec:limit\] consists of reaching $A'$ or a positive $2i$ maximal set almost-surely and then winning limit-surely there. For almost-sure reachability, memoryless deterministic strategies suffice. The same holds for winning in $A'$. For winning in a positive $2i$ maximal set, the proof of \[lem:MPge0\] also establishes that pure finite memory and randomised memoryless strategies suffice.

[^1]: Note that the energy balance can never be strictly smaller than $h_r - h_q$ in such a case, as there would not be a safe continuation from $r$ otherwise.

[^2]: If there is no point where (2) is met, the energy balance on state $r$ is always exactly $h_r - h_q$, such that $\sigma_q$ satisfies $\ES{h_q,s}$, and (1) is satisfied immediately.

[^3]: We argue that, with sufficient energy, this probability can be moved arbitrarily close to $1$.
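As a numerical sanity check of the second example (our own sketch; the transition data is read off the figure description above), the claimed ruin probability can be computed exactly:

```python
# Exact check of the claim that, under the "always move to the right" strategy
# in the second example, starting in the middle state s with energy k the
# energy condition is violated within k+1 steps with probability 4^(-ceil(k/2)).
# Transitions: s --(-1)--> r deterministically; from r, with probability 1/4
# move back to s at cost -1, with probability 3/4 stay in r at cost +1.
from fractions import Fraction
from math import ceil

def ruin_within(k, horizon):
    """Exact probability that the energy drops below 0 within `horizon` steps."""
    dist = {("s", k): Fraction(1)}
    ruined = Fraction(0)
    for _ in range(horizon):
        new = {}
        for (state, e), p in dist.items():
            moves = [("r", -1, Fraction(1))] if state == "s" else \
                    [("s", -1, Fraction(1, 4)), ("r", +1, Fraction(3, 4))]
            for nxt, cost, q in moves:
                e2 = e + cost
                if e2 < 0:
                    ruined += p * q
                else:
                    new[(nxt, e2)] = new.get((nxt, e2), Fraction(0)) + p * q
        dist = new
    return ruined

for k in range(0, 7):
    exact = ruin_within(k, k + 1)
    claim = Fraction(1, 4 ** ceil(k / 2))
    assert exact == claim, (k, exact, claim)
print("claimed probability 4^(-ceil(k/2)) matches the exact computation")
```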
{ "pile_set_name": "ArXiv" }
--- abstract: 'We perform a systematic study of models involving leptoquarks and diquarks with masses well below the grand unification scale and demonstrate that a large class of them is excluded due to rapid proton decay. After singling out the few phenomenologically viable color triplet and sextet scenarios, we show that there exist only two leptoquark models which do not suffer from tree-level proton decay and which have the potential for explaining the recently discovered anomalies in $B$ meson decays. Both of those models, however, contain dimension five operators contributing to proton decay and require a new symmetry forbidding them to emerge at a higher scale. This has a particularly nice realization for the model with the vector leptoquark $(3,1)_{2/3}$, which points to a specific extension of the Standard Model, namely the Pati-Salam unification model, where this leptoquark naturally arises as the new gauge boson. We explore this possibility in light of recent $B$ physics measurements. Finally, we analyze also a vector diquark model, discussing its LHC phenomenology and showing that it has nontrivial predictions for neutron-antineutron oscillation experiments.' author: - Nima Assad - Bartosz Fornal - Benjamín Grinstein bibliography: - 'bibliography.bib' title: Baryon Number and Lepton Universality Violation in Leptoquark and Diquark Models --- Introduction ============ Protons have never been observed to decay. Minimal grand unified theories (GUTs) [@Georgi:1974sy; @Fritzsch:1974nn] predict proton decay at a rate which should have already been measured. The only four-dimensional GUTs constructed so far based on a single unifying gauge group with a stable proton require either imposing specific gauge conditions [@Karananas:2017mxm] or introducing new particle representations [@Fornal:2017xcj]. A detailed review of the subject can be found in [@Nath:2006ut]. Lack of experimental evidence for proton decay [@Miura:2016krn] imposes severe constraints on the form of new physics, especially on theories involving new bosons with masses well below the GUT scale. For phenomenologically viable models of physics beyond the Standard Model (SM) the new particle content cannot trigger fast proton decay, which seems like an obvious requirement, but is often ignored in the model building literature. Simplified models with additional scalar leptoquarks and diquarks not triggering tree-level proton decay were discussed in detail in [@Arnold:2012sd], where a complete list of color singlet, triplet and sextet scalars coupled to fermion bilinears was presented. An interesting point of that analysis is that there exists only one scalar leptoquark, namely $(3,2)_{\frac76}$ (a color triplet electroweak doublet with hypercharge $7/6$) that does not cause tree-level proton decay. In this model dimension five operators that mediate proton decay can be forbidden by imposing an additional symmetry [@Arnold:2013cva]. In this paper we collect the results of [@Arnold:2012sd] and extend the analysis to vector particles. This scenario might be regarded as more appealing than the scalar case, since the new fields do not contribute to the hierarchy problem. We do not assume any global symmetries, but we do comment on how imposing a larger symmetry can remove proton decay that is introduced through nonrenormalizable operators, as in the scalar case. 
Since many models for the recently discovered $B$ meson decay anomalies [@Aaij:2014ora; @Aaij:2017vbb] rely on the existence of new scalar or vector leptoquarks, it is interesting to investigate which of the new particle explanations proposed in the literature do not trigger rapid proton decay. Surprisingly, the requirement of no proton decay at tree level singles out only a few models, two of which involve the vector leptoquarks $(3,1)_{\frac23}$ and $(3,3)_{\frac23}$, respectively. Remarkably, these very same representations have been singled out as giving better fits to $B$ meson decay anomalies data [@Alonso:2015sja]. An interesting question we consider is whether there exists a UV complete extension of the SM containing such leptoquarks in its particle spectrum. Finally, although the phenomenology of leptoquarks has been analyzed in great detail, there still remains a gap in the discussion of diquarks. In particular, neutron-antineutron ($n - \bar{n}$) oscillations have not been considered in the context of vector diquark models. We fill this gap by deriving an estimate for the $n - \bar{n}$ oscillation rate in a simple vector diquark model and discuss its implications for present and future experiments.

The paper is organized as follows. In Sec. \[repr\] we study the order at which proton decay first appears in models including new color triplet and sextet representations and briefly comment on their experimental status. In Sec. \[leptvec\] we focus on the unique vector leptoquark model which does not suffer from tree-level proton decay and has an appealing UV completion. In particular, we study its implications for $B$ meson decays. In Sec. \[other\] we analyze a model with a single vector color sextet, discussing its LHC phenomenology and implications for $n - \bar{n}$ oscillations. Section \[conclusions\] contains conclusions.

| Field               | ${\rm SU}(3)_c\times {\rm SU}(2)_L\times {\rm U}(1)_Y$ reps. |
|---------------------|--------------------------------------------------------------|
| Scalar leptoquark   | $\left(3,2\right)_{\frac76}'$ |
| Scalar diquark      | $\left(3,1\right)_{\frac23}$, $\left(6,1\right)_{-\frac23}$, $\left(6,1\right)_\frac13$, $\left(6,1\right)_{\frac43}$, $\left(6,3\right)_\frac13$ |
| Vector leptoquark   | $\left(3, 1\right)_\frac23'$, $\left(3,1\right)_\frac53$, $\left(3, 3\right)_\frac23'$ |
| Vector diquark      | $\left(6,2\right)_{-\frac16}$, $\left({6},2\right)_{\frac56}$ |

 : []{data-label="table2"}

Viable leptoquark and diquark models {#repr}
====================================

For clarity, we first summarize the combined results of [@Arnold:2012sd] and this work in Table \[table2\], which shows the only color triplet and color sextet models that do not exhibit tree-level proton decay. The scalar case was investigated in [@Arnold:2012sd], whereas in this paper we concentrate on vector particles. As explained below, the representations denoted by primes exhibit proton decay through dimension five operators (see also [@Arnold:2013cva]).
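As a simple bookkeeping aid (our own sketch, not part of the original analysis), the hypercharge assignments behind entries of this kind can be checked mechanically; the snippet below reproduces a few of the operator–boson pairings listed in the operator table of the next subsection.

```python
# Hypercharge bookkeeping: for a few fermion bilinears, compute the hypercharge
# a new vector boson must carry for the coupling to be gauge invariant, and
# compare with the representations quoted in the tables.
Y = {"Q_L": 1/6, "u_R": 2/3, "d_R": -1/3, "L_L": -1/2, "e_R": -1}

def boson_hypercharge(bar_field, field):
    # bilinear bar(f1) f2 carries Y = -Y(f1) + Y(f2); the boson coupled to it
    # must carry the opposite hypercharge
    return -(-Y[bar_field] + Y[field])

checks = {
    ("Q_L", "L_L"): 2/3,   # vector leptoquarks (3,1)_{2/3} and (3,3)_{2/3}
    ("d_R", "e_R"): 2/3,   # same hypercharge, color triplet
    ("u_R", "e_R"): 5/3,   # the (3,1)_{5/3} entry
}
for (f1, f2), expected in checks.items():
    got = boson_hypercharge(f1, f2)
    print(f"bar({f1}) {f2}:  Y_boson = {got:+.3f}  (table: {expected:+.3f})")
    assert abs(got - expected) < 1e-12
```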
We note that although the renormalizable proton decay channels involving leptoquarks are well-known in the literature, to our knowledge the nonrenormalizable channels have not been considered anywhere apart from the scalar case in [@Arnold:2013cva]. Proton decay in vector models ----------------------------- We first enumerate all possible dimension four interactions of the new vector color triplets and sextets with fermion bilinears respecting gauge and Lorentz invariance. A complete set of those operators is listed in Table \[table1\] [@Dorsner:2016wpm]. For the vector case there are two sources of proton decay. The first one comes from tree-level diagrams involving a vector color triplet exchange, as shown in Fig. \[fig:1\]. This excludes the representations $(3, 2)_\frac16$ and $(3, 2)_{-\frac56}$, since they would require unnaturally small couplings to SM fermions or very large masses to remain consistent with proton decay limits. The second source comes from dimension five operators involving the vector leptoquark representations $(3, 1)_\frac23$ and $(3, 3)_\frac23$: $$\label{dim5op} \frac{1}{\Lambda}\, (\overline{Q}^c_L H^\dagger)\gamma^\mu d_R V_\mu \, , \ \ \ \ \frac{1}{\Lambda}\, (\overline{Q}^c_L \tau^A H^\dagger)\gamma^\mu d_R V^A_\mu \ ,$$ respectively. Those operators can be constructed if no additional global symmetry forbidding them is imposed and allow for the proton decay channel shown in Fig. \[fig:2\], resulting in a lepton (rather than an antilepton) in the final state. The corresponding proton lifetime estimate is: $$\label{pl} \tau_p \approx \left(2.5\times 10^{32} \ {\rm years}\right) \left(\frac{M}{10^4 \ {\rm TeV}}\right)^4 \left(\frac{\Lambda}{M_{\rm Pl}}\right)^2, $$ where the leptoquark tree-level coupling and the coefficient of the dimension five operator were both set to unity. The numerical factor in front of Eq. (\[pl\]) is the current limit on the proton lifetime from the search for $p\to K^+\pi^+e^-$ [@Olive:2016xmw]. Even in the most optimistic scenario of the largest suppression of proton decay, i.e., when the new physics behind the dimension five operator does not appear below the Planck scale, those operators are still problematic for $M \lesssim 10^4 \ {\rm TeV}$, which well includes the region of interest for the $B$ meson decay anomalies. The dimension five operators can be removed by embedding the vector leptoquarks into UV complete models. As argued in [@Arnold:2013cva] for the scalar case, it is sufficient to impose a discrete subgroup $\mathcal{Z}_3$ of a global ${\rm U}(1)_{B-L}$ to forbid the problematic dimension five operators. They are also naturally absent in models with gauged ${\rm U}(1)_{B-L}$.[^1] Ultimately, as shown in Table \[table2\], there are only five color triplet or sextet vector representations that are free from tree-level proton decay, two of which produce dimension five proton decay operators. In the scalar case, as shown in [@Arnold:2012sd], there are six possible representations with only one suffering from dimension five proton decay.   
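To see what the lifetime estimate of Eq. (\[pl\]) means numerically, a short evaluation (ours; the formula, the reference value $2.5\times 10^{32}$ years, unit couplings and $\Lambda = M_{\rm Pl}$ are taken directly from the text above) is given below.

```python
# Numerical reading of the proton lifetime estimate in Eq. (pl):
# tau_p ~ 2.5e32 yr * (M / 1e4 TeV)^4 * (Lambda / M_Pl)^2
def tau_p_years(M_TeV, Lambda_over_MPl=1.0):
    return 2.5e32 * (M_TeV / 1.0e4) ** 4 * Lambda_over_MPl ** 2

for M in (1.0, 16.0, 1.0e3, 1.0e4, 1.0e5):
    print(f"M = {M:9.1f} TeV  ->  tau_p ~ {tau_p_years(M):.2e} years")
# Even with Lambda = M_Pl, masses below ~1e4 TeV (including the ~16 TeV region
# relevant for the B anomalies) fall short of the quoted experimental bound.
```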
| Operator | ${\rm SU}(3)_c$ | ${\rm SU}(2)_L$ | ${\rm U}(1)_Y$ | $p$ decay |
|---|---|---|---|---|
| $\overline{Q}^c_L \gamma^\mu {u}_R V_\mu$ | $3$ | $2$ | $-\,5/6$ | tree-level |
|  | $\bar{6}$ | $2$ | $-\,5/6$ | – |
| $\overline{Q}^c_L \gamma^\mu {d}_R V_\mu$ | $3$ | $2$ | $1/6$ | tree-level |
|  | $\bar{6}$ | $2$ | $1/6$ | – |
| $\overline{Q}_L \gamma^\mu L_L V_\mu$ | $3$ | $1, 3$ | $2/3$ | dim 5 |
| $\overline{Q}^c_L \gamma^\mu {e}_R V^*_\mu$ | ${3}$ | $2$ | $-\,5/6$ | tree-level |
| $\overline{L}^c_L\gamma^\mu {u}_R V^*_\mu$ | ${3}$ | $2$ | $1/6$ | tree-level |
| $\overline{L}^c_L\gamma^\mu {d}_R V^*_\mu$ | ${3}$ | $2$ | $-\,5/6$ | tree-level |
| $\overline{u}_R\gamma^\mu {e}_R V_\mu$ | ${3}$ | $1$ | $5/3$ | dim 7 |
| $\overline{d}_R\gamma^\mu {e}_R V_\mu$ | ${3}$ | $1$ | $2/3$ | dim 5 |

\[table1\]

[Figure \[fig:1\]: tree-level proton decay diagram mediated by a vector color triplet.]

[Figure \[fig:2\]: proton decay channel induced by the dimension five operators (\[dim5op\]), with a lepton (rather than an antilepton) in the final state.]

Leptoquark phenomenology {#qqq}
------------------------

The phenomenology of scalar and vector leptoquarks has been extensively discussed in the literature [@Buchmuller:1986zs; @Davidson:1993qk; @Hewett:1997ce] and we do not attempt to provide a complete list of all relevant papers here. For an excellent review and many references see [@Dorsner:2016wpm], which is focused primarily on light leptoquarks. Low-scale leptoquarks have recently become a very active area of research due to their potential for explaining the experimental hints of new physics in $B$ meson decays, in particular $B^+ \rightarrow K^+ \ell^+ \ell^-$ and $B^0 \rightarrow K^{*0} \ell^+ \ell^-$, for which a deficit in the ratios $R_{K^{(*)}}=\text{Br}(B\!\to\! K^{(*)}\mu^+\mu^-)/\text{Br}(B\!\to\! K^{(*)}e^+e^-)$ with respect to the SM expectations has been reported [@Aaij:2014ora; @Aaij:2017vbb]. A detailed analysis of the anomalies can be found in [@Geng:2017svp; @Ciuchini:2017mik; @Hiller:2017bzc; @DAmico:2017mtc; @Altmannshofer:2017yso; @Capdevila:2017bsm]. Several leptoquark models have been proposed to alleviate this tension and are favored by a global fit to $R_{K^{(*)}}$, $R_{D^{(*)}}$ and other flavor observables. Surprisingly, not all of those models remain free from tree-level proton decay. The leptoquark models providing the best fit to data with just a single new representation are: scalar $(3,2)_{\frac16}$ [@Becirevic:2016yqi], vector $(3,1)_{\frac23}$ [@Alonso:2015sja; @Buttazzo:2017ixm] and vector $(3,3)_{\frac23}$ [@Fajfer:2015ycq]. Among those, only the models with the vector leptoquarks $(3,1)_{\frac23}$ and $(3,3)_{\frac23}$ are naturally free from any tree-level proton decay, since for the scalar leptoquark $(3,2)_{\frac16}$ there exists a dangerous quartic coupling involving three leptoquarks and the SM Higgs [@Arnold:2012sd] which triggers tree-level proton decay. Interestingly, as indicated in Table \[table1\], both vector models $(3,1)_{\frac23}$ and $(3,3)_{\frac23}$ suffer from dimension five proton decay and require imposing an additional symmetry to eliminate it. An elegant way to do it would be to extend the SM symmetry by a gauged ${\rm U}(1)_{B-L}$. Actually, such an extended symmetry would eliminate also the tree-level proton decay in the model with the scalar leptoquark $(3,2)_{\frac16}$. However, as we will see in Sec.
\[leptvec\], only in the case of the vector leptoquark $(3,1)_{\frac23}$ does there exist a very appealing SM extension which intrinsically contains such a state in its spectrum, simultaneously forbidding dimension five proton decay.

Diquark phenomenology {#dq}
---------------------

The literature on the phenomenology of diquarks is much more limited. It focuses on scalar diquarks [@Hewett:1988xc] and predominantly looks at three aspects: LHC discovery reach for scalar diquarks [@Tanaka:1991nr; @Atag:1998xq; @Cakir:2005iw; @Chen:2008hh; @Han:2009ya; @Gogoladze:2010xd; @Berger:2010fy; @Baldes:2011mh; @Richardson:2011df; @Karabacak:2012rn; @Kohda:2012sr; @Das:2015lna; @Chivukula:2015zma; @Zhan:2013sza], $n - \bar{n}$ oscillations mediated by scalar diquarks [@Mohapatra:1980qe; @Babu:2008rq; @Ajaib:2009fq; @Gu:2011ff; @Baldes:2011mh; @Babu:2012vc; @Arnold:2012sd; @Babu:2013yca] and baryogenesis [@Babu:2008rq; @Arnold:2012sd; @Baldes:2011mh; @Gu:2011ff; @Babu:2012vc; @Babu:2013yca]. Studies of vector diquarks investigate only their LHC phenomenology [@Arik:2001bc; @Sahin:2009dca; @Richardson:2011df; @Zhang:2010kr; @Grinstein:2011yv; @Grinstein:2011dz], concentrating on their interactions with quarks. In Sec. \[other\] we close the gap in diquark phenomenology by discussing the implications of a vector diquark model for $n - \bar{n}$ oscillation experiments.

Vector leptoquark model {#leptvec}
=======================

As emphasized in Sec. \[qqq\], the SM extended by just the vector leptoquark $(3,1)_{\frac23}$ is a unique model, which, apart from being free from tree-level proton decay, has a very simple and attractive UV completion, automatically forbidding dimension five proton decay operators.

Pati-Salam unification
----------------------

A priori, any of the leptoquarks can originate from an extra GUT irrep, either scalar or vector. In particular, the vector leptoquark $(3,1)_{\frac23}$ could be a component of the vector $40$ irrep of ${\rm SU}(5)$. Nevertheless, this generic explanation does not seem to be strongly motivated or predictive. Another interpretation of the vector $(3,1)_{\frac23}$ state arises in composite models [@Gripaios:2009dq; @Gripaios:2014tna; @Barbieri:2016las]. The third, perhaps the most desirable option, is that the vector leptoquark $(3,1)_{\frac23}$ is the gauge boson of a unified theory. Indeed, this scenario is realized if one considers partial unification based on the Pati-Salam gauge group: $${\rm SU}(4) \times {\rm SU}(2)_L \times {\rm SU}(2)_R$$ at higher energies [@Pati:1974yy].[^2] In this case the vector leptoquark $(3,1)_{\frac23}$ emerges naturally as the new gauge boson of the broken symmetry, and is completely independent of the symmetry breaking pattern. It is also interesting that the Pati-Salam partial unification model can be fully unified into an ${\rm SO}(10)$ GUT. The fermion irreps of the Pati-Salam model, along with their decomposition into SM fields, are $$\label{eq:PatiSalamIrreps} \begin{aligned} (4,2,1) &= (3,2)_{\frac16} \oplus (1,2)_{-\frac12}\\ (\bar4,1,2) &= (\bar3,1)_{\frac13} \oplus (\bar3,1)_{-\frac23} \oplus (1,1)_1 \oplus (1,1)_0\,. \end{aligned}$$ Interestingly, the theory is free from tree-level proton decay via gauge interactions. The explanation for this is straightforward. Since ${\rm SU}(4) \supset {\rm SU}(3)_c \times {\rm U}(1)_{B-L}$, $B\!-\!L$ is conserved.
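To make the group-theoretic origin of this state explicit, it may help to record the standard branching of the ${\rm SU}(4)$ gauge fields under its SM-colour subgroup (elementary group theory added here for orientation; it is not spelled out in the text): $$15 \;=\; (8)_{0}\,\oplus\,(1)_{0}\,\oplus\,(3)_{\frac43}\,\oplus\,(\bar{3})_{-\frac43} \qquad \text{under}\ \ {\rm SU}(3)_c\times {\rm U}(1)_{B-L}\,,$$ where the subscript denotes the $B\!-\!L$ charge. The $(3)_{\frac43}$ component is an ${\rm SU}(2)_R$ singlet, so after the breaking to the SM it carries hypercharge $Y=\tfrac12(B\!-\!L)=\tfrac23$, i.e. it is precisely the vector leptoquark $(3,1)_{\frac23}$ discussed here.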
However, after the Pati-Salam group breaks down to the SM, the interactions of the leptoquark $(3,1)_{\frac23}$ with quarks and leptons have an accidental $B\!+\!L$ global symmetry. Those two symmetries combined result in both baryon and lepton number being conserved in gauge interactions, thus no proton decay can occur via a tree-level exchange of $(3,1)_{\frac23}$. In addition, there are no gauge-invariant dimension five proton decay operators in the Pati-Salam model involving the vector leptoquark $(3,1)_{\frac23}$. This was actually expected from the fact that ${\rm SU}(4) \supset {\rm SU}(3)_c \times {\rm U}(1)_{B-L}$ and, as discussed in Sec. \[qqq\], a ${\rm U}(1)_{B-L}$ symmetry is sufficient to forbid such operators. Flavor structure ---------------- The quark and lepton mass eigenstates are related to the gauge eigenstates through $n_f\times n_f$ unitary matrices, with $n_f=3$ the number of families of quarks and leptons. Expressing the interactions that couple the $(3,1)_{\frac23}$ vector leptoquark to the quark and the lepton in each irrep of Eqs.  in terms of mass eigenstates, one must include unitary matrices, similar to the Cabibbo-Kobayashi-Maskawa (CKM) matrix for the quarks in the SM, that measure the misalignment of the lepton and quark mass eigenstates: $$\begin{aligned} \label{eq:newINT} \mathcal{L} \ \supset \ \frac{g_4}{\sqrt2}\, V_\mu \,&\Big[L^u_{ij}\,(\bar u^i\gamma^\mu P_L\nu^j)+L^d_{ij}\,(\bar d^i \gamma^\mu P_L e^j) \nonumber\\ &+\ R_{ij}\,(\bar d^i\gamma^\mu P_R \, e^j)\Big] + {\rm h.c.}\,.\end{aligned}$$ The ${\rm SU}(4)$ gauge coupling constant, $g_4$, is not an independent paramter but fixed by the QCD coupling constant at the scale $M$ of the masses of the vector bosons of ${\rm SU}(4)$; to leading order $g_4(M)=\!\sqrt{4\pi \alpha_s(M)}\approx0.94$ at $M=16$ TeV, where, as will become clear later, this is the maximum value of $M$ consistent with the $R_{K^{(*)}}$ anomaly in this model. The unitary matrices $L^u$ and $L^d$, the CKM matrix $V$ and the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix $U$ satisfy $L^u= VL^dU$. B meson decays -------------- In the SM, flavor-changing neutral currents with $\Delta B=-\Delta S=1 $ are described by the effective Lagrangian [@Grinstein:1988me; @Buchalla:1995vs]: $$\label{eq:Leffnc} {\cal{L}}= - \frac{4 G_F}{\sqrt{2}} V_{tb}^{\phantom{*}}V_{ts}^*\bigg(\sum_{k=7}^{10} C_k \mathcal{O}_k+\sum_{i,j}C^{ij}_\nu \mathcal{O}^{ij}_\nu+ ...\bigg)\,, \ $$ where the ellipsis denote four-quark operators, $\mathcal{O}_7$ and $\mathcal{O}_8$ are electro- and chromo-magnetic-moment-transition operators, and $\mathcal{O}_9$, $\mathcal{O}_{10}$ and $\mathcal{O}_\nu$ are semi-leptonic operators involving either charged leptons or neutrinos:[^3] $$\begin{aligned} \mathcal{O}_{9(10)} &= \frac{e^2}{(4 \pi)^2} \big[\bar s \gamma_\mu P_ Lb\big]\big[\bar{\mu} \gamma^\mu (\gamma_5)\mu\big] \ , \\ \mathcal{O}^{ij}_\nu &=\frac{2e^2}{(4 \pi)^2} \big[\bar s \gamma^\mu P_L b\big] \big[\bar \nu^i\gamma^\mu P_L\nu^j\big] \ .\end{aligned}$$ Chirally-flipped $(b_{L(R)}\!\rightarrow \!b_{R(L)})$ versions of all these operators are denoted by primes and are negligible in the SM. 
New physics (NP) can generate modifications to the Wilson coefficients of the above operators, and, moreover, it can generate additional terms in the effective Lagrangian, $$\Delta\mathcal{L}=- \frac{4 G_F}{\sqrt{2}} V_{tb}^{\phantom{*}}V_{ts}^*\left(C_S \mathcal{O}_S +C_P \mathcal{O}_P + C'_S \mathcal{O}'_S +C'_P \mathcal{O}'_P \right)$$ in the form of scalar operators: $$\begin{aligned} \mathcal{O}_S^{(\prime)} & = \frac{e^2}{(4 \pi)^2} \big[\bar s P_{R(L)} b\big] \big[\bar{\mu} \mu\big] \ , \\ \mathcal{O}_P^{(\prime)} &= \frac{e^2}{(4 \pi)^2} \big[\bar s P_{R(L)} b\big] \big[\bar{\mu} \gamma_5 \mu\big] \ .\end{aligned}$$ Tensor operators cannot arise from short distance NP with the SM linearly realized and, moreover, under this assumption $C_P=-C_S$ and $C'_P=C'_S$ [@Alonso:2014csa]. Exchange of the $(3,1)_{\frac23}$ vector leptoquark gives tree-level contributions to the Wilson coefficients at its mass scale, $M$: $$\begin{aligned} \Delta C_9(M)&=-\Delta C_{10}(M) \!=\!-\frac{2\pi^2}{\sqrt2\, G_FM^2}\frac{g_4^2}{e^2}\frac{{L_{b\mu}^{d*}L_{s\mu}^{d}}}{V_{tb}^{\phantom{*}}V_{ts}^*} , \\ \Delta C'_9(M)&=\Delta C'_{10}(M)=-\frac{2\pi^2}{\sqrt2\,G_FM^2}\frac{g_4^2}{e^2}\frac{{R_{b\mu}^*R_{s\mu}}}{V_{tb}^{\phantom{*}}V_{ts}^*} \ ,\\ \Delta C_S(M)&=-\frac{4\pi^2}{\sqrt2\,G_FM^2}\frac{g_4^2}{e^2}\frac{{L^{d*}_{b\mu}R_{s\mu}}}{V_{tb}^{\phantom{*}}V_{ts}^*} \ ,\\ \Delta C'_S(M)&=-\frac{4\pi^2}{\sqrt2\,G_FM^2}\frac{g_4^2}{e^2}\frac{{R_{b\mu}^*L_{s\mu}^{d}}}{V_{tb}^{\phantom{*}}V_{ts}^*}\ ,\\ \Delta C^{ij}_\nu(M)&=0 \ .\end{aligned}$$ The recent $B$ meson decay measurements [@Aaij:2014ora; @Aaij:2017vbb] show an excess above the SM background in the ratios $R_{K^{(*)}} =\Gamma(B\!\rightarrow \!K^{(*)}\mu\mu)\,/\,\Gamma(B\!\rightarrow\! K^{(*)}ee)$. Those anomalies are best fit by $\Delta C_9=-\Delta C_{10}\approx-0.6$ [@Geng:2017svp], which requires $(g_4^2/M^2)L_{b\mu}^{d*}L_{s\mu}^{d}\approx 1.8\times 10^{-3}~\text{TeV}^{-2}$. Assuming $L_{b\mu}^{d*}L_{s\mu}^{d}=\tfrac12$, which is the largest value allowed by unitarity, we obtain the previously quoted leptoquark mass of $M \approx 16 \ {\rm TeV}$. Limits on $\Delta C'_{9,10}$ and $\Delta C_S^{(\prime)}$ can be accommodated by adjusting $R_{s\mu}$ and $R_{b\mu}$. Experimental bounds on $R_{K^{(*)}\nu} =\text{Br}(B\to K^{(*)}\nu\bar\nu)/$ $\text{Br}(B\to K^{(*)}\nu\bar\nu)^{\text{SM}}=\tfrac13\sum_{ij}|\delta^{ij}+C^{ij}_\nu/C_\nu^{\text{SM}}|^2$, where $C_\nu^{\text{SM}}\simeq-6.35$ [@Buras:2014fpa], severely constrain theoretical models for those anomalies. As seen above, the $(3,1)_{\frac23}$ vector leptoquark evades this constraint by giving no correction at all to $C_\nu$; the result holds generally for this type of NP mediator at tree level [@Alonso:2015sja; @Calibbi:2015kma]. It has been pointed out that generally the condition $\Delta C_\nu(M)=0$ is not preserved by renormalization group running of the Wilson coefficients [@Feruglio:2017rjo]. Because of the flavor structure of the interaction in Eq.  there are no “penguin” or wave function renormalization contributions to the running of $\Delta C_\nu$ down to the electroweak scale. 
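Before completing the discussion of the renormalization group running, a quick numerical cross-check (ours) of the two numbers quoted above may be useful; the SM inputs used in the sketch are assumed values, not taken from the paper, so the output only reproduces the quoted figures to within rounding.

```python
# Cross-check of: (g_4^2/M^2) L ~ 1.8e-3 TeV^-2 from Delta C_9 = -Delta C_10 ~ -0.6,
# and M ~ 16 TeV for the maximal flavour factor L = 1/2 with g_4 ~ 0.94.
from math import pi, sqrt

G_F   = 1.166e-5          # GeV^-2            (assumed SM input)
alpha = 1 / 133.0         # alpha_em near the b-quark scale (assumed)
e2    = 4 * pi * alpha
Vtbts = 0.040             # |V_tb V_ts^*|     (assumed)
dC9   = 0.6               # |Delta C_9| preferred by the fits

# invert |dC9| = (2 pi^2 / (sqrt(2) G_F M^2)) (g4^2/e^2) (L / |V_tb V_ts^*|)
g4sq_L_over_M2 = dC9 * sqrt(2) * G_F * e2 * Vtbts / (2 * pi ** 2)   # GeV^-2
print(f"(g4^2/M^2) L ~ {g4sq_L_over_M2 * 1e6:.1e} TeV^-2")  # close to the quoted 1.8e-3

g4, L = 0.94, 0.5
M = sqrt(g4 ** 2 * L / g4sq_L_over_M2) / 1e3                # TeV
print(f"M ~ {M:.0f} TeV for L = 1/2")                       # close to the quoted 16 TeV
```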
The only contribution comes from the renormalization by exchange of SU(2) gauge bosons that mixes the singlet operator $(\bar q \gamma^\mu P_L e)(\bar e \gamma_\mu P_L q)$ into the triplet, $(\bar q\,\tau^a \gamma^\mu P_L e)(\bar e\,\tau^a \gamma_\mu P_L q)$, resulting in $$\Delta C^{ij}_\nu(M_W)=-\frac{3}{4\sqrt2 \,G_FM^2}\frac{g_4^2}{\sin^2\theta_w}\ln\left(\frac{M}{M_W}\right) \frac{S_{sj}S^*_{bi}}{V_{tb}^{\phantom{*}}V_{ts}^*} \ ,$$ where $S\!=\!V^\dagger L^u\!=\!L^dU$. The vector contribution to the rate does not interfere with the SM, which implies $R_{K^{(*)}\nu} -1=$ $\frac13\sum_{ij} |C^{ij}_\nu/C_\nu^{\text{SM}}|^2$; using $R_{K^{(*)}\nu} <4.3$ [@Lutz:2013ftz], we obtain the condition $M >0.8$ TeV. Since $\ln(M/M_W)$ is not large for $M \!\approx\! 16 \ {\rm TeV}$, the leading log term is subject to sizable $(\approx 100\%)$ corrections. However, a complete one-loop calculation is beyond the scope of this work. We pause to comment on the remarkable cancellation of the interference term between the SM and NP contributions to the rate for $B\to K^{(*)}\nu\bar\nu$ and the absence of a sum over generations in the pure NP contribution to the rate. These observations hold generally for any vector leptoquark model that couples universally to quark and lepton generations. This can be easily seen by not rotating to the neutrino mass eigenstate basis, a good approximation for the nearly massless neutrinos. Vector leptoquark exchange leads to an effective interaction with flavor structure $(\bar s_L \gamma^\mu\nu_L^2)(\bar \nu_L^3\gamma^\mu b_L)$, while the SM always involves a sum over the same neutrino flavors $\sim\sum_j \bar\nu_L^j\gamma^\mu\nu_L^j$. There are never common final states to the SM and the NP mediated interactions and therefore no interference. Moreover, there is a single flavor configuration in the final state of the NP mediated interaction ($\bar \nu^3 \nu^2$) while there are three configurations in the SM case ( $\bar \nu^j \nu^j$, $j=1,2,3$). It has been suggested that the vector leptoquark $(3,1)_{\frac23}$ may alternatively be used to account for the anomaly in semileptonic decays to $\tau$-leptons [@Sakaki:2013bfa; @Alonso:2015sja]. Defining, as is customary, $R_{D^{(*)}}=\text{Br}(B\to D^{(*)}\tau\nu)/\text{Br}(B\to D^{(*)}\ell\nu)$, the SM predicts [@Bernlochner:2017jka] (see also [@Bigi:2017jbd; @Jaiswal:2017rve; @Fajfer:2012vx; @Becirevic:2012jf]) $R_D=0.299(3)$ and $R_{D^*}=0.257(3)$. These branching fractions have been measured by Belle [@Matyja:2007kt; @Bozek:2010xy; @Hirose:2016wfn], BaBar [@Lees:2012xj; @Lees:2013uzd] and LHCb [@Aaij:2015yra], and the average gives [@Amhis:2016xyh] $R_D=0.403(47)$ and $R_{D^*}=0.310(17)$. The effect of leptoquarks on $B$ semileptonic decays to $\tau$ is described by the following terms of the effective Lagrangian for charged current interactions [@Goldberger:1999yh; @Cirigliano:2012ab]: $$\begin{aligned} \mathcal{L} \ \supset \ - \frac{4 G_F}{\sqrt{2}} \,V_{cb}\,&\Big[(U_{\tau j}+\epsilon^j_{L})\,(\bar c\gamma^\mu P_L b)(\bar \tau\gamma_\mu P_L\nu^j)\nonumber\\ & + \epsilon_{s_R}^{j}\,(\bar c \,P_R\, b)(\bar \tau\, P_L\,\nu^j)\Big]+\text{h.c.}\end{aligned}$$ with $$\begin{aligned} \!\!\!\epsilon_{L}^j \!=\!\frac{g_4^2}{4\sqrt2 \,G_FM^2}\frac{L^u_{cj}L^{d*}_{b\tau}}{V_{cb}} \ , \ \ \ \ \epsilon_{s_R}^j \!=\! 
- \frac{g_4^2}{2\sqrt2 \,G_FM^2}\frac{L^u_{cj}R^{*}_{b\tau}}{V_{cb}}.\end{aligned}$$ The $B_c$ lifetime [@Alonso:2016oyd] and $B_c\to\tau\nu$ branching fraction [@Akeroyd:2017mhr] impose severe constraints on $\epsilon_{s_R}$; these are accommodated by abating $R^{\phantom{u}}_{b\tau}$. Hence, $$\begin{aligned} \frac{R_{D^{(*)}}}{R_{D^{(*)}}^{\rm \, SM}} &\approx 1\!+\! 2\, {\rm Re}\Big[\sum_j\epsilon\, L^u_{cj} L^{d*}_{b\tau}\Big] \!\approx 1\!+\!2\, {\rm Re}\!\left[\epsilon\, (VL^d)_{c\tau} L^{d*}_{b\tau}\right]\nonumber\\ &\approx 1+ 2\, {\rm Re}\!\left[\epsilon\, L^d_{s\tau} L^{d*}_{b\tau}\right] \leq 1 + \epsilon \ ,\end{aligned}$$ where $$\epsilon = \frac{g_4^2}{4\sqrt2 \,G_FV_{cb}M^2} \approx 0.1 \left(\frac{2 \ {\rm TeV}}{M}\right)^2 .$$

Towards a viable UV completion
------------------------------

We note that the simplest version of the model is heavily constrained by meson decay experiments. For generic, order one entries of the flavor matrices, the leptoquark mass is forced to be above the $1000 \ {\rm TeV}$ scale [@Valencia:1994cj; @Smirnov:2007hv; @Smirnov:2008zzb; @Carpentier:2010ue; @Kuznetsov:2012ai]. Surprisingly, all of the kaon decay bounds can be avoided if the unitary matrices $\hat{L}^d$ and $\hat{R}$ are of the form (see also [@Kuznetsov:2012ai]): $$\label{matricesL} \hat{L}^d \approx \begin{pmatrix}\ 0 & 0 & 1\ \\ L^d_{21} & L^d_{22} & 0\\ L^d_{31} & L^d_{32} & 0\ \end{pmatrix} , \qquad \hat{R} \approx \begin{pmatrix}\ 0 & 0 & 1\ \\ R_{21} & R_{22} & 0\\ R_{31} & R_{32} & 0\ \end{pmatrix},$$ where it is actually sufficient that the entries labeled as zero are just $\lesssim 10^{-4}$. Although current $\tau$ decay constraints are irrelevant for our choice of the leptoquark mass, with unsuppressed right-handed (RH) currents the $B$ meson decay bounds require $M \gtrsim 40 \ {\rm TeV}$ [@Kuznetsov:2012ai]. Interestingly, we find that if a mechanism suppressing the RH currents is realized in nature, the present bounds from $B$ meson decays are much less stringent and require merely $M \gtrsim 19 \ {\rm TeV}$. A possible way to suppress the RH currents is to associate them with a different gauge group than the left-handed (LH) ones, and to have the RH group spontaneously broken at a much higher scale than the LH group. A simple setting is offered by the gauge group $${\rm SU}(4)_L \times {\rm SU}(4)_R \times {\rm SU}(2)_L \times {\rm U}(1)$$ and does not require introducing any new fermion fields beyond the SM particle content and a RH neutrino: $$\label{eq:new2} \begin{aligned} (4,1, 2, 0) &= (3,2)_{\frac16} \oplus (1,2)_{-\frac12}\,,\\ (1,4,1, \tfrac12) &= (3,1)_{\frac23} \oplus (1,1)_0\,,\\ (1,4,1, -\tfrac12) &= (3,1)_{-\frac13} \oplus (1,1)_{-1} \ . \end{aligned}$$ Such a model predicts rates for $B^+ \!\rightarrow\! K^+ e^\mp \mu^\pm$ and $\mu \rightarrow e\,\gamma$, among others, just above the experimental bounds reported in [@Aubert:2006vb; @TheMEG:2016wtm]. The details of the model along with an analysis of the relevant experimental constraints will be the subject of a future publication [@work].[^4]

Vector diquark model {#other}
====================

In this section we discuss the properties of a model with just one additional representation – the vector color sextet: $$V_\mu= \begin{pmatrix} V_u\\ V_d \end{pmatrix}_{\!\mu}^{\alpha\beta} = \,(6, 2)_{-\frac16}\ ,$$ which is obviously free from proton decay. Although in the SM all fundamental vector particles are gauge bosons, we can still imagine that such a vector diquark arises from a vector GUT representation, for instance from a vector $40$ irrep of ${\rm SU}(5)$ [@Slansky:1981yr].
The Lagrangian for the model is given by: $$\label{Lag1} \begin{aligned} \mathcal{L}_{V} \;=\; & -\big(D_{[\mu} V_{\nu]}^{\alpha\beta}\big)^{\dagger} D^{[\mu} V^{\nu]\,\alpha\beta}+M^2\,V_\mu^{\alpha\beta\,\dagger} V^{\mu\,\alpha\beta}\\ & -\big(\lambda_{ij}\,\overline{d_{R\,i\,\alpha}}\,\gamma^\mu Q^{c}_{L\,j\,\beta}\, V_\mu^{\alpha\beta} + {\rm h.c.}\big)\,, \end{aligned}$$ where $\alpha, \beta = 1, 2, 3 $ are ${\rm SU}(3)_c$ indices, $i, j = 1, 2, 3$ are family indices and there is an implicit contraction of the ${\rm SU}(2)_L$ indices. We assume that the mass term arises from a consistent UV completion. Among the allowed higher dimensional operators, $n-\bar{n}$ oscillations, as we discuss in Sec. \[nn\], are mediated by the dimension five terms: $$\label{o1o2} \begin{aligned} \mathcal{O}_1 &= V_\mu^{\alpha\alpha'}\, V_\nu^{\beta\beta'}\, \big(\bar{u}^c_R\big)^{\gamma}\sigma^{\mu\nu} d_R^{\,\gamma'}\ \epsilon_{\alpha\beta\gamma}\, \epsilon_{\alpha'\beta'\gamma'}\ ,\\ \mathcal{O}_2 &= \big(\cdots\big)\ \epsilon_{\alpha\beta\gamma}\, \epsilon_{\alpha'\beta'\gamma'}\ . \end{aligned}$$

LHC bounds {#sec:LHC}
----------

Several studies of constraints and prospects for discovering vector diquarks at the LHC can be found in the literature. Most of the analyses have focused on the case of a sizable diquark coupling to quarks [@Arik:2001bc; @Sahin:2009dca; @Zhang:2010kr], although an LHC four-jet search that is essentially independent of the strength of the diquark coupling to quarks has also been considered [@Richardson:2011df]. There are severe limits on vector diquark masses arising from LHC searches for non-SM dijet signals [@Sirunyan:2016iap; @Aaboud:2017yvp]. For a coupling $\lambda_{ij} \approx 1$ $(i,j=1,2)$ those searches result in a bound on the vector diquark mass $$M \gtrsim 8\ {\rm TeV}\ .$$ Lowering the value of the coupling to $\lambda_{ij} \approx 0.01$ completely removes the LHC constraints from dijet searches and, at the same time, does not affect the strength of the four-jet signal arising from gluon fusion. Using the results of the analysis of four-jet events at the LHC presented in [@Richardson:2011df], the currently collected $37 \ {\rm fb}^{-1}$ of data by the ATLAS experiment [@Aaboud:2017yvp] with no evident excess above the SM background constrains the vector diquark mass to be $$\label{m1} M \gtrsim 2.5\ {\rm TeV}\ .$$ Additional processes constraining the vector diquark model include neutral meson mixing and radiative $B$ meson decays [@Isidori:2010kg; @Blake:2016olu]. The resulting bounds in the case of scalar diquark models were calculated in [@Arnold:2012sd; @Maalampi:1987pb; @Chakraverty:2000rm] and are similar here.

Neutron-antineutron oscillations {#nn}
--------------------------------

In the light of null results from proton decay searches [@Miura:2016krn], the possibility of discovering $n - \bar{n}$ oscillations has recently gained increased interest [@Phillips:2014fgb]. The most important reason for this is that the matter-antimatter asymmetry of the Universe requires baryon number to be violated at some point during its evolution. If processes with $\Delta B = 1$ are indeed suppressed or do not occur in Nature at all, the next simplest case involves $\Delta B \!=\! 2$, a baryon number breaking pattern that may result in $n - \bar{n}$ oscillations without proton decay. Moreover, the SM augmented only by RH neutrinos can have an additional ${\rm U}(1)_{B-L}$ gauge symmetry without introducing any gauge anomalies. Through this symmetry, processes with $\Delta B = 2$ would be accompanied by $\Delta L \!=\! 2$ lepton number violating ones, which in turn are intrinsically connected to the seesaw mechanism generating naturally small neutrino masses [@Minkowski:1977sc]. Models constructed so far propose $n - \bar{n}$ oscillations mediated by scalar diquarks, as mentioned in Sec. \[dq\]. Those oscillations proceed through a triple scalar vertex, so that the process is described at low energies by a local operator of dimension nine.
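For orientation, the oscillation-time limits invoked below translate into a bound on the $n\to\bar{n}$ transition matrix element via the standard relation $\delta m \simeq \hbar/\tau_{n\bar n}$; a minimal numerical sketch follows, in which the quoted free-neutron limit of roughly $2.7\times 10^{8}$ s is an assumed input of ours rather than a number taken from this text.

```python
# Convert a free-neutron oscillation time limit into a bound on the
# n -> nbar transition matrix element, delta_m ~ hbar / tau.
hbar_GeV_s = 6.582e-25            # GeV * s

def delta_m_upper(tau_seconds):
    return hbar_GeV_s / tau_seconds   # GeV

for tau in (2.7e8, 1.0e9):            # current (assumed) and a future-type limit
    print(f"tau > {tau:.1e} s  ->  |<nbar|H_eff|n>| < {delta_m_upper(tau):.1e} GeV")
# The first line gives ~2.4e-33 GeV, the scale any diquark-induced
# dimension-nine operator has to stay below.
```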
Below we show that $n - \bar{n}$ oscillations can also be mediated by vector diquarks, in particular within the model with just one new representation discussed in this section. The least suppressed channel is via a dimension five gauge invariant quartic interaction $\mathcal{O}_1$ in Eq. (\[o1o2\]) involving two vector diquarks $(6,2)_{-\frac16}$ and the SM up and down quarks, as shown in Fig. \[fig:4\], ultimately leading to $n - \bar{n}$ oscillations through a local low-energy effective operator of dimension nine, as in the case of scalar diquarks.[^5]

[Figure \[fig:4\]: $n-\bar{n}$ transition obtained by attaching the diquark couplings $\lambda_{ij}$ to the two vector diquarks entering the quartic operator $\mathcal{O}_1$.]

With the simplifying assumption that $c_1 \!\approx\! 1$ and neglecting other operators contributing to the signal we now estimate the rate of $n - \bar{n}$ oscillations in this model. The effective Hamiltonian corresponding to the operator $\mathcal{O}_1$ is: $$\begin{aligned} \mathcal{H}_{\rm eff} \ \approx\ & -\frac{c_1\,\lambda_{11}^2}{\Lambda\, M^4}\ \big(\bar{Q}^c_{L}\big)^{\alpha}\gamma_\mu\, d_{R}^{\,\alpha'}\ \big(\bar{Q}^c_{L}\big)^{\beta}\gamma_\nu\, d_{R}^{\,\beta'}\ \big(\bar{u}^c_{R}\big)^{\gamma}\sigma^{\mu\nu} d_{R}^{\,\gamma'}\\ &\times\big(\epsilon_{\alpha\beta\gamma}\, \epsilon_{\alpha'\beta'\gamma'} + \epsilon_{\alpha'\beta\gamma}\, \epsilon_{\alpha\beta'\gamma'}+\epsilon_{\alpha\beta'\gamma}\, \epsilon_{\alpha'\beta\gamma'}+\epsilon_{\alpha\beta\gamma'}\, \epsilon_{\alpha'\beta'\gamma}\big)\\ & +\ {\rm h.c.}\ . \end{aligned}$$ Combining this with the results of [@Arnold:2012sd; @Tsutsui:2004qc], we obtain an estimate for the $n - \bar{n}$ transition matrix element,[^6] $$|\langle \bar{n}|\, \mathcal{H}_{\rm eff}\, |n \rangle| \ \approx\ \frac{c_1\,\lambda_{11}^2}{\Lambda\, M^4}\times\mathcal{O}(10^{-4})\ {\rm GeV^6}\ ,$$ where $|n\rangle$ is the neutron state at zero momentum. The current experimental limit [@Abe:2011ky] implies, assuming $\lambda_{11} \approx 0.01$ (which is well below the LHC bound from dijet searches, as discussed earlier), that $$\label{lim} M \ \gtrsim\ 2.5\ {\rm TeV} \left(\frac{10^{8}\ {\rm TeV}}{\Lambda}\right)^{\!1/4} .$$ An interesting limit on the vector diquark mass is derived if we assume that the physics behind the triple diquark interaction with the SM Higgs is related to the physics responsible for providing the diquark its mass, i.e., for $\Lambda \approx M$. In such a case the $n - \bar{n}$ oscillation search provides the bound: $$M \ \gtrsim\ 90\ {\rm TeV}\ ,$$ much stronger than the LHC limit. If there is new physics around that energy scale, it should be discovered by future $n - \bar{n}$ oscillation experiments with increased sensitivity [@Phillips:2014fgb], which are going to probe the vector diquark mass scale up to $\sim 175 \ {\rm TeV}$. This is especially interesting since models with TeV-scale diquarks tend to improve gauge coupling unification [@Babu:2012vc; @Babu:2013yca].

Conclusions
===========

We have shown that lack of experimental evidence for proton decay singles out only a handful of phenomenologically viable leptoquark models. In addition, even leptoquark models with tree-level proton stability contain dangerous dimension five proton decay mediating operators and require an appropriate UV completion to remain consistent with experiments. This is especially relevant for the Standard Model extension involving the vector leptoquarks $(3,1)_{\frac23}$ or $(3,3)_{\frac23}$, since those are the only two models with a single new representation that do not suffer from tree-level proton decay and can explain the recently discovered anomalies in $B$ meson decays. The property which makes the vector leptoquark $(3,1)_{\frac23}$ even more appealing is that it fits perfectly into the simplest Pati-Salam unification model, where it can be identified as the new gauge boson. If such an exciting scenario is indeed realized in nature, the $B$ physics experiments can be used to actually probe the scale and various properties of grand unification!
In the second part of the paper we focused on a model with a vector diquark $(6,2)_{-\frac16}$ and showed that neutron-antineutron oscillations can be mediated by such a vector particle. The model is somewhat constrained by LHC dijet searches; however, it can still yield a sizable neutron-antineutron oscillation signal, that can be probed in current and upcoming experiments. It can also give rise to significant four-jet event rates testable at the LHC. It would be interesting to explore whether a vector diquark with a mass at the TeV scale can improve gauge coupling unification in non-supersymmetric grand unified theories, similarly to the scalar case [@Stone:2011dn], providing even more motivation for upgrading the neutron-antineutron oscillation experimental sensitivity.

Acknowledgments {#acknowledgments .unnumbered}
---------------

We thank Aneesh Manohar for useful conversations and Pavel Fileviez Pérez for comments regarding the manuscript. This research was supported in part by the DOE Grant No. DE-SC0009919.

[^1]: We note that the dimension five operators (\[dim5op\]) provide a baryon number violating channel which may be used to generate a cosmological baryon number asymmetry.

[^2]: Similar unification models have recently been constructed based on the gauge group ${\rm SU}(4) \times {\rm SU}(2)_L \times {\rm U}(1)$ [@Perez:2013osa; @Fornal:2015boa], in which the new gauge bosons also coincide with the vector leptoquark $(3,1)_{2/3}$.

[^3]: We have assumed the $B$ anomalies in $R_{K^{(*)}}$ arise from a suppression in the $\mu$ channel relative to the SM. The Pati-Salam $(3,1)_{2/3}$ vector leptoquark model can equally well accommodate an enhancement in the electron channel, but a suppression in the muon channel is preferred by global fits to data that include angular moments in $B\to K^*\mu\mu$ and various branching fractions in addition to $R_{K^{(*)}}$.

[^4]: Shortly after the results of our work became public, two other papers appeared proposing the explanation of $B$ decay anomalies by introducing new vector-like fields either within the Pati-Salam framework [@Calibbi:2017qbu] or in a model with an extended gauge group [@DiLuzio:2017vat].

[^5]: We note that $n - \bar{n}$ oscillations occur at the renormalizable level in a model involving the vector diquarks $(6,2)_{-1/6}$, $(6,2)_{5/6}$, and the scalar diquark $(6,1)_{-2/3}$, and proceed through a triple sextet interaction.

[^6]: The single particle state normalization adopted here is: $\langle n(\vec{k})\,|\,n(\vec{k}')\rangle = (2\pi)^3\,\delta^{(3)}(\vec{k}-\vec{k}')\,.$
{ "pile_set_name": "ArXiv" }
--- abstract: 'We study a class of timelike weakly extremal surfaces in flat Minkowski space ${\mathbb R}^{1+n}$, characterized by the fact that they admit a $C^1$ parametrization (in general not an immersion) of a specific form. We prove that if the distinguished parametrization is of class $C^k$, then the surface is regularly immersed away from a closed singular set of euclidean Hausdorff dimension at most $1+1/k$, and that this bound is sharp. We also show that, generically with respect to a natural topology, the singular set of a timelike weakly extremal cylinder in ${\mathbb R}^{1+n}$ is $1$-dimensional if $n=2$, and it is empty if $n \ge 4$. For $n=3$, timelike weakly extremal surfaces exhibit an intermediate behavior.' address: - 'Department of Mathematics, University of Toronto, Toronto, Ontario, Canada' - 'Dipartimento di Matematica, Università di Padova, Padova, Italy' - 'Dipartimento di Informatica, Università di Verona, Verona, Italy' author: - 'R. L. Jerrard' - 'M. Novaga' - 'G. Orlandi' title: On the regularity of timelike extremal surfaces --- Introduction ============ In this paper we study timelike extremal surfaces in $(1+n)$-dimensional flat Minkowski space. In particular, we focus on extremal immersions of a cylinder ${\mathbb R}\times{\mathbb S}^1$ into ${\mathbb R}^{1+n}$, which arise in models of closed cosmic strings and have been extensively studied in the physics community (see [@BoIn:34; @VS; @An:03] and references therein), as well as in the more recent mathematical literature [@Neu; @Li:04; @Mi:08; @KZZ; @KZ; @BHNO; @J; @NT], and have recently been proved [@BNO1; @J] to describe the dynamics of topological defects in various relativistic field theories in certain scaling limits. As for many geometric problems, timelike extremal surfaces present various kinds of singularities. For instance, it has been shown in [@BHNO] that a closed convex string in ${\mathbb R}^2$ with zero initial velocity shrinks to a point in finite time, while its shape approaches that of a circle. An analogous phenomenon can be found in other geometric evolutions such as the planar curvature flow [@GH] and the hyperbolic curvature flow of convex curves [@KKW]. However more complicated singularities can occur during the evolution (typically the formation of cusps), and a partial classification has been provided in [@EH], where the authors study self-similar singularity formation. A theory of generalized extremal surfaces in the varifolds sense (see [@allard]) has been recently proposed in [@BNO1; @BNO2]. In [@NT] it has been shown that any (immersed) timelike extremal cylinder in ${\mathbb R}^{1+2}$ necessarily develops singularities in finite time. In the same paper, the authors conjecture that this does not hold in ${\mathbb R}^{1+n}$ for $n\ge 3$, where existence of smooth timelike extremal cylinders is expected. On the other hand, there exist globally smooth timelike extremal surfaces with noncompact slices in ${\mathbb R}^{1+2}$, which are small perturbations of timelike planes [@Li:04; @KZZ]. The arguments of [@NT] rely heavily on a particular representation of extremal immersed cylinders, which we call the [*orthogonal gauge*]{}, known for a long time in the physics literature and first proved to be valid, as far as we know, in [@BHNO]. This representation also yields global weak solutions in the sense of [@BNO1; @BNO2]. 
The main goal of this paper is to estimate the dimension of the singular set of these weak solutions, which have the good property that they are images of $C^1$ maps (in general [*not immersions*]{}) of a specific form, see , below. In particular we prove that, if the map is of class $C^k$, then the dimension of the singular set is bounded above by $1+\frac 1 k$, and the bound is sharp. The upper bound on the dimension turns out to follow immediately from a classical refinement of Sard’s Theorem, due to Federer [@federer], so the construction of examples of [extremal]{} surfaces attaining this bound is the harder part of this result. We also show that the singular set is generically empty when $n>3$, confirming the conjecture of [@NT] in such dimensions. More precisely we show that, generically for $n>3$, given a closed curve $\Gamma$ immersed in ${\mathbb R}^n$ and a velocity field $v:\Gamma\to {\mathbb R}^n$, with $|v|<1$ and orthogonal to $\Gamma$, there exists a smooth globally immersed timelike extremal surface containing $\Gamma$ and tangent to $(1,v)$. For $n=3$, roughly speaking, both globally smooth immersed solutions and solutions that develop singularities occur for large sets of initial data (that is, sets with nonempty interior.) We start in Section \[S:2\] by quickly recalling some properties of the orthogonal gauge, including existence and (restricted) uniqueness of solutions of a Cauchy problem for timelike extremal surfaces. We also present some examples in Section \[S:4\] showing that uniqueness may fail without the restrictions imposed in Section \[S:2\]. Timelike extremal surfaces in the orthogonal gauge {#S:2} ================================================== Given an open interval $I\subset{\mathbb R}$, and an immersion $\psi: I\times {\mathbb R}\to {\mathbb R}^{1+n}$, possibly periodic with respect to the second variable, for an open set $U\subset I\times {\mathbb R}$ we define the Minkoswkian area of $\psi(U)$ to be $$\int_{U} \sqrt{| g|}\ , \qquad g := \det (g_{ij}), \qquad g_{ij} := (\partial_i \psi, \partial_j\psi)_m$$ where $(\cdot, \cdot)_m$ denotes the Minkowski inner product. This functional is also sometimes called the Nambu-Goto action. The surface parametrized by $\psi$ is said to be [*timelike*]{} if $g<0$ everywhere, and a timelike surface is [*extremal*]{} if $\psi$ is a critical point of the Minkowskian area functional with respect to compactly supported variations. It is noted in [@NT] that any timelike immersion of a surface into ${\mathbb R}^{1+n}$ can be reparametrized locally to have the form $$\psi(t,x) = (t, \gamma(t,x)). \label{formofpsi}$$ Here we will consider the initial value problem for timelike extremal surfaces with initial data of the form $$\gamma(0,x) = \gamma_0, \quad \gamma_t(0,x) = v_0, \label{initconduno}$$ where $\gamma_0\in C^1({\mathbb R}; {\mathbb R}^n)$ is an immersion and $v_0\in C^0({\mathbb R};{\mathbb R}^n)$ satisfies $v_0\cdot \gamma_{0}' = 0$ and $|v_0|<1$ everywhere. We call such a pair an [*admissible couple*]{}, and we say that an admissible couple is [*periodic*]{} if $\gamma_0$ and $v_0$ are periodic with the same period, which implies in particular that $\gamma_0$ parametrizes a closed curve. 
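As a quick symbolic check of the definitions just given (our own sketch; it assumes the mostly-plus Minkowski metric ${\rm diag}(-1,+1,+1)$ on ${\mathbb R}^{1+2}$, and the result is the same with the opposite signature convention), one can compute the induced metric determinant for a parametrization of the form $\psi(t,x)=(t,\gamma(t,x))$:

```python
# Symbolic computation of g = det(g_ij) for psi(t,x) = (t, gamma(t,x)):
#   g = (|gamma_t|^2 - 1) |gamma_x|^2 - (gamma_t . gamma_x)^2,
# so g < 0 (the surface is timelike) e.g. whenever |gamma_t| < 1,
# gamma_x != 0 and gamma_t . gamma_x = 0.
import sympy as sp

t, x = sp.symbols('t x', real=True)
g1, g2 = sp.Function('g1')(t, x), sp.Function('g2')(t, x)
psi = sp.Matrix([t, g1, g2])
eta = sp.diag(-1, 1, 1)                      # assumed signature convention

dpsi = [psi.diff(t), psi.diff(x)]
G = sp.Matrix(2, 2, lambda i, j: (dpsi[i].T * eta * dpsi[j])[0, 0])
det = sp.simplify(G.det())

gt = sp.Matrix([g1.diff(t), g2.diff(t)])
gx = sp.Matrix([g1.diff(x), g2.diff(x)])
expected = (gt.dot(gt) - 1) * gx.dot(gx) - gt.dot(gx) ** 2
print(sp.simplify(det - expected) == 0)      # True
```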
We remark that if $\gamma_0$ is an embedding, or more generally if $v_0\circ \gamma_0^{-1}$ is single-valued on $\mbox{Image}(\gamma_0)$, then the initial condition can be restated in the form $$\mbox{$\gamma_0$ parametrizes $\{x \in {\mathbb R}^ n : (0,x)\in M \}$,\ and $(1, v_0(x))\in T_{\psi(0,x)}M$ for every $x\in {\mathbb R}$.} \label{geometric.ic}$$ Two admissible couples $(\gamma_0,v_0)$, $(\hat \gamma_0, \hat v_0)$ are considered to be equivalent if there is a $C^1$ diffeomorphism $\lambda:{\mathbb R}\to {\mathbb R}$ such that $(\gamma_0,v_0) = (\hat \gamma_0, \hat v_0)\circ \lambda $. Equivalent couples encode exactly the same geometric data, and to any timelike surface $M$, whose $t=0$ slice is an immersed curve, one can assign an (equivalence class of) admissible couples, indeed possibly multiple equivalence classes if the curve is not embedded. Our approach is based on the observation, classical in the physics literature and straightforward to verify (see [@VS; @BHNO]), that if $\gamma\in C^k( I\times {\mathbb R}; {\mathbb R}^{n})$, $k\ge 1$ satisfies $$\begin{aligned} \label{vincolozero} \label{vincolouno} |\gamma_x|^2 - |\gamma_t|^2&=&1 \\\label{vincolodue} \gamma_x\cdot\gamma_t&=&0 \\\label{onde} \gamma_{tt} - \gamma_{xx} &=& 0 $$ for all $(t,x)\in I\times {\mathbb R}$, then $\psi(x,t) = (t, \gamma(t,x))$ is a solution of the Euler-Lagrange equations associated to the Minkowski area functional wherever $g\ne 0$, and hence is an extremal immersion near such points. This holds in the distributional sense if $k=1$ and classically if $k\ge 2$. In view of -, we will call such a parametrization the [*orthogonal gauge*]{}. The general solution $\gamma$ of - has the form $$\label{eqrepr} \gamma(t,x)=\frac{a(x+t)+b(x-t)}{2}$$ where $a,b\in C^1({\mathbb R};{\mathbb R}^n)$ are maps satisfying $$\label{vincoloab} |a'|=|b'|=1 \quad {\rm in\ }{\mathbb R}.$$ Indeed, is just d’Alembert’s formula, and once $\gamma$ is known to have the form , then the constraints , are easily seen to be equivalent to . Given a function $\gamma(t,x) = \frac 12( a(x+t)+b(x-t))$, with $a,b$ satisfying , we shall write in the sequel $\psi(t,x) := (t,\gamma(t,x))$ and $M := \mbox{Image}(\psi)$. We also define the singular set of $M$ as $$Sing := \{ \psi(t,x) : \mbox{rank}(\nabla \psi)(t,x) < 2 \} \ = \ \{ \psi(t,x) : \gamma_x(t,x) = 0\}.$$ We have that $M$ is timelike and regularly immersed in an open neighborhood of every point of $M\setminus Sing$, while, at every point of $Sing$, the orthogonal coordinate system degenerates and, as we will prove in Theorem \[teoex\] below, $M$ fails to be timelike. A stricter notion of singular set is $$Sing^* := \{ p\in Sing : \lim_{q\in M, q\to p} \tau(q)\mbox{ does not exist} \},$$ where $\tau(\cdot)$ is the (spatial) tangent $$\tau(p) = \frac{\gamma_x}{|\gamma_x|}\circ \psi^{-1}(p)$$ defined wherever it makes sense, which is at points $p\in M\setminus Sing$ where the set $\{\frac{\gamma_x}{|\gamma_x|}(t,x) : \psi(t,x)=p\}$ consists of exactly one element. We note that the definitions of $Sing$ and $Sing^*$ both have the drawback that they depend on the parametrization of $M$. We collect some known results in the following \[prop:summary\] Given an admissible couple $(\hat \gamma_0, \hat v_0)\in C^k\times C^{k-1}$, there exists an equivalent admissible couple $(\gamma_0, v_0)$ and a map $\gamma\in C^k({\mathbb R}\times {\mathbb R}; {\mathbb R}^n)$ of the form , , such that the initial condition holds. In addition, [**1**]{}. 
$\psi(t,x) = (t, \gamma(t,x))$ is timelike and an immersion in a neighborhood of every point where $\gamma_x\ne 0$, and it is neither timelike nor an immersion at points where $\gamma_x$ vanishes. [**2**]{}. $\psi$ is an extremal immersion wherever it is a immersion, and in particular this holds for $(t,x)$ in a neighborhood of $\{0\}\times {\mathbb R}$. [**3**]{}. If $\hat \psi$ is any extremal immersion of the form $\hat\psi(t,x) = (t, \hat\gamma(t,x))$ for $(t,x)\in I\times {\mathbb R}$ for some interval $I\subset {\mathbb R}$ containing $0$, and if $(\hat \gamma(0,\cdot), \hat \gamma_t(0,\cdot))$ is equivalent to $(\gamma_0, v_0)$, then $\psi$ is a reparametrization of $\hat \psi$, and thus $\psi(I\times {\mathbb R}) = \hat \psi(I\times {\mathbb R})$. [**4**]{}. $M = \mbox{Image}(\psi)$ can be identified with a global weak solution of the extremal surface equation, in the sense of [@BNO1; @BNO2]. This proposition implies in particular the local existence of a smooth timelike extremal surface $M$ satisfying the initial condition for an admissible couple $(\gamma_0, v_0)$ such that $v_0\circ \gamma_0^{-1}$ is single-valued, as well as the global existence of a weak solution. We show in Proposition \[prop:nonuniq\] below that the restriction of the uniqueness assertion [**3**]{} to the class of surfaces parametrized by maps to the form is in fact necessary; without this condition, uniqueness can fail. Given any admissible couple $(\hat \gamma_0, \hat v_0)$, we can always find an equivalent couple $(\gamma_0, v_0)$ such that $$\label{normalize.data} |\gamma_{0}'|^2 + |v_0|^2 = 1.$$ Letting $\gamma$ denote the solution of the wave equation with initial data , it is easy to check (see for example and below) that $\gamma$ satisfies , , thus proving the existence of an extremal immersion for the admissible couple $(\gamma_0, v_0)$. The proof of [**1**]{} is given in the proof of Theorem \[teoex\] below. The only subtle part is checking that $M$ is not timelike at $\psi(t,x)$, if $\gamma_x(t,x) = 0$; everything else follows easily from the definitions and . Concerning [**2**]{}, we have already noted that a straightforward computation shows that $\psi$ is an extremal immersion wherever it is an immersion, and it follows from [**1**]{} that $\psi$ is an immersion in a neighborhood of $\{0\}\times {\mathbb R}^n$. Finally, conclusions [**3**]{} and [**4**]{} are established in [@BHNO] and [@BNO1; @BNO2] respectively. They are proved for $\gamma$ which is periodic in the $x$ variable, but both facts are essentially local (due to finite propagation speed) and so the proofs work without change in the general case. In \[15, Theorem 4.1\], global existence of $C^2$ solutions is proved for an equation that, like (2-4)-(2-6), is equivalent to the equation for timelike extremal surfaces as long as the surfaces associated to the solutions remain immersed. In this result, the orthogonal gauge is not imposed, and the equations considered are thus nonlinear. We record some standard formulas. 
Differentiating we obtain $$\begin{aligned} \label{equno} \gamma_x(t,x)&=&\frac{a'(x+t)+b'(x-t)}{2} \\\label{eqdue} \gamma_t(t,x)&=&\frac{a'(x+t)-b'(x-t)}{2}.\end{aligned}$$ Letting $t=0$ in - and recalling , we deduce that $$\begin{aligned} a'(x)&=& \gamma_{0}'(x) + v_0(x) \label{eq:aprime}\\ b'(x)&=& \gamma_{0}'(x) - v_0(x) \label{eq:bprime}\end{aligned}$$ We define a [*cylinder*]{} to be a set $M\subset I\times{\mathbb R}^n$ that can be written [*globally*]{} as the image of a map $\psi$ of the form , where $\gamma(t, \cdot)$ is periodic with fixed period $E$ for every $t\in I$. It is straightforward to check that if one starts with a representative $(\hat \gamma_0, \hat v_0)$ of a periodic admissible couple such that does not hold, with $(\hat \gamma_0, \hat v_0)$ periodic of period $L$, then an equivalent couple $(\gamma_0, v_0)$ that satisfies is periodic with period $$E_0 := \int_0^L \frac{|\hat \gamma_{0}'(x)|}{\sqrt{1-|\hat v_0(x))|^2}}\,dx.$$ Then $a+b$ is periodic, and we see from , that $a', b'$ are periodic as well, all with period $E_0$. Hence, if $(\hat \gamma_0, \hat v_0)$ is a periodic admissible couple, then the surface associated to $(\hat \gamma_0, \hat v_0)$ by Proposition \[prop:summary\] is a cylinder. Notice that, given a solution $\gamma$, the corresponding couple $(a,b)$ is uniquely determined up to additive constants. In particular, the othogonal gauge provides a one-to-one correspondence between the set of all equivalence classes of admissible couples and the set $$X \ := \ \big\{(a,b)\in C^1({\mathbb R};{\mathbb R}^n)\times C^1({\mathbb R};{\mathbb R}^n):\ a'+b'\ \mbox{never vanishes}, \ |a'| = |b'| =1 \big\}/\sim$$ where $(a,b)\sim (c,d)$ iff there exist $x_0\in {\mathbb R}$, $z_0\in{\mathbb R}^n$ and $\sigma_0\in \{\pm1\}$ such that $$c(x)=a(\sigma_0 x+x_0)+z_0\qquad d(x)=b(\sigma_0 x+x_0)-z_0\qquad {\rm for\ all\ }x\in{\mathbb R}.$$ Similarly, equivalence classes of periodic admissible couples are parametrized by $$X_{\rm per} \ = \ \big\{[(a,b)]\in X :\ \textrm{$a', b', a+b$ periodic with the same period} \big\}$$ where $[\cdot ]$ denotes an equivalence class. When $(a,b)\in X_{\rm per}$, we shall denote by $E_0$ the common period of $a,\,b$. We shall consider the topology induced by $C^1({\mathbb R};{\mathbb R}^n)\times C^1({\mathbb R};{\mathbb R}^n)$ on $X$ (or equivalently on the set of admissible couples) and we refer to it as the $X$-topology. We say that a property holds [*generically*]{} if it holds for all admissible couples out of a closed set with empty interior with respect to this topology. Generic regularity {#sec:3} ================== In this section we study the regularity properties of extremal surfaces, which hold generically with respect to the $X$-topology. We start with a general regularity result which follows directly from the orthogonal gauge parametrization. \[teoex\] Given an admissible couple $(\gamma_0,v_0)$, there exists a global timelike extremal surface $M$ of the form , containing $\Gamma_0={\rm Image}(\gamma_0)$ and tangent to $(1,v_0)$, if and only if $$\label{condab} a'(s)\ne -b'(\sigma)\qquad {\rm for\ all\ }s,\sigma\in {\mathbb R}.$$ If $(a,b)\in X_{\rm per}$ then $M$ is an extremal cylinder. We have only defined [*timelike*]{} for immersed surfaces. A surface $M$ given as the image of a map $\psi$ may be smooth even where $\psi$ is not an immersion. In this case, we say that $M$ is timelike at a point $p\in M$ if $T_pM$ exists and is timelike, and in addition the spatial unit tangent $\tau$ is continuous at $p$. Assume . 
Then it is clear from the form of $\gamma$ that $\gamma_x$ never vanishes, and from the form of $\psi$, it follows that that $Sing = \emptyset$ and hence that $\psi$ is a global immersion. It follows from that $|\gamma_t|<1$ whenever $\gamma_x\ne 0$, and from this it is easy to check that $\psi$ is a timelike immersion everywhere. If fails, then $\gamma_x(t,x)=0$ for some $(t,x)\in{\mathbb R}\times{\mathbb R}$, and by we have $|\gamma_t(t,x)|=1$. We will show that $M$ is not timelike at $\psi(t,x)$. This is clearly the case if $p\in Sing^*$, so we assume that $p\not \in Sing^*$. Then we can define a spatial tangent $\tau(p)$, and $T_pM$ is spanned by $(0,\tau(p))$ and $( 1, \gamma_t(t,x))$. Thus it suffices to show that $$\tau(p)\cdot \gamma_t(t,x) = 0, \label{null}$$ since then it is easy to check that $T_pM$ contains no timelike vectors. To prove , fix a sequence $(t_k,x_k)$ in $M\setminus Sing$ such that $p_k := \gamma(t_k, x_k)\to \gamma(t,x)$. (We prove in Theorem \[thm:sing\] below that $\mathcal H^2(Sing) = 0$, so such a sequence exists.) Then since $\tau$ is continuous at $p$, $$\label{null2} \tau(p) := \lim_k \frac {\gamma_x(t_k,x_k)}{|\gamma_x(t_k,x_k)|}, \qquad \gamma_t = \lim_k \gamma_t (t_k,x_k).$$ We write $\gamma(t,x) = \frac 12(a(x+t)+b(x-t))$ as usual, and we use the notation $$m_k := a'(x_k+t_k) , \qquad n_k := -b'(x_k-t_k).$$ If we define $n_0 = a'(x+t)$ then, using the and the fact that $\gamma_x(t,x)= 0$, we find that $$\label{null3} m_k\mbox{ and }n_k \to n_0, \qquad\mbox{ as }k\to \infty,\qquad\qquad\mbox{ and \ } n_0 = \gamma_t (t,x).$$ Then $\gamma_x(t_k,x_k) = m_k-n_k$ and $n_0 = \gamma_t(t,x)$, so reduces to showing that if holds and $|n_k|=|m_k|=1$ for all $k$, then $$|(m_k - n_k)\cdot n_0 |= o(|n_k - m_k|) \qquad\mbox{ as $k\to \infty$.} $$ Writing $\theta_k := \cos^{-1} ( m_k\cdot n_0)$ and ${\varphi}_k := \cos^{-1} ( n_k\cdot n_0)$, it is not hard to see that $|n_k - m_k| \ge |\sin \theta_k - \sin {\varphi}_k| \ge \frac 12|\theta_k - {\varphi}_k|$ for $k$ sufficiently large, and then it suffices to check that $$|\cos \theta_k - \cos {\varphi}_k| = o(|\theta_k - {\varphi}_k|)$$ for $\theta_k,{\varphi}_k\to 0$, which is clear. Notice that condition is equivalent to say that the two curves $a',-b':{\mathbb R}\to {\mathbb S}^{n-1}$ do not intersect. The following result has been proved in [@NT]. \[coruno\] Let $n=2$ and let $(\gamma_0,v_0)$ be a periodic admissible couple. Then the curve $\Gamma_0={\rm Image}(\gamma_0)$ cannot be immersed in a global timelike extremal cylinder tangent to $(1,v_0)$. \[localcyl\]We emphasize that the corollary applies only to extremal [*cylinders*]{}. The proof does not rule out the possibility of smooth timelike extremal surfaces in ${\mathbb R}^{1+2}$ that are locally (but not globally) cylindrical, see Proposition \[prop:nonuniq\] below. By Theorem \[teoex\] it is enough to show that there exist $s,\sigma\in[0,E_0]$ such that $$a'(s)+b'(\sigma)=0.$$ As $|a'|=|b'|=1$ and $$\label{eqab} \int_0^{E_0}a'(s)\,ds=\int_0^{E_0}-b'(\sigma)\,d\sigma\,,$$ the supports of the curves $a'$ and $-b'$ are two connected arcs of ${\mathbb S}^1$, which necessarily intersect. The thesis then follows from Theorem \[teoex\]. \[cordue\] Let $n=3$ and let $(\gamma_0,v_0)$ be a periodic admissible couple. 
If $\Gamma_0=\mbox{Image}(\gamma_0)$ can be immersed in a global timelike extremal cylinder tangent to $(1,v_0)$, then the same holds for any periodic admissible couple $(\hat\gamma_0,\hat v_0)$, sufficiently close to $(\gamma_0,v_0)$ in the $X$-topology. Conversely, if $\Gamma_0$ cannot be immersed in a global timelike extremal cylinder tangent to $(1,v_0)$, then generically the same holds for any couple $(\hat\gamma_0,\hat v_0)$ sufficiently close to $(\gamma_0,v_0)$. The first assertion follows immediately from the fact that the set of couples $(a,b)$ satisfying is open in $X_{\rm per}$. The second assertion follows by noticing that the set of couples $(a,b)$ such that the support of $a'$ intersects at least two connected components of the complement in $\mathbb S^2$ of the support of $-b'$ is an open set in $X_{\rm per}$, while the set of couples $(a,b)$ such that the support of $a'$ intersects only one connected component of the complementary of the support of $-b'$ is a closed set with empty interior. If we consider data $(\gamma_0,v_0)$ parametrized by $X^2 := X_{\rm per}\cap \left(C^2({\mathbb R})\times C^2({\mathbb R})\right)$, endowed with the stronger topology induced by $C^2({\mathbb R})\times C^2({\mathbb R})$, then $\Gamma_0$ can generically be immersed in a global $E_0$-periodic surface tangent to $(1,v_0)$, which is a timelike extremal surface away from a discrete set of singular points, parametrized by the finite set $Sing$. Moreover, the cardinality of the singular set $Sing$ is invariant for small perturbations of $(\gamma_0,v_0)$ in the $X^2$-topology. Indeed, we observe that the couples $(a,b)$ such that the curves $a'$ and $-b'$ have a finite number of transversal intersections is a dense open set in $X_{\rm per}$ with respect to the $X^2$-topology, and the number of intersections is locally constant. Hence the curve $\gamma$ given by parametrizes a $E_0$-periodic timelike extremal cylinder tangent to $(1,v_0)$, away from a singular set which is finite in $[0,E_0]\times {\mathbb R}^3$, and the number of singularities is invariant for small perturbations of $(\gamma_0,v_0)$. An example of admissible couple in ${\mathbb R}^3$ which is immersed in a global timelike extremal cylinder has been given in [@NT]. More generally, we prove in Lemma \[lem:convexhull\] below that any curve in ${\mathbb S}^{2}$ whose convex hull contains a neighborhood of the origin can be realized as the set of tangent vectors of a closed curve $a$ such that $|a'| = 1$. Hence one can easily find pairs $a,b:{\mathbb R}\to {\mathbb R}^n, n\ge 3$ of periodic curves with the same period, such that $a'$ and $-b'$ trace out disjoint curves in $\mathbb S^{n-1}$. By Theorem \[teoex\], each such pair yields an example of a globally smooth timelike extremal cylinder. Assume that $c: {\mathbb S}^1\to {\mathbb S}^{n-1}$ is a smooth closed curve such that $0$ belongs to co$($Image$(c))$, where co$(\cdot)$ denotes the convex hull. Then there exists a closed curve $a:{\mathbb S}^1\to {\mathbb R}^n$, of the same smoothness as $c$, such that $\mbox{Image}\,(a') = \mbox{Image}\,(c)$. \[lem:convexhull\] We write $c$ as a $2\pi$-periodic function from ${\mathbb R}$ to ${\mathbb S}^{n-1}$. 
By assumption there exist points $0< x_0 < \ldots < x_{n} \le 2\pi$ such that $$0 \in \mbox{int} \left(\mbox{co} \{c(x_0), \ldots, c(x_n) \}\right) \label{eq:intco}$$ Let $p:{\mathbb R}\to {\mathbb R}$ be a smooth increasing function such that $p(x+2\pi) = p(x)+2\pi$, $$p(x_i) = x_i, \qquad \mbox{ and }\quad \frac {d^k p}{dx^k}(x_i) = 0 \mbox{ for every $k\in N$ and $i=0,\ldots, n$}.$$ Then, given positive numbers $\ell_0,\ldots, \ell_n$, let $L_i := \sum_{j=0}^i \ell_j$ and define $$\tilde c(x) := \begin{cases} c(p(x))&\mbox{ for }0\le x \le x_0\\ c(x_0)&\mbox{ for } x_0 \le x \le x_0+L_0 \\ c(p(x - L_0))&\mbox{ for }x_0+L_0 \le x \le x_1+L_0\\ \quad\vdots&\qquad\qquad\vdots \\ c(x_n)&\mbox{ for }x_n + L_{n-1} \le x \le x_n + L_n\\ c(p(x - L_n))&\mbox{ for }x_n+L_n \le x \le 2\pi + L_n. \end{cases}$$ We claim that one can choose positive $(\ell_i)$ so that $\int_0^{2\pi + L_n} \tilde c(x) \ dx = 0$. Indeed, since $$\int_0^{2\pi + L_n} \tilde c(x) \ dx = 0 \ = \ \int_0^{2\pi} c(p(x)) dx + \sum_{i=0}^n \ell_i c(x_i),$$ the claim follows from . We now fix $(\ell_i)$ as above and define $\hat c(x) := \tilde c\left( \frac {(2\pi + L_n) x}{2\pi}\right)$. Then $a(x) := \int_0^x \hat c(y) dy$, for $0\le x \le 2\pi$, defines a closed curve with the the required properties. \[cortre\] Let $n>3$ and let $(\gamma_0,v_0)$ be a periodic admissible couple. Then $\Gamma_0$ can be generically immersed in a global timelike extremal cylinder tangent to $(1,v_0)$. The assertion follows as before from the fact that the set of couples $(a,b)$ satisfying is open in $X_{\rm per}$, while the set of couples $(a,b)$ such that the curves $a',-b':[0,E_0]\to {\mathbb S}^2$ intersect is a closed set with empty interior. A related question is what happens if we assume $\gamma_0$ to be an embedded curve in ${\mathbb R}^n$ and ask if it is contained in a global [*embedded*]{} timelike extremal surface in ${\mathbb R}^{1+n}$. It is easy to check that Corollary \[coruno\] still holds in this case, and we expect that Corollaries \[cordue\] and \[cortre\] also hold, with similar proofs. As the set of periodic admissible couples which can be immersed in a global timelike extremal cylinder is parametrized by an open subset ${\mathcal O}\subset X_{\rm per}$, it is natural to speculate on the number of connected components of ${\mathcal O}$. While it is clear from Corollary \[coruno\] that ${\mathcal O}=\emptyset$ if $n=2$, it is not difficult to show that ${\mathcal O}$ has infinitely many connected components if $n=3,4$, while ${\mathcal O}$ is connected if $n>4$. Indeed, if $n=3$ and $a',-b'$ are two disjoint closed curves in ${\mathbb S}^2$, then the winding number of $a'$ around the image of $-b'$ is constant on connected components of ${\mathcal O}$, and one can easily find admissible couples with any prescribed winding number. If $n=4$ the linking number in ${\mathbb S}^3$ of the curves $a',-b'$ is constant on connected components of ${\mathcal O}$, and one can find admissible couples with any prescribed linking number. If $n>4$ the assertion follows from the fact the every knot is trivial in ${\mathbb S}^n$. Nonuniqueness of smooth extremal surfaces {#S:4} ========================================= In the following statement, we say that a surface $M\subset {\mathbb R}^{1+n}$ is [*locally cylindrical*]{} if, for every $t_0\in {\mathbb R}$, there exists an open interval $I\subset {\mathbb R}$ such that $t_0\in I$ and $M \cap (I\times {\mathbb R}^2)$ is a cylinder in $I\times {\mathbb R}^n$, i.e. 
it can be written in the form . \[prop:nonuniq\] If $n\ge 3$ there exist two distinct globally $C^\infty$ timelike extremal surfaces $M^1, M^2$ in ${\mathbb R}^{1+n}$, both locally cylindrical, such that $M^1$ and $M^2$ coincide when $t\in [0,\delta]$ for some $\delta>0$, in the sense that $$\label{nonuniq0} \left\{( t,x)\in M^1 : t \in [0,\delta] \right\} = \left\{( t,x)\in M^2 : \in [0,\delta] \right\}.$$ The surface $M^2$ that we construct below has the property that it is locally cylindrical but not globally cylindrical. In general, in geometric evolution problems, self-intersections can give rise to nonuniqueness. The proposition shows that, even if we require smoothness and impose the “locally cylindrical" topological constraint, one can still take advantage of self-intersections to generate examples of nonuniqueness. For $i=1,\ldots,3$ let $a_i, b_i$ be distinct $C^\infty$ maps ${\mathbb R}\to {\mathbb R}^n$, periodic with period $1$, such that $|a_i'|=|b_i'|=1$ and such that $$a_i(x) = b_j(x) = (x, 0,\ldots, 0)\qquad \mbox{ for all $i,j$ and all $x\in [-\delta, \delta]$ for some $\delta< \dfrac 12$\,.} \label{coincide}$$ Assume in addition that $$\{ (s, \sigma) \in {\mathbb R}\times {\mathbb R}: a_i'(s)+ b_j'(\sigma)=0 \mbox{ for some }i,j \} = \emptyset\,. \label{nonuniq1}$$ This says that no $b_j'$ ever passes through any point that is antipodal to any point on any $a_i'$. It follows easily from Lemma \[lem:convexhull\] that this can be accomplished. Now for any permutation $\pi:\{1,2,3\} \to \{1,2,3 \}$, let $(a^\pi,b^\pi)$ and be periodic curves ${\mathbb R}\to {\mathbb R}^n$ with period $3$, defined by $$(a^\pi, b^\pi)(x) = (a_{\pi(i)}(x), b_{\pi(i)}(x)) \qquad \mbox{ for }x\in [i-1, i]\mod 3.$$ Next, define $\gamma^\pi(t,x) := \frac 12 (a^\pi(x+t) + b^\pi(x-t))$, Letting $id$ denote the identity permutation, we claim that for every $\pi$, $$\mbox{ $\gamma^\pi(t, \cdot)$ and $\gamma^{id}(t, \cdot)$ parametrize the same curve for $0\le t \le \delta$}. \label{samecurve}$$ Indeed, if $t\ge 0$, we have $$\gamma^\pi(t,x) = \begin{cases} \frac 12(a_{\pi(i)}(x+t) + b_{\pi(i)}(x-t)) &\mbox{ if }i-1 \le x-t \le x+t \le i \mod 3\\ \frac 12(a_{\pi(i+1)}(x+t) + b_{\pi(i)}(x-t)) &\mbox{ if }i-1 \le x-t \le i \le x+t \mod 3\,, \end{cases} \label{gammapi}$$ where addition of indices is understood mod 3. If $0 \le t \le \delta$, it follows from this and that $$\gamma^\pi(t,x) = (x-i , 0 \ldots, 0)\ \mbox{ if }i-1 \le x-t \le i \le x+t,$$ for [*every*]{} permutation $\pi$. Then one can see by inspection of that holds (in fact it also holds for $t\in [-\delta, 0]$, by essentially the same argument). Next, note that implies that $$\mbox{$\gamma^\pi(\frac 12, x) = \frac 12 (a_{\pi(i+1)}(x+\frac 12) + b_{\pi(i)}(x-\frac 12)) \quad \mbox{ if } i - \frac 12 \le x \le i+ \frac 12 \mod 3$}$$ and from this one can see that in general $\gamma^{id}(\frac 12, \cdot) \ne \gamma^{\pi}(\frac 12, \cdot)$ if for example $\pi$ is an odd permutation. Finally, define $\psi^\pi(t,x) = (t, \gamma^\pi(t,x))$. Let $M^1$ be the surface parametrized by $\psi^{id}$, and let $M^2$ be the surface that agrees with $M^1$ when $t\le \delta$, and for $t\ge \delta/2$ is parametrized by $\psi^\pi(t,x)$ for some odd permutation $\pi$. This definition makes sense in view of . These surfaces have all the stated properties. In particular, it follows from and Theorem 4.1 that $M^1, M^2$ are both smoothly immersed and locally cylindrical. 
(Indeed, note that since $\gamma^\pi$ and $\gamma^{id}$ as constructed above are both periodic with period $3$ in the $t$ variable, we are free to switch back and forth at will between $\gamma^\pi$ and $\gamma^{id}$ every $3$ units of $t$.) When $n=2$, a similar argument yields two functions $\gamma^{id}, \gamma^\pi$ of the form , that parametrize the same curve for $|t|\le \delta$, but not for all $t$. These functions $\gamma^{id}, \gamma^\pi$ fail to be global timelike immersions, see Theorem \[thm:sing\] below, but it is presumably possible to arrange that the breakdown of uniqueness (for the image manifolds) occurs before the breakdown of regularity. In the proof of Proposition \[prop:nonuniq\], if $a_1=a_2=a_3$ and $b_1, b_2, b_3$ are distinct, then one can see from that $\gamma^{id}(t, \cdot)$ and $\gamma^\pi(t,\cdot)$ parametrize the same curve for every $t$. This shows that the different admissible pairs can generate the same extremal surface. We now provide an example of nonuniqueness of (weakly) extremal surfaces, due to the appearance of singularities in the evolution. Let $\gamma_1(t,x)=\frac{1}{2}(a_1(x+t)+a_1(x-t))$ and $\gamma_2(t,x)=\frac{1}{2}(a_2(x+t)+a_2(x-t))$ be orthogonal parametrization of two different global extremal cylinders $M_1$ and $M_2$, with $a_1, a_2$ arclength parametrizations of the boundaries of two distinct uniformly convex, centrally symmetric planar sets, both periodic with period $E_0$. Symmetry implies that $a_i(x+E_0/2) = -a_i(x)$ for all $x$ and $i=1,2$, and thus $$\gamma_i( E_0/4, x) = \frac 12\Big(a_i(x+ E_0/4) + a_i(x-E_0/4) \Big)= 0\qquad \mbox { for }x\in {\mathbb R}, i\in \{1,2\}.$$ In other words, $\gamma_1,\gamma_2$ both have an extinction singularity at the origin at time $\bar t :=E_0/4$. Note that the time derivatives at time $\bar t$ of $\gamma_1$ and $\gamma_2$ are respectively given by $a'_1(x + \bar t)$ and $a'_2(x +\bar t)$. Define now $\gamma(t,x)=\gamma_1(t,s(x))$ for $t<\bar t$ and $\gamma(t,x)=\gamma_2(t,x)$ for $t\ge \bar t$, where $s(x)$ is a reparametrization of $[0,E_0]$ such that $$a'_1(s(x) + \bar t)=a'_2(x + \bar t)$$ for all $x\in [0,E_0]$, i.e. $$s(x)= - \bar t +(a'_1)^{-1}\circ a'_2(x+\bar t).$$ It follows that the derivatives of $\gamma$ are continuous at $(x,\bar t)$ for any $x\in{\mathbb R}$, and hence $\gamma$ may be suitably extended to a $C^1$ [*nonorthogonal*]{} parametrization of a global (weakly) extremal cylinder $M$ that agrees respectively with $M_1$ and $M_2$ on disjoint time intervals. dimension of the singular set ============================= Given $\gamma(t,x) = ( a(x+t)+b(x-t))/2$, with $a,b$ satisfying , and $\psi(t,x) = (t,\gamma(t,x))$, we now prove some upper bounds on the size of the singular sets $Sing$ and $Sing^*$ associated to the cylinder $M={\rm Image}(\psi)$. In the following theorem, which is one of the main results of this paper, “dim" always means (Euclidean) Hausdorff dimension[^1]. Assume that $a,b\in C^{k}({\mathbb R}, {\mathbb R}^n)$, with $k\in\mathbb N$, with $(a,b)\in X$. Then, [**1**]{}. $\mathcal H^{1+ \frac 1{k}}(Sing) = 0$ and $\mbox{dim}(Sing^*)\le \mbox{dim}(Sing) \le 1+ \frac 1{k}$. [**2**]{}. It can happen that $\mbox{dim}(Sing^*) = 1+\frac 1k $. [**3**]{}. 
When $n=2$ and $(a,b)\in X_{\rm per}$, (at least) one of the following properties holds: - there exists $t_0$ such that $\gamma_x(t_0, x)=0$ for all $x$, - $Sing^*$ is at least one-dimensional, and the set $$\{ t\in {\mathbb R}: \exists x\in {\mathbb R}\mbox{ such that }\psi(t,x)\in Sing^* \}$$ contains an open interval. \[thm:sing\] \[ps\]In fact it is clear from the proof that conclusion [**1**]{} holds for any surface that is given locally as the image of a $C^k$ map, including any surface that can be written locally in the form with $a,b\in C^k$. In particular, this applies to noncompact surfaces, as well as local cylinders of the type appearing in Proposition \[prop:nonuniq\]. Also, we prove conclusion [**2**]{} for global cylinders, the most restrictive (topological) class of functions considered in this paper, so it follows that it holds for other classes of surfaces — noncompact, locally cylindrical — as well. Note also that Remark \[localcyl\] applies to conclusion [**3**]{}. It is natural to wonder whether the results we prove here for the weak solutions given by the explicit formula still hold in a larger class of weak solutions, and also whether any analogous results hold for higher-dimensional extremal surfaces. Conclusion [**3**]{} is a refinement of a result from [@NT]. Our proof gives more details than [@NT] concerning the situation described in , since we found this point not completely straightforward. The proof of [**3**]{} shows that, if for instance $a$ is a nonconvex curve in ${\mathbb R}^2$ and $b(x) = - a(x+E_0/2)$, then [*both*]{} the alternatives of conclusion [**3**]{} hold. The rest of this section is devoted to the proof of Theorem \[thm:sing\]. The estimate $\mathcal H^{1+\frac 1k}(Sing) = 0$ follows directly from a refined version of Sard’s Theorem, see Federer [@federer 3.4.3]. In [@BN] one can find a version of Sard’s Theorem more refined than the one cited above, which gives a necessary and sufficient condition for a set $A\subset{\mathbb R}$ to be the set of critical values of some function in $C^{k,\alpha}({\mathbb R},{\mathbb R}^n)$. If $a,b\in C^{k,\alpha}$ for $\alpha\in(0,1)$ and $k$ is a positive integer, these result implies that $$\mathcal H^{\frac 1{k+\alpha}}\Big( Sing \cap\big( \{ s\}\times {\mathbb R}^n \big) \Big) = 0 \qquad\mbox{ for every }s \in{\mathbb R}. $$ It is reasonable to conjecture, and may be even easy to prove, that under these hypotheses one has $\mathcal H^{1+\frac 1{k+\alpha}}(Sing) = 0$, but this does not immediately follow from [@BN]. However, straightforward modifications of the proof of [**2**]{} below show that it can happen that $\mbox{dim}(Sing^*) = 1+ \frac 1{k+\alpha}$, when $a,b\in C^{k,\alpha}$. \[rem:Ckalpha\] It is enough to construct an example when $n=2$. In order to do it, we will need the following result: \[lem:Sard\] For every positive integer $k$, there exists $f\in C^k([0,1];{\mathbb R})$ such that $$\mbox{dim}(f( \Sigma)) = \frac 1k, \qquad\mbox{ where } \ \Sigma:= \big\{x\in [0,1] : f'\mbox{ changes sign near $x$} \big\} . \label{reducetof}$$ The proof, which we defer to the end of this section, is a small modification of a classical argument used by Federer to prove the sharpness of his refined version of Sard’s Theorem which we cited in the proof of [**1**]{} above. We may assume that the function $f$ from Lemma \[lem:Sard\] satisfies $|f'|\le \frac 12$, since multiplying a function by a constant does not change the dimension of the associated set $\Sigma$. 
We define $g\in C^k([0,1])$ such that $g' = (1-f'^2)^{1/2}$ for $x\in [0,1]$, and we fix two periodic maps $a,b\in C^k({\mathbb R};{\mathbb R}^2)$, with the same period $E_0>3$, parametrized by arclength and such that $$a(x) = (f(x),g(x))\mbox{ for }x\in [0,1], \qquad b(x) = (0, -x) \ \mbox{ for }x\in [-1,2].$$ Then $$\gamma_x(t,x) = \frac 12(f'(x+t), g'(x+t)-1 ) \quad\mbox{ if }x+t\in [0,1],\ |t|\le \frac 12,$$ and since $|g'-1| \le C f'^2$, it follows that $\frac{\gamma_x}{|\gamma_x|}$ is discontinuous at all points $(t,x)$ such that $|t|\le \frac 12$ and $x+t\in \Sigma$. We then deduce that $$\begin{aligned} Sing^* \ &\supset \ \left\{(t,\frac{ f(x+t)}2, \frac{ g(x+t) - (x-t) } 2) : \ x+t\in\Sigma, \ |t|\le \frac12 \right\} \\ &\ = \ \left\{(0, \frac{ f(s)}2, \frac{ g(s) - s } 2) + (t, 0, t) : \ s \in \Sigma, \ |t|\le\frac 12 \right\} .\end{aligned}$$ If we let $$A_0 := \left \{ \frac 12(f(x), g(x)+x) : x\in \Sigma \right\},$$ then $\mbox{dim}(Sing^*) \ge \mbox{dim}(A_0) +1$, since $Sing^*\subset{\mathbb R}^{1+2}$ contains a copy of $A_0\subset {\mathbb R}^2$ translated along a line segment. Moreover, since $\frac 12 f(\Sigma)$ is the projection of $A_0$ on the $x$-axis, we conclude from that $$\mbox{dim}(Sing^*) \ge \ 1+ \mbox{dim}(A_0) \ge \ 1+ \mbox{dim}\Big( \frac 12 f(\Sigma) \Big) = 1+\frac 1k\,.$$ We remark that, although the map $\gamma$ constructed above is singular for $t=0$, one can easily modify the construction to arrange that $\gamma$ is regularly immersed at $t=0$ and develops singularities as described above at a later time. Indeed, $\gamma$ is a regular immersion at $t=0$ if $a' = e^{i\alpha}, b' = -e^{i\beta}$, and $\alpha < \beta < \alpha+2\pi$, and this condition can be achieved, while essentially preserving the above construction, by choosing $E_0$ large enough, taking a certain amount of care in how $\alpha$ is defined in $[1,E_0]$ and $\beta$ in $[2, E_0-1]$, and then replacing $a(\cdot)$ by $a(\cdot - E_0/2)$. Extend $a', b'$ to $E_0$-periodic maps from ${\mathbb R}$ to ${\mathbb S}^1$, and let $\alpha,\beta :{\mathbb R}\to {\mathbb R}$ be two continuous functions such that $$a'(x) = e^{i\alpha(x)}, \qquad -b' = e^{i\beta(x)}. \label{alphabeta}$$ As in the proof of Corollary \[coruno\], $a'$ and $b'$ satisfy , which implies that the images of $a'$ and $-b'$ are closed arcs with intersection of positive length. In particular, by adding $2\pi k$ to $\alpha$ for an appropriate integer $k$, we can assume that the set Image$(\alpha) \cap \mbox{Image}(\beta)$ contains an interval of positive length. It then follows that the function $$F(t,x) := \alpha(x+t) - \beta(x-t)$$ takes both positive and negative values. For example, to find a point where $F>0$, choose $s,\sigma\in {\mathbb R}$ such that $\alpha(s)> \beta(\sigma)$, and let $(t,x) = ( \frac 12 (s-\sigma), \frac 12 (s+\sigma))$. We shall consider 2 cases: [*Case 1*]{}. For every $t_0$, the function $x\mapsto F(t_0,x)$ does not change sign. Then, since $F$ assumes both positive and negative values, there must be some $t_0$ such that $F(t_0,x)=0$ for all $x$. It follows that $\gamma_x(t_0, x) = 0$ for all $x$. [*Case 2*]{}. There exists some $t_0$ such that $x\mapsto F(t_0,x)$ changes sign. Then by continuity $x\mapsto F(t,x)$ changes sign for all $t$ in a neighborhood of $t_0$. Fix such a $\bar t$, and let $S$ be a connected component of the set $\{ x : F(\bar t, x) = 0\}$ such that $F$ assumes both positive and negative values in every neighborhood of $S$. 
As observed in [@NT], it follows from that the unit tangent $\tau = \frac {\gamma_x}{|\gamma_x|}$ is given by $$\frac{\gamma_x}{|\gamma_x|}(\bar t,x) = \operatorname{sign}\left( \sin\left( \frac 12 F(\bar t,x) \right)\right)\ i\, e^{\frac i 2\, G(\bar t,x)}, \qquad G(\bar t,x):= \alpha(x+\bar t)+\beta(x-\bar t) \label{angles}$$ wherever $\gamma_x\ne 0$ (for simplicity of notation we identify ${\mathbb R}^2$ with $\mathbb C$). Therefore, if $S$ consists of a single point $(\bar t,x)$, then $\lim_{y\to x}\frac{\gamma_x}{|\gamma_x|}(\bar t,y)$ does not exist, which implies that $\psi(\bar t,x)\in Sing^*$. Suppose now that $S$ is an interval, say $S = [s_0,s_1]$. Then $\gamma_x(\bar t,x) = 0$ for all $x\in S$, so that $x\mapsto \gamma(\bar t,x)$ is constant for $x\in S$. It follows that $\psi(\bar t, s_0)\in Sing^*$ unless $$\label{eq:notsing} \tau(\bar t,s_0^-) = \tau(\bar t,s_1^+) , $$ where $\tau(\bar t,s_0^-) := \lim_{x\nearrow s_0}\tau(\bar t,x)$ and $\tau(\bar t,s_1^+) := \lim_{x\searrow s_1}\tau(\bar t,x)$. (Condition includes the assertion that both limits exist.) Recalling that $F$ changes sign near $S$, we deduce from that can only occur if $\operatorname{sign}(F(\bar t, s_0^-)) = - \operatorname{sign}(F(\bar t, s_1^+))$ and $$\frac 12({G(\bar t,s_1) - G(\bar t,s_0)}) = \pi \mod 2\pi. \label{eq:notsing2}$$ Assume this holds and let $$x_0 := \min\left\{ x\in S : |\alpha(x+\bar t) - \alpha(s_0 +\bar t)| = \frac \pi 3\right\}>s_0\,.$$ Since $0 = F(\bar t,x) = \alpha(x+\bar t) - \beta(x-\bar t)$ for all $x\in S$, we get $$\begin{aligned} F(\bar t+{\varepsilon}, x_0- |{\varepsilon}|) &=& \alpha(x_0+\bar t+{\varepsilon}-|{\varepsilon}| ) - \beta(x_0 -\bar t -{\varepsilon}-|{\varepsilon}|) \\ &=& \alpha(x_0 + \bar t +{\varepsilon}-|{\varepsilon}|) - \alpha(x_0 + \bar t-{\varepsilon}-|{\varepsilon}| ) \ne 0 $$ for all ${\varepsilon}$ such that $x_0-2|{\varepsilon}| > s_0$. The last inequality follows from the fact that $|\alpha(x_0+\bar t) - \alpha(s_0 +\bar t)| = \pi/3$, while $|\alpha(x_0 + \bar t - 2|{\varepsilon}|) - \alpha(s_0+\bar t)| < \pi/3$, by the choice of $x_0$. From the equality $F(\bar t,x) = 0$, for these ${\varepsilon}$ it also follows that $$F(\bar t - {\varepsilon}, x_0 -|{\varepsilon}|) = - F(\bar t+{\varepsilon}, x_0-|{\varepsilon}|). $$ In particular, the function ${\varepsilon}\mapsto F(\bar t+{\varepsilon}, x_0-|{\varepsilon}|)$ changes sign at ${\varepsilon}=0$. Fix now $y_0<s_0$ such that $F(\bar t,y_0)\ne 0$ and $$|\alpha(y+ \bar t) - \alpha(s_0+\bar t)| < \frac \pi 3\quad \mbox{ for all }y\in [y_0, s_0].$$ For all ${\varepsilon}$ sufficiently small, $F( t+{\varepsilon}, y_0+|{\varepsilon}|)$ has the same sign as $F(\bar t, y_0)$. Thus, for all ${\varepsilon}$ in an interval of the form $(-\delta,0)$ or $(0,\delta)$, the function $x\mapsto F(\bar t+ {\varepsilon}, x)$ must change sign between $y_0+|{\varepsilon}|$ and $x_0-|{\varepsilon}|$. For such ${\varepsilon}$, the interval $(y_0+|{\varepsilon}|, x_0-|{\varepsilon}|)$ must contain a connected component $\hat S = [\hat s_0, \hat s_1]$ of $\{ x : F(\bar t+{\varepsilon}, x) = 0\}$ such that $F$ assumes both positive and negative values in every neighborhood of $\hat S$. 
Moreover, our choice of $y_0$ and $x_0$ guarantees that, possibly reducing $\delta$, we have $$\begin{aligned} \pi \ & > \ |\alpha(\hat s_1 + \bar t+{\varepsilon}) - \alpha(s_0+\bar t)| + |\alpha(s_0+\bar t) - \alpha(\hat s_0+\bar t+{\varepsilon})| \\ & > \ |\alpha(\hat s_1 + \bar t+{\varepsilon}) - \alpha(\hat s_0+ \bar t+{\varepsilon})|\\ & = \ \frac 12\,|G(\bar t+{\varepsilon}, \hat s_1) - G(\bar t+{\varepsilon}, \hat s_0)|\end{aligned}$$ where the last equality follows from the fact that $F(\bar t+{\varepsilon}, s) = 0$ in $[\hat s_0, \hat s_1]$. Thus cannot hold, and hence for all ${\varepsilon}$ in the interval that we have found, $\psi(\bar t+{\varepsilon}, \hat s_0)\in Sing^*$, thus completing the proof of [**3**]{}. Finally we give a proof of Lemma \[lem:Sard\]. We divide the proof into three steps.\ [**Step 1.**]{} We first recall Federer’s proof that for any $k\in \mathbb N$ and $\mu\in (0,\frac 1k)$, there exists $g\in C^k([0,1])$ such that $$\mathcal H^{\mu}\Big( \big\{ g(x) : g'(x) = 0 \big\} \Big) >0. \label{reducetof.wk}$$ For $\sigma\in (0,1)$ we will write $C_\sigma$ to denote the “middle $\sigma$" Cantor-type set, so that $$C_\sigma = \cap_{\ell=1}^\infty\cup_{i \in \{ 0,1\}^\ell} C_\sigma(i)$$ where, for every $\ell$ and every $i\in \{0,1\}^\ell$, $C_\sigma(i)$ is a closed interval of length $(\frac {1-\sigma}2)^\ell$, and $C_\sigma (i_1,\ldots, i_\ell, 0)$, $C_\sigma (i_1,\ldots, i_\ell, 1)$ are obtained by removing from $C_\sigma (i_1,\ldots, i_\ell)$ a centered open interval of length $\sigma(\frac {1-\sigma}2)^\ell$. As usual we start with $C_\sigma(0) = [0, \frac 12(1-\sigma)]$ and $C_\sigma(1) = [\frac 12(1+\sigma)2, 1]$, and we label the intervals so that $C_\sigma (i_1,\ldots, i_\ell, 0)$ lies to the left of $C_\sigma (i_1,\ldots, i_\ell, 1)$. Fix now $\nu$ and $\delta>0$ such that $(k+\delta)\mu = \nu <1$, and let $\alpha, \beta\in (0,1)$ satisfy $$\left(\frac{1-\alpha}2\right)^\mu = \left(\frac {1-\beta}2\right)^\nu = \frac 12.$$ These numbers are chosen so that $C_\alpha$ and $C_\beta$ have dimension $\mu$ and $\nu$ respectively, and $\mathcal H^\mu(C_\alpha), \mathcal H^\nu(C_\beta)>0$. Notice that there is a natural map $g_0:C_\beta\to C_\alpha$, characterized by $$g_0( C_\beta \cap C_\beta(i)) = C_\alpha \cap C_\alpha(i)\qquad\quad\mbox{ for every $\ell\in \mathbb N$ and }i\in \{0,1\}^\ell.$$ As Federer noted in [@federer 3.4.4], $g_0$ extends to a $C^k$ map $g: [0,1]\to [0,1]$ by a routine application of the Whitney extension Theorem. The point is that, given $x,y\in C_\beta$, we can fix $\ell\in \mathbb N$ and $i\in \{0,1\}^\ell$ such that $x,y\in C_\beta(i)$, but $x$ and $y$ belong to different subintervals of $C_\beta(i)$. Then $g_0(x)$ and $g_0(y)$ both belong to $C_\alpha(i)$, and from this information one can easily check that $$|x-y| \ge \beta(\frac {1-\beta}2)^{\ell}, \qquad |g_0(x) - g_0(y)| \le (\frac {1-\alpha}2)^{ \ell} = (\frac {1-\beta}2)^{(k+\delta) \ell}.$$ As a result we get $$|g_0(x)-g_0(y)|\le C (\frac{1-\beta}2)^{\delta\ell}|x-y|^k = o( |x-y|^k),$$ hence Whitney’s Theorem yields a $C^k$ extension of $g$ as required. It is also clear that $g'=0$ in $C_\beta$, so that every point of $C_\alpha$ is a critical value of $g$, and holds. [**Step 2**]{}. We modify the above construction to produce a function $f\in C^k([0,1])$ such that, for a fixed $\mu< \frac 1k$, we have $$\mathcal H^\mu (f( \Sigma)) \ge 0 \qquad\mbox{ where } \ \Sigma:= \big\{x\in [0,1] : f'\mbox{ changes sign near $x$} \big\} . 
\label{eq:fmu}$$ To do this, we fix $\alpha,\beta$ as above and define $f_0:C_\beta\to C_\alpha$, characterized by $$f_0( C_\alpha \cap C_\alpha(i)) = C_\beta \cap C_\beta(i^*)\qquad\quad\mbox{ for every $\ell\in \mathbb N$ and }i\in \{0,1\}^\ell,$$ where for every $k$ and every $i\in \{0,1\}^k$, we define $i^*$ by $$\begin{aligned} i^*_j = i_j \quad \mbox{ if $j$ is odd}, \qquad \qquad i^*_j = i_j +1 \mod 2 \quad \mbox{ if $j$ is even}.\end{aligned}$$ Then, as in the classical argument described above, $f_0$ extends to a $C^k$ function $f:[0,1]\to {\mathbb R}$. In addition, we have the inclusion $C_\beta \subset \Sigma$, since every interval $C_\alpha(i)$ for $i$ odd contains points such that $x<y$ and $f(x)< f(y)$, whereas for $i$ even $C_\alpha(i)$ contains points such that $x<y$ and $f(x)> f(y)$. [**Step 3**]{}. For every $m>k$, let $f_m$ be a function satisfying with $\mu = \frac 1k - \frac 1m$, and extend $f_m$ so that $f_m' = 0$ on ${\mathbb R}\setminus [0,1]$. We define $$f(x) := \sum_{k=1}^\infty h_m f_m\left( 2^{m+1} (x - 2^{-m}) \right)$$ for a sequence $(h_m)$ decreasing to zero fast enough so that the series converges in $C^k$. Then $f\in C^k$ and satisfies . [10]{} W. K. Allard. On the First Variation of a Varifold. , 95(3):417-491, 1972. M.A. Anderson. Institute of Physics Publishing, Briston and Philadelphia, 2003. S.M. Bates and A. Norton On sets of critical values in the real line , 82(2):399-413, 1996. G. Bellettini, J. Hoppe, M. Novaga and G. Orlandi. Closure and convexity properties of closed relativistic strings. , 4(3):473-496, 2010. G. Bellettini, M. Novaga and G. Orlandi. Time-like minimal submanifolds as singular limits of nonlinear wave equations. , 239(6):335-339, 2010. G. Bellettini, M. Novaga and G. Orlandi. Lorentzian varifolds and applications to closed relativistic strings. , to appear. M. Born and L. Infeld. Foundations of a new field theory. , 144:425-451, 1934. J. Eggers and J. Hoppe. Singularity formation for time-like extremal hypersurfaces. , 680:274-278, 2009. H. Federer, Springer-Verlag, 1969. M. Gage and R.S. Hamilton. The heat equation shrinking convex plane curves. , 23:69–96, 1986. R. L. Jerrard. Defects in semilinear wave equations and timelike minimal surfaces in Minkowski space. , 4(2): 285-340, 2011. D.X. Kong, L. Kefeng and Z.G. Wang. Hyperbolic mean curvature flow: evolution of plane curves. , 29(3):493–514, 2009. D.X. Kong and Q. Zhang. Solutions formula and time-periodicity for the motion of relativistic strings in the Minkowski space ${\mathbb R}^{1+n}$. , 238:902-922, 2009. D.X. Kong, Q. Zhang and Q. Zhou. The dynamics of relativistic strings moving in the Minkowski space ${\mathbb R}^{1+n}$. , 269:153-174, 2007. H. Lindblad. A remark on global existence for small initial data of the minimal surface equation in Minkowskian space time. , 132:1095–1102, 2004. O. Milbredt, *The Cauchy problem for membranes.* PhD Thesis, arXiv:0807.3465v1, 2008. J.C. Neu. Kinks and the minimal surface equation in Minkowski space. , 43:421-434, 1990. L. Nguyen and G. Tian. On smoothness of timelike maximal cylinders in three dimensional vacuum spacetimes. , 2012. A. Vilenkin and E.P.S. Shellard. Cambridge University Press, 1994. [^1]: It is arguably slightly unnatural to characterize a singular set in Minkowski space by the Euclidean Hausdorff dimension, but note that this quantity is invariant with respect to Lorentz transformations.
--- abstract: 'A new optical design concept of telescopes to provide an aberration-free, wide field, unvignetted flat focal plane is described. The system employs three aspheric mirrors to remove aberrations, and provides a semi-circular field of view with minimum vignetting. The third mirror reimages the intermediate image made by the first two-mirror system with a magnification factor on the order of unity. The present system contrasts with the Korsch system where the magnification factor of the third mirror is usually much larger than unity. Two separate optical trains can be deployed to cover the entire circular field, if necessary.' author: - 'Kyoji <span style="font-variant:small-caps;">Nariai</span>' - 'Masanori <span style="font-variant:small-caps;">Iye</span>' title: 'Three-Mirror Anastigmat Telescope with an Unvignetted Flat Focal Plane' --- Introduction ============ Most of the currently used reflecting telescopes are essentially two-mirror systems where major Seidel aberrations of the third order are removed, but not entirely. The addition of a third aspheric mirror to construct telescope optics enables the removal of remaining major aberrations ([@paul35]; [@robb78]; [@yama83]; [@epps83]; [@schr87]; [@wils96]). For instance, @will84 designed a wide-field telescope with three aspheric mirrors, giving $4\degree$ field of view with an image size better than , or a $3\degree$ field of view with a flat focal plane [@will85]. @raki02a as well as @raki02b found solutions for a flat-field three-mirror telescope with only one mirror aspherized. However, many of the three-mirror telescope designs suffer from obscurations, except for the design reported by @kors80 for practical applications. A four-mirror telescope with spherical primary is another approach to achieve small obstruction ([@mein84]; [@wils94]; [@raki04]). In the present paper, we report on a new concept of three-mirror telescope design for a next-generation extremely large telescope. Two-Mirror System ================= An all-mirror optical system is characterized by the set of the aperture $\mathcal{D}_i$, the radius of curvature $r_i$, and the distance to the next surface of the $i$-th mirror surface $d_i$. We use $\mathcal{D}_1$ to represent the aperture diameter of the primary mirror. The geometry of a two-mirror system is determined when four design parameters ($\mathcal{D}_1$, $r_1$, $d_1$, $r_2$) are given. We adopt a coordinate system where light proceeds from left to right. Thus, in a two-mirror system, $r_1<0$ and $d_1<0$; while $r_2<0$ for a Cassegrain system and $r_2>0$ for a Gregorian system. Instead of radius $r$, we can also use focal length $f$. In this case, $f$ is positive for a concave mirror and is negative for a convex mirror: $$f_1 = -\frac{r_1}{2}, \qquad f_2 = \frac{r_2}{2}.$$ We represent the distance from the second mirror to the focal plane of the two-mirror system by $d_2$. The imaging formula for the secondary mirror gives $$\frac{1}{-(f_1 + d_1)} + \frac{1}{d_2} = \frac{1}{f_2},$$ and the geometrical relation gives the ratio of focal lengths, $$\frac{f_1}{f_{\mathrm{comp}}} = \frac{f_1+d_1}{d_2},$$ where $f_{\mathrm{comp}}$ is the focal length of the composite system. 
For a two-mirror system, instead of $d_2$, we often use the back focus $d_{\rm{BF}}$, which is the distance of the focal plane from the primary mirror, $$d_{\mathrm{BF}}=d_2+d_1 .$$ It is more convenient to give three parameters as the focal length of the primary mirror $f_1$, the composite focal length $f_{\mathrm{comp}}$, and the focal position, which is usually represented by $d_{\mathrm{BF}}$. Instead of $f_1$ and $f_{\mathrm{comp}}$, we also use the $F$-ratio of the primary, $F_1 = f_1/\mathcal{D}_1$, and the composite $F$-ratio, $F_{\mathrm{comp}} = f_{\mathrm{comp}} /\mathcal{D}_1$. When $F_1$, $F_{\mathrm{comp}}$, $d_{\mathrm{BF}}$, and the diameter of the primary $\mathcal{D}_1$ are given as design parameters, $f_1$, $f_{\mathrm{comp}}$, $r_1$, $d_1$, and $r_2$ are derived by $$f_1 = F_1 \mathcal{D}_1 ,$$ $$f_{\mathrm{comp}} = f_1 \frac{F_{\mathrm{comp}}}{F_1} ,$$ $$r_1 = -2 f_1 ,$$ $$d_1 = -f_1 \frac{f_{\mathrm{comp}} - d_{\mathrm{BF}}}{f_{\mathrm{comp}} + f_1} ,$$ and $$r_2 = - 2 f_{\mathrm{comp}} \frac{f_1 + d_1} {f_{\mathrm{comp}} -f_1} .$$ Besides the radius, a surface has parameters to characterize its figure: the conic constant, $k$, and the coefficients of higher-order aspheric terms. We describe in the following the design principle of basic two-mirror optics systems characterized by $k_1$ and $k_2$, since the third-order aberration coefficients are governed by these parameters. In a two-mirror system, because $k_1$ and $k_2$ are determined as functions of $r_1$, $r_2$, and $d_1$, the astigmatism $C$ and the curvature of field $D$ are also functions of $r_1$, $r_2$, and $d_1$. @schw05 provided an anastigmat solution with a flat field as $d = 2f$ and $r_1 = 2\sqrt{2}f$, where $f$ is the composite focal length, and is given by $1/f = 2/r_1 + 2/r_2 - 4d/r_1r_2$. Two concentric sphere system with $d_1 = -r_1 (1+\sqrt{5})/2$ also gives an anastigmat solution, known as Schwarzschild optics, which is used in the microscope objective [@burc47]. However, if the radii and the distance between the mirrors are to be determined by other requirements, such as the final focal ratio and the back focal distance, we can no longer make a two-mirror system anastigmatic. The classical Cassegrain or Gregorian telescope uses a paraboloid for the primary ($k_1=-1$) and a hyperboloid for the secondary ($k_2 \neq -1$). This arrangement makes it possible to have a pinpoint image on the optical axis by making the spherical aberration $B = 0$ by appropriately choosing $k_2$, but the field of view is limited by remaining non-zero coma $F$. A Ritchey–Chr[é]{}tien telescope has no spherical aberration or coma. The use of hyperboloids for both the primary and secondary makes it possible to vanish two principal aberrations ($k_1$ and $k_2$ are used to make $B=F=0$). Most of the modern large telescopes adopt the Ritchey–Chr[é]{}tien system because of its wider field of view compared with the classical Cassegrain or Gregorian system. The remaining aberrations among the third-order aberrations, excepting the distortion, are the astigmatism, $C$, and the curvature of field, $D$. With $k_1$ and $k_2$ already used to make $B=F=0$ for the Ritchey–Chr[é]{}tien system, there are no parameters available to control $C$ and $D$. It is thus clear that we cannot make an anastigmat with a two-mirror system except for some special cases ([@schw05]; [@burc47]). 
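The relations above are elementary to evaluate, and a short numerical check helps to keep track of the sign conventions. The following Python fragment is a minimal sketch and not part of the original design procedure; the input values ($\mathcal{D}_1=30$, $F_1=1.5$, $F_{\mathrm{comp}}=15$, $d_{\mathrm{BF}}=20$, in meters) are inferred from the example layout in table \[lensdat\] and reproduce its primary and secondary radii and mirror separation.

\begin{verbatim}
# Minimal sketch (not from the paper): evaluate the two-mirror design
# relations for given D1, F1, Fcomp, dBF and check their consistency.

def two_mirror_design(D1, F1, Fcomp, dBF):
    """Return (f1, fcomp, r1, d1, r2, d2) from the design parameters."""
    f1 = F1 * D1
    fcomp = f1 * Fcomp / F1
    r1 = -2.0 * f1
    d1 = -f1 * (fcomp - dBF) / (fcomp + f1)
    r2 = -2.0 * fcomp * (f1 + d1) / (fcomp - f1)
    d2 = dBF - d1                      # since dBF = d2 + d1
    return f1, fcomp, r1, d1, r2, d2

# Inputs inferred from the example layout (table lensdat): a 30 m f/1.5
# primary, composite ratio f/15, back focus 20 m.
f1, fcomp, r1, d1, r2, d2 = two_mirror_design(30.0, 1.5, 15.0, 20.0)
f2 = r2 / 2.0                          # f2 < 0: convex Cassegrain secondary

# Consistency checks against the imaging and focal-ratio relations above.
assert abs(1.0 / (-(f1 + d1)) + 1.0 / d2 - 1.0 / f2) < 1e-9
assert abs(f1 / fcomp - (f1 + d1) / d2) < 1e-9
print(r1, d1, r2)                      # -90.0, -39.0909..., -13.1313...
\end{verbatim}

Running the fragment prints $r_1=-90$, $d_1\simeq-39.0909$ and $r_2\simeq-13.1313$, in agreement with surfaces 1 and 2 (STOP) of table \[lensdat\].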
Three-Mirror System =================== In our three-mirror system, the last mirror is placed so that it refocuses the intermediate image made by the first two mirrors. The intermediate focal plane of the first two mirrors is used as a virtual third surface, and the last concave mirror is numbered as the fourth surface. Three additional free parameters introduced are $d_3$, $r_4$, and $k_4$. We determine $r_4$ by the condition that the Petzval sum $P$ vanishes, $$\frac{1}{r_1} - \frac{1}{r_2} + \frac{1}{r_4} = 0 .$$ For ordinary telescopes, because the radius of curvature of the primary mirror is larger than that of the secondary mirror, $1/ |r_1|$ is smaller than $1 / |r_2|$. Therefore, the sign of the sum of the first two terms is determined by $r_2$. Therefore, $r_2$ and $r_4$ should have the same sign. Since $r_4 < 0$, $r_2 < 0$, the first two-mirror system should be of the Cassegrain type, not of the Gregorian type. The distance $d_4$ from the third mirror to the final focal position is calculated by $$\frac{1}{d_3} + \frac{1}{|d_4|} = \frac{2}{|r_4|} .$$ The magnification factor $M$ by the third mirror is given by $$M = \left|\frac{d_4}{d_3}\right| .$$ Using $k_1$, $k_2$, and $k_4$, we can make $B=F=C=0$. When we set $P=0$, we automatically have $D=0$. We thus obtained an anastigmatic optical system with a flat focal plane. As for the treatment of higher order aberration and an evaluation of the optical systems, readers might wish to see some reviews (e.g., [@wils96]; [@schr87]). We have not attempted to obtain explicit mathematical expressions for the aberration coefficients of this three-mirror system, but used an optimization procedure provided by the optical design program *optik* written by one of the authors (K.N.). Since the sixth-order aspheric coefficients of the mirror surface affect the fifth-order aberration, the sixth-order aspheric coefficient of the primary mirror is used to control the spherical aberration of the fifth order of the system. For this case, too, we used the optimizing function of *optik*. The mathematical expressions for the fifth-order aberration coefficients can be found, for instance, in @mats72. Because there are 12 independent fifth-order aberrations, it is not as straightforward as in the third-order case to control them with available aspheric coefficients, excepting the case of the fifth-order spherical aberration. We therefore treat only the fifth-order spherical aberration using the sixth-order aspheric coefficient of the primary mirror. Exit Pupil ========== We now consider the position and the radius of the exit pupil before discussing the vignetting problem. Let us study the size of the radius of the exit pupil first. We take the primary mirror as the pupil. We write the lens equations for the second and third mirrors with the sign convention for a single mirror; namely, $f_2$ is negative since the mirror is convex and $f_4$ is positive since the mirror is concave. Let $t_i$ and $t'_i$ be the distances that appear in the imaging of pupil by the $i$-th lens. Note that $t_2$ and $t_4$ are positive, whereas $t'_2$ is negative, since the image is imaginary and $t'_4$ is positive, since the image is real. The lens equation for the second mirror and the third mirror ($= \mbox{surface 4}$) is written as $$\frac{1}{t_i} + \frac{1}{t'_i} = \frac{1}{f_i} , \quad (i = 2, 4). \label{eq:mirror23}$$ Let us define $\xi_i$ and $\eta_i$ as $$\xi_i = \frac{f_i}{t_i } , \quad \eta_i= \frac{t'_i}{t_i} , \quad (i = 2, 4). 
\label{eq:xieta}$$ Then, $\eta_i$ can be rewritten with $\xi_i$ or with $f_i$ and $t_i$ as $$\eta_i= \frac{\xi_i}{1-\xi_i} = \frac{f_i}{t_i-f_i} , \quad (i = 2, 4).$$ Since the distance between the first and the second mirror is usually large compared to the focal length of the secondary mirror, $\xi_2$ and $\eta_2$ are small compared to unity, say, 0.1 in a typical case. If the center of curvature of the third mirror is placed at the focal position of the first two mirrors, $f_4 = d_4/2$ and $\xi_4$ and $\eta_4$ may have a value of around 0.3. Using $$t_2=d_1, \quad t_4 = d_2 + d_3 - t'_2 = d_2 + d_3 - f_2 (1+\eta_2) ,$$ we can rewrite equation (\[eq:xieta\]) explicitly as $$\xi_2 = \frac{f_2}{d_1}, \quad \xi_4 = \frac{f_4}{d_2+d_3 - t'_2} = \frac{f_4}{d_2+d_3 - \displaystyle\frac{d_1 f_2}{d_1-f_2}} ,$$ $$\eta_2 = \frac{f_2}{d_1-f_2}, \quad \eta_4 = \frac{f_4}{d_2 + d_3- \displaystyle\frac{d_1 f_2}{d_1-f_2} -f_4}.$$ The radius $R_{\mathrm{ep}}$ of the exit pupil is the radius of the entrance pupil, $\mathcal{D}_1/2$, multiplied by $t'_2/t_2$ and $t'_4/t_4$, $$\begin{aligned} R_{\mathrm{ep}} &=& \frac{\mathcal{D}_1}{2}\frac{t'_2}{t_2} \frac{t'_4}{t_4} = \frac{\mathcal{D}_1 \eta_2 \eta_4 }{2} \\[6pt] &=& \frac{\mathcal{D}_1}{2} \frac{f_2}{(d_1-f_2)} \frac{f_4}{\left( d_2+d_3 - \displaystyle\frac{d_1 f_2}{d_1-f_2} -f_4 \right)}. \label{eq:Rpupil}\end{aligned}$$ The position of the exit pupil is written as $$t'_4 = \frac{f_4}{1-\xi_4} = f_4 (1+\eta_4) .$$ Note that the current optical system is not telecentric, since the exit pupil is at a finite distance. The principal rays in the final focal plane are not collimated, but are diverging in proportion to the distance from the optical axis. This feature, however, will not be a practical difficulty in designing the observational instrument, unless one wants to cover the entire field, filling 2m in diameter, in a single optical train. Obstruction =========== Obstruction when $M>1$ ---------------------- The detector unit at the final focal plane obstructs the ray bundle that goes through the intermediate focal plane of the first two-mirror system. We solve this problem by using only the semi-circular half field of view at the focal plane of the first two mirrors. If the magnification is $M=1$, the image on one half field at this virtual plane is reimaged to the other half side, where the detector can be placed without essential vignetting. If the magnification/minification factor, $M$, is not unity, the field of view without vignetting is narrowed because of obstruction. It is easy to see that obstruction on the optical axis is always 50% regardless of the value of $M$. Figure 1 illustrates the geometry, showing the third mirror, pupil plane, virtual image plane, and final image plane, where the position along the optical axis ($z$-axis) of each surface is measured from the origin set at the apex of the third mirror. In figure 1, we assume that at the virtual focal plane of the two-mirror system (surface 3), light passes from left to right above the axis $(x>0)$, reflected at the third mirror (surface 4), passing through the exit pupil (surface 5), and imaged at the final focal plane below the optical axis $(x<0)$ (surface 6). (140mm,71.93 mm)[fig1.eps]{} The point A$(-d_3,0)$ at the field center of the virtual image plane is reimaged by the third mirror onto the point B$(-d_4,0)$ of the final image plane. The upper half of the beam from point A is vignetted at the folding mirror, and does not reach to point B on the detector surface. 
The limiting radius, $h$, on the virtual image plane, beyond which the beam from the virtual plane is reflected by the third mirror and refocused on the detector surface, does not suffer any obstruction, and is defined by joining the edge point, A, of the folding mirror and the edge point $\mathrm{H}_{\mathrm{P}}(-t'_4, R_{\mathrm{ep}})$, of the exit pupil. Table 1 gives the coordinates of some particular points for defining the edge of the partially vignetted field. ----------------------------------- ---------- ------------------- --------- ------------------- Point $z$ $x$ Surface Description \[2pt\] $\mathrm{H}_{\mathrm{V}}$ $-d_3$ $h$ 3 light enters here $\mathrm{H}_{\mathrm{P}}$ $-t'_4 $ $R_{\mathrm{ep}}$ 5 upper edge of the exit pupil $\mathrm{H}_{\mathrm{F}}$ $-d_4$ $-Mh$ 6 light images here \[2pt\] ----------------------------------- ---------- ------------------- --------- ------------------- : Coordinates of a few key points for the vignetting geometry.[]{data-label="poiname"} Because $\triangle \mathrm{BH_{F}A}$ and $\triangle \mathrm{PH_{P}A}$ are similar to each other, $$\frac{Mh}{R_{\mathrm{ep}}} = \frac{(M-1) \, d_3}{d_3-t'_4}.$$ Thus, the limiting radius on the image plane, $Mh$, is written as $$\begin{aligned} Mh &=& R_{\mathrm{ep}} \frac{(M-1) \, d_3}{d_3-t'_4} \\[3pt] &=& R_{\mathrm{ep}}(M-1) \frac{1}{1 - \displaystyle\frac{t'_4}{d_3} } = \frac{R_{\mathrm{ep}} (M^2-1)}{1-M\eta_4} .\end{aligned}$$ Obstruction when $M<1$ ---------------------- In this case, the final image plane is closer to the third mirror than is the virtual image plane, as shown in figure 2. Because $\triangle \mathrm{BH_{V}A}$ and $\triangle \mathrm{PH_{P}B}$ are similar to each other, $$\frac{h}{R_{\mathrm{ep}}} = \frac{(1-M) \, d_3}{M d_3-t'_4}.$$ Thus, the limiting radius on the image plane, $M h$, is written as $$\begin{aligned} %\displayadjust Mh &=& R_{\mathrm{ep}} M \frac{(1-M) \, d_3}{M d_3-t'_4}\\[6pt] &=& R_{\mathrm{ep}}(1-M) \frac{1}{1 - \displaystyle\frac{t'_4}{M d_3} } = \frac{R_{\mathrm{ep}} (1-M^2)}{M-\eta_4} .\end{aligned}$$ (140mm,67.57 mm)[fig2.eps]{} Limiting Radius for the Vignetting-Free Field --------------------------------------------- Because the 1 arcminute on the image plane is $$a = f_{\mathrm{comp}} \frac{\pi}{180 \times 60} = F_{\mathrm{comp}} \mathcal{D}_1 \frac{\pi}{180 \times 60} ,$$ the limiting radius, $Mh$, in arcminute scale, $a$, is expressed as $$\begin{aligned} \frac{R_{\mathrm{ep}} (M^2-1)}{1 - M\eta_4} \frac{1}{a} = \frac{1}{2F_{\mathrm{comp}}} \frac{M^2-1}{1-M\eta_4} \eta_2 \eta_4 \frac{180 \times 60}{\pi} \nonumber\\[2mm] \hspace{25mm} (M>1) \end{aligned}$$ and $$\begin{aligned} \frac{R_{ep} (1-M^2)}{M-\eta_4} \frac{1}{a} = \frac{1}{2F_{\mathrm{comp}}} \frac{1-M^2}{M-\eta_4} \eta_2 \eta_4 \frac{180 \times 60} {\pi} \nonumber\\[2mm] \hspace{25mm} (1>M). \end{aligned}$$ In a typical case, if we take the radius of field of view as $6\arcmin$ and allow $1\arcmin$ to be the limiting radius for the vignetting-free field, we have $$0.9 < M < 1.1.$$ Figure \[vignet\] shows the optical throughput of the present system for three cases with $M=1.0$, 0.9, and 0.8. Note that for $M=1$, the 50% vignetting takes place only along the $x=0$ axis of the semi-circular field. The field away from this axis by the diffraction size of the optics can be made essentially obstruction-free. 
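The same quantities are easy to evaluate numerically. The sketch below continues the Python fragment given at the end of the two-mirror section (it assumes the function two_mirror_design defined there); the placement $d_3=|r_4|$, i.e. the center of curvature of the third mirror at the intermediate focal plane, is an assumption made here so that $M=1$.

\begin{verbatim}
# Continuation of the previous sketch; assumes two_mirror_design is in scope.
import math

f1, fcomp, r1, d1, r2, d2 = two_mirror_design(30.0, 1.5, 15.0, 20.0)
D1, Fcomp = 30.0, fcomp / 30.0
f2 = r2 / 2.0                           # convex secondary, f2 < 0

# Third mirror: radius from the vanishing Petzval sum 1/r1 - 1/r2 + 1/r4 = 0.
r4 = -1.0 / (1.0 / r1 - 1.0 / r2)       # about -15.37, concave (r4 < 0)
f4 = abs(r4) / 2.0                      # positive in the single-mirror convention

d3 = abs(r4)                            # assumed placement, chosen to give M = 1
d4 = 1.0 / (2.0 / abs(r4) - 1.0 / d3)   # from 1/d3 + 1/|d4| = 2/|r4|
M = d4 / d3

# Exit pupil, with the primary mirror taken as the entrance pupil.
eta2 = f2 / (d1 - f2)
t2p = d1 * f2 / (d1 - f2)               # t'_2 (virtual image of the pupil)
t4 = d2 + d3 - t2p
eta4 = f4 / (t4 - f4)
R_ep = 0.5 * D1 * eta2 * eta4           # exit-pupil radius
t4p = f4 * (1.0 + eta4)                 # exit-pupil position t'_4

# Limiting radius of the vignetting-free field, expressed in arcminutes.
a = Fcomp * D1 * math.pi / (180.0 * 60.0)
if abs(M - 1.0) < 1e-12:
    Mh = 0.0          # M = 1: 50% obstruction only along the x = 0 axis
elif M > 1.0:
    Mh = R_ep * (M**2 - 1.0) / (1.0 - M * eta4)
else:
    Mh = R_ep * (1.0 - M**2) / (M - eta4)
print(r4, M, R_ep, t4p, Mh / a)
\end{verbatim}

For these inputs the sketch returns $r_4\simeq-15.3745$, matching the radius listed for the third aspheric mirror (surface 9) in table \[lensdat\], together with a vanishing limiting radius, as expected for $M=1$.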
(85mm,53.43 mm)[fig3.eps]{} Example Layout to Cover the Circular Field ========================================== Figure \[jelt3d\] shows an example layout of an all-mirror anastigmat telescope to cover a full circular field of view of $10\arcmin$ radius by two optical branches, each covering a semi-circular field of view. In this figure, only one of the two optical branches is shown, for simplicity. By folding the beam by flat mirrors, M3 and M4, one can have an unvignetted semi-circular focal plane FP reimaged by M5 (third aspheric mirror), and refolded by M6 as shown in figure \[m4-6\], where two optical branches are shown. Table \[lensdat\] gives the lens data of the optical system shown in figure \[jelt3d\]. (85mm,65.96 mm)[fig4.eps]{} (75mm,106.56mm)[fig5.eps]{} A spot diagram out to $10\arcmin$ from the optical axis is shown in figure \[spot\]. Note that the designed spot sizes are smaller than the diffraction circle for a 30m ELT out to $8\arcmin$. Actual manufacturing of such an anastigmat system needs to be further studied. Conclusion ========== The present three-mirror anastigmat telescope system provides a flat focal plane with diffraction-limited imaging capability with minimal vignetting. The magnification factor by the third mirror, $M$, should be designed to be close to unity. Although only a semi-circular field of view can be made unvignetted in one optical train, one can cover the entire circular field without vignetting by deploying two such separate optical trains, each covering a semi-circular field. The present system is similar to the three-mirror anastigmat Korsch system with a $45\degree$ mirror placed at the pupil plane [@kors80] concerning its aberration-free optical performance. However, the magnification factor by the third mirror, $M$, should be large in order to make the vignetting factor small for the Korsch system, whereas the magnification factor by the third mirror should be close to unity in the present system. Therefore, the present system can be used for applications that require a wider field of view. Another merit of the present system is the avoidance of central obscuration. The authors are grateful to Dr. A. Rakich, who kindly pointed out the existence of many important classical and modern papers to be referred in relation to the three-mirror telescope design. They also appreciate comments of Dr. Y. Yamashita on the background of the present work. 
  Number   Surface type   Radius   Thickness   Glass   Distance   Conic   $\phi_v$   Surface
  -------------------------------------------------------------------------------------------
  OBJ      Standard   1.0E$+$040   Infinity   Infinity   0.0
  1        Even Asphere   $-90.0$   $-39.090909$   Mirror M1   15.0   $-0.992036$   1.5E$-$015   0.0
  STOP     Even Asphere   $-13.131313$   34.090909   Mirror M2   1.975613   $-1.412689$   0.0   0.0
  3        Coord Break   0.0   $\cdots$   0.0   45.0   0.0
  4        Standard   1.0E$+$040   0.0   Mirror M3   2.112906   0.0
  5        Coord Break   $-25.0$   $\cdots$   0.0   45.0   0.0
  6        Coord Break   0.0   $\cdots$   0.0   0.0   45.0
  7        Standard   1.0E$+$040   0.0   Mirror M4   1.772806   0.0   3
  8        Coord Break   15.374510   $\cdots$   0.0   0.0   45.0
  9        Even Asphere   $-15.374510$   $-13.374510$   Mirror M5   1.999334   $-0.720989$   0.0   0.0   4
  10       Coord Break   0.0   $\cdots$   0.0   0.0   $-22.5$
  11       Standard   1.0E$+$040   0.0   Mirror M6   1.039940   0.0   6
  12       Coord Break   2.0   $\cdots$   0.0   0.0   $-22.5$
  IMA      $-1.0\mathrm{E}{+}040$   1.185943   0.0
  -------------------------------------------------------------------------------------------

  : Lens data of the optical system shown in figure \[jelt3d\].[]{data-label="lensdat"}

[Figure \[spot\] (fig6.eps): spot diagram out to $10\arcmin$ from the optical axis.]

Matsui, Y. 1972, Lens Sekkeiho (Tokyo: Kyoritu Publishing Co.), in Japanese

Paul, M. 1935, Rev. Opt., A14, 169

Schwarzschild, K. 1905, Investigations into Geometrical Optics II, Theory of Mirror Telescopes, English translation by A. Rakich, $\langle$http://members.iinet.net.au/\~arakich/$\rangle$
{ "pile_set_name": "ArXiv" }
--- abstract: 'The *$p$-fundamental string* of a complex simple Lie algebra is the sequence of irreducible representations having highest weights of the form $k\omega_1+\omega_p$ for $k\geq0$, where $\omega_j$ denotes the $j$-th fundamental weight of the associated root system. For a classical complex Lie algebra, we establish a closed explicit formula for the weight multiplicities of any representation in any $p$-fundamental string.' address: - 'L.: Institut für Mathematik, Humboldt Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany. Permanent affiliation: CIEM–FaMAF (CONICET), Universidad Nacional de Córdoba, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba, Argentina.' - 'R.B.: CIEM–FaMAF (CONICET), Universidad Nacional de Córdoba, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba, Argentina.' author: - 'Emilio A. Lauret, Fiorela Rossi Bertone' date: November 2017 title: Multiplicity formulas for fundamental strings of representations of classical Lie algebras --- Introduction {#sec:intro} ============ Let $\mathfrak g$ be a complex semisimple Lie algebra. We fix a Cartan subalgebra $\mathfrak h$ of $\mathfrak g$. Let $(\pi,V_\pi)$ be a finite-dimensional representation of $\mathfrak g$, that is, a homomorphism $\pi:\mathfrak g\to {\mathfrak{gl}}(V_\pi)$ with $V_\pi$ a complex vector space. An element $\mu\in \mathfrak h^*$ is called a *weight* of $\pi$ if $$V_\pi(\mu):= \{v\in V_\pi: \pi(X)v=\mu(X)v \text{ for all }X\in\mathfrak h\}\neq0.$$ The *multiplicity* of $\mu$ in the representation $\pi$, denoted by $m_\pi(\mu)$, is defined as $\dim V_\pi(\mu)$. There are many formulas in the literature to compute $m_\pi(\mu)$ for arbitrary $\mathfrak g$, $\pi$ and $\mu$. The ones by Freudenthal [@Freudenthal54] and Kostant [@Kostant59] are classical. More recent formulas were given by Lusztig [@Lusztig83], Littelmann [@Littelmann95] and Sahi [@Sahi00]. Although all of them are elegant and powerful theoretical results, they may not be considered *closed explicit expressions*. Moreover, some of them are not well suited to computer implementation (cf. [@Schutzer04thesis], [@Harris12thesis]). In fact, a closed formula is not to be expected in general: a general formula typically involves a sum over a symmetric group (whose cardinality grows quickly with the rank of $\mathfrak g$) or over partitions, or it is recursive, or it is written in terms of combinatorial objects (e.g. Young diagrams as in [@Koike87]), among other possibilities. However, closed explicit expressions are possible for particular choices of $\mathfrak g$ and $\pi$. Obviously, this is the case for ${\mathfrak{sl}}(2,{\mathbb C})$ and $\pi$ any of its irreducible representations (see [@Knapp-book-beyond §I.9]). Furthermore, for a classical Lie algebra $\mathfrak g$, it is not difficult to give expressions for the weight multiplicities of the representations ${\operatorname}{Sym}^k(V_\mathrm{st})$ and $\bigwedge^p (V_\mathrm{st})$, and also for their irreducible components (see for instance Lemmas \[lemCn:extreme\], \[lemDn:extremereps\] and \[lemBn:extremereps\] and Theorem \[thmAn:multip(k,p)\]; these formulas are probably well known, but they are included here for completeness). Here, $V_{\mathrm{st}}$ denotes the standard representation of $\mathfrak g$. A good example of a closed explicit formula in a non-trivial case was given by Cagliero and Tirao [@CaglieroTirao04] for ${\mathfrak{sp}}(2,{\mathbb C})\simeq{\mathfrak{so}}(5,{\mathbb C})$ and $\pi$ arbitrary.
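As a minimal illustration of the kind of closed expression we have in mind, consider the simplest case mentioned above: for ${\mathfrak{sl}}(2,{\mathbb C})$, the irreducible representation with highest weight $k$ has weights $k, k-2, \dots, -k$, each occurring with multiplicity one. The toy snippet below is ours and purely illustrative; it just spells this statement out.

```python
def sl2_weight_multiplicities(k):
    """Weight multiplicities of the (k+1)-dimensional irreducible sl(2,C)-module."""
    return {k - 2 * j: 1 for j in range(k + 1)}

print(sl2_weight_multiplicities(4))   # {4: 1, 2: 1, 0: 1, -2: 1, -4: 1}
```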
To close this description of previous results in this large area, we mention a few recent related works, though the list is far from complete: [@Cochet05], [@BaldoniBeckCochetVergne06], [@Bliem08-thesis], [@Schutzer12], [@Maddox14], [@FernandezGarciaPerelomov2014], [@FernandezGarciaPerelomov2015a], [@FernandezGarciaPerelomov2015b], [@FernandezGarciaPerelomov2017], [@Cavallin17]. The main goal of this article is to give, for each classical complex Lie algebra $\mathfrak g$ of rank $n$, a closed explicit formula for the weight multiplicities of any irreducible representation of $\mathfrak g$ having highest weight $k\omega_1+\omega_p$, for any integers $k\geq0$ and $1\leq p\leq n$. Here, $\omega_1,\dots,\omega_n$ denote the fundamental weights associated to the root system $\Sigma(\mathfrak g,\mathfrak h)$. We call the sequence of irreducible representations of $\mathfrak g$ with highest weights $k\omega_1+\omega_p$ for $k\geq0$ the *$p$-fundamental string*. We will write $\pi_{\lambda}$ for the irreducible representation of $\mathfrak g$ with highest weight $\lambda$. For types ${\mathrm}B_n$, ${\mathrm}C_n$ and ${\mathrm}D_n$ (i.e. ${\mathfrak{so}}(2n+1,{\mathbb C})$, ${\mathfrak{sp}}(n,{\mathbb C})$ and ${\mathfrak{so}}(2n,{\mathbb C})$ respectively), an auxiliary representation $\pi_{k,p}$ is introduced to unify the approach (see Definition \[def:pi\_kp\]). We have that $\pi_{k,p}$ and $\pi_{k \omega_1+\omega_p}$ coincide except for $p=n$ in type ${\mathrm}B_n$ and $p=n-1,n$ in type ${\mathrm}D_n$. The weight multiplicity formulas for $\pi_{k,p}$ are in Theorems \[thmCn:multip(k,p)\], \[thmDn:multip(k,p)\] and \[thmBn:multip(k,p)\] for types ${\mathrm}C_n$, ${\mathrm}D_n$ and ${\mathrm}B_n$ respectively. Their proofs follow the same strategy (see Section \[sec:strategy\]). The formulas for the remaining cases, namely the (spin) representations $\pi_{k \omega_1+\omega_n}$ in type ${\mathrm}B_n$ and $\pi_{k \omega_1+\omega_{n-1}}$, $\pi_{k \omega_1+\omega_n}$ in type ${\mathrm}D_n$, can be found in Theorems \[thmBn:multip(spin)\] and \[thmDn:multip(spin)\] respectively. Given a weight $\mu=\sum_{j=1}^{n} a_j\varepsilon_j$ (see Notation \[notacion\]) of a classical Lie algebra $\mathfrak g$ of type ${\mathrm}B_n$, ${\mathrm}C_n$ or ${\mathrm}D_n$, we set $$\begin{aligned} \label{eq:notation-one-norm} {\|{\mu}\|_1}=\sum_{j=1}^{n} |a_j| \quad\text{and}\quad {Z}(\mu) = \#\{1\leq j\leq n: a_j=0\}. \end{aligned}$$ We call ${\|{\mu}\|_1}$ the *one-norm* of $\mu$. The function ${Z}(\mu)$ counts the number of zero coordinates of $\mu$. It is not difficult to check that, for a fixed $k\geq0$, $m_{\pi_{k\omega_1}}(\mu)$ depends only on ${\|{\mu}\|_1}$. Moreover, it is known that $m_{\pi_{k,p}}(\mu)$ depends only on ${\|{\mu}\|_1}$ and ${Z}(\mu)$ for type ${\mathrm}D_n$ (see [@LMR-onenorm Lem. 3.3]). This property is extended to types ${\mathrm}B_n$ and ${\mathrm}C_n$ as a consequence of their multiplicity formulas. \[cor:depending-one-norm-ceros\] For $\mathfrak g$ a classical Lie algebra of type ${\mathrm}B_n$, ${\mathrm}C_n$ or ${\mathrm}D_n$ and a weight $\mu=\sum_{i=1}^{n} a_i\varepsilon_i$, the multiplicity of $\mu$ in $\pi_{k,p}$ depends only on ${\|{\mu}\|_1}$ and ${Z}(\mu)$. For $\mathfrak g={\mathfrak{sl}}(n+1,{\mathbb C})$ (type ${\mathrm}A_n$), the multiplicity formula for a representation in a fundamental string is in Theorem \[thmAn:multip(k,p)\]. This case is simpler since it follows immediately from basic facts on Young diagrams.
Although this formula should be well known, it is included for completeness. Explicit expressions for the weight multiplicities of a representation in a fundamental string are required in several different areas. The interest of the authors on them comes from their application to spectral geometry. Actually, many multiplicity formulas have already been applied to determine the spectrum of Laplace and Dirac operators on certain locally homogeneous spaces. See Section \[sec:conclusions\] for a detailed account of these applications. It is important to note that all the weight multiplicity formulas obtained in this article have been checked with Sage [@Sage] for many cases. This computer program uses the classical Freudenthal formula. Because of the simplicity of the expressions obtained in the main theorems, the computer takes usually a fraction of a second to calculate the result. Throughout the article we use the convention $\binom{b}{a}=0$ if $a<0$ or $b<a$. The article is organized as follows. Section \[sec:strategy\] explains the method to obtain $m_{\pi_{k,p}}(\mu)$ for types ${\mathrm}B_n$, ${\mathrm}C_n$ and ${\mathrm}D_n$. These cases are considered in Sections \[secBn:multip(k,p)\], \[secCn:multip(k,p)\] and \[secDn:multip(k,p)\] respectively, and type ${\mathrm}A_n$ is in Section \[secAn:multip(k,p)\]. In Section \[sec:conclusions\] we include some conclusions. Strategy {#sec:strategy} ======== In this section, we introduce the abstract method used to find the weight multiplicity formulas for the cases ${\mathrm}B_n$, ${\mathrm}C_n$ and ${\mathrm}D_n$. Throughout this section, $\mathfrak g$ denotes a classical complex Lie algebra of type ${\mathrm}B_n$, ${\mathrm}C_n$ and ${\mathrm}D_n$, namely ${\mathfrak{so}}(2n+1,{\mathbb C})$, ${\mathfrak{sp}}(n,{\mathbb C})$, ${\mathfrak{so}}(2n,{\mathbb C})$, for some $n\geq2$. We first introduce some standard notation. \[notacion\] We fix a Cartan subalgebra $\mathfrak h$ of $\mathfrak g$. Let $\{\varepsilon_{1}, \dots,\varepsilon_{n}\}$ be the standard basis of $\mathfrak h^*$. Thus, the sets of simple roots $\Pi(\mathfrak g,\mathfrak h)$ are given by $\{\varepsilon_1-\varepsilon_2,\dots, \varepsilon_{n-1}-\varepsilon_n,\varepsilon_n\}$ for type ${\mathrm}B_n$, $\{\varepsilon_1-\varepsilon_2,\dots, \varepsilon_{n-1}-\varepsilon_n,2\varepsilon_n\}$ for type ${\mathrm}C_n$, and $\{\varepsilon_1-\varepsilon_2,\dots, \varepsilon_{n-1}-\varepsilon_n,\varepsilon_{n-1}+\varepsilon_n\}$ for type ${\mathrm}D_n$. A precise choice for $\mathfrak h$ and $\varepsilon_j$ will be indicated in each type. We denote by $\Sigma(\mathfrak g,\mathfrak h)$ the set of roots, by $\Sigma^+(\mathfrak g,\mathfrak h)$ the set of positive roots, by $\omega_1,\dots,\omega_n$ the fundamental weights, by $P(\mathfrak g)$ the (integral) weight space of $\mathfrak g$ and by $P^{{+}{+}}(\mathfrak g)$ the set of dominant weights. Let $\mathfrak g_0$ be the compact real form of $\mathfrak g$ associated to $\Sigma(\mathfrak g,\mathfrak h)$, let $G$ be the compact linear group with Lie algebra $\mathfrak g_0$ (e.g. $G={\operatorname{SO}}(2n)$ for type ${\mathrm}D_n$ in place of ${\operatorname{Spin}}(2n)$), and let $T$ be the maximal torus in $G$ corresponding to $\mathfrak h$, that is, the Lie algebra $\mathfrak t$ of $T$ is a real subalgebra of $\mathfrak h$. Write $P(G)$ for the set of $G$-integral weights and $P^{{+}{+}}(G)=P(G)\cap P^{{+}{+}}(\mathfrak g)$. 
By the Highest Weight Theorem, the irreducible representations of $\mathfrak g$ and $G$ are in correspondence with elements in $P^{{+}{+}}(\mathfrak g)$ and $P^{{+}{+}}(G)$ respectively. For $\lambda$ an integral dominant weight, we denote by $\pi_\lambda$ the associated irreducible representation of $\mathfrak g$. We recall that, under Notation \[notacion\], the fundamental weights are: $$\begin{aligned} \text{in type ${\mathrm}B_n$},\qquad \omega_p &= \begin{cases} \varepsilon_1+\dots+\varepsilon_p &\text{if $1\leq p\leq n-1$,}\\ \frac12(\varepsilon_1+\dots+\varepsilon_n) &\text{if $p=n$,} \end{cases} \\ \text{in type ${\mathrm}C_n$},\qquad \omega_p &=\varepsilon_1+\dots+\varepsilon_p \quad\text{for every $1\leq p\leq n$},\\ \text{in type ${\mathrm}D_n$},\qquad \omega_p &= \begin{cases} \varepsilon_1+\dots+\varepsilon_p &\text{if $1\leq p\leq n-2$,}\\ \frac12(\varepsilon_1+\dots+\varepsilon_{n-1}-\varepsilon_{n}) &\text{if $p=n-1$,}\\ \frac12(\varepsilon_1+\dots+\varepsilon_{n-1}+\varepsilon_{n}) &\text{if $p=n$.} \end{cases}\end{aligned}$$ We set $\widetilde \omega_p=\varepsilon_1+\dots+\varepsilon_p$ for any $1\leq p\leq n$. Thus, $\widetilde \omega_p=\omega_p$ excepts for type ${\mathrm}B_n$ and $p=n$ when $\widetilde \omega_{n}=2\omega_n$, and for type ${\mathrm}D_n$ and $p\in\{n-1,n\}$ when $\widetilde \omega_{n-1}=\omega_{n-1}+\omega_n$ and $\widetilde \omega_{n}=2\omega_n$. \[def:pi\_kp\] Let $\mathfrak g$ be a classical Lie algebra of type ${\mathrm}B_n$, ${\mathrm}C_n$ or ${\mathrm}D_n$. For $k\geq0$ and $1\leq p\leq n$ integers, let us denote by $\pi_{k,p}$ the irreducible representation of $\mathfrak g$ with highest weight $k\omega_1+\widetilde \omega_p$, except for $p=n$ and type ${\mathrm}D_n$ when we set $\pi_{k,n}=\pi_{k\omega_1+2\omega_{n-1}}\oplus \pi_{k\omega_1+2\omega_{n}}$. By convention, we set $\pi_{k,0}=0$ for $k\geq0$. We next explain the procedure to determine the multiplicity formula for $\pi_{k,p}$. Step 1 : Obtain the decomposition in irreducible representations of $$\label{eq:sigma_kp} \sigma_{k,p}:=\pi_{k\omega_1}\otimes \pi_{\widetilde \omega_p},$$ and consequently, write $\pi_{k,p}$ in terms of representations of the form in the virtual representation ring. Fortunately, this decomposition is already known and coincides for the types ${\mathrm}B_n$, ${\mathrm}C_n$ and ${\mathrm}D_n$, thus the second requirement has also a uniform statement (see Lemma \[lem:step1\]). Step 2 : Obtain a formula for the weight multiplicities of the extreme cases $\pi_{k\omega_1}$ and $\pi_{\widetilde \omega_p}$. It will be useful to realize these representations inside ${\operatorname}{Sym}^k(V_{\pi_{\omega_1}})$ and $\bigwedge^p(V_{\pi_{\omega_1}})$ respectively. Note that $\pi_{\omega_1}$ is the standard representation. Step 3 : Obtain a closed expression for the weight multiplicities on $\sigma_{k,p}$. This is the hardest step. One has that (see for instance [@Knapp-book-beyond Exercise V.14]) $$\label{eq:multiptensor} m_{\sigma_{k,p}}(\mu) = \sum_{\eta} m_{\pi_{k\omega_1}}(\mu-\eta) \, m_{\pi_{\widetilde \omega_p}}(\eta),$$ where the sum is over the weights of $\pi_{\widetilde \omega_p}$. Then, the multiplicity formulas obtained in Step 2 can be applied. Step 4 : Obtain the weight multiplicity formula for $\pi_{k,p}$. We will replace the formula obtained in Step 3 into the formula obtained in Step 1. The following result works out Step 1. 
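Before stating it, we note that Step 3 is, computationally, nothing more than the convolution displayed above: once the weight multiplicities of the two factors are known, those of the tensor product follow mechanically. The minimal sketch below is ours (not part of the proofs), with weights encoded as coordinate tuples in the basis $\varepsilon_1,\dots,\varepsilon_n$.

```python
from collections import Counter

def tensor_weight_multiplicities(m1, m2):
    """Weight multiplicities of V1 (x) V2 from those of the two factors.

    m1, m2: dictionaries {weight tuple: multiplicity}.  This is exactly the
    convolution sum of Step 3.
    """
    out = Counter()
    for mu1, c1 in m1.items():
        for mu2, c2 in m2.items():
            out[tuple(a + b for a, b in zip(mu1, mu2))] += c1 * c2
    return dict(out)

# toy check for sl(2,C), weights written as 1-tuples: V_2 (x) V_2 = V_4 + V_2 + V_0
V2 = {(2,): 1, (0,): 1, (-2,): 1}
print(tensor_weight_multiplicities(V2, V2))
# {(4,): 1, (2,): 2, (0,): 3, (-2,): 2, (-4,): 1}
```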
\[lem:step1\] Let $\mathfrak g$ be a classical Lie algebra of type ${\mathrm}B_n$, ${\mathrm}C_n$ or ${\mathrm}D_n$ and let $k\geq0$, $1\leq p\leq n$ integers. Then $$\label{eq:funsionrule(sigma)} \sigma_{k,p} = \pi_{k\omega_1}\otimes \pi_{\widetilde \omega_p} = \pi_{k-1,1}\otimes \pi_{0,p} \simeq \pi_{k,p}\oplus \pi_{k-1,p+1} \oplus \pi_{k-2,p}\oplus \pi_{k-1,p-1}.$$ Furthermore, in the virtual ring of representations, we have that $$\label{eq:virtualring(sigma)} \pi_{k,p} = \sum_{j=1}^p (-1)^{j-1} \sum_{i=0}^{j-1} \sigma_{k+j-2i,p-j}.$$ The decomposition is proved in [@KoikeTerada87 page 510, example (3)] by Koike and Terada, though their results are much more general and this particular case was probably already known. We now show . The case $p=1$ is trivial. Indeed, the right hand side equals $\sigma_{k+1,0}=\pi_{k,1}$ by definition. We assume that the formula is valid for values lower than or equal to $p$. By this assumption and we have that $$\begin{aligned} \pi_{k,p+1} &= \sigma_{k+1,p} - \pi_{k+1,p} -\pi_{k-1,p} -\pi_{k,p-1} = \sigma_{k+1,p} - \sum_{j=1}^p (-1)^{j-1}\sum_{i=0}^{j-1} \sigma_{k+1+j-2i,p-j}\\ &\qquad - \sum_{j=1}^p (-1)^{j-1}\sum_{i=0}^{j-1} \sigma_{k-1+j-2i,p-j} - \sum_{j=1}^{p-1} (-1)^{j-1}\sum_{i=0}^{j-1} \sigma_{k+j-2i,p-1-j}.\end{aligned}$$ By making the change of variables $h=j+1$ in the last term, one gets $$\begin{aligned} \pi_{k,p+1} &= \sigma_{k+1,p}- \sum_{j=1}^p (-1)^{j-1}\sum_{i=0}^{j-1} \sigma_{k+1+j-2i,p-j}- \sigma_{k,p-1} - \sum_{j=2}^p (-1)^{j-1} \sigma_{k+1-j,p-j}. \end{aligned}$$ The rest of the proof is straightforward. Type C {#secCn:multip(k,p)} ====== In this section we consider the classical Lie algebra $\mathfrak g$ of type ${\mathrm}C_n$, that is, $\mathfrak g={\mathfrak{sp}}(n,{\mathbb C})$. In this case, according to Notation \[notacion\], $\widetilde \omega_p=\omega_p$ for every $p$, thus $\pi_{k\omega_1+\omega_p} = \pi_{k,p}$. The next theorem gives the explicit expression of $m_{\pi_{k,p}}(\mu)$ for any weight $\mu$. This expression depends on the terms ${\|{\mu}\|_1}$ and ${Z}(\mu)$, introduced in . \[thmCn:multip(k,p)\] Let $\mathfrak g={\mathfrak{sp}}(n,{\mathbb C})$ for some $n\geq2$ and let $k\geq0$, $1\leq p\leq n$ integers. For $\mu\in P(\mathfrak g)$, if $r(\mu):=(k+p-{\|{\mu}\|_1})/2$ is a non-negative integer, then $$\begin{aligned} m_{\pi_{k,p}}(\mu) &= \sum_{j=1}^{p} (-1)^{j-1} \sum_{t=0}^{\lfloor\frac{p-j}{2}\rfloor} \frac{n-p+j+1}{n-p+j+t+1}\binom{n-p+j+2t}{t} \\ &\quad \sum_{\beta=0}^{p-j-2t} 2^{p-j-2t-\beta} \binom{n-{Z}(\mu)}{\beta} \binom{{Z}(\mu)}{p-j-2t-\beta} \\ &\quad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \sum_{i=0}^{j-1} \binom{r(\mu)-i-p+\alpha+t+j+n-1}{n-1}, \end{aligned}$$ and $m_{\pi_{k,p}}(\mu)=0$ otherwise. The rest of this section is devoted to prove this formula following the procedure described in Section \[sec:strategy\]. We first set the notation for this case. 
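Before doing so, we record that the formula in Theorem \[thmCn:multip(k,p)\] is a finite nest of binomial sums and is therefore easy to transcribe for numerical spot checks. The sketch below is ours and is not used in the proofs; it follows the statement verbatim, with the weight $\mu$ given by its integer coordinates $(a_1,\dots,a_n)$ and the convention $\binom{b}{a}=0$ when $a<0$ or $b<a$.

```python
from fractions import Fraction
from math import comb

def binom(b, a):
    """Binomial coefficient with the convention binom(b, a) = 0 if a < 0 or b < a."""
    return comb(b, a) if 0 <= a <= b else 0

def mult_typeC(n, k, p, mu):
    """m_{pi_{k,p}}(mu) for sp(n,C): direct transcription of Theorem thmCn:multip(k,p)."""
    norm = sum(abs(a) for a in mu)      # one-norm of mu
    Z = sum(1 for a in mu if a == 0)    # number of zero coordinates of mu
    if k + p < norm or (k + p - norm) % 2:
        return 0                        # r(mu) is not a non-negative integer
    r = (k + p - norm) // 2
    total = Fraction(0)
    for j in range(1, p + 1):
        sign = (-1) ** (j - 1)
        for t in range((p - j) // 2 + 1):
            c1 = Fraction(n - p + j + 1, n - p + j + t + 1) * binom(n - p + j + 2 * t, t)
            for beta in range(p - j - 2 * t + 1):
                c2 = (2 ** (p - j - 2 * t - beta)
                      * binom(n - Z, beta) * binom(Z, p - j - 2 * t - beta))
                for alpha in range(beta + 1):
                    c3 = binom(beta, alpha) * sum(
                        binom(r - i - p + alpha + t + j + n - 1, n - 1) for i in range(j))
                    total += sign * c1 * c2 * c3
    return int(total)

# spot checks for sp(2,C): fundamental representations of dimensions 4 and 5
print(mult_typeC(2, 0, 1, (1, 0)), mult_typeC(2, 0, 2, (0, 0)), mult_typeC(2, 0, 2, (1, 1)))
# expected output: 1 1 1
```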
Here $G={\operatorname{Sp}}(n,{\mathbb C})\cap {\operatorname{U}}(2n)$ where ${\operatorname{Sp}}(n,{\mathbb C}) = \{g\in{\operatorname{SL}}(2n,{\mathbb C}): g^t J_ng=J_n:=\left(\begin{smallmatrix}0&{\operatorname}{Id}_n\\ -{\operatorname}{Id}_n&0\end{smallmatrix}\right)\}$, $\mathfrak g_0=\mathfrak{sp}(n,{\mathbb C})\cap \mathfrak{u}(2n)$, $$\begin{aligned} \label{eqCn:maximaltorus} T&= \left\{ {\operatorname{diag}}\left(e^{{\mathtt{i}}\theta_1},\dots, e^{{\mathtt{i}}\theta_n},e^{-{\mathtt{i}}\theta_1},\dots, e^{-{\mathtt{i}}\theta_n} \right) :\theta_i\in{\mathbb R}\;\forall\,i \right\},\\ \label{eqCn:subalgCartan} \mathfrak h &= \left\{ {\operatorname{diag}}(\theta_1,\dots,\theta_n,-\theta_1,\dots,-\theta_n): \theta_i\in{\mathbb C}\;\forall\,i \right\},\end{aligned}$$ $\varepsilon_i\big({\operatorname{diag}}\left(\theta_1,\dots,\theta_n,-\theta_1,\dots,-\theta_n \right)\big) =\theta_i$ for each $1\leq i\leq n$, $\Sigma^+(\mathfrak g,\mathfrak h)= \{\varepsilon_i\pm \varepsilon_j: 1\leq i<j\leq n\}\cup\{2\varepsilon_i:1\leq i\leq n\}$, and $$\begin{aligned} P(\mathfrak g) &= P(G)= {\mathbb Z}\varepsilon_1\oplus\dots\oplus{\mathbb Z}\varepsilon_{n},\\ P^{{+}{+}}(\mathfrak g) &= P^{{+}{+}}(G)= \left\{\textstyle\sum_{i}a_i\varepsilon_i \in P(\mathfrak g) :a_1\geq a_2\geq \dots \geq a_{n}\geq0\right\}.\end{aligned}$$ The following well known identities (see for instance [@FultonHarris-book §17.2]) will be useful to show Step 2, $$\begin{aligned} \label{eqCn:extremereps} \pi_{k\omega_1}=\pi_{k\varepsilon_1} &\simeq {\operatorname}{Sym}^k({\mathbb C}^{2n}),& \textstyle\bigwedge^p({\mathbb C}^{2n}) &\simeq \pi_{\omega_p}\oplus \textstyle\bigwedge^{p-2}({\mathbb C}^{2n}),\end{aligned}$$ for any integers $k\geq0$ and $1\leq p\leq n$. Here, ${\mathbb C}^{2n}$ denotes the standard representation of $\mathfrak g={\mathfrak{sp}}(2n,{\mathbb C})$. Since $G={\operatorname{Sp}}(n)$ is simply connected, $\pi_{\lambda}$ descends to a representation of $G$ for any $\lambda\in P^{{+}{+}}(\mathfrak g)$. In what follows we will work with representations of $G$ for simplicity. Thus, $m_{\pi}(\mu) = \dim \{v\in V_\pi : \pi (\exp X) v = e^{\mu(X)}v\quad \forall\, X\in\mathfrak t\}. $ \[lemCn:extreme\] Let $n\geq2$, $\mathfrak g={\mathfrak{sp}}(n,{\mathbb C})$, $k\geq0$, $1\leq p\leq n$ and $\mu=\sum_{j=1}^n a_j\varepsilon_j\in P(\mathfrak g)$. Then $$\begin{aligned} \label{eqCn:multip(k)} m_{\pi_{k\omega_1}}(\mu) &=m_{\pi_{k\varepsilon_1}}(\mu)= \begin{cases} \binom{r(\mu)+n-1}{n-1} & \text{ if }\, r(\mu):=\frac{k-{\|{\mu}\|_1}}{2}\in {\mathbb N}_0,\\ 0 & \text{ otherwise,} \end{cases} \\ m_{\pi_{\omega_p}}(\mu) &= \begin{cases} \frac{n-p+1}{n-p+r(\mu)+1}\binom{n-p+2r(\mu)}{r(\mu)} & \text{if }\,r(\mu):=\frac{p-{\|{\mu}\|_1}}{2}\in {\mathbb N}_0 \text{ and } |a_j|\leq1\;\forall\,j,\\ 0&\text{otherwise.} \end{cases} \label{eqCn:multip(p)} \end{aligned}$$ By , $\pi_{k\varepsilon_1}$ is realized in the space of homogeneous polynomials $\mathcal P_k\simeq {\operatorname}{Sym}^k({\mathbb C}^{2n})$ of degree $k$ in the variables $x_1,\dots,x_{2n}$. The action of $g\in G$ on $f(x)\in \mathcal P_k$ is given by $(\pi_{k\varepsilon_1}(g)\cdot f)(x) = f(g^{-1}x)$, where $x$ denotes the column vector $(x_1,\dots,x_{2n})^t$. The monomials $x_1^{k_1}\dots x_n^{k_n}x_{n+1}^{l_1}\dots x_{2n}^{l_n}$ with $k_1,\dots,k_n,l_1,\dots,l_n$ non-negative integers satisfying that $\sum_{j=1}^{n} k_j+l_j=k$ form a basis of $\mathcal P_k$ given by weight vectors. 
Indeed, one can check that the action of $h={\operatorname{diag}}\left(e^{{\mathtt{i}}\theta_1},\dots, e^{{\mathtt{i}}\theta_n},e^{-{\mathtt{i}}\theta_1},\dots, e^{-{\mathtt{i}}\theta_n} \right) \in T$ on the monomial $x_1^{k_1}\dots x_n^{k_n}x_{n+1}^{l_1}\dots x_{2n}^{l_n}$ is given by multiplication by $ e^{{\mathtt{i}}\sum_{j=1}^n\theta_j(k_j-l_j)}. $ Hence, $x_1^{k_1}\dots x_n^{k_n}x_{n+1}^{l_1}\dots x_{2n}^{l_n}$ is a weight vector of weight $\mu=\sum_{j=1}^n (k_j-l_{j}) \varepsilon_j$. Consequently, the multiplicity of a weight $\mu=\sum_{j=1}^n a_j\varepsilon_j\in\mathcal P(\mathfrak g)$ in $\mathcal P_k$ is the number of different tuples $(k_1,\dots,k_{n},l_1,\dots,l_{n})\in{\mathbb N}_0^{2n}$ satisfying that $\sum_{j=1}^{n} (k_j+ l_j)=k$ and $a_j=k_j-l_{j}$ for all $j$. For such a tuple, we note that $k-{\|{\mu}\|_1}= k-\sum_{i=1}^n |a_i|=2\sum_{i=1}^n \operatorname{min}(k_i,l_i)$. It follows that $\mu$ is a weight of $\mathcal{P}_k$ if and only if $k-{\|{\mu}\|_1}=2r$ with $r$ a non-negative integer. Moreover, its multiplicity is the number of different ways one can write $r$ as an ordered sum of $n$ non-negative integers, which equals $\binom{r+n-1}{n-1}$. This implies . For , we consider the representation $\bigwedge^p({\mathbb C}^{2n})$. The action of $G$ on $\bigwedge^p({\mathbb C}^{2n})$ is given by $ g\cdot v_1\wedge\dots\wedge v_p = (g v_1)\wedge\dots\wedge (g v_p), $ where $gv$ stands for the matrix multiplication between $g\in G\subset {\operatorname{GL}}(2n,{\mathbb C})$ and the column vector $v\in {\mathbb C}^{2n}$. Let $\{e_1,\dots,e_{2n}\}$ denote the canonical basis of ${\mathbb C}^{2n}$. For $I=\{i_1,\dots,i_p\}$ with $1\leq i_1<\dots<i_p\leq 2n$, we write $w_I=e_{i_1}\wedge\dots\wedge e_{i_p}$. Clearly, the set of $w_I$ for all choices of $I$ is a basis of $\bigwedge^p({\mathbb C}^{2n})$. Since $h={\operatorname{diag}}\left(e^{{\mathtt{i}}\theta_1},\dots, e^{{\mathtt{i}}\theta_n} ,e^{-{\mathtt{i}}\theta_1},\dots, e^{-{\mathtt{i}}\theta_n} \right) \in T$ satisfies $h e_j = e^{{\mathtt{i}}\theta_j} e_j$ and $h e_{j+n} = e^{-{\mathtt{i}}\theta_j}e_{j+n}$ for all $1\leq j\leq n$, we see that $w_I$ is a weight vector of weight $\mu=\sum_{j=1}^n a_j\varepsilon_j$ where $$\begin{aligned} \label{eq:weight_exteriorCn} a_j=\begin{cases} 1&\quad\text{if $j\in I$ and $j+n\notin I$,}\\ -1&\quad\text{if $j\notin I$ and $j+n\in I$,}\\ 0&\quad\text{if $j,j+n\in I$ or $j,j+n\notin I$.} \end{cases} \end{aligned}$$ Thus, an arbitrary element $\mu=\sum_j a_j\varepsilon_j\in P(\mathfrak g)$ is a weight of $\bigwedge^p({\mathbb C}^{2n})$ if and only if $|a_j|\leq 1$ for all $j$ and $p-{\|{\mu}\|_1}= 2r$ for some non-negative integer $r$. It remains to determine the multiplicity in $\bigwedge^p({\mathbb C}^{2n})$ of a weight $\mu=\sum_{j=1}^n a_j\varepsilon_j\in P(\mathfrak{g})$ satisfying $|a_j|\leq 1$ for all $j$ and $r:=\frac{p-{\|{\mu}\|_1}}{2}\in{\mathbb N}_0$. Let $I_\mu=\{i:1\leq i\leq n, \, a_i=1\}\cup\{i:n+1\leq i\leq 2n,\, a_{i-n}=-1\}$. The set $I_\mu$ has $p-2r$ elements. For $I=\{i_1,\dots,i_p\}$ with $1\leq i_1<\dots<i_p\leq 2n$, it is a simple matter to check that $w_I$ is a weight vector with weight $\mu$ if and only if $I$ has $p$ elements, $I_\mu\subset I$ and $I$ has the property that $j\in I\smallsetminus I_\mu \iff j+n\in I\smallsetminus I_\mu$ for $1\leq j\leq n$. One can see that there are $\binom{n-p+2r}{r}$ choices for $I$. Hence $ m_{\bigwedge^p({\mathbb C}^{2n})}(\mu) = \binom{n-p+2r}{r}$. 
From , we conclude that $m_{\pi_{\omega_p}}(\mu) = m_{\bigwedge^p({\mathbb C}^{2n})}(\mu) - m_{\bigwedge^{p-2}({\mathbb C}^{2n})}(\mu) = \binom{n-p+2r}{r} - \binom{n-p+2+2r}{r}$ and is proved. We next consider Step 3, namely, a multiplicity formula for $\sigma_{k,p}$. \[lemCn:multip(sigma\_kp)\] Let $n\geq2$, $\mathfrak g={\mathfrak{sp}}(n,{\mathbb C})$, $k\geq0$, $1\leq p<n$, and $\mu\in P(\mathfrak g)$. If $r(\mu):=(k+p-{\|{\mu}\|_1})/2$ is a non-negative integer, then $$\begin{aligned} m_{\sigma_{k,p}}(\mu) &= \sum_{t=0}^{\lfloor{p}/{2}\rfloor} \frac{n-p+1}{n-p+t+1}\binom{n-p+2t}{t}\sum_{\beta=0}^{p-2t} 2^{p-2t-\beta} \binom{n-{Z}(\mu)}{\beta} \binom{{Z}(\mu)}{p-2t-\beta} \\ &\qquad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \binom{r(\mu)-p+\alpha+t+n-1}{n-1}, \end{aligned}$$ and $m_{\sigma_{k,p}}(\mu)=0$ otherwise. Write $r=r(\mu)$ and $\ell={Z}(\mu)$. We may assume that $\mu$ is dominant, thus $\mu=\sum_{j=1}^{n-\ell} a_j\varepsilon_j$ with $a_1\geq \dots \geq a_{n-\ell}>0$ since it has $\ell$ zero-coordinates. In order to use , by Lemma \[lemCn:extreme\], we write the set of weights of $\pi_{\omega_p}$ as $$\mathcal P(\pi_{\omega_p}) := \bigcup_{t=0}^{\lfloor {p}/{2}\rfloor} \;\bigcup_{\beta=0}^{p-2t} \;\bigcup_{\alpha=0}^{\beta} \;\mathcal P_{t,\beta,\alpha}^{(p)}$$ where $$\label{eq:calP} \mathcal P_{t,\beta,\alpha}^{(p)} = \left\{ \sum_{h=1}^{p-2t} b_h\varepsilon_{i_h}: \begin{array}{l} i_1<\dots<i_\beta\leq n-\ell< i_{\beta+1}<\dots<i_{p-2t} \\ b_j=\pm1\quad \forall j,\quad \#\{1\leq j\leq \beta: b_j=1\}=\alpha \end{array} \right\}.$$ A weight $\eta\in\mathcal P_{t,\beta,\alpha}^{(p)}$ has all entries in $\{0,\pm 1\}$ and satisfies ${\|{\eta}\|_1}=p-2t$, thus $m_{\pi_{\omega_p}}(\eta)=\frac{n-p+1}{n-p+t+1}\binom{n-p+2t}{t}$ by . It is a simple matter to check that $$\label{eq:card(P)} \# \mathcal P_{t,\beta,\alpha}^{(p)} = 2^{p-2t-\beta} \binom{n-\ell}{\beta}\binom{\beta}{\alpha} \binom{\ell}{p-2t-\beta}.$$ From , since the triple union above is disjoint, we obtain that $$\begin{aligned} m_{\sigma_{k,p}}(\mu) &= \sum_{t=0}^{\lfloor {p}/{2}\rfloor}\;\sum_{\beta=0}^{p-2t}\; \sum_{\alpha=0}^{\beta} \; \sum_{\eta\in \mathcal P_{t,\beta,\alpha}^{(p)}} m_{\pi_{k\varepsilon_1}}(\mu-\eta) \;m_{\pi_{\omega_p}}(\eta) .\end{aligned}$$ One has that ${\|{\mu-\eta}\|_1} = (k+p-2r) +(\beta-\alpha)-\alpha + (p-2t-\beta) = k-2(r+t+\alpha-p)$ for every $\eta \in \mathcal P_{t,\beta,\alpha}^{(p)}$. If $r\notin {\mathbb N}_{0}$, forces $m_{\pi_{k\varepsilon_1}}(\mu-\eta)=0$ for all $\eta \in \mathcal P_{t,\beta,\alpha}^{(p)}$, consequently $m_{\sigma_{k,p}}(\mu)=0$. Otherwise, $$\begin{aligned} m_{\sigma_{k,p}}(\mu) &= \sum_{t=0}^{\lfloor {p}/{2}\rfloor}\;\sum_{\beta=0}^{p-2t}\; \sum_{\alpha=0}^{\beta} \; \binom{r+t+\alpha-p+n-1}{n-1} \;\frac{n-p+1}{n-p+t+1}\; \binom{n-p+2t}{t} \; \# \mathcal P_{t,\beta,\alpha}^{(p)} \end{aligned}$$ by Lemma \[lemCn:extreme\] . The proof is complete by . Theorem \[thmCn:multip(k,p)\] follows by replacing the multiplicity formula given in Lemma \[lemCn:multip(sigma\_kp)\] into . Type D {#secDn:multip(k,p)} ====== We now consider type ${\mathrm}D_n$, that is, $\mathfrak g={\mathfrak{so}}(2n,{\mathbb C})$ and $G={\operatorname{SO}}(2n)$. We assume that $n\geq2$, so the non-simple case $\mathfrak g={\mathfrak{so}}(4,{\mathbb C})\simeq {\mathfrak{sl}}(2,{\mathbb C})\oplus {\mathfrak{sl}}(2,{\mathbb C})$ is also considered. 
Since $G$ is not simply connected and has a fundamental group of order $2$, the lattice of $G$-integral weights $P(G)$ is strictly included with index $2$ in the weight space $P(\mathfrak g)$. Consequently, a dominant weight $\lambda$ in $P(\mathfrak g)\smallsetminus P(G)$ corresponds to a representation $\pi_{\lambda}$ of ${\operatorname{Spin}}(2n)$, which does not descend to a representation of $G={\operatorname{SO}}(2n)$. In this case, for all $k\geq0$ and $1\leq p\leq n-2$, we have that $$\begin{aligned} \label{eqDn:pi_kp} \pi_{k,p} &=\pi_{k \omega_1+\omega_{p}},& \pi_{k,n-1} &= \pi_{k\omega_1+\omega_{n-1}+\omega_n},& \pi_{k,n} &= \pi_{k\omega_1+2\omega_{n-1}}\oplus \pi_{k\omega_1+2\omega_{n}}.\end{aligned}$$ Each of them descends to a representation of $G$ and its multiplicity formula is established in Theorem \[thmDn:multip(k,p)\]. The remaining cases $\pi_{k \omega_1+\omega_n-1}$ and $\pi_{k \omega_1+\omega_n}$, are spin representations. Their multiplicity formulas were obtained in [@BoldtLauret-onenormDirac Lem. 4.2] and are stated in Theorem \[thmDn:multip(spin)\]. \[thmDn:multip(k,p)\] Let $\mathfrak g={\mathfrak{so}}(2n,{\mathbb C})$ and $G={\operatorname{SO}}(2n)$ for some $n\geq2$ and let $k\geq0$, $1\leq p\leq n$ integers. For $\mu\in P(G)$, if $r(\mu):=(k+p-{\|{\mu}\|_1})/2$ is a non-negative integer, then $$\begin{aligned} m_{\pi_{k,p}}(\mu) &= \sum_{j=1}^{p} (-1)^{j-1} \sum_{t=0}^{\lfloor\frac{p-j}{2}\rfloor} \binom{n-p+j+2t}{t} \sum_{\beta=0}^{p-j-2t} 2^{p-j-2t-\beta} \binom{n-{Z}(\mu)}{\beta} \binom{{Z}(\mu)}{p-j-2t-\beta} \\ &\quad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \sum_{i=0}^{j-1} \binom{r(\mu)-i-p+\alpha+t+j+n-2}{n-2}, \end{aligned}$$ and $m_{\pi_{k,p}}(\mu)=0$ otherwise. Furthermore, $m_{\pi_{k,p}}(\mu)=0$ for every $\mu\in P(\mathfrak g)\smallsetminus P(G)$. \[thmDn:multip(spin)\] Let $\mathfrak g={\mathfrak{so}}(2n,{\mathbb C})$ and $G={\operatorname{SO}}(2n)$ for some $n\geq2$ and let $k\geq0$ an integer. Let $\mu\in P(\mathfrak g)\smallsetminus P(G)$. Write $r(\mu)= k+\frac{n}{2}- {\|{\mu}\|_1}$, then $$\begin{aligned} m_{\pi_{k\omega_1+\omega_{n}}}(\mu) &= \begin{cases} \binom{r(\mu)+n-2}{n-2} &\text{ if }r(\mu)\geq0 \text{ and } {\operatorname}{neg}(\mu)\equiv r(\mu)\pmod 2, \\ 0&\text{ otherwise}, \end{cases} \\ m_{\pi_{k\omega_1+\omega_{n-1}}}(\mu) &= \begin{cases} \binom{r(\mu)+n-2}{n-2} &\text{ if }r(\mu)\geq0 \text{ and } {\operatorname}{neg}(\mu)\equiv r(\mu)+1\pmod 2, \\ 0&\text{ otherwise}, \end{cases} \end{aligned}$$ where ${\operatorname}{neg}(\mu)$ stands for the number of negative entries of $\mu$. Furthermore, $m_{\pi_{k\omega_1+\omega_{n-1}}}(\mu) = m_{\pi_{k\omega_1+\omega_{n}}}(\mu) =0$ for every $\mu\in P(G)$. The proof of Theorem \[thmDn:multip(k,p)\] will follow the steps from Section \[sec:strategy\]. Let us first set the necessary elements introduced in Notation \[notacion\]. Define $\mathfrak h= \left\{ {\operatorname{diag}}\left( \left[\begin{smallmatrix}0&\theta_1\\ -\theta_1&0\end{smallmatrix}\right] , \dots, \left[\begin{smallmatrix}0&\theta_n\\ -\theta_n&0\end{smallmatrix}\right] \right): \theta_i\in{\mathbb C}\;\forall\,i \right\} $ and $ \varepsilon_i\big({\operatorname{diag}}\left( \left[\begin{smallmatrix}0&\theta_1\\ -\theta_1&0\end{smallmatrix}\right] , \dots, \left[\begin{smallmatrix}0&\theta_n\\ -\theta_n&0\end{smallmatrix}\right] \right)\big)=\theta_i $ for each $1\leq i\leq n$. 
Thus $\Sigma^+(\mathfrak g,\mathfrak h)=\{\varepsilon_i\pm\varepsilon_j: i<j\}$, $$\begin{aligned} P(\mathfrak g) &= \{\textstyle \sum_i a_i\varepsilon_i: a_i\in{\mathbb Z}\,\forall i, \text{ or } a_i-1/2\in{\mathbb Z}\,\forall i\},& P(G)&={\mathbb Z}\varepsilon_1\oplus\dots\oplus{\mathbb Z}\varepsilon_{n}, \\ P^{{+}{+}}(\mathfrak g) &=\left\{\textstyle\sum_{i}a_i\varepsilon_i \in P(\mathfrak g) :a_1\geq \dots\geq a_{n-1}\geq |a_n|\right\},& P^{{+}{+}}(G)&= P^{{+}{+}}(\mathfrak g)\cap P(G).\end{aligned}$$ It is now clear that $P(G)$ has index $2$ in $P(\mathfrak g)$. The multiplicity formulas in type ${\mathrm}D_n$ for the extreme representations in Step 2 are already determined. A proof can be found in [@LMR-onenorm Lem. 3.2]. \[lemDn:extremereps\] Let $n\geq2$, $\mathfrak g={\mathfrak{so}}(2n,{\mathbb C})$, $G={\operatorname{SO}}(2n)$, $k\geq0$ and $1\leq p\leq n$. For $\mu=\sum_{j=1}^n a_j\varepsilon_j\in P(G)$, we have that $$\begin{aligned} m_{\pi_{k\omega_1}}(\mu) = &\;m_{\pi_{k\varepsilon_1}}(\mu) = \begin{cases} \binom{r(\mu)+n-2}{n-2} & \text{ if }\, r(\mu):=\frac{k-{\|{\mu}\|_1}}{2} \in{\mathbb N}_0,\\ 0 & \text{ otherwise,} \end{cases} \label{eqDn:multip(k)} \\ m_{\pi_{\widetilde \omega_p}}(\mu) =& \begin{cases} \binom{n-p+2r(\mu)}{r(\mu)} & \text{if }\, r(\mu):=\frac{p-{\|{\mu}\|_1}}{2}\in {\mathbb N}_{0} \text{ and } |a_j|\leq1\;\forall\,j,\\ 0&\text{otherwise.} \end{cases} \label{eqDn:multip(p)} \end{aligned}$$ \[lemDn:multip(sigma\_kp)\] Let $n\geq2$, $\mathfrak g={\mathfrak{so}}(2n,{\mathbb C})$, $G={\operatorname{SO}}(2n)$, $k\geq0$, $1\leq p\leq n-1$, and $\mu\in P(G)$. Write $r(\mu)=(k+p-{\|{\mu}\|_1})/2$. If $r(\mu)$ is a non-negative integer, then $$\begin{aligned} m_{\sigma_{k,p}}(\mu) = &\sum_{t=0}^{\lfloor{p}/{2}\rfloor} \binom{n-p+2t}{t}\sum_{\beta=0}^{p-2t} 2^{p-2t-\beta} \binom{n-{Z}(\mu)}{\beta} \binom{{Z}(\mu)}{p-2t-\beta}\\ &\qquad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \binom{r(\mu) -p+\alpha+t+n-2}{n-2}, \end{aligned}$$ and $m_{\sigma_{k,p}}(\mu)=0$ otherwise. We will omit several details in the rest of the proof since it is very similar to the one of Lemma \[lemCn:multip(sigma\_kp)\]. Write $r=(k+p-{\|{\mu}\|_1})/2$ and $\ell={Z}(\mu)$. We assume that $\mu$ is dominant. Lemma \[lemDn:extremereps\] implies that the set of weights of $\pi_{\widetilde \omega_p}$ is $ \mathcal P(\pi_{\widetilde \omega_p}) := \bigcup_{t=0}^{\lfloor {p}/{2}\rfloor} \;\bigcup_{\beta=0}^{p-2t} \;\bigcup_{\alpha=0}^{\beta} \;\mathcal P_{t,\beta,\alpha}^{(p)}, $ with $\mathcal P_{t,\beta,\alpha}^{(p)}$ as in . One has that ${\|{\mu-\eta}\|_1} = k-2(r+t+\alpha-p)$ for any $\eta\in \mathcal P_{t,\beta,\alpha}^{(p)}$. Hence, and Lemma \[lemDn:extremereps\] imply $m_{\sigma_{k,p}}(\mu)=0$ if $r\notin{\mathbb N}_0$ and $$\begin{aligned} m_{\sigma_{k,p}}(\mu) = &\sum_{t=0}^{\lfloor {p}/{2}\rfloor}\;\sum_{\beta=0}^{p-2t}\; \sum_{\alpha=0}^{\beta} \; \binom{r+t+\alpha-p+n-2}{n-2} \;\binom{n-p+2t}{t} \; \# \mathcal P_{t,\beta,\alpha}^{(p)} \end{aligned}$$ otherwise. The proof follows by . Theorem \[thmDn:multip(k,p)\] then follows by substituting in the multiplicity formula in Lemma \[lemDn:multip(sigma\_kp)\]. By Definition \[def:pi\_kp\], $\pi_{k,n}$ in type ${\mathrm}D_n$ is the only case where $\pi_{k,p}$ is not irreducible. We have that $\pi_{k,n}= \pi_{k\omega_1+\widetilde \omega_{n}} \oplus \pi_{k\omega_1+\widetilde \omega_{n}-2\varepsilon_n} = \pi_{k\omega_1+2\omega_{n-1}} \oplus \pi_{k\omega_1+2\omega_{n}}$ for every $k\geq0$. 
One can obtain the corresponding multiplicity formula for each of these irreducible constituents from Theorem \[thmDn:multip(k,p)\] by proving the following facts. If $\mu\in P(G)$ satisfies ${\|{\mu}\|_1}=k+n$, then $m_{\pi_{k\omega_1+2\omega_n}}(\mu) = m_{\pi_{k,n}}(\mu)$ and $m_{\pi_{k\omega_1+2\omega_{n-1}}}(\mu) = 0$ or $m_{\pi_{k\omega_1+2\omega_n}}(\mu) = 0$ and $m_{\pi_{k\omega_1+2\omega_{n-1}}}(\mu) = m_{\pi_{k,n}}(\mu)$ according $\mu$ has an even or odd number of negative entries respectively. Furthermore, if $\mu\in P(G)$ satisfies ${\|{\mu}\|_1} <k+n$, then $m_{\pi_{k\omega_1+2\omega_n}}(\mu) = m_{\pi_{k\omega_1+2\omega_{n-1}}}(\mu) = {m_{\pi_{k,n}}(\mu)}/{2}$. Type B {#secBn:multip(k,p)} ====== We now consider $\mathfrak g={\mathfrak{so}}(2n+1,{\mathbb C})$ and $G={\operatorname{SO}}(2n+1)$, so $\mathfrak g$ is of type ${\mathrm}B_n$. The same observation in the beginning of Section \[secDn:multip(k,p)\] is valid in this case. Namely, a weight in $P^{{+}{+}}(\mathfrak g) \smallsetminus P^{{+}{+}}(G)$ induces an irreducible representation of ${\operatorname{Spin}}(2n+1)$ which does not descend to $G$. For any $k\geq0$ and $1\leq p\leq n-1$, we have that $$\begin{aligned} \pi_{k,p} &=\pi_{k \omega_1+\omega_{p}},& \pi_{k,n} &=\pi_{k\omega_1+2\omega_{n}}.\end{aligned}$$ All of them descend to representations of $G$. The corresponding multiplicity formula is in Theorem \[thmBn:multip(k,p)\] and the remaining case, $\pi_{k\omega_1+\omega_n}$ for $k\geq0$, is considered in Theorem \[thmBn:multip(spin)\]. \[thmBn:multip(k,p)\] Let $\mathfrak g={\mathfrak{so}}(2n+1)$, $G={\operatorname{SO}}(2n+1)$ for some $n\geq2$ and let $k\geq0$, $1\leq p\leq n$ integers. For $\mu\in P(G)$, write $r(\mu)=k+p-{\|{\mu}\|_1}$, then $$\begin{aligned} m_{\pi_{k,p}}(\mu) = &\sum_{j=1}^{p} (-1)^{j-1} \sum_{t=0}^{\lfloor\frac{p-j}{2}\rfloor} \binom{n-p+j+2t}{t} \\ &\qquad \sum_{\beta=0}^{p-j-2t} 2^{p-j-2t-\beta} \binom{n-{Z}(\mu)}{\beta} \binom{{Z}(\mu)}{p-j-2t-\beta} \\ &\qquad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \sum_{i=0}^{j-1} \binom{\lfloor\frac{r(\mu)}{2}\rfloor-i-p+j+\alpha+t+n-1}{n-1}\\ &+\sum_{j=1}^{p-1} (-1)^{j-1} \sum_{t=0}^{\lfloor\frac{p-j-1}{2}\rfloor} \binom{n-p+j+2t+1}{t} \\ &\qquad \sum_{\beta=0}^{p-j-2t-1} 2^{p-j-2t-\beta-1} \binom{n-{Z}(\mu)}{\beta} \binom{{Z}(\mu)}{p-j-2t-\beta-1} \\ &\qquad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \sum_{i=0}^{j-1} \binom{\lfloor\frac{r(\mu)+1}{2}\rfloor -i-p+j+\alpha+t+n-1}{n-1}. \end{aligned}$$ Furthermore, $m_{\pi_{k,p}}(\mu)=0$ for all $\mu\in P(\mathfrak g)\smallsetminus P(G)$. Notice that, in Theorem \[thmBn:multip(k,p)\], $m_{\pi_{k,p}}(\mu)=0$ if $r(\mu)<0$ because of the convention $\binom{b}{a}=0$ if $b<a$. We will omit most of details since this case is very similar to the previous ones, specially to type ${\mathrm}D_n$. 
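As in type ${\mathrm}C_n$, the statement of Theorem \[thmBn:multip(k,p)\] can be transcribed directly for numerical spot checks. The sketch below is ours and is not used in the proofs; the two outer blocks correspond to the two groups of sums in the theorem, with $\mu$ given by its integer coordinates and the same convention $\binom{b}{a}=0$ if $a<0$ or $b<a$.

```python
from math import comb

def binom(b, a):
    """Binomial coefficient with the convention binom(b, a) = 0 if a < 0 or b < a."""
    return comb(b, a) if 0 <= a <= b else 0

def mult_typeB(n, k, p, mu):
    """m_{pi_{k,p}}(mu) for so(2n+1,C): direct transcription of Theorem thmBn:multip(k,p)."""
    norm = sum(abs(a) for a in mu)
    Z = sum(1 for a in mu if a == 0)
    r = k + p - norm
    total = 0
    # first group of sums
    for j in range(1, p + 1):
        sign = (-1) ** (j - 1)
        for t in range((p - j) // 2 + 1):
            c1 = binom(n - p + j + 2 * t, t)
            for beta in range(p - j - 2 * t + 1):
                c2 = (2 ** (p - j - 2 * t - beta)
                      * binom(n - Z, beta) * binom(Z, p - j - 2 * t - beta))
                for alpha in range(beta + 1):
                    c3 = binom(beta, alpha) * sum(
                        binom(r // 2 - i - p + j + alpha + t + n - 1, n - 1) for i in range(j))
                    total += sign * c1 * c2 * c3
    # second group of sums
    for j in range(1, p):
        sign = (-1) ** (j - 1)
        for t in range((p - j - 1) // 2 + 1):
            c1 = binom(n - p + j + 2 * t + 1, t)
            for beta in range(p - j - 2 * t):
                c2 = (2 ** (p - j - 2 * t - beta - 1)
                      * binom(n - Z, beta) * binom(Z, p - j - 2 * t - beta - 1))
                for alpha in range(beta + 1):
                    c3 = binom(beta, alpha) * sum(
                        binom((r + 1) // 2 - i - p + j + alpha + t + n - 1, n - 1) for i in range(j))
                    total += sign * c1 * c2 * c3
    return total

# spot checks for so(5,C): standard (5-dimensional) and adjoint (10-dimensional) representations
print(mult_typeB(2, 0, 1, (0, 0)), mult_typeB(2, 0, 2, (0, 0)), mult_typeB(2, 0, 2, (1, 0)))
# expected output: 1 2 1
```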
According to Notation \[notacion\], we set $\mathfrak h= \left\{ {\operatorname{diag}}\left( \left[\begin{smallmatrix}0&\theta_1\\ -\theta_1&0\end{smallmatrix}\right] , \dots, \left[\begin{smallmatrix}0&\theta_n\\ -\theta_n&0\end{smallmatrix}\right],0 \right): \theta_i\in{\mathbb C}\;\forall\,i \right\}$, $ \varepsilon_i\big({\operatorname{diag}}\left( \left[\begin{smallmatrix}0&\theta_1\\ -\theta_1&0\end{smallmatrix}\right] , \dots, \left[\begin{smallmatrix}0&\theta_n\\ -\theta_n&0\end{smallmatrix}\right],0 \right)\big)=\theta_i $ for each $1\leq i\leq n$, $\Sigma^+(\mathfrak g,\mathfrak h)=\{\varepsilon_i\pm\varepsilon_j: i<j\}\cup\{\varepsilon_i\}$, $$\begin{aligned} P(\mathfrak g) &= \{\textstyle \sum_i a_i\varepsilon_i: a_i\in{\mathbb Z}\,\forall i, \text{ or } a_i-1/2\in{\mathbb Z}\,\forall i\},& P(G)&={\mathbb Z}\varepsilon_1\oplus\dots\oplus{\mathbb Z}\varepsilon_{n}, \\ P^{{+}{+}}(\mathfrak g) &=\left\{\textstyle\sum_{i}a_i\varepsilon_i \in P(\mathfrak g) :a_1\geq a_{2}\geq \dots\geq a_{n}\geq0\right\},& P^{{+}{+}}(G)&= P^{{+}{+}}(\mathfrak g)\cap P(G).\end{aligned}$$ It is well known that (see [@Knapp-book-beyond Exercises IV.10 and V.8]) $$\begin{aligned} \label{eqBn:extremereps} {\operatorname}{Sym}^k({\mathbb C}^{2n+1}) &\simeq \pi_{k\omega_1}\oplus {\operatorname}{Sym}^{k-2}({\mathbb C}^{2n+1}),& \pi_{\widetilde \omega_p}& \simeq \textstyle \bigwedge^p({\mathbb C}^{2n+1}), \end{aligned}$$ where ${\mathbb C}^{2n+1}$ denotes the standard representation of $\mathfrak g$. Actually, $\pi_{k\omega_1}$ can be realized inside ${\operatorname}{Sym}^k({\mathbb C}^{2n+1})$ as the subspace of harmonic homogeneous polynomials of degree $k$. \[lemBn:extremereps\] Let $n\geq2$, $\mathfrak g={\mathfrak{so}}(2n+1,{\mathbb C})$, $G={\operatorname{SO}}(2n+1)$, $k\geq0$ and $1\leq p\leq n$. For $\mu=\sum_{j=1}^n a_j\varepsilon_j\in P(G)$, we have that $$\begin{aligned} m_{\pi_{k\omega_1}}(\mu) = &\;m_{\pi_{k\varepsilon_1}}(\mu) = \tbinom{r(\mu)+n-1}{n-1} \quad \text{ where } r(\mu)=\lfloor \tfrac{k-{\|{\mu}\|_1}}{2}\rfloor, \label{eqBn:multip(k)} \\ m_{\pi_{\widetilde \omega_p}}(\mu) =& \begin{cases} \binom{n-p+r(\mu)}{\lfloor {r(\mu)}/{2}\rfloor} & \text{if }\, |a_j|\leq1\;\forall\,j, \\ 0&\text{otherwise,} \end{cases}\qquad\text{where $r(\mu)=p-{\|{\mu}\|_1}$}. \label{eqBn:multip(p)} \end{aligned}$$ Let $\mathcal P_k$ be the space of complex homogeneous polynomials of degree $k$ in the variables $x_1,\dots,x_{2n+1}$. Set $f_j=x_{2j-1}+ix_{2j}$ and $g_{j}= x_{2j-1}-ix_{2j}$ for $1\leq j\leq n$. One can check that the polynomials $ f_1^{k_1}\dots f_n^{k_n} g_1^{l_1}\dots g_{n}^{l_{n}}x_{2n+1}^{k_0} $ with $k_0,\dots,k_n,l_1,\dots,l_n$ non-negative integers satisfying that $\sum_{j=0}^{n} k_j+\sum_{j=1}^{n} l_j=k$ form a basis of $\mathcal P_k$ given by weight vectors, each of them of weight $\mu=\sum_{j=1}^n (k_j-l_{j})\varepsilon_j$. Notice that the number $k_0$ does not take part of $\mu$. Consequently, $m_{\pi_{\mathcal P_k}}(\mu)$ for $\mu=\sum_{j=1}^n a_j\varepsilon_j$ is the number of tuples $(k_0,\dots,k_{n}, l_1,\dots,l_{n})\in {\mathbb N}_0^{2n+1}$ satisfying that $a_j=k_j-l_{j}$ for all $1\leq j\leq n$ and $$\label{eqBn:conditionweightP_k} \sum_{j=0}^{n} k_j+\sum_{j=1}^{n} l_j=k.$$ Note that implies $k-{\|{\mu}\|_1}-k_0=2s$ for some integer $s\geq0$. We fix an integer $s$ satisfying $0\leq s\leq r:=\lfloor (k-{\|{\mu}\|_1})/2\rfloor $. Set $k_0=k-{\|{\mu}\|_1}-2s\geq0$. 
As in the proof of Lemma \[lemCn:extreme\], the number of $(k_1,\dots,k_n,l_1,\dots,l_n)\in {\mathbb N}_0^{2n}$ satisfying that $a_j=k_j-l_j$ for all $1\leq j\leq n$ and is equal to $\binom{s+n-1}{n-1}$. Hence, $$m_{\mathcal P_k}(\mu) = \sum_{s=0}^{r} \binom{s+n-1}{n-1}= \binom{r+n}{n}.$$ The second equality is well known. It may be proven by showing that both sides are the $r$-term of the generating function $(1-z)^{-(n+1)}$. From we conclude that $m_{\pi_{k\varepsilon_1}}(\mu) = m_{{\mathcal P}_k}(\mu) - m_{{\mathcal P}_{k-2}}(\mu) = \binom{r+n}{n}- \binom{r-1+n}{n} = \binom{r+n-1}{n-1}$. We have that $\pi_{\widetilde\omega_p}\simeq \bigwedge^p({\mathbb C}^{2n+1})$ by . By setting $v_j=e_{2j-1}-i e_{2j}$, $v_{j+n}=e_{2j-1}+i e_{2j}$ and $v_{2n+1}=e_{2n+1}$, one obtains that the vectors $w_I:=v_{i_1}\wedge \dots\wedge v_{i_p}$ for $I=\{i_1,\dots,i_p\}$ satisfying $1\leq i_1<\dots<i_p\leq 2n+1$, form a basis of $\bigwedge^p({\mathbb C}^{2n+1})$. Furthermore, $w_I$ is a weight vector of weight $\mu=\sum_{j=1}^n a_j\varepsilon_j$ given by . Note that the condition of $2n+1$ being or not in $I$ does not influence on $\mu$. Hence, $\mu=\sum_j a_j\varepsilon_j$ is a weight of $\bigwedge^p({\mathbb C}^{2n+1})$ if and only if $|a_j|\leq 1$ for all $j$ and $p-{\|{\mu}\|_1}\geq0$. Proceeding as in Lemma \[lemCn:extreme\], by writing $s=\lfloor \frac{p-{\|{\mu}\|_1}}{2} \rfloor\geq0$, the multiplicity of $\mu$ is $\binom{n-p+2s}{s}$ if $p-{\|{\mu}\|_1}$ is even and $\binom{n-p+2s+1}{s}$ if $p-{\|{\mu}\|_1}$ is odd. \[thmBn:multip(spin)\] Let $\mathfrak g={\mathfrak{so}}(2n+1,{\mathbb C})$ and $G={\operatorname{SO}}(2n+1)$ for some $n\geq2$ and let $k\geq0$ an integer. Let $\mu\in P(\mathfrak g)\smallsetminus P(G)$. Write $r(\mu)=k+\frac{n}{2}-{\|{\mu}\|_1}$, then $$\label{eqBn:multip(spin)} m_{\pi_{k\omega_1+\omega_n}}(\mu) =\binom{r(\mu)+n-1}{n-1}.$$ Furthermore, $m_{k\omega_1+\omega_n}(\mu)=0$ for all $\mu\in P(G)$. This proof is very similar to [@BoldtLauret-onenormDirac Lem. 4.2]. The assertion $m_{k\omega_1+\omega_n}(\mu)=0$ for every $\mu\in P(G)$ is clear since any weight of $\pi_{k\omega_1+\omega_n}$ is equal to the highest weight $k\omega_1+\omega_n$ minus a sum of positive roots, which clearly lies in $P(\mathfrak g)\smallsetminus P(G)$. Let $\mu\in P(\mathfrak g)\smallsetminus P(G)$. We may assume that $\mu$ is dominant, thus $\mu=\frac{1}{2}\sum_{i=1}^n a_i\varepsilon_i$ with $a_1\geq \dots \geq a_n \geq1$ odd integers. One has that $$\begin{aligned} \label{eq:fusionrule_spin} \pi_{k \omega_1}\otimes \pi_{\omega_n} \simeq \pi_{k \omega_1+\omega_n} \oplus \pi_{(k-1) \omega_1+\omega_n} \end{aligned}$$ for any $k\geq1$. Indeed, it follows immediately by applying the formula in [@Knapp-book-beyond Exercise V.19] since in its sum over the weights of $\pi_{\omega_n}$, the only non-zero terms are attained at the weights $\omega_n$ and $\omega_n-\omega_1$. It is well known that the set of weights of $\pi_{\omega_n}$ is $\mathcal{P}(\pi_{\omega_n}) :=\{ \frac{1}{2}\sum_{i=1}^n b_i\varepsilon_i: |b_i|=1\}$ and $m_{\pi_{\omega_n}}(\nu)=1$ for all $\nu\in \mathcal{P}(\pi_{\omega_n})$ (see for instance [@Knapp-book-beyond Exercise V.35]). We proceed now to prove by induction on $k$. It is clear for $k=0$ by the previous paragraph. Suppose that it holds for $k-1$. 
By this assumption and , we obtain that $$\label{eqBn:multip(tensorspin)} m_{\pi_{k\omega_1+\omega_n}}(\mu)= m_{\pi_{k\omega_1}\otimes \pi_{\omega_n}}(\mu)-m_{\pi_{(k-1)\omega_1+\omega_n}}(\mu) = m_{\pi_{k\omega_1}\otimes \pi_{\omega_n}}(\mu)- \binom{r+n-2}{n-1},$$ where $r=k+\frac{n}{2}-{\|{\mu}\|_1}$. It only remains to prove that $m_{\pi_{k\omega_1}\otimes \pi_{\omega_n}}(\mu)=\binom{r+n-1}{n-1}+\binom{r+n-2}{n-1}$. Similarly to , we have that $m_{\pi_{k\omega_1}\otimes \pi_{\omega_n}}(\mu)=\sum_{\eta\in\mathcal{P}(\pi_{\omega_n})} m_{\pi_{k\omega_1}}(\mu-\eta)$. Since $\mu$ is dominant, for any $\eta=\frac{1}{2}\sum_{i=1}^n b_i\varepsilon_i\in\mathcal{P}(\pi_{\omega_n})$, it follows that $${\|{\mu-\eta}\|_1}= \frac{1}{2}\sum_{i=1}^n (a_i-b_i) = {\|{\mu}\|_1}+\frac{n}{2}-\ell_1(\eta)= k-r+n-\ell_1(\eta),$$ where $\ell_1(\eta)=\#\{1\leq i\leq n: b_i=1\}$. By Lemma \[lemBn:extremereps\], $m_{\pi_{k \omega_1}}(\mu-\eta)\neq0$ only if $r +\ell_1(\eta)-n\geq0$. For each integer $\ell_1$ satisfying $n-r\leq\ell_1\leq n$, there are $\binom{n}{\ell_1}$ weights $\eta\in \mathcal{P}(\pi_{\omega_n})$ such that $\ell_1(\eta)=\ell_1$. On account of the above remarks, $$\begin{aligned} \label{eq:multiptensor_spin} m_{\pi_{k\omega_1}\otimes \pi_{\omega_n}}(\mu)=& \sum_{\ell_1=n-r}^{n} \binom{\lfloor \frac{r+\ell_1-n}{2}\rfloor +n-1}{n-1} \binom{n}{\ell_1}= \sum_{j=0}^{r} \binom{\lfloor \frac{r-j}{2}\rfloor +n-1}{n-1} \binom{n}{j}. \end{aligned}$$ We claim that the last term in equals $\binom{r+n-1}{n-1}+\binom{r+n-2}{n-1}$. Indeed, a simple verification shows that both numbers are the $r$-th term of the generating function $\frac{1+z}{(1-z)^n}$. From and we conclude that $m_{\pi_{k\omega_1+\omega_n}}(\mu)= \binom{r+n-1}{n-1}$ as asserted. \[lemBn:multip(sigma\_kp)\] Let $n\geq2$, $\mathfrak g={\mathfrak{so}}(2n+1,{\mathbb C})$, $G={\operatorname{SO}}(2n+1)$, $k\geq0$, $1\leq p<n$, and $\mu\in P(G)$. Write $r(\mu)=k+p-{\|{\mu}\|_1}$. Then $$\begin{aligned} m_{\sigma_{k,p}}(\mu) = &\sum_{t=0}^{\lfloor{p}/{2}\rfloor} \binom{n-p+2t}{t}\sum_{\beta=0}^{p-2t} 2^{p-2t-\beta} \binom{n-{Z}(\mu)}{\beta} \binom{{Z}(\mu)}{p-2t-\beta}\\ &\qquad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \binom{\lfloor\frac{r(\mu)}{2}\rfloor-p+\alpha+t+n-1}{n-1}\\ &+\sum_{t=0}^{\lfloor{(p-1)}/{2}\rfloor} \binom{n-p+1+2t}{t}\sum_{\beta=0}^{p-1-2t} 2^{p-1-2t-\beta} \binom{n-{Z}(\mu)}{\beta} \binom{{Z}(\mu)}{p-1-2t-\beta}\\ &\qquad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \binom{\lfloor\frac{r(\mu)+1}{2}\rfloor-p+\alpha+t+n-1}{n-1}. \end{aligned}$$ Write $r=k+p-{\|{\mu}\|_1}$ and $\ell={Z}(\mu)$ and assume $\mu$ dominant. Define $\mathcal P_{t,\beta,\alpha}^{(p)}$ as in . 
From Lemma \[lemBn:extremereps\], we deduce that the set of weights of $\pi_{\widetilde \omega_p}$ is $$\mathcal P(\pi_{\widetilde \omega_p}) := \big(\bigcup_{t=0}^{\lfloor {p}/{2}\rfloor} \;\bigcup_{\beta=0}^{p-2t} \;\bigcup_{\alpha=0}^{\beta} \;\mathcal P_{t,\beta,\alpha}^{(p)} \big) \cup \big(\bigcup_{t=0}^{\lfloor {p-1}/{2}\rfloor} \;\bigcup_{\beta=0}^{p-1-2t} \;\bigcup_{\alpha=0}^{\beta} \;\mathcal P_{t,\beta,\alpha}^{(p-1)}\big).$$ This fact and give $$\begin{aligned} m_{\sigma_{k,p}}(\mu) = &\sum_{t=0}^{\lfloor {p}/{2}\rfloor}\;\sum_{\beta=0}^{p-2t}\; \sum_{\alpha=0}^{\beta} \; \binom{\lfloor \frac{r}{2}\rfloor+t+\alpha-p+n-1}{n-1} \;\binom{n-p+2t}{t} \; \# \mathcal P_{t,\beta,\alpha}^{(p)} \\ & +\sum_{t=0}^{\lfloor {(p-1)}/{2}\rfloor}\;\sum_{\beta=0}^{p-1-2t}\; \sum_{\alpha=0}^{\beta} \; \binom{\lfloor \frac{r-1}{2}\rfloor+t+\alpha-p+n}{n-1} \;\binom{n-p+1+2t}{t} \; \# \mathcal P_{t,\beta,\alpha}^{(p-1)}, \end{aligned}$$ since ${\|{\mu-\eta}\|_1}= k-r-2(t+\alpha-p)$ for all $\eta\in\mathcal P_{t,\beta,\alpha}^{(p)}$ and ${\|{\mu-\eta}\|_1}= k-r-2(t+\alpha-p)-1$ for all $\eta\in\mathcal P_{t,\beta,\alpha}^{(p-1)}$. The proof follows by . Lemmas \[lem:step1\] and \[lemBn:multip(sigma\_kp)\] complete the proof of Theorem \[thmBn:multip(k,p)\]. Type A {#secAn:multip(k,p)} ====== Type ${\mathrm}A_n$ is the simplest case to compute the weight multiplicity formula of $\pi_{k,p}$. Actually, it follows immediately by standard calculations using Young diagrams. We include this formula to complete the list of all classical simple Lie algebras. We consider in $\mathfrak g={\mathfrak{sl}}(n+1,{\mathbb C})$, $ \mathfrak h =\{{\operatorname{diag}}\big(\theta_1,\dots,\theta_{n+1}\big) : \theta_i\in{\mathbb C}\;\forall\, i,\; \sum_{i=1}^{n+1}\theta_i=0\}. $ We set $\varepsilon_i\big({\operatorname{diag}}(\theta_1,\dots,\theta_{n+1})\big)= \theta_i$ for each $1\leq i\leq n+1$. We will use the conventions of [@FultonHarris-book Lecture 15]. Thus $$\mathfrak h^* = \bigoplus_{i=1}^{n+1} {\mathbb C}\varepsilon_i / \langle \textstyle\sum\limits_{i=1}^{n+1}\varepsilon_i=0 \rangle,$$ the set of positive roots is $\Sigma^+(\mathfrak g,\mathfrak h)=\{\varepsilon_i-\varepsilon_j: 1\leq i<j\leq n+1\}$, and the weight lattice is $ P(\mathfrak g)=\bigoplus_{i=1}^{n+1} {\mathbb Z}\varepsilon_i / \langle \textstyle\sum\limits_{i=1}^{n+1}\varepsilon_i=0 \rangle. $ By abuse of notation, we use the same letter $\varepsilon_i$ for the image of $\varepsilon_i$ in $\mathfrak h^*$. A weight $\mu=\sum_{i=1}^{n+1}a_i\varepsilon_i$ is dominant if $a_1\geq a_2\geq \dots \geq a_{n+1}$. The representations having highest weights $\lambda=\sum_{i=1}^{n+1} a_i\varepsilon_i$ and $\mu=\sum_{i=1}^{n+1} b_i\varepsilon_i$ are isomorphic if and only if $a_i-b_i$ is constant, independent of $i$. Consequently, we can restrict to those $\lambda=\sum_{i=1}^{n+1} a_i\varepsilon_i$ with $a_{n+1}=0$. Then, $$P^{++}(\mathfrak g)= \left\{ \textstyle\sum\limits_{i=1}^n a_i\varepsilon_i\in P(\mathfrak g): a_1\geq a_2\geq \dots \geq a_{n}\geq 0 \right\}.$$ The corresponding fundamental weights are given by $\omega_p=\varepsilon_{1} + \dots + \varepsilon_p$ for each $1\leq p\leq n$. It is well known that, for $\lambda\in P^{++}(\mathfrak g)$ and $\mu$ a weight of $\pi_{\lambda}$, one can assume that $\mu=\sum_{i=1}^{n+1} a_i \varepsilon_i$ with $a_i\in {\mathbb N}_0$ for all $i$ and $\sum_{i=1}^{n+1} a_i={\|{\lambda}\|_1}$. 
\[thmAn:multip(k,p)\] Let $\mathfrak g={\mathfrak{sl}}(n+1,{\mathbb C})$ for some $n\geq1$ and let $k\geq0$, $1\leq p\leq n$ integers. Let $\mu=\sum_{i=1}^{n+1} a_i \varepsilon_i\in P(\mathfrak g)$ with $a_i\in {\mathbb N}_0$ for all $i$ and $\sum_{i=1}^{n+1} a_i=k+p$. If $a_1+a_2+\dots +a_j\leq k+j$ for all $1\leq j\leq p$, then $$\begin{aligned} m_{\pi_{k\omega_1+\omega_p}}(\mu) &= \binom{n- {Z}(\mu)}{p-1}, \end{aligned}$$ and $m_{\pi_{k,p}}(\mu)=0$ otherwise. The Young diagram corresponding to the representation $\pi_{k \omega_1+\omega_p}$ is the diagram with $p$ rows, having all length $1$, excepting the first one which has length $k+1$. It is well known that the multiplicity of the weight $\mu$ in this representation is equal to the number of ways one can fill its Young diagram with $a_1$ $1$’s, $a_2$ $2$’s, $\dots$, $a_{n+1}$ $(n+1)$’s, in such a way that the entries in the first row are non-decreasing and those in the first column are strictly increasing (see for instance [@FultonHarris-book §15.3]). Consequently, the multiplicity of $\mu$ is equal to the number of ways of filling the first column. Since the first entry is uniquely determined, one has to choose $p-1$ different numbers for the rest of the entries. Hence, the theorem follows. Concluding remarks {#sec:conclusions} ================== For a classical complex Lie algebra $\mathfrak g$, it has been shown a closed explicit formula for the weight multiplicities of a representation in any $p$-fundamental string, namely, any irreducible representation of $\mathfrak g$ having highest weight $k\omega_1+\omega_p$, for some integers $k\geq0$ and $1\leq p\leq n$. When $\mathfrak g$ is of type ${\mathrm}A_n$, the proof was quite simple and the corresponding formula could be probably established from a more general result. In the authors’ best knowledge, the obtained expressions of the weight multiplicities for types ${\mathrm}B_n$, ${\mathrm}C_n$ and ${\mathrm}D_n$ are new, except for small values of $n$, probably $n\leq 3$. Although the formulas in Theorem \[thmCn:multip(k,p)\], \[thmDn:multip(k,p)\] and \[thmBn:multip(k,p)\] (types ${\mathrm}C_n$, ${\mathrm}D_n$ and ${\mathrm}B_n$ respectively) look complicated and long, they are easily handled in practice. It is important to note that all sums are over (integer) intervals, without including any sum over partitions or permutations. Furthermore, there are only combinatorial numbers in each term. Consequently, it is a simple matter to implement them in a computer program, obtaining a very fast algorithm even when the rank $n$ of the Lie algebra is very large. Moreover, for $p$ and a weight $\mu$ fixed, the formulas become a quasi-polynomial on $k$. This fact was already predicted and follows by the Kostant Multiplicity Formula, such as M. Vergne pointed out to Kumar and Prasad in [@KumarPrasad14] (see also [@MeinrenkenSjamaar99], [@Bliem10]). For instance, when $\mathfrak g={\mathfrak{so}}(2n,{\mathbb C})$ (type ${\mathrm}D_n$), Theorem \[thmDn:multip(k,p)\] ensures that $$m_{\pi_{k\omega_1}}(\mu) = \begin{cases} \binom{\frac{k-{\|{\mu}\|_1}}{2}+n-2}{n-2} &\text{if $k\geq{\|{\mu}\|_1}$ and $k\equiv{\|{\mu}\|_1} \pmod 2$,} \\ 0 &\text{otherwise.} \end{cases}$$ Consequently, the generating function encoding the numbers $\{m_{\pi_{k\omega_1}}(\mu):k\geq0\}$ is a rational function. 
Indeed, $$\sum_{k\geq0} m_{\pi_{k\omega_1}}(\mu) z^k = \sum_{k\geq0} m_{\pi_{(2k+{\|{\mu}\|_1})\omega_1}}(\mu) z^{2k+{\|{\mu}\|_1}} = \frac{z^{{\|{\mu}\|_1}}}{(1-z^2)^{n-1}}.$$ From a different point of view, for fixed integers $k$ and $p$, the formulas are quasi-polynomials in the variables ${\|{\mu}\|_1}$ and ${Z}(\mu)$. We end the article with a summary of past (and possible future) applications of multiplicity formulas in spectral geometry. We consider a locally homogeneous space $\Gamma{\backslash}G/K$ with the (induced) standard metric, where $G$ is a compact semisimple Lie group, $K$ is a closed subgroup of $G$ and $\Gamma$ is a finite subgroup of the maximal torus $T$ of $G$. When $G={\operatorname{SO}}(2n)$, $K={\operatorname{SO}}(2n-1)$ and $\Gamma$ is cyclic acting freely on $G/K\simeq S^{2n-1}$, we obtain a *lens space*. In order to determine explicitly the spectrum of a (natural) differential operator acting on smooth sections of a (natural) vector bundle on $\Gamma{\backslash}G/K$ (e.g. Laplace–Beltrami operator, Hodge–Laplace operator on $p$-form, Dirac operator), one has to calculate —among other things— numbers of the form $\dim V_\pi^\Gamma$ for $\pi$ in a subset of the unitary dual $\widehat G$ depending on the differential operator. Since $\Gamma\subset T$, $\dim V_\pi^\Gamma$ can be computed by counting the $\Gamma$-invariant weights in $\pi$ according to its multiplicity, so the problem is reduced to know $m_\pi(\mu)$. At the moment, some weight multiplicity formulas have been successfully applied to the problem described above. The multiplicity formula for $\pi_{k\omega_1}$ in type ${\mathrm}D_n$ (Lemma \[lemDn:extremereps\]) was used by Miatello, Rossetti and the first named author in [@LMR-onenorm] to determine the spectrum of the Laplace–Beltrami operator on a lens space. Furthermore, Corollary \[cor:depending-one-norm-ceros\] for type ${\mathrm}D_n$ was shown in the same article ([@LMR-onenorm Lem. 3.3]) obtaining a characterization of lens spaces $p$-isospectral for all $p$ (i.e.  their Hodge–Laplace operators on $p$-forms have the same spectra). Later, Boldt and the first named author considered in [@BoldtLauret-onenormDirac] the Dirac operator on odd-dimensional spin lens spaces. In this work, it was obtained and used Theorem \[thmDn:multip(spin)\], namely, the multiplicity formula for type ${\mathrm}D_n$ of the spin representations $\pi_{k\omega_1+\omega_{n-1}}$ and $\pi_{k \omega_1+\omega_{n}}$. As a continuation of the study begun in [@LMR-onenorm], Theorem \[thmDn:multip(k,p)\] was applied in the preprint [@Lauret-pspectralens] to determine explicitly every $p$-spectra of a lens space. Here, as usual, $p$-spectrum stands for the spectrum of the Hodge–Laplace operator acting on smooth $p$-forms. The article [@Lauret-pspectralens] was the motivation to write the present paper. The remaining formulas in the article may be used with the same goal. Actually, any application of the formulas for type ${\mathrm}D_n$ can be translated to an analogue application for type ${\mathrm}B_{n-1}$, working in spaces covered by $S^{2n-2}$ in place of $S^{2n-1}$ (cf. [@IkedaTaniguchi78 §4]). This was partially done in [@Lauret-spec0cyclic], by applying Lemma \[lemBn:extremereps\]. The result extends [@LMR-onenorm] (for the Laplace–Beltrami operator) to even-dimensional lens orbifolds. A different but feasible application can be done for type ${\mathrm}A_n$. 
For type ${\mathrm}A_n$, one may consider the complex projective space $P^n({\mathbb C})={\operatorname{SU}}(n+1)/{\operatorname{S}}({\operatorname{U}}(n)\times{\operatorname{U}}(1))$. However, more general representations must be used. Indeed, the Laplace–Beltrami operator was considered in [@Lauret-spec0cyclic], and the representations involved had highest weights $k(\omega_1+\omega_n)$ for $k\geq0$. Theorem \[thmCn:multip(k,p)\] (type ${\mathrm}C_n$) does not have an immediate application, since the spherical representations of the symmetric space ${\operatorname{Sp}}(n)/({\operatorname{Sp}}(n-1)\times{\operatorname{Sp}}(1))$ have highest weights of the form $k\omega_2$ for $k\geq0$. Maddox [@Maddox14] obtained a multiplicity formula for these representations; however, this expression is not explicit enough to be applied to this problem. An exception was the case $n=2$, where the closed multiplicity formula of [@CaglieroTirao04] was applied in [@Lauret-spec0cyclic]. It is not known to the authors whether there is a closed subgroup $K$ of $G={\operatorname{Sp}}(n)$ such that the spherical representations of $G/K$ are $\pi_{k\omega_1}$ for $k\geq0$, that is, $$\{\pi\in \widehat G: V_\pi^K\simeq {\operatorname{Hom}}_K(V_\pi,{\mathbb C})\neq0\} = \{\pi_{k\omega_1}:k\geq0\}.$$ In such a case, Theorem \[thmCn:multip(k,p)\] could be used. Acknowledgments {#acknowledgments .unnumbered} =============== The authors wish to thank the anonymous referee for carefully reading the article and giving them helpful comments. [BBCV06]{} . [*Volume computation for polytopes and partition functions for classical root systems.*]{} Discrete Comput. Geom. **35**:4 (2006), 551–595. DOI: [10.1007/s00454-006-1234-2](http://dx.doi.org/10.1007/s00454-006-1234-2). . On weight multiplicities of complex simple Lie algebras. PhD thesis, Univ. Köln, Mathematisch-Naturwissenschaftliche Fakultät, 2008. . [*Chopped and sliced cones and representations of [K]{}ac-[M]{}oody algebras.*]{} J. Pure Appl. Algebra **214**:7 (2016), 1152–1164. DOI: [10.1016/j.jpaa.2009.10.002](http://dx.doi.org/10.1016/j.jpaa.2009.10.002). . [*An explicit formula for the Dirac multiplicities on lens spaces.*]{} J. Geom. Anal. **27** (2017), 689–725. DOI: [10.1007/s12220-016-9695-x](http://dx.doi.org/10.1007/s12220-016-9695-x). . [*A closed formula for weight multiplicities of representations of $\mathrm{Sp}_2(\mathbb C)$.*]{} Manuscripta Math. **115**:4 (2004), 417–426. DOI: [10.1007/s00229-004-0499-0](http://dx.doi.org/10.1007/s00229-004-0499-0). . [*An algorithm for computing weight multiplicities in irreducible modules for complex semisimple Lie algebras.*]{} J. Algebra **471** (2017), 492–510. DOI: [10.1016/j.jalgebra.2016.08.044](http://dx.doi.org/10.1016/j.jalgebra.2016.08.044). . [*Vector partition function and representation theory.*]{} Conference Proceedings on Formal Power Series and Algebraic Combinatorics, Taormina, Italy (2005), 1009–1020. . [*On an approach for computing the generating functions of the characters of simple Lie algebras.*]{} J. Phys. A, Math. Theor. **47**:14 (2014), 091702. DOI: [10.1088/1751-8113/47/14/145202](http://dx.doi.org/10.1088/1751-8113/47/14/145202). . [*On the generating function of weight multiplicities for the representations of the Lie algebra $\operatorname{C}_{2}$.*]{} J. Math. Phys. **56**:4 (2015), 041702. DOI: [10.1063/1.4917054](http://dx.doi.org/10.1063/1.4917054). . [*Generating functions and multiplicity formulas: the case of rank two simple Lie algebras.*]{} J. Math. Phys. **56**:9 (2015), 091702.
DOI: [10.1063/1.4930806](http://dx.doi.org/10.1063/1.4930806). . [*Some results on generating functions for characters and weight multiplicities of the Lie algebra $A_3$.*]{} [arXiv:1705.03711](http://arxiv.org/abs/1705.03711) (2017). . [*Zur Berechnung der Charaktere der halbeinfachen Lieschen Gruppen. I, II.*]{} Indag. Math. **57** (1954), 369–376, 487–491. . Representation Theory, A first course. Springer-Verlag New York, 2004. DOI: [10.1007/978-1-4612-0979-9](http://dx.doi.org/10.1007/978-1-4612-0979-9). . Combinatorial problems related to Kostant’s weight multiplicity formula. PhD thesis, University of Wisconsin-Milwaukee, Milwaukee, WI., 2012. . [*Spectra and eigenforms of the Laplacian on $S^n$ and $P^n(\mathbb C)$*]{}. Osaka J. Math. **15**:3 (1978), 515–546. . [Lie groups beyond an introduction.]{} [*Progr. Math.*]{} **140**. Birkhäuser Boston Inc., 2002. . [*On new multiplicity formulas of weights of representations for the classical groups.*]{} J. Algebra **107** (1987), 512–533. DOI: [10.1016/0021-8693(87)90100-1](http://dx.doi.org/10.1016/0021-8693(87)90100-1). . [*Young-diagrammatic methods for the representation theory of the classical groups of type $B\sb n$, $C\sb n$, $D\sb n$.*]{} J. Algebra **107** (1987), 466–511. DOI: [10.1016/0021-8693(87)90099-8](http://dx.doi.org/10.1016/0021-8693(87)90099-8). . [*A formula for the multiplicity of a weight.*]{} Trans. Amer. Math. Soc. **93**:1 (1959), 53–73. DOI: [10.2307/1993422](http://dx.doi.org/10.2307/1993422). . [*Dimension of zero weight space: an algebro-geometric approach.*]{} J. Algebra **403** (2014), 324–344. DOI: [10.1016/j.jalgebra.2014.01.006](http://dx.doi.org/10.1016/j.jalgebra.2014.01.006). . [*Spectra of orbifolds with cyclic fundamental groups*]{}. Ann. Global Anal. Geom. **50**:1 (2016), 1–28. DOI: [10.1007/s10455-016-9498-0](http://dx.doi.org/10.1007/s10455-016-9498-0). . [*The spectrum on $p$-forms of a lens space*]{}. [arXiv:1604.02471](http://arxiv.org/abs/1604.02471) (2016). . [*Spectra of lens spaces from 1-norm spectra of congruence lattices.*]{} Int. Math. Res. Not. IMRN **2016**:4 (2016), 1054–1089. DOI: [10.1093/imrn/rnv159](http://dx.doi.org/10.1093/imrn/rnv159). . [*Paths and root operators in representation theory.*]{} Ann. of Math. (2) **142**:3 (1995), 499–525. DOI: [10.2307/2118553](http://dx.doi.org/10.2307/2118553). . [*Singularities, character formulas, and a $q$-analog of weight multiplicities.*]{} Astérisque **101–102** (1983), 208–229. . [*An elementary approach to weight multiplicities in bivariate irreducible representations of $Sp(2r)$.*]{} Comm. Algebra **42**:9 (2014), 4094–4101. DOI: [10.1080/00927872.2013.804928](http://dx.doi.org/10.1080/00927872.2013.804928). . [*Singular reduction and quantization*]{}. Topology **38**:4 (1999), 699–762. DOI: [10.1016/S0040-9383(98)00012-3](http://dx.doi.org/10.1016/S0040-9383(98)00012-3). . The Sage Development Team, 2009, [www.sagemath.org](http://www.sagemath.org). . [*A new formula for weight multiplicities and characters*]{}. Duke Math. J. **101**:1 (2000), 77–84. DOI: [10.1215/S0012-7094-00-10113-5](http://dx.doi.org/10.1215/S0012-7094-00-10113-5). . [*On some combinatorial aspects of representation theory.*]{} PhD thesis, Rutgers The State University of New Jersey, 2004. . [*A new character formula for Lie algebras and Lie groups.*]{} J. Lie Theory **22**:3 (2012), 817–838.
{ "pile_set_name": "ArXiv" }
**[ Numerical Method for Solving Obstacle Scattering Problems by an Algorithm Based on the Modified Rayleigh Conjecture ]{}** Weidong Chen and Alexander Ramm, Math Dept, Kansas State University, Manhattan, KS 66506 e-mail: chenw@math.ksu.edu and ramm@math.ksu.edu

[**Abstract.**]{} In this paper we present a numerical algorithm for solving direct obstacle scattering problems by the Modified Rayleigh Conjecture (MRC) method introduced in \[1\]. Some numerical examples are given. They show that the method is numerically efficient.

[**Key words.**]{} direct obstacle scattering problem, Modified Rayleigh Conjecture, MRC algorithm

[**AMS Subject Classification.**]{} 65Z05, 35R30

[**I. Introduction**]{}

The classical Rayleigh Conjecture is discussed in \[4\] and \[5\], where it is shown that, in general, this conjecture is incorrect: there are obstacles (for example, sufficiently elongated ellipsoids) for which the series representing the scattered field outside a ball containing the obstacle does not converge up to the boundary of this obstacle. The Modified Rayleigh Conjecture (MRC) has been formulated and proved in \[1\] (see Theorem 1 below). A numerical method for solving obstacle scattering problems, based on MRC, was proposed in \[1\]. This method was implemented in \[2\] for two-dimensional obstacle scattering problems. The numerical results in \[2\] were quite encouraging: they show that the method is efficient, economical, and quite competitive compared with the usual boundary integral equation method (BIEM). A recent paper \[3\] contains a numerical implementation of MRC in some three-dimensional obstacle scattering problems. Its results reconfirm the practical efficiency of the MRC method.

In this paper a numerical implementation of the Modified Rayleigh Conjecture (MRC) method for solving obstacle scattering problems in the three-dimensional case is presented. Our aim is to consider three-dimensional obstacles more general than those in \[3\], namely non-convex, non-starshaped and non-smooth ones, and to study the performance of the MRC method in these cases. The minimization problem (5) (see below), which is at the heart of the MRC method, is treated numerically in a new way, different from the one used in \[2\] and \[3\]. Our results present further numerical evidence of the practical efficiency of the MRC method for solving obstacle scattering problems.

The obstacle scattering problem (1)-(3) we are interested in consists of finding the function $u$ which solves the equation $$(\nabla ^2 + k^2)u=0 \quad \text{in } D'=R^3\setminus D, \qquad\qquad (1)$$ where $D\subset R^3$ is a bounded domain, satisfies the Dirichlet boundary condition $$u|_S =0, \qquad\qquad (2)$$ where $S$ is the boundary of $D$, assumed Lipschitz in this paper, and satisfies the radiation condition at infinity: $$u=u_0 +v=u_0 +A(\alpha ', \alpha ) \frac {e^{ikr}}{r} + o\Big(\frac 1 r\Big), \quad r\rightarrow \infty ,\qquad\qquad (3)$$ $$r:=|x|,\quad \alpha '=x/r, \quad u_0:=e^{ik\alpha\cdot x},$$ where $v$ is the scattered field, $\alpha\in S^2$ is given, $S^2$ is the unit sphere in $R^3$, and $k={\rm const}>0$ is fixed; $k$ is the wave number. The coefficient $A(\alpha ',\alpha)$ is called the scattering amplitude.
Denote $$A_l (\alpha):=\int_{S^2} A(\alpha ', \alpha) \overline {Y_l(\alpha ')}\,d\alpha ',\qquad\qquad (4)$$ where $Y_l(\alpha )$ are the orthonormal spherical harmonics: $$Y_l=Y_{lm},\quad -l\leq m \leq l,\quad l=0,1,2,...$$ $$Y_{lm}(\theta,\phi)=\frac 1 {\sqrt {2\pi}} e^{im\phi} \Theta _{lm}(\cos\theta),$$ $$\Theta _{lm}(x) =\sqrt {\frac {2l+1} 2 \frac {(l-m)!} {(l+m)!}} P^m_l(x),$$ $P^m_l(x)$ are the associated Legendre functions of the first kind, $$P^m_l(x):=(1-x^2)^{m/2} \frac {d^mP_l(x)} {dx^m},\quad m\geq 0,$$ and $$P_l(x):=\frac {(-1)^l} {2^l l!} \frac {d^l} {dx^l} (1-x^2)^l.$$ For $m<0$, $$\Theta _{lm}(x) =(-1)^m \Theta _{l,-m}(x).$$ Let $h_l(r)$ be the spherical Hankel functions of the first kind, normalized so that $h_l(kr) \sim e^{ikr}/r$ as $r \rightarrow +\infty$. Let $B_R:=\{x:|x|\leq R\}\supset D$, and assume the origin is inside $D$. Then in the region $r>R$ the solution to the acoustic wave problem (1)-(3) is of the form: $$u(x, \alpha)=e^{ik\alpha\cdot x} + \sum_{l=0}^{\infty} A_l(\alpha) \psi _l(x),\quad |x|>R,$$ $$\psi _l:=Y_l(\alpha ')h_l(kr),\quad r>R,\quad \alpha '=x/r,$$ where $$\sum_{l=0}^{\infty}:=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}.$$ Fix $\epsilon >0$, an arbitrarily small number. The following lemmas and Theorem 1 are proved in \[1\].

[**Lemma 1.**]{} [*There exist $L=L(\epsilon )$ and numbers $c_l=c_l(\epsilon )$ such that*]{} $$||u_0(s)+\sum_{l=0}^{L} c_l(\epsilon ) \psi _l(s)||_{L^2(S)}< \epsilon.\qquad\qquad (5)$$

[**Lemma 2.**]{} [*If (5) holds, then $||v_{\epsilon}(x)-v(x)||=O(\epsilon)$, $\forall x\in D'$, as $\epsilon\rightarrow 0$, where $$v_\epsilon (x):=\sum_{l=0}^{L} c_l(\epsilon ) \psi _l(x),\quad x\in D',\qquad\qquad (6)$$ and $$||.||:=||.||_{H^m_{loc}(D')} +||.||_{L^2(D'; (1+|x|)^{-\gamma})},\quad \gamma >0,\ m>0,\qquad\qquad (7)$$ $m$ is arbitrary, and $H^m$ is the Sobolev space.*]{}

[**Lemma 3.**]{} $c_l(\epsilon) \rightarrow A_l(\alpha), \forall l, \epsilon \rightarrow 0.$

[**Theorem 1 (Modified Rayleigh Conjecture).**]{} [*Let $D\subset R^3$ be a bounded obstacle with Lipschitz boundary S. For any $\epsilon>0$ there exist $L=L(\epsilon)$ and $c_l(\epsilon)=c_{lm}(\epsilon)$, $0\leq l\leq L$, $-l\leq m\leq l$, such that inequality (5) holds. If (5) holds then function (6) satisfies the estimate $||v(x)-v_\epsilon(x)||=O(\epsilon)$, where the norm is defined in (7). Thus, $v_\epsilon(x)$ is an approximation of the scattered field everywhere in $D'$.*]{}

In order to obtain an accurate solution, usually one has to take $L$ large. But as $L$ grows, the condition number of the matrix $(\psi _l,~\psi _{l'})_{L^2(S)}$ increases very fast. So we choose some interior points $x_j\in D, ~j=1,2,...,J$, and use the following version of Theorem 1 (\[2\]):

[**Theorem 2.**]{} *Suppose $x_j\in D, ~j=1,2,...,J$. Then $\forall\epsilon >0$, $\exists L=L(\epsilon)$ and $c_{lj}(\epsilon), l=0,...,L,~j=0,...,J(\epsilon)$, such that*

\(i) $$||u_0(s)+\sum_{j=0}^{J}\sum_{l=0}^{L} c_{lj}(\epsilon ) \psi _l(s-x_j)||_{L^2(S)}< \epsilon.\qquad\qquad (5')$$

(ii) $$||v_{\epsilon}(x)-v(x)||=O(\epsilon),$$ where $$v_{\epsilon}(x)=\sum_{j=0}^{J}\sum_{l=0}^{L} c_{lj}(\epsilon ) \psi _l(x-x_j)$$ and the norm $||.||$ is defined in Lemma 2.
[**Remark.**]{} Theorem 1 is the basis for MRC algorithm for computation of the field scattered by an obstacle: one takes an $\epsilon>0$ and an integer $L>0$, minimizes the left-hand side of (5) with respect to $c_l$, and if the minimum is $\leq \epsilon$ then the function (6) is the approximate solution of the obstacle scattering problem with the accuracy $O(\epsilon)$. If the above minimum is greater than $\epsilon$, then one increases $L$ until the minimum is less than $\epsilon$. This is possible by Lemma 1. In computational practice, one may increase also the number $J$ of points $x_j$ inside $D$, as explained in Theorem 2. The increase of $J$ allows one to reach the desired value of the above minimum keeping $L$ relatively small. This gives computational advantage in many cases. In section 2, an algorithm is presented for solving the problem (1)-(3). This algorithm is based on the MRC. Compared with the previous work in the case of two- and three-dimensional MRC(\[2\],\[3\]), we consider more general surfaces, in particular non-starshaped and piecewise-smooth boundaries. The numerical results are given in section 3. A discussion of the numerical results is given in section 4. [**II. The MRC algorithm for Solving Obstacle Scattering Problems**]{} [**1. Smooth starshaped boundary:**]{} Assume the surface $S$ is given by the equation $$r =r (\theta , \varphi),~~~~~~~~~~~ 0\leq \varphi\leq 2\pi,~0\leq \theta\leq \pi.$$ Define $$~~~~~~~~~~F(c_0, c_1,...,c_L):=||u_0+\sum_{l=0}^L c_l \psi_l ||_{L^2(S)}^2.~~~~~~~~~~~~~~~~~~~~~~~(5'')$$ Let $$h_1=2\pi/n_1,~~~~~~h_2=\pi/n_2$$ $$0=\varphi_0<\varphi_1<....<\varphi_{n_1}=2\pi,~ \varphi_{i_1}=i_1h_1,~~i_1=1,...,n_1,$$ $$0=\theta_0<\theta_1<....<\theta_{n_1}=\pi,~ \theta_{i_2}=i_2h_2,~~i_2=1,...,n_2,$$ where $n_1$ and $n_2$ are the number of steps. By Simpson’s formula(\[8\]), we obtain an approximation of $F(c_0, c_1,...,c_L)$: $$F(c_0, c_1,...,c_L)=\sum_{i_1=0}^{n_1} \sum_{i_2=0}^{n_2} a_{i_1 i_2} {\huge |}u_{0i_1i_2}+\sum_{l=0}^L c_l \psi_{l i_1 i_2}{\huge |}^2 w_{i_1 i_2} h_1 h_2~~~~~~~(5''')$$ where $$a_{i_1, i_2}= \left\{ \begin{array}{l} ~4,~~~~~i_1~ and~ i_2~ even \\~8,~~~~~i_1~ -~i_2~ odd \\16,~~~~~i_1~ and~ i_2~ odd \end{array} \right.$$ and $$\psi_{li_1i_2}=Y_l(\theta _{i_1}, \varphi _{i_2})h_l(kr(\theta _{i_1}, \varphi _{i_2})),~~~ w_{i_1 i_2}=w(\theta_{i_1} ,\varphi_{i_2})$$ where $$~~~~~~~~~~w(\theta ,\varphi )=(r^2 r^2_\varphi + r^2 r^2_\theta sin^2 \theta +r^4 sin^2 \theta )^{1/2}.~~~~~~~~~~~~~~~~~~~~~(8)$$ We can find $c^*=(c_0^*, c_1^*,...,c_L^*)$ such that $$~~~~~~~~~~~~~~~~~~~F(c^*)=min F(c_0, c_1,...,c_L).~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(9)$$ We first write $$~~~~~~~~~~~~~~~~~~~~~~~~~~F(c)=||Ac-B||^2,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(10)$$ where $$A=(A_{l,i})_{M\times L_1},~~A_{l,i}=\psi_{l i_1 i_2} (a_{i_1 i_2} w_{i_1 i_2} h_1 h_2)^{\frac 1 2},~~i=i_1i_2,$$ $$B=(B_i)_{M\times 1},~~B_i=u_{0i_1i_2} (a_{i_1 i_2} w_{i_1 i_2} h_1 h_2)^{\frac 1 2},$$ in which $M=n_1n_2$, $L_1=(L+1)(2L+1)$ since $c_l=c_{lm}, 0\leq l\leq L, -l\leq m\leq l$. Then Householder reflections are used to compute an orthogonal-triangular factorization: $A*P = Q*R$ where P is a permutation(\[8\], p.171), Q is an orthogonal matrix, and R is an upper triangular matrix. Let $r=rank(A)$. This algorithm requires $4ML_1r-2r^2(M+L_1)+4r^3/3$ flops(\[9\], pp.248-250). The least squares solution $c$ is computed by the formula $c = P*(R^{-1}*(Q'*(A^TB)))$. This minimization procedure is based on the matlab code(\[10\]). 
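To make the minimization step concrete, the following is a minimal sketch (ours, in Python with NumPy/SciPy rather than the MATLAB routines mentioned above) of solving $\min_c ||Ac-B||$ by a column-pivoted QR factorization with truncation at a numerical rank $r_1$. The routine `build_A_and_B` indicated in the comments is a hypothetical placeholder for the quadrature/assembly step producing the weighted matrix $A$ and vector $B$ of (10); it is not part of the paper's code.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def mrc_least_squares(A, B, rcond=1e-12):
    """Solve min_c ||A c - B||_2 via a column-pivoted QR factorization A P = Q R,
    keeping only the leading r1 columns for which |R[i, i]| > rcond * |R[0, 0]|."""
    Q, R, piv = qr(A, mode="economic", pivoting=True)
    diag = np.abs(np.diag(R))
    r1 = int(np.sum(diag > rcond * diag[0]))      # numerical rank estimate
    y = Q.conj().T @ B
    c_perm = np.zeros(A.shape[1], dtype=complex)
    c_perm[:r1] = solve_triangular(R[:r1, :r1], y[:r1])
    c = np.zeros_like(c_perm)
    c[piv] = c_perm                               # undo the column permutation
    return c, np.linalg.norm(A @ c - B)

# Outer MRC loop from the Remark: enlarge the basis until the residual is <= eps.
# `build_A_and_B(L)` is a placeholder for assembling A and B of (10) at truncation L.
# for L in range(Lmax + 1):
#     A, B = build_A_and_B(L)
#     c, res = mrc_least_squares(A, B)
#     if res <= eps:
#         break
```

For a full-rank, well-conditioned $A$ this reduces to the ordinary least-squares solution; the truncation threshold plays the role of the integer $r_1$ discussed next.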
In \[2\] and \[3\] singular value decomposition was used for minimization of (5”). Here we use the matlab minimization code which is based on a factorization of the matrix A. This has the following advantages from the point of view of numerical analysis. We can choose an integer $r_1$: $$0<r_1\leq r$$ such that the first $r_1$ rows and columns of $R$ form a well-conditioned matrix when A is not of full rank, or the rank of A is in doubt(\[10\]). See Golub and Van Loan \[9\] for a further discussion of numerical rank determination. If we choose $x_j\in D,~ j=1,...,J$, we obtain $$F_J(c)=F_J(c_{01},..., c_{0J}, c_{11},...,c_{1J},..., c_{L1},...,c_{LJ})$$ $$=\sum_{i_1=0}^{n_1} \sum_{i_2=0}^{n_2} \sum_{j=1}^{J} a_{i_1 i_2}|u_{0i_1i_2}+\sum_{l=0}^L c_{lj} \psi_{l i_1 i_2}|^2 w_{i_1 i_2} h_1 h_2.$$ The algorithm for finding the minimum of $F_J(c)$ will be same. [**2. Piecewise-smooth boundary:**]{} Suppose $$S=\bigcup _{n=1} ^N S_n.$$ Then $$F(c_0, c_1,...,c_L)=\sum_{n=1}^N||u_0+\sum_{l=0}^L c_l \psi_l ||_{L^2(S_n)}^2$$ $$\forall (x,y,z) \in S_n,~~ r ^2 =x^2 +y^2 +z^2, ~~\cos\theta =z/r,~~\tan \varphi =y/x.~~~~~~(11)$$ [**3. Non-starshaped case:**]{} Suppose $S$ is a finite union of the surfaces, each of which is starshaped with respect to a point $\vec{r^0 _n}$, $$S=\bigcup _{n=1} ^N S_n.$$ and the the surfaces $S_n$ are given by the equations in local spherical coordinates: $$S_n:~~~~ \vec{r}-\vec{r^0 _n}=(r_n(\theta _n , \varphi _n) \cos\varphi _n\sin\theta _n,~ r_n(\theta _n, \varphi _n) \sin\varphi _n\sin\theta _n, ~r _n(\theta _n, \varphi _n) \cos\theta _n),$$ $$n=1,...N,$$ where $\vec{r^0 _n}$ are constant vectors. Then $$F(c_0, c_1,...,c_L)=\sum_{n=1}^N||u_0+\sum_{l=0}^L c_l \psi_l ||_{L^2(S_n)}^2.$$ The weight functions $w_n(\theta, \varphi)$ are the same as in (8) since $\vec{r^0 _n}$ are constant vectors. [**III. Numerical Results**]{} In this section, we give four examples to show the convergence rate of the algorithm and how the error depends on the shape of $S$. [**Example 1.**]{} The boundary S is the sphere of radius 1 centered at the origin. In this example, the exact coefficients are: $$c_{lm}=- \frac {4\pi i^l j_l(k)} {h_l(k)}~ \overline{Y_{lm}(\alpha)}$$ Let $k=1,~~\alpha=(1,0,0)$. We choose $n_1=20, ~n_2=10$. ——————————————————————————————— L        0        1.0000  2.0000  3.0000  4.0000  5.0000  6.0000  7.0000 ——————————————————————————————– $F(c^*)$  6.3219  1.6547  0.2785  0.0368  0.0034  0.0003  0.0000  0.0000 ——————————————————————————————– err($c$)  0.0303  0.0172  0.0020  0.0004  0.0000  0.0000  0.0000  0.0000 ——————————————————————————————– where $$err(c)=(\sum_{l=0}^L |c^*_l-c_l|^2)^{\frac 1 2}.$$ When $n_1=40, ~n_2=20,$ ——————————————————————————————— L        0        1.0000  2.0000  3.0000  4.0000  5.0000  6.0000  7.0000 ——————————————————————————————– $F(c^*)$  6.3544   1.6562  0.2820  0.0358  0.0036  0.0003  0.0000  0.0000 ——————————————————————————————– err($c$)  0.0147  0.0076  0.0011  0.0001  0.0000  0.0000  0.0000  0.0000 ——————————————————————————————– Next, we fix $n_1=20, ~n_2=10$ and test the results for different k and $\alpha$. 
When $k=2,~~\alpha=(1,0,0)$, ——————————————————————————————— L        0        1.0000  2.0000  3.0000  4.0000  5.0000  6.0000  7.0000 ——————————————————————————————– $F(c^*)$  10.4506  5.5783  1.9291  0.5217  0.0970  0.0156  0.0020  0.0003 ——————————————————————————————– err($c$)  0.0404  0.0205  0.0048  0.0020  0.0005  0.0000  0.0000  0.0000 ——————————————————————————————– When $k=1,~~\alpha=(0,1,0)$, ——————————————————————————————— L        0        1.0000  2.0000  3.0000  4.0000  5.0000  6.0000  7.0000 ——————————————————————————————– $F(c^*)$  6.3801  1.6628  0.2821  0.0371  0.0044  0.0003  0.0000  0.0000 ——————————————————————————————– err($c$)  0.0014  0.0106  0.0005  0.0004  0.0000  0.0000  0.0000  0.0000 ——————————————————————————————– When $k=1,~~\alpha=(0,0,1)$, ——————————————————————————————— L        0        1.0000  2.0000  3.0000  4.0000  5.0000  6.0000  7.0000 ——————————————————————————————– $F(c^*)$  6.4156  1.6909  0.2955  0.0418  0.0025  0.0002  0.0000  0.0000 ——————————————————————————————– err($c$)  0.0093  0.0109  0.0049  0.0007  0.0001  0.0000  0.0000  0.0000 ——————————————————————————————– When $k=1,~~\alpha=(1/\sqrt{2},1/\sqrt{2},0)$, ——————————————————————————————— L        0        1.0000  2.0000  3.0000  4.0000  5.0000  6.0000  7.0000 ——————————————————————————————– $F(c^*)$  6.3500  1.6711  0.2810  0.0371  0.0040  0.0003  0.0000  0.0000 ——————————————————————————————– err($c$)  0.0218  0.0057  0.0019  0.0004  0.0001  0.0000  0.0000  0.0000 ——————————————————————————————– When $k=1,~~\alpha=(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})$, ——————————————————————————————— L        0        1.0000  2.0000  3.0000  4.0000  5.0000  6.0000  7.0000 ——————————————————————————————– $F(c^*)$  6.3739  1.6542  0.2850  0.0368  0.0040  0.0003  0.0000  0.0000 ——————————————————————————————– err($c$)  0.0170  0.0054  0.0021  0.0003  0.0001  0.0000  0.0000  0.0000 ——————————————————————————————– [**Example 2.**]{} The boundary S is the surface of the cube $[-1,1]^3$. Here $$S=\bigcup _{n=1} ^6 S_n.$$ and $$F(c_0, c_1,...,c_L)=\sum_{n=1}^6||u_0+\sum_{l=0}^L c_l \psi_l ||_{L^2(S_n)}^2$$ $$=\sum_{n=1}^6\sum_{i_1=0}^{n_1} \sum_{i_2=0}^{n_2} a_{i_1 i_2} {\huge |} u_{0i_1i_2}+\sum_{l=0}^L c_l \psi_{l i_1 i_2}{\huge |}^2 \Delta_1 \Delta_2$$ where $$\Delta_1=2/n_1,~~~~~~\Delta_2=2/n_2.$$ The origin is chosen at the cenetr of symmetry of the cube. The surface area element is calculated in the Cartesian coordinates, so the weight $w=1$. Let $S_1$ be the surface $$z=1, ~-1\leq x\leq 1, ~-1\leq y\leq 1$$ and $$x_{i_1}=-1+i_1 \Delta_1,~0\leq i_1\leq n_1$$ $$y_{i_2}=-1+i_2 \Delta_2,~0\leq i_2\leq n_2$$ Then $$\psi_{li_1i_2}=Y_l(\theta _{i_1}, \varphi _{i_2})h_l(kr(\theta _{i_1}, \varphi _{i_2})),~~~$$ and $\theta _{i_1}$ and $\varphi _{i_2}$ can be computed by formula (11). For other surfaces $S_j$ the algorithm is similar. The values of $\min F(c)=F(c*)$ and the values $\min F_J(c)=F_J(c*)$ with $x_j$: $$\{x_j:j=0,...,6\}=\{(0,0,0), (0.2,0,0), (-0.2,0,0),$$ $$(0,0.2,0), (0,-0.2,0), (0,0,0.2),(0,0,-0.2)\}$$ are given below. 
We choose $n_1=10, ~n_2=10$ ———————————————————————————————– L              0          1.0000  2.0000  3.0000  4.0000  5.0000  6.0000  7.0000  8.0000 ——————————————————————————————— $F(c^*)$    10.6301  3.6277  2.6760  2.2309  1.9832  1.5737  1.5034  1.2948  1.1753 ——————————————————————————————— $F_J(c^*)$   2.6297  1.0970  0.5487  0.1572  0.0667  0.0320  0.0168  0.0078  0.0035 ——————————————————————————————— When $n_1=20, ~n_2=20,$ ———————————————————————————————– L              0          1.0000  2.0000  3.0000  4.0000  5.0000  6.0000  7.0000  8.0000 ——————————————————————————————— $F(c^*)$    10.7923  3.7144  2.7778  2.3393  2.0873  1.6671  1.5938  1.4277  1.3368 ——————————————————————————————— $F_J(c^*)$   2.7248   1.1433  0.5757  0.1686  0.0694  0.0652  0.0236  0.0143  0.0090 ——————————————————————————————— [**Example 3.**]{} The boundary S is the surface of the ellipsoid $x^2+y^2+z^2/b^2=1,$ the values of $\min F(c)=F(c*), ~ b=2,3,4,5$ with $n_1=20,~n_2=10$ are: ——————————————————————————————— L        0         1.0000  2.0000  3.0000  4.0000  5.0000  6.0000  7.0000 ——————————————————————————————— b=2  8.8836  5.4955  3.0421  2.8434  1.3622   1.2093  0.8753  0.8132 ——————————————————————————————— b=3  14.1617  12.0477  7.2296  7.0999  3.8077  3.6829  3.1324  3.0496 ——————————————————————————————— b=4  19.5326  17.9346  9.9927  9.8720  5.3333  5.2008  4.6793  4.5738 ———————————————————————————————- b=5  22.9765  21.5653  11.4850  11.3587  6.1637  6.0096  5.5202  5.3933 ———————————————————————————————- The values of $\min F_J(c)=F_J(c*), ~ b=2,3,4,5$ with $x_j$: $$\{x_j:j=0,...,6\}=\{(0,0,0), (0.5,0,0), (-0.5,0,0),$$ $$(0,0.5,0), (0,-0.5,0), (0,0,0.5),(0,0,-0.5)\}$$ are: ——————————————————————————————— L        0         1.0000  2.0000  3.0000  4.0000  5.0000  6.0000  7.0000 ——————————————————————————————— b=2   2.4856  0.7090  0.2530  0.0062  0.0000  0.0000  0.0000  0.0000 ——————————————————————————————— b=3  4.6639  1.3619  0.6618  0.0074  0.0000  0.0000  0.0000  0.0000 ——————————————————————————————— b=4  5.5183  1.8624  0.7844  0.0060   0.0000   0.0000   0.0000   0.0000 ———————————————————————————————- b=5  11.0579  8.7027  6.4831  0.8357  0.0017  0.0000  0.0000  0.0000 ———————————————————————————————- [**Example 4.**]{} The obstacle is a dumbbell. Its boundary S is not smooth, non-starshaped and not convex: $$S=S_1\bigcup S_2\bigcup S_3$$ $$S_1:~\vec{r} -(0,0,1)=(1.5\cos\varphi\sin\theta, 1.5\sin\varphi\sin\theta, 1.5\cos\theta )$$ $$S_2:~\vec{r} -(0,0,-1)=(1.5\cos\varphi\sin\theta, 1.5\sin\varphi\sin\theta, 1.5\cos\theta )$$ $$S_3:r \sin\theta =1$$ $$\{x_j:j=0,...,10\}=\{(0, 0, 0), (0, 0, 0.1), (0, 0, -0.1), (0, 0, 0.2),(0, 0, -0.2),$$ $$(0, 0, 0.3), (0, 0, -0.3), (0,0,0.4),(0,0,-0.4), (0,0,0.5),(0,0,-0.5)\};$$ We choose $n_1=20$, $n_2=10$ for every $S_i(i=1,2,3)$. ——————————————————————————————- L            0           1.0000   2.0000   3.0000   4.0000   5.0000   6.0000   7.0000 ——————————————————————————————- $F(c^*)$      25.8840   20.8059  16.4968  15.6622  12.9241  12.1915  11.0187  9.5263 ——————————————————————————————- $F_J(c^*)$    20.3118    8.0238    5.1062   2.5908   0.8304   0.4067   0.0453    0.0084 ——————————————————————————————- [**IV. Conclusion**]{} From the numerical results one can see that the accuracy of the numerical solution depends on the smoothness and elongation of the object. In Example 1 the surface $S$ is a unit sphere and the numerical solution is very accurate. 
In Example 3 the results for different elongated ellipsoids show that if the elongation (eccentricity) grows, then the accuracy decreases. In Example 2 the surface is not smooth and the result is less accurate than in Example 3. In Example 4 the surface is nonconvex and not smooth, but the accuracy is of the same order as in Example 2. When $b$ is large or $S$ is not smooth, the numerical results in Example 2 and Example 3 show that adding more interior points $x_j$ increases the accuracy of the solution. In Example 1 and Example 2, as $n_1$ and $n_2$ were increased, the minimum $F(c*)$ also increased, because the condition number of the matrix $A$ in (10) grew as $n_1$ and $n_2$ increased. Using the results of Example 1, one can check the accuracy in finding the $c_l$ by the value of the minimum, $$F(c*)\leq \epsilon.$$

**References**

\[1\] Ramm A. G. \[2002\], Modified Rayleigh Conjecture and Applications, J. Phys. A: Math. Gen. 35, L357-L361.

\[2\] Gutman S. and Ramm A. G. \[2002\], Numerical Implementation of the MRC Method for Obstacle Scattering Problems, J. Phys. A: Math. Gen. 35, L8065-L8074.

\[3\] Gutman S. and Ramm A. G., Modified Rayleigh Conjecture Method for Multidimensional Obstacle Scattering Problems (submitted).

\[4\] Barantsev, R., Concerning the Rayleigh hypothesis in the problem of scattering from finite bodies of arbitrary shapes, Vestnik Leningrad. Univ., Math., Mech., Astron., 7, (1971), 52-62.

\[5\] Millar, R., The Rayleigh hypothesis and a related least-squares solution to scattering problems for periodic surfaces and other scatterers, Radio Sci., 8, (1973), 785-796.

\[6\] Ramm, A. G., Scattering by obstacles, D. Reidel, 1986.

\[7\] Triebel H., Theory of Function Spaces, vol. 78 of Monographs in Mathematics. Birkhauser Verlag, Basel, 1983.

\[8\] Kincaid D. and Cheney W., Numerical Analysis: Mathematics of Scientific Computing, Brooks/Cole, 2002.

\[9\] Golub G. H. and Van Loan C. F., Matrix Computations, The Johns Hopkins University Press: Baltimore and London, 1996.

\[10\] Anderson, E., Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen, LAPACK User’s Guide (http://www.netlib.org/lapack/lug/lapack\_lug.html), Third Edition, SIAM, Philadelphia, 1999.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We explore the consequences of assuming that the neutrino mass matrix is a linear combination of the matrices of a three dimensional representation of the group $S_3$ and that it has one zero mass eigenvalue. When implemented, these two assumptions allow us to express the transformation matrix relating the mass eigenstates to the flavor eigenstates in terms of a single parameter which we fit to the available data.' author: - 'Duane A. Dicus$^{1,}$[^1], Shao-Feng Ge$^{1,2,}$[^2], and Wayne W. Repko$^{3,}$[^3]' title: 'Neutrino mixing with broken $S_3$ symmetry' --- Introduction ============ Following the discovery of neutrino oscillations, there has been considerable progress in determining values for the neutrino mass differences $m_i^2-m_j^2$ and for the mixing angles relating the mass eigenstates to the flavor eigenstates. The most recent fits suggest that one of the mixing angles is approximately zero and another has a value that implies a mass eigenstate that is nearly an equal mixture of $\nu_\mu$ and $\nu_\tau$. If these conclusions were exact, then they could be accommodated by postulating a neutrino mass matrix having a symmetry based on a three dimensional representation of the permutation group $S_3$. This connection has been extensively studied in the papers listed in Ref.[@intro]. The approach taken here is to retain a remnant of the $S_3$ symmetry, assume that one neutrino mass is zero, and see what this implies about the final form of the neutrino mass matrix and transformation between mass and flavor eigenstates. For Majorana neutrinos the most general form of the mass matrix is $$\label{M} M_{\nu}\,=\,\left(\begin{array}{ccc} A & B_1 & B_2 \\ B_1 & C_1 & D \\ B_2 & D & C_2 \end{array} \right)$$ Experiment seems to show approximate $\mu-\tau$ symmetry in the sense that one mass eigenstate has an almost equal probability of being $\nu_{\mu}$ or $\nu_{\tau}$. To realize this with $M_{\nu}$ requires $B_1\approx\,B_2$ and $C_1\approx\,C_2$. As mentioned above, exact $\mu-\tau$ symmetry can be nicely modelled using a 3-dimensional representation of the finite group $S_3$. However, suppose $\mu-\tau$ symmetry is not exact but we assume $M_{\nu}$ can still be expressed by the matrices of $S_3$. This ansatz, together with the assumption that one of the neutrino mass eigenvalues is zero, as required, for example, by the minimal seesaw model, allows us to derive two relations among the mixing angles and to predict all of the mixing angles in terms of one parameter. In the next section we review the conditions imposed by $S_3$ on the elements of $M_{\nu}$. In Sec. 3 we discuss the effect of these conditions on the minimal seesaw model. Following that, in Sec. 4, we find the eigenvalues and eigenstates when one mass eigenvalue is zero. Then, in Sec. 5, we write $A,\ldots,D$ of $M_{\nu}$ in terms of the mixing angles in the usual way and use the conditions derived from $S_3$ and from having one eigenvalue zero to find relations among the mixing angles. We are able to express these angles in terms of one parameter. In the last section we summarize our conditions on the mixing angles and compare our predictions with experiment. Finally, in an Appendix we discuss why it is possible to study neutrino mixing separately from the charged lepton sector. Conditions on the mass matrix from $S_3$ ======================================== The three dimensional representation of $S_3$ is well known. Nevertheless, for clarity, we will repeat it here. 
Each line of the following gives the elements that belong to a particular class $$\begin{aligned} D(e)\,&=&\,\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right) \\ D(a)\,&=&\,\left(\begin{array}{ccc} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)\,,\,\,\,\,D(b)\,=\,\left(\begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{array} \right) \\ D(c)\,&=&\,\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right)\,,\,\,\,D(d)\,=\,\left(\begin{array}{ccc} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array} \right)\,,\,\, D(f)\,=\,\left(\begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)\end{aligned}$$ This is a reducible representation and the sum of the elements in each class commutes with every element of the group. If we define $$\label{classes} D_1\,=\,D(e)\,,\,\,\,\,D_2\,=\,D(a)+D(b)\,,\,\,\,D_3\,=\,D(c)+D(d)+D(f)$$ then the most general mass matrix, invariant under $S_3$, is $$\label{Minvar} M\,=\,\alpha\,D_1+\beta\,D_2+\gamma\,D_3\,\equiv\,\left(\begin{array}{ccc} A & B & B \\ B & A & B \\ B & B & A \end{array} \right)$$ Clearly this is not general enough but we can get a matrix which still respects $\mu-\tau$ symmetry by breaking $S_3$ with $D(c)$, $$\label{S3c} M\,=\,\alpha\,D_1+\beta\,D_2+\gamma\,D(c)\,=\,\left(\begin{array}{ccc} \alpha+\gamma & \beta & \beta \\ \beta & \alpha & \beta+\gamma \\ \beta & \beta+\gamma & \alpha \end{array} \right)\,.$$ This has the additional condition $A+B=C+D$ necessary for tri-bimaximal mixing. But suppose $\mu-\tau$ symmetry is not exact so we break $S_3$ in all possible ways $$\label{S3cdf} M\,=\,\alpha\,D_1+\beta\,D_2+\gamma\,D(c)+\delta\,D(d)+\epsilon\,D(f)\,=\,\left(\begin{array}{ccc} \alpha+\gamma & \beta+\epsilon & \beta+\delta \\ \beta+\epsilon & \alpha+\delta & \beta+\gamma \\ \beta+\delta & \beta+\gamma & \alpha+\epsilon \end{array} \right)$$ where we don’t include $D(a)$ or $D(b)$ because we would have to add them to get a symmetric matrix and their sum is $D_2$, and we omit $D_3$ because it just adds the same amount to each matrix element. Thus we get something of the form of Eq.(\[M\]) but the important thing is that however we break the $\mu-\tau$ symmetry there are still two relations among the elements, $$\begin{aligned} 2A+B_1+B_2\,&=&\,C_1+C_2+2D \label{S1} \\ B_1-B_2\,&=&\,C_2-C_1\,. \label{S2}\end{aligned}$$ These remnants of the $S_3$ symmetry are what we will use to restrict the parameters of the minimal seesaw model and to restrict the texture of the neutrino mass matrix. We will refer to them as the $S_3$ conditions. Minimal seesaw model ==================== In the minimal seesaw model (as reviewed, for example, in Ref.[@GXZ]) the neutrino mass matrix is written as $$\label{Mseesaw} M_{\nu}\,=\,m_D\,M_R^{-1}m_D^T$$ where $m_D$ is a $3\times\,2$ matrix $$\label{M32} m_D\,=\,\left(\begin{array}{cc} a_1 & a_2 \\ b_1 & c_1 \\ b_2 & c_2 \end{array} \right)\,,$$ and $M_R$ is a $2\times\,2$ matrix, $$\label{MR} M_R\,=\,\left(\begin{array}{cc} M_{22} & M_{23} \\ M_{23} & M_{33} \end{array} \right)\,.$$ The rank of $M_{\nu}$ is two and therefore one of the eigenvalues of (\[Mseesaw\]) must be zero. 
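This rank argument is easy to check numerically. The small sketch below (ours, using NumPy; the random entries are arbitrary test values, not tied to any data) builds $M_{\nu}$ from a generic $3\times 2$ matrix $m_D$ and a generic symmetric $2\times 2$ matrix $M_R$ and confirms that the resulting $3\times 3$ matrix has rank two, so that one of its eigenvalues vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

# generic complex 3x2 Dirac block and symmetric 2x2 heavy Majorana block
m_D = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
M_R = X + X.T                                   # complex symmetric, as in (MR)

M_nu = m_D @ np.linalg.inv(M_R) @ m_D.T         # the seesaw mass matrix (Mseesaw)

print(np.linalg.matrix_rank(M_nu))              # 2
print(abs(np.linalg.det(M_nu)))                 # ~ 0 (round-off level)
print(min(abs(np.linalg.eigvals(M_nu))))        # smallest |eigenvalue| ~ 0
```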
Evaluating (\[Mseesaw\]) for the parameters in (\[M\]) gives $$\begin{aligned} A\,&=&\,\frac{a_2^2M_{22}-2a_1a_2M_{23}+a_1^2M_{33}}{\mathcal{D}} \label{Ass} \\ \frac{1}{2}(B_1+B_2)\,&=&\,\frac{a_2(c_1+c_2)M_{22}-[a_2(b_1+b_2)+a_1(c_1+c_2)]M_{23}+a_1(b_1+b_2)M_{33}}{2\mathcal{D}} \label{B1ss} \\ \frac{1}{2}(C_1+C_2)\,&=&\,\frac{(c_1^2+c_2^2)M_{22}-2(b_1c_1+b_2c_2)M_{23}+(b_1^2+b_2^2)M_{33}}{2\mathcal{D}} \label{Css} \\ D\,&=&\,\frac{c_1c_2M_{22}-(b_2c_1+b_1c_2)M_{23}+b_1b_2M_{33}}{\mathcal{D}} \label{Dss} \\ \frac{1}{2}(B_1-B_2)\,&=&\,\frac{a_2(c_1-c_2)M_{22}-[a_2(b_1-b_2)+a_1(c_1-c_2)]M_{23}+a_1(b_1-b_2)M_{33}}{2\mathcal{D}} \label{dBss} \\ \frac{1}{2}(C_1-C_2)\,&=&\,\frac{(c_1^2-c_2^2)M_{22}-2(b_1c_1-b_2c_2)M_{23}+(b_1^2-b_2^2)M_{33}}{2\mathcal{D}} \label{dCss}\end{aligned}$$ where $\mathcal{D}\,=\,M_{22}M_{33}-M_{23}^2$ is the determinant of (\[MR\]). Now using the $S_3$ conditions (\[S1\]) and (\[S2\]) we get, after a lot of simplification, $$\begin{aligned} 0\,&=&\,(2a_2-c_1-c_2)\left[(a_2+c_1+c_2)\frac{M_{22}}{\mathcal{D}}-(a_1+b_1+b_2)\frac{M_{23}}{\mathcal{D}}\right] \nonumber \\ &-&\,(2a_1-b_1-b_2)\left[(a_2+c_1+c_2)\frac{M_{23}}{\mathcal{D}}-(a_1+b_1+b_2)\frac{M_{33}}{\mathcal{D}}\right]\,, \label{see1} \\ 0\,&=&\,(c_1-c_2)\left[(a_2+c_1+c_2)\frac{M_{22}}{\mathcal{D}}-(a_1+b_1+b_2)\frac{M_{23}}{\mathcal{D}}\right] \nonumber \\ &-&\,(b_1-b_2)\left[(a_2+c_1+c_2)\frac{M_{23}}{\mathcal{D}}-(a_1+b_1+b_2)\frac{M_{33}}{\mathcal{D}}\right]\,. \label{see2}\end{aligned}$$ Since we assume that $b_1\ne\,b_2,\,\,c_1\ne\,c_2$ (to avoid $B_1=B_2,\,\,\, C_1=C_2$), Eqs. (\[see1\]) and (\[see2\]) could be solved by requiring $$\begin{aligned} a_1+b_1+b_2\,&=&\,0\,,\label{abc1} \\ a_2+c_1+c_2\,&=&\,0\,.\label{abc2}\end{aligned}$$ But, when we put these relations back into (\[Ass\]) - (\[dCss\]), we get additional unwanted constraints $$\begin{aligned} B_1+B_2\,&=&\,-\,A\,, \label{newABB} \\ C_1+C_2+2\,D\,&=&\,A\,. \label{newCCDA}\end{aligned}$$ Thus the solution of (\[see1\]) and (\[see2\]) must involve conditions on the parameters of $M_R$ as well as those of $m_D$. Another way to understand the $S_3$ conditions and the restrictions (\[abc1\]), (\[abc2\]) is to consider a $Z_2$ symmetry [@SHY; @DGH] $$\label{Z1} G_1^T\,M_{\nu}G_1\,=\,M_{\nu}$$ where $$\label{Z2} G_1\,=\,\frac{1}{2+k^2}\left(\begin{array}{ccc} 2-k^2 & 2k & 2k \\ 2k & k^2 & -2 \\ 2k & -2 & k^2 \end{array}\right).$$ Eq.(\[Z1\]) gives two conditions $$\begin{aligned} \frac{B_1+B_2}{C_1+C_2+2D-2A}\,&=&\,\frac{k}{k^2-2} \label{Z3} \\ \frac{B_1-B_2}{C_1-C_2}\,&=&\,\frac{1}{k} \label{Z4}\end{aligned}$$ which are the $S_3$ conditions if $k=-1$. In the following sections we turn to finding restrictions on the mixing angles. In those sections the only use we will make of the minimal seesaw model is as motivation for setting one mass eigenvalue equal to zero. Eigenvalues and eigenstates =========================== For a zero mass eigenvalue we have, $$\label{m00} \left(\begin{array}{ccc} A & B_1 & B_2 \\ B_1 & C_1 & D \\ B_2 & D & C_2 \end{array}\right)\,\left(\begin{array}{c} \alpha \\ \beta \\ \gamma \end{array} \right)\,=\, \lambda\left(\begin{array}{c} \alpha \\ \beta \\ \gamma \end{array} \right)\,=\,0\,.$$ If we assume $\alpha\,\ne\,0$ (we will check this below) then we get three equations $$\begin{aligned} A\,&=&\,-\rho\,B_1-\sigma\,B_2 \label{ABB} \\ B_1\,&=&\,-\rho\,C_1-\sigma\,D \label{BCD} \\ B_2\,&=&\,-\rho\,D-\sigma\,C_2 \label{BDC}\end{aligned}$$ where $\rho\equiv\beta/\alpha,\,\sigma\equiv\gamma/\alpha$. 
So $B_1,B_2$ and $A$ are given by (\[BCD\]), (\[BDC\]), and $$\label{AA} A\,=\,\rho^2C_1+\sigma^2C_2+2\rho\sigma\,D$$ Now let’s use these in the $S_3$ relations. Eq.(\[S2\]) and Eq.(\[S1\]) give $$\begin{aligned} (\sigma-1)C_2-(\rho-1)C_1+(\rho-\sigma)D=0 \label{C2C1D} \\ (2\rho^2-\rho-1)C_1+(2\sigma^2-\sigma-1)C_2+(4\rho\sigma-\rho-\sigma-2)D=0 \label{C1C2D}\end{aligned}$$ Eqs.(\[C2C1D\]) and (\[C1C2D\]) can be reduced to $$\begin{aligned} (\rho+\sigma+1)[(1-\sigma)C_2+(1-\rho)D]\,&=&0\,,\label{newC2D} \\ (\rho+\sigma+1)[(1-\rho)C_1+(1-\sigma)D]\,&=&0\,, \label{newC1D}\end{aligned}$$ If we choose the solutions $$\begin{aligned} C_1\,&=&\,-\frac{1-\sigma}{1-\rho}D\,, \label{solC1D} \\ C_2\,&=&\,-\frac{1-\rho}{1-\sigma}D\,, \label{solC2D}\end{aligned}$$ then $D^2=C_1C_2$. The mass eigenvalues that we expect to be nonzero are given by $$\label{newmpmm} m_{\pm}\,=\,\frac{1}{2}\left[A+C_1+C_2\pm\sqrt{(A+C_1+C_2)^2+4(\rho^2+\sigma^2+1)(D^2-C_1C_2)}\right]$$ where we have used (\[BCD\]) and (\[BDC\]). Thus this solution makes a second mass eigenvalue zero. We need nonzero two masses in order to have two oscillation lengths. We might tolerate two zero masses in the case of normal hierarchy, $m_3\gg\,m_2\approx\,m_1$. We will ignore that special case except for a brief comment at the end of Sec. 5. Thus the only way to avoid two zero masses is to require $\rho+\sigma+1\,=\,0$. If we take $C_1$ and $C_2$ as the independent variables the nonzero eigenvalues are, from (\[newmpmm\]) using (\[AA\]), $$\label{pmmp} m_{\pm}\,=\,\frac{(2+2\sigma-\sigma^2)C_1+(1+4\sigma+\sigma^2)C_2}{2(2\sigma+1)} \pm\frac{3}{2}\left|\frac{(2+2\sigma+\sigma^2)C_1-(1+\sigma^2)C_2}{2\sigma+1}\right|$$ The total set of eigenvalues and eigenfunctions can be reduced to $$\begin{aligned} m_0\,&=&0\,, \label{m0} \\ |\nu_0>\,&=&\,\frac{1}{\sqrt{2}\sqrt{1+Re(\sigma)+|\sigma|^2}}[|\nu_e>-(1+\sigma)|\nu_{\mu}>+\sigma|\nu_{\tau}>] \label{n0} \\ m_{a}\,&=&\,\frac{1}{2\sigma+1}[(\sigma+2)^2\,C_1-(\sigma-1)^2\,C_2] \label{mp} \\ |\nu_{a}>\,&=&\,\frac{1}{\sqrt{3}}[|\nu_e>+|\nu_{\mu}>+|\nu_{\tau}>] \label{np} \\ m_{b}\,&=&\,\frac{2(\sigma^2+\sigma+1)}{2\sigma+1}[C_2-C_1] \label{mm} \\ |\nu_{b}>\,&=&\,\frac{1}{\sqrt{6}\sqrt{1+Re(\sigma)+|\sigma|^2}}[-(1+2\sigma)|\nu_e>- (1-\sigma)|\nu_{\mu}>+(2+\sigma)|\nu_{\tau}>] \label{nm}\end{aligned}$$ where $a,b$ are $+,-$ or $-,+$ depending on the signs of the factors in the absolute value of (\[pmmp\]). The only remaining case is to go back to (\[m00\]) and set $\alpha\,=\,0$. Eqs.(\[ABB\]), (\[BCD\]), and (\[BDC\]) are then $$\begin{aligned} B_2\,&=&\,-\lambda\,B_1 \label{alp1} \\ D\,&=&\,-\lambda\,C_1 \label{alp2} \\ C_2\,&=&\,-\lambda\,D\,=\,\lambda^2\,C_1 \label{alp3} \end{aligned}$$ where $\lambda\,\equiv\,\beta/\gamma$. The $S_3$ conditions are $$\begin{aligned} B_1\,&=&\,(\lambda-1)C_1 \label{alp4} \\ A\,&=&\,(\lambda-1)^2\,C_1\,. \label{alp5}\end{aligned}$$ Since one eigenvalue is zero the remaining eigenvalues are given by $$\label{lammas} \lambda_{\pm}\,=\,\frac{1}{2}\left[A+C_1+C_2\pm\sqrt{(A+C_1+C_2)^2+4(D^2+B_1^2+B_2^2-C_1C_2-AC_1-AC_2)}\right]$$ and (\[alp1\]) - (\[alp5\]) give $D^2+B_1^2+B_2^2-C_1C_2-AC_1-AC_2\,=\,0$. Thus $\alpha\,=\,0$ would require a second mass eigenvalue to be zero. So the only solution with broken $\mu-\tau$ symmetry and two nonzero masses is given by (\[m0\]) - (\[nm\]). Since we know $\mu-\tau$ symmetry is approximately true the parameter $\sigma$ will need to be large for inverted hierarchy or approximately $-\frac{1}{2}$ for normal hierarchy. 
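These closed expressions can be verified directly. The short numerical sketch below (ours; the values of $\sigma$, $C_1$, $C_2$ are arbitrary real test numbers) builds the mass matrix from Eqs. (\[BCD\]), (\[BDC\]) and (\[AA\]) with $\rho=-\sigma-1$ and with $D$ obtained from the $S_3$ condition Eq. (\[C2C1D\]) at that value of $\rho$, and checks that its eigenvalues are $0$, $m_a$ and $m_b$ of Eqs. (\[m0\]), (\[mp\]), (\[mm\]), with the massless state proportional to $(1,-(1+\sigma),\sigma)$ as in Eq. (\[n0\]).

```python
import numpy as np

# Arbitrary real test values; any sigma with 2*sigma + 1 != 0 and any C1, C2 will do.
sigma, C1, C2 = -7.0, 0.8, 1.3
rho = -(sigma + 1.0)

D = ((sigma + 2) * C1 + (sigma - 1) * C2) / (2 * sigma + 1)  # S_3 condition at rho = -sigma-1
B1 = -rho * C1 - sigma * D                                   # Eq. (BCD)
B2 = -rho * D - sigma * C2                                   # Eq. (BDC)
A = rho**2 * C1 + sigma**2 * C2 + 2 * rho * sigma * D        # Eq. (AA)

M = np.array([[A, B1, B2],
              [B1, C1, D],
              [B2, D, C2]])

# closed expressions (mp), (mm) for the two nonzero eigenvalues
m_a = ((sigma + 2) ** 2 * C1 - (sigma - 1) ** 2 * C2) / (2 * sigma + 1)
m_b = 2 * (sigma**2 + sigma + 1) * (C2 - C1) / (2 * sigma + 1)

assert np.allclose(np.sort(np.linalg.eigvalsh(M)), np.sort([0.0, m_a, m_b]))

# the massless state of Eq. (n0) is proportional to (1, -(1+sigma), sigma)
assert np.allclose(M @ np.array([1.0, -(1.0 + sigma), sigma]), 0.0)
```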
Restrictions on the mixing angles ================================= The conditions on the elements of the mass matrix $A,\ldots\,,D$ will allow us to put conditions on the mixing angles $(\theta_s,\theta_a,\theta_x)\,\equiv\,(\theta_{12},\theta_{23},\theta_{13})$. The neutrino mixing matrix [@19] which diagonalizes $M_{\nu}$ via $V^TM_{\nu}V\,=\,M_{\nu}^{{\rm diag}}$ can be decomposed as $V=U''UU'$ [@SHY] where $U$ is a CKM type matrix $$\label{U} U\,=\,\left(\begin{array}{ccc} c_sc_x & -s_sc_x & -s_xp \\ s_sc_a-c_ss_as_xp^{*} & c_sc_a+s_ss_as_xp^{*} & -s_ac_x \\ s_ss_a+c_sc_as_xp^{*} & c_ss_a-s_sc_as_xp^{*} & c_ac_x \end{array} \right)$$ with $(s_{\alpha},c_{\alpha})\,\equiv\,(\sin\theta_{\alpha},\cos\theta_{\alpha})$ for $\alpha\,=\,s,a,x$ and $p=e^{i\delta_D}$ where $\delta_D$ is the Dirac phase, $U''$ is the rephasing matrix, ${\rm diag}(e^{i\alpha_1},e^{i\alpha_2},e^{i\alpha_3})$[@BM], and $U'\,=\,{\rm diag}(e^{-i\phi_1/2},e^{-i\phi_2/2},e^{-i\phi_3/2})$ where $\phi_1$, $\phi_2$, and $\phi_3$ are Majorana phases. The neutrino mass matrix is then $$\label{VMV} M_{\nu}\,=\,V^{*}M_{\nu}^{{\rm diag}}V^{\dagger}$$ with elements given by $$\begin{aligned} A\,&=&\,\left[c_x^2c_s^2m_1'+c_x^2s_s^2m_2'+p^{*2}s_x^2m_3'\right]e^{-2i\alpha_1} \label{VA} \\ B_1\,&=&\,\left[c_x[s_sc_sc_a-ps_xs_ac_s^2]m_1'-c_x[s_sc_sc_a+ps_xs_as_s^2]m_2'+p^{*}c_xs_xs_am_3'\right] e^{-i(\alpha_1+\alpha_2)} \label{VB1} \\ B_2\,&=&\,\left[c_x[s_sc_ss_a+ps_xc_ac_s^2]m_1'-c_x[s_sc_ss_a-ps_xc_as_s^2]m_2'-p^{*}s_xc_xc_am_3'\right] e^{-i(\alpha_1+\alpha_3)} \label{VB2} \\ C_1\,&=&\,\left[(s_sc_a-ps_xc_ss_a)^2m_1'+(c_sc_a+ps_xs_ss_a)^2m_2'+c_x^2s_a^2m_3'\right] e^{-2i\alpha_2} \label{VC1} \\ C_2\,&=&\,\left[(s_ss_a+ps_xc_sc_a)^2m_1'+(c_ss_a-ps_xs_sc_a)^2m_2'+c_x^2c_a^2m_3'\right] e^{-2i\alpha_3} \label{VC2} \\ D\,&=&\,\big[(s_ss_a+ps_xc_sc_a)(s_sc_a-ps_xc_ss_a)m_1' \nonumber \\ &+&\,(c_ss_a-ps_xs_sc_a)(c_sc_a+ps_xs_ss_a)m_2'-c_x^2s_ac_am_3'\big]e^{-i(\alpha_2+\alpha_3)} \label{VD}\end{aligned}$$ where $m_1'=m_0\sqrt{1+r}e^{i\phi_1},\,m_2'=m_0e^{i\phi_2},\,m_3'=0$ for inverted hierarchy or $m_1'=0,\, m_2'=m_0\sqrt{r}e^{i\phi_2},\,m_3'=m_0\sqrt{1+r}e^{i\phi_3}$ for normal hierarchy. This $m_0$ is a universal mass, not the same as the eigenvalue of the previous section, and $r$ is the ratio of the mass splittings, $r\equiv\,\Delta_s/\Delta_a$. It is easy to see that $\mu-\tau$ symmetry requires $s_a=c_a$ and $s_x=0$. Tri-bimaximal symmetry requires, in addition, $\tan\theta_s\,=\,\sqrt{2}\,\,\,{\rm or}\,-1/\sqrt{2}$. We have four relations for $\sigma$ from (\[ABB\]), (\[BCD\]), (\[BDC\]), and (\[C2C1D\]), all with $\rho$ replaced by $-\sigma-1$. The first three are $$\begin{aligned} \frac{1}{\sigma}\,&=&\,\frac{B_1-B_2}{A-B_1}\,, \label{sig1} \\ \frac{1}{\sigma}\,&=&\,\frac{C_1-D}{B_1-C_1}\,, \label{sig2} \\ \frac{1}{\sigma}\,&=&\,\frac{D-C_2}{B_2-D}\,. \label{sig3}\end{aligned}$$ Now we assume inverted hierarchy and substitute (\[VA\]) - (\[VD\]) on the RHSs to find relations on the mixing angles. Only two of these are independent because we have set $m_3'$ equal to zero so we get just one relation among the angles, $$\label{r1} s_x\,e^{i\delta_D}\,=\,c_x\left[c_a\,e^{i(\alpha_3-\alpha_1)}-s_a\,e^{i(\alpha_2-\alpha_1)}\right]\,,$$ and the solution for $\sigma$, $$\label{r2} \frac{1}{\sigma}\,=\,\frac{s_a\,e^{i(\alpha_2-\alpha_3)}-c_a}{c_a}\,.$$ One of the mass eigenvalues $m_a,\,m_b$, given by (\[mp\]) and (\[mm\]), must equal $m'_1$ and the other $m'_2$. 
If we evaluate $m_a$ and $m_b$ using (\[r2\]) and $C_1,\,C_2$ given by (\[VC1\]), (\[VC2\]) we find this requires $\alpha_2=\alpha_3$. Thus $\sigma$ is real and (\[r1\]) requires $$\label{three} \delta_D+\alpha_1-\alpha_2\,=\,0\,\,{\rm or}\,\,\pi\,.$$ It is immaterial which sign we take from (\[three\]); in what follows we use $$\begin{aligned} s_x\,&=&\,c_x(c_a-s_a) \label{X1} \\ \frac{1}{\sigma}\,&=&\,\frac{s_a-c_a}{c_a} \label{X2}\end{aligned}$$ We still have the condition from $S_3$, Eq.(\[C2C1D\]), which now depends only on $\delta_D$, $$\label{sig4} \frac{1}{\sigma}\,=\,\frac{2D-C_1-C_2}{2C_1-C_2-D}\,.$$ Using (\[X2\]) for the LHS and (\[r1\]) for $e^{i\delta_D}$, this gives a quadratic equation for $\tan\theta_s$ $$\begin{aligned} &&\tan\theta_s\,=\,\frac{m_1e^{-i\alpha}-m_2\,e^{i\alpha}}{(s_a+c_a)(m_1^2+m_2^2-2m_1m_2\cos2\alpha)} \nonumber \\ &&\left[c_x(1-4c_as_a)(m_1-m_2)\pm\sqrt{c_x^2(1-4c_as_a)^2(m_1-m_2)^2+4(1+2\,c_a\,s_a)(m_1^2+m_2^2-2m_1m_2\cos\,2\alpha)}\right] \label{TAN}\end{aligned}$$ where $\alpha\equiv\alpha_1-\alpha_2$. This gives real values only if $\delta_D$ is zero or $\pi$. One solution of the quadratic equation is $$\label{r3} \tan\theta_s\,=\,-\frac{2c_x}{c_a+s_a}[1-s_ac_a]$$ or, when we use (\[X1\]), $$\label{r3p} \tan\theta_s\,=\,-\frac{1}{c_x(s_a+c_a)}\,.$$ The other solution is $$\label{r3a} \tan\theta_s\,=\,\frac{c_x}{c_a+s_a}[1+2c_as_a]\,=\,c_x(c_a+s_a)\,.$$ Since $\tan(\frac{\pi}{2}-\theta_s)\,=\,1/\tan\theta_s$, and since the oscillation experiments measure $\sin^2\,2\theta$, these two solutions are effectively equivalent. Another way of expressing the results is to write all of the mixing angles in terms of the one parameter $\sigma$. From (\[X2\]) and then (\[X1\]) we find $$\begin{aligned} \tan\theta_a\,&=&\,\frac{\sigma+1}{\sigma}\,, \label{r4} \\ \tan\theta_x\,&=&\,\frac{-1}{\sqrt{1+2\sigma+2\sigma^2}}\,, \label{r5} \end{aligned}$$ and, from (\[r3\]), $$\begin{aligned} \tan\theta_s\,&=&\,-\frac{\sqrt{2}\sqrt{1+\sigma+\sigma^2}}{1+2\sigma} \label{r6}\end{aligned}$$ or, from (\[r3a\]), $$\begin{aligned} \tan\theta_s\,&=&\,\frac{1+2\sigma}{\sqrt{2}\sqrt{1+\sigma+\sigma^2}}\,. \label{r6a}\end{aligned}$$ These two solutions have $m_{b}=m_1'=m_0\sqrt{1+r},\,\,m_{a}=m_2'=m_0$ for (\[r6\]) or $m_{b}=m_2'=m_0,\,\,m_{a}=m_1'=m_0\sqrt{1+r}$ for (\[r6a\]). (Here we are only concerned with the magnitude of the masses so we have neglected Majorana phases.) Since $|\sigma|$ is large, Eq.(\[nm\]) shows that $|\nu_b>$ has a larger fraction of $|\nu_e>$ than does $|\nu_a>$ so $m_b$ should be smaller than $m_a$. If $r\,>\,0$ then Eq.(\[r3a\]) gives $m_a=m_0\sqrt{1+r}\,>\,m_b=m_0$ but Eq.(\[r3p\]) would require $r\,<\,0$. However, as mentioned above, these two solutions for $\tan\theta_s$ are effectively indistinguishable so we won’t worry about this point further. So far we have considered only inverted hierarchy but Eq.(\[n0\]) supports normal hierarchy with $\sigma\approx-\frac{1}{2}$. If we set $m_1=0$ and compare the right hand sides of (\[sig1\]), (\[sig2\]), and (\[sig3\]) we get the condition $$\label{rr1} c_s(c_x-s_x\,(s_a-c_a))+s_s(s_a+c_a)\,=\,0\,,$$ where to make the algebra simplier we immediately neglect all the phases. This expression for $\tan\theta_s$, $$\begin{aligned} \tan\theta_s\,=\,-\frac{c_x-s_x(s_a-c_a)}{s_a+c_a}\,, \label{rr3}\end{aligned}$$ superfically looks different than those of the inverted case. 
However, if we now set (\[sig4\]) equal to (\[sig1\]) or (\[sig2\]) or (\[sig3\]) we find Eq.(\[X1\]) which, when combined with (\[rr3\]), reproduces our first solution for $\tan\theta_s$ in the inverted hierarchy case, Eq.(\[r3p\]). Using these conditions the solution for $\sigma$ is now $$\label{rr5} \sigma\,=\,\frac{c_a-2s_a}{s_a+c_a}\,,$$ or, equivalently, $$\label{nta} \tan\theta_a\,=\,\frac{1-\sigma}{2+\sigma}\,.$$ When we solve for $m_a,\,m_b$ of Eqs.(\[mp\]), (\[mm\]) we get $$\begin{aligned} m_{b}\,&=&\,m_3\,=\,m_0\sqrt{1+r}\,, \label{nhmm} \\ m_{a}\,&=&\,m_2\,=\,m_0\sqrt{r}\,. \label{nhpm}\end{aligned}$$ If we set $m_2=0$ rather than $m_1$, we get the second expression for $\tan\theta_s$ of the inverted hierarchy, (\[r3a\]), and $m_a=m_1$. For this normal hierarchy case $\sigma$ is approximately $-\frac{1}{2}$ and the eigenfunction with zero mass, Eq.(\[n0\]), has the largest fraction of $|\nu_e>$, as it should. We have insisted that only one mass eigenvalue be zero. If two masses are zero then, from (\[VA\]) - (\[VD\]) with only $m'_3$ nonzero, the $S_3$ conditions are both satisfied by (\[r1\]) above. Summary ======= We assume that the neutrino mass matrix is given by the three dimensional representation of the group $S_3$ and that one (but only one!) of the mass eigenvalues is zero, as required, for example, by the minimal seesaw model. The experimentally testable results, for either inverted or normal hierarchy, are a real Dirac phase factor (there could be nonzero Majorana phases but they don’t affect our results), $$\begin{aligned} \delta_D&=&0\,\,,\,\pi\,, \label{r7}\end{aligned}$$ and two conditions on the mixing angles, $$\begin{aligned} \tan\theta_x\,&=&\,c_a-s_a\,, \label{r9}\end{aligned}$$ and $$\begin{aligned} \tan\theta_s\,&=&\,-\frac{1}{c_x(s_a+c_a)}\,, \label{r10a}\end{aligned}$$ or $$\label{r10b} \tan\theta_s\,=\,c_x(s_a+c_a)\,.$$ For inverted hierarchy we can convert (\[r4\]) - (\[r6a\]) to $$\begin{aligned} \sin^2\theta_a\,&=&\,\frac{(1+\sigma)^2}{1+2\sigma+2\sigma^2}\,, \label{sina} \\ \sin^2\theta_x\,&=&\,\frac{1}{2(1+\sigma+\sigma^2)}\,, \label{sinb} \end{aligned}$$ and $$\label{sin1} \sin^2\theta_s\,=\,\frac{2(1+\sigma+\sigma^2)}{3(1+2\sigma+2\sigma^2)}$$ or $$\label{sin2} \sin^2\theta_s\,=\,\frac{(1+2\sigma)^2}{3(1+2\sigma+2\sigma^2)}\,,$$ in order to compare more easily with the data. The result of fitting these expressions to the experimental values [@FLMPR; @Fetal] is shown in Table I below. As mentioned above, the oscillation expressions depend on $\sin^22\theta$ and thus the experiments can’t distinguish between $\theta_s$ greater or less than $\pi/4$. So the fit using (\[sin1\]) assumes the experimental value is less than $\pi/4$, while that using (\[sin2\]) assumes it is greater than $\pi/4$, since these are the values indicated by the formula. (Approximate $\mu-\tau$ symmetry gives $s_a\sim\,c_a\sim\,\frac{1}{\sqrt{2}}$, so (\[r10a\]) gives $\tan\theta_s\sim\frac{1}{\sqrt{2}}$ while (\[r10b\]) gives $\tan\theta_s\sim\sqrt{2}$; thus a fit of (\[sin2\]) with $\theta_s\,<\,\pi/4$ is untenable.) The two sets of expressions give equivalent fits to the data, as they must.
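As a quick numerical cross-check of these expressions, the short sketch below (ours, in Python) evaluates Eqs. (\[sina\]), (\[sinb\]) and (\[sin1\]) at the fitted value $\sigma=-7.17$ quoted with Table I and recovers the central values of Fit 1 to the three decimal places listed there.

```python
# Evaluate Eqs. (sina), (sinb) and (sin1) at the fitted value sigma = -7.17.
def sin2_angles(sigma):
    s2a = (1 + sigma) ** 2 / (1 + 2 * sigma + 2 * sigma ** 2)                    # Eq. (sina)
    s2x = 1.0 / (2 * (1 + sigma + sigma ** 2))                                   # Eq. (sinb)
    s2s = 2 * (1 + sigma + sigma ** 2) / (3 * (1 + 2 * sigma + 2 * sigma ** 2))  # Eq. (sin1)
    return s2a, s2x, s2s

s2a, s2x, s2s = sin2_angles(-7.17)
print(round(s2a, 3), round(s2x, 3), round(s2s, 3))   # 0.425 0.011 0.337 (Fit 1 of Table I)
```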
Angles Best Fit Exp Range Fit 1 Fit 2 ------------------ ---------- ----------------- --------- --------- $\sin^2\theta_a$ $0.466$ $0.408 - 0.539$ $0.425$ $0.425$ $\sin^2\theta_x$ $0.016$ $0.006 - 0.026$ $0.011$ $0.011$ $\sin^2\theta_s$ $0.312$ $0.294 - 0.331$ $0.337$ $--$ $\sin^2\theta_s$ $0.688$ $0.669 - 0.706$ $--$ $0.663$ : The second column gives the experimental best fit, the third column gives the $1-\sigma$ experimental range, the fourth column gives the central values using (\[sin1\]), the fifth column gives the central values using (\[sin2\]). The minimum $\chi^2$ is $2.15$. The value of $\sigma$ which gives the minimum is $-7.17$ if we use (\[X2\]) or $-0.388$ if we use (\[rr5\]). Our fits give values for the angles of $|\theta_a|\,=\,40.7^{\circ},\,\,|\theta_x|\,=\,6.02^{\circ},\,\,$ and $|\theta_s|\,=\,35.5^{\circ}$ or $54.5^{\circ}$. Similarly for a normal hierarchy of masses we could find expressions for $\sin^2\theta_a$, $\sin^2\theta_x$, and $\sin^2\theta_s$ in terms of the $\sigma$ for normal hierarchy given by (\[rr5\]). But if we call that $\sigma_N$, and the $\sigma$ given by (\[X2\]) $\sigma_I$, then (\[r4\]) and (\[nta\]) give $$\label{sigmas} \sigma_N\,=\,-\frac{2+\sigma_I}{1+2\sigma_I}$$ and if we replaced $\sigma_N$ by $\sigma_I$ in normal hierarchy expressions for $\sin^2\theta_a, \sin^2\theta_x, \sin^2\theta_s$ we would reproduce (\[sina\]) - (\[sin2\]). The normal hierarchy expressions would be just a reparameterization of (\[sina\]) - (\[sin2\]) above and thus give an identical fit. Recently MINOS[@MINOS] has presented a measurement of $\sin^2(2\theta_a)\,\sin^2(2\theta_x)$. We can easily fit this assuming the values above for $\sin^2\theta_a$ and $\sin^2\theta_s$ but replacing $\sin^2\theta_x$ by this combination. This is shown in Table II. The fact that the fit values of $\sin^2\theta_x$ are smaller than the value in Table I despite the experimental number being bigger is because of the larger error in that number. Still as $\chi^2$ increases by one from its minimum, $\sin\theta_x$ varies from $0.$ to only $0.024$, which implies that this model prefers small $\theta_x$. Angles Best Fit Exp Range IH Fit NH Fit -------------------------------------- ---------- ----------------- --------- --------- $\sin^2\theta_a$ $0.466$ $0.408 - 0.539$ $0.449$ $0.443$ $\sin^2(2\theta_a)\sin^2(2\theta_x)$ $0.18$ $0.06 - 0.32$ $0.021$ $--$ $\sin^2(2\theta_a)\sin^2(2\theta_x)$ $0.11$ $0.04 - 0.21$ $--$ $0.025$ $\sin^2\theta_s$ $0.312$ $0.294 - 0.331$ $0.335$ $0.335$ Minimum $\chi^2$ $2.90$ $2.37$ : The fourth and fifth columns give the results of fitting the MINOS values given in row two or row three. Again the experimental range is $1-\sigma$. The MINOS numbers depend on whether they assume inverted or normal hierarchy. The fitted numbers correspond to a $\sin^2\theta_x$ of $0.0052$ for IH or $0.0064$ for NH. The values of the angles are therefore $|\theta_a|\,=\,42.1^{\circ}$ or $41.7^{\circ}$, $|\theta_x|\,=\,4.14^{\circ}$ or $4.59^{\circ}$, and $|\theta_s|\,=\,35.4^{\circ}$. Acknowledgments {#acknowledgments .unnumbered} =============== SFG was supported by the Chinese Scholarship Council. DAD and SFG were supported in part by the U. S. Department of Energy under grant No. DE-FG03-93ER40757. WWR was supported in part by the National Science Foundation under Grant PHY-0555544. We thank Sacha Kopp for discussions of the MINOS results. DAD is a member of the Center for Particles and Fields and the Texas Cosmology Center. 
Charged Lepton and Neutrino Sectors =================================== In recent years it has become common to attempt a unified treatment of the charged lepton sector and the neutrino sector. In this paper, we discuss only the neutrino sector, as do many of our references. This appendix shows that the sectors can be discussed separately. The essential point of flavor mixing is that the physical mixing matrix, being misaligned between two representations of the up-type and down-type fermions, is independent of formalism or representation. For the lepton sector, the physical mixing matrix is the so-called PMNS matrix [@19], $$V_{PMNS}=U^\dagger_e U_\nu\,,$$ where the two flavor mixing matrices are determined by $$U^\dagger_e M_e M^\dagger_e U_e=D_e D^\dagger_e\,, \qquad U^T_\nu M_\nu U_\nu = D_\nu\,. \label{eq:diag}$$ As in the body of the paper, we take the neutrinos to be Majorana particles. The diagonal mass matrices are denoted as $D_e$ and $D_\nu$ for charged leptons and neutrinos. Now, we can make an arbitrary rotation on all the lepton fields, including left-handed charged leptons and neutrinos as well as the right-handed charged leptons. Since the left-handed charged leptons and neutrinos reside in common $SU(2)_L$ doublets, they share a common rotation, $$\begin{pmatrix} \nu_i \\ \ell_i \end{pmatrix}_L \rightarrow (T_L)_{ij} \begin{pmatrix} \nu_j \\ \ell_j \end{pmatrix}_L, \qquad (\ell_i)_R \rightarrow (T_R)_{ij} (\ell_j)_R\,.$$ Then the charged lepton and neutrino mass matrices become $$M_e\rightarrow\widetilde M_e=T_L M_e T^\dagger_R\,, \qquad M_\nu\rightarrow\widetilde M_\nu=T^*_L M_\nu T^\dagger_L\,,$$ and we denote the modified mixing matrices as $\widetilde U_e$ and $\widetilde U_\nu$ respectively. Eq.(\[eq:diag\]) becomes $$\widetilde U^\dagger_e \widetilde M_e \widetilde M^\dagger_e \widetilde U_e = D_e D^\dagger_e\,, \qquad \widetilde U^T_\nu \widetilde M_\nu \widetilde U_\nu=D_\nu\,,$$ where $$\widetilde U_e =T_L U_e\,,\qquad\widetilde U_\nu=T_L U_\nu\,.$$ The important point is that the physical mixing matrix is not affected, $$\widetilde V_{PMNS}=\widetilde U^\dagger_e \widetilde U_\nu= U^\dagger_e T^\dagger_L T_L U_\nu=U^\dagger_e U_\nu=V_{PMNS}\,.$$ This is expected because, if it were not true, the physical mixing matrix would depend on the formalism or representation. This property of formalism/representation independence allows us to rotate the charged leptons to a mass diagonal basis, since what we want to discuss is just the physical mixing matrix. It doesn’t matter in which basis the discussion is made. The question is how to realize this. We should also note that, after gauge symmetry breaking where the fermions acquire mass, the up-type and down-type fermions’ mass matrices should be constrained by different representations of some symmetry if there is any. Otherwise, the two mass matrices would be constrained to be of the same form and this would lead to trivial physical mixing. The full group is at least a product. We can imagine that the $S_3$ discussed in current work is kind of residual property of some symmetry. Before symmetry breaking, there would be a larger group governing both the charged lepton and neutrino sectors, especially the left-handed ones since they share a common left-handed doublet. But experimentally the symmetry is broken. 
There can be a residual symmetry for the left-handed charged leptons, for example ${\cal Z}_3$, $${\cal Z}_3 = \{ I, F, F^2 \}\quad \mbox{with} \quad F=\begin{pmatrix} 1 \\ & \omega \\ & & \omega^2 \end{pmatrix}\,,$$ where $\omega \equiv e^{2 i \pi / 3}$. If we use $\mathcal F$ to denote the group elements of ${\cal Z}_3$, then the charged lepton’s mass matrix has to satisfy $$\mathcal F^\dagger M_e M^\dagger_e \mathcal F=D_e D^\dagger_e\,.$$ It can be verified that under this constraint $M_e M^\dagger_e$ has to be diagonal, $$M_e M^\dagger_e=D_e D^\dagger_e\,,$$ and $U_e = I$. In other words, the physical mixing comes solely from the neutrino sector. Actually, ${\cal Z}_3$ is a subgroup of $S_3$ so we can apply $S_3$ in the charged lepton sector. But, as argued above, the representations of the charged lepton and neutrino sectors should be different. In other words, the residual ${\cal Z}_3$ of the charged lepton sector cannot be simply embodied in the residual $S_3$ of the neutrino sector. The unified group should be at least a product group [@Lam] $${\cal G}={\cal Z}_3\otimes S_3\,.$$ If we want to apply $S_3$ in the charged lepton sector too, this can be achieved by embedding ${\cal Z}_3$ in another $S_3$ whose representation is different from that of neutrino sector. Then the product group would be $S_3 \otimes S_3$. [99]{} S. Pakvasa and H. Sugawara, Phys. Lett. B[**73**]{}, 61 (1978); [**82**]{}, 105 (1979); E. Durman and H.S.Tsao, Phys. Rev. D[**20**]{}, 1207 (1979); Y. Yamanaka, H. Sugawara, and S. Pakvasa, Phys. Rev. D[**25**]{}, 1895 (1982); K. Kang, J. E. Kim, and P. Ko, Z. Phys. C[**72**]{}, 671 (1996), hep-ph/9503346; K. Kang, S. K. Kang, J. E. Kim, and P. Ko, Phys. Lett. A[**12**]{}, 1175 (1996), hep-ph/9611396; M. Fukugita, M. Tanimoto, and T. Yanagida, Phys. Rev. D[**57**]{}, 44299 (1998), hep-ph/9709388; H. Fritzsch and Z-z Xing, Phys. Rev. D[**61**]{}, 073016 (2000), hep-ph/9909304; E. Ma and G. Rajasekaran, Phys. Rev. D[**64**]{}, 113012 (2001), hep-ph/0106291; P. F. Harrison and W. G. Scott, Phys. Lett.  B[**557**]{}, 76 (2003) \[arXiv:hep-ph/0302025\]; S.-L. Chen, M. Frigerio, and E. Ma, Phys. Rev. D[**70**]{}, 073008 (2004) \[Erratum-ibid. D[**70**]{}, 079905 (2004)\] hep-ph/o404084; F. Caravaglios and S. Morisi, hep-ph/0503234; W. Grimus and L. Lavoura, JHEP [**0508**]{}, 013 (2005), hep-ph/0504153; J. E. Kim and J. -C. Park, JHEP [**0605**]{}, 017 (2006), hep-ph/0512130; R. N. Mohapatra, S. Nasri, and H. B. Yu, Phys. Lett. B[**639**]{}, 318 (2006), hep-ph/0605020; R. Jora, S. Nasri, and J. Schechter, Int. J. Mod. Phys. A[**21**]{}, 5875 (2006), hep-ph/0605069; M. Picariello, Int. J. Mod. Phys. A[**23**]{}, 4435 (2008), hep-ph/0611189; Y. Koide, Eur. Phys. J. C[**50**]{}, 809 (2007), hep-ph/0612058; A. Mondragon, M. Mondragon, and E. Peinado, Phys. Rev. D[**76**]{}, 076003 (2007), arXiv:0706.0354 (hep-ph); A. Mondragon, M. Mondragon, and E. Peinado, AIP Conf. Proc. 1026: 164 (2008), arXiv:0712.2488 (hep-ph); C.-Y. Chen and L. Wolfenstein, Phys. Rev. D[**77**]{}, 093009 (2008), arXiv:0709.3767. W-l. Guo, Z-z. Xing, and S. Zhou, Int.J.Mod.Phys. E[**16**]{}, 1 (2007). S.-F. Ge, H.-J. He, and F.-R. Yin, arXiv:1001.0940. S.-F. Ge, D.A. Dicus and H.-J. He, in preparation. B.Pontecorvo, Sov.Phys.JETP [**6**]{}, 429 (1958); Z.Maki, M.Nakagawa, S.Sakata, Prog. Theor.Phys.[**28**]{}, 870 (1962). A. Barroso and J. Maalampi, Phys. Lett. B[**132**]{}, 355 (1983). G.L.Fogli, E. Lisi, A. Marrone, A. Palazzo, A. M. Rotunno, Phys. Rev. Lett. 
[**101**]{}, 141801 (2008) \[arXiv:0806.2649\] and arXiv:0809.2936\[hep-ph\]. G.L. Fogli [*et. al.*]{}, arXiv:0805.2517v3 \[hep-ph\] and Phys. Rev. [**D78**]{}, 033010 (2008). P. Adamson [*et. al.*]{}, Phys. Rev. Lett. [**103**]{}, 261802-1 (2009). C. S. Lam, “The Unique Horizontal Symmetry of Leptons”, Phys. Rev D [**78**]{}, 073015 (2008), \[arXiv:0809.1185 \[hep-ph\]\]. [^1]: Electronic address: dicus@physics.utexas.edu [^2]: Electronic address: gesf02@mails.tsinghua.edu.cn [^3]: Electronic address: repko@pa.msu.edu
{ "pile_set_name": "ArXiv" }
---
author:
- Gwennou Coupier
- 'Adel Djellouli [^1]'
- Catherine Quilliet
date: 'Received: date / Revised version: date'
title: 'Let’s deflate that beach ball'
---

Introduction
============

Due to the boom of microfluidics and miniaturization, small spherical objects are increasingly studied in soft matter, many of them thin and prone to deformation. Deformation is usually accompanied by deflation (*e.g.* due to osmotic pressure, or leakage, or lateral expansion of the shell). There have been several theoretical or numerical studies [@Hutchinson; @1967; @LandauBook; @Quilliet2008; @Quilliet2008err; @Knoche2011; @Vliegen2011; @Quilliet_2012; @Hutchinson_2017; @pezzulla2018] and some experimental investigations [@Carlson_1967; @Carlson_1968; @Zhang_2017] about the deflation of a thin, elastic shell. Most of them focus essentially on understanding and quantifying the scenario of the buckling instability that occurs beyond a certain threshold of compression or deflation. Less is known about the post-buckling behaviour [@Quilliet2008; @Quilliet2008err; @Knoche2011; @Quilliet_2012; @Knoche2014], let alone when thin shell theory is *a priori* not valid. It is generally assumed that a 2D description of the shell is valid when $d/R < 0.02$, where $d$ is the shell thickness and $R$ its mid-surface radius ($R-\frac{d}{2}$ and $R+\frac{d}{2}$ are then respectively the internal and external radii). In that case the 2D properties of the surface model can be interpreted in terms of shell thickness and 3D properties of the constituting material. These models indeed constitute a simplification compared to studies managing 3D features [@Church_1994; @Knoche2011].

In this paper, we investigate experimentally the deflation of elastic macroscopic shells, down to buckling and post-buckling deformations, for a broad range of relative shell thicknesses. These results are compared to what is known from thin shell theory, which allows us to discuss its validity range. Our low-cost experimental set-up was conceived as an efficient and versatile tool for exploring instability issues and bifurcation diagrams with students under several conditions (volume or pressure imposed), and for characterizing shells before using them in a more complex environment [@Djellouli_2017]. Yet, it provides for the first time an experimental characterization of the relationship between pre-buckling and post-buckling states. The transition between these two states is accompanied by a fast release of energy, a feature present in Nature [@forterre05; @vincent11; @son2013] that has already been used in several applications with similar soft systems [@Djellouli_2017; @holmes07; @yang15; @ramachandran2016; @gomez2017; @holmes2019].

Spherical shells of an isotropic elastic material are expected to undergo sequences of shapes that depend only on intensive parameters: Poisson’s ratio $\nu$ and relative thickness $d/R$. Shells should first stay spherical while their radius decreases, up to the point where a buckling instability suddenly makes a circular depression appear, of characteristic dimension $\sqrt{dR}$ (Fig. \[fig:spi\]) [@LandauBook; @Pogorelov]. This step was only recently understood in terms of mode localization [@Hutchinson_2017; @Hutchinson_2016].
According to simulations and theoretical studies, the depression then grows axisymmetrically when the shell is slowly deflated [@Quilliet2008; @Quilliet2008err; @Knoche2011; @Hutchinson_2017]. Thicker shells keep axisymmetry up to self-contact (Fig. \[fig:photo-balles\]-a), while for thinner shells, the depression loses its axisymmetry during deflation, progressively developing radial folds (Fig. \[fig:photo-balles\]-b) [@Quilliet2008; @Quilliet2008err; @Quilliet_2012; @Hutchinson_2017; @Knoche2014].

Quantitatively, the deflation is characterized by the volume change $\Delta V$ from the initial nondeflated state, and the pressure drop $\Delta P=P_{ext}-P_{int}>0$ it induces between both sides of the shell (outside and inside the ball). In a surface model, the denominator of the dimensionless relative volume variation $\frac{\Delta V}{V_0}$ is the volume enclosed by the initial undeformed surface. For this experimental study we chose to take as a reference the volume $V_{0}=\frac{4}{3}\pi R^{3}$ initially enclosed by the midsurface of the shell, instead of the volume $V_{int}=\frac{4}{3}\pi\left(R-\frac{d}{2}\right)^{3}$ effectively contained in the shell, thus allowing direct comparison with surface models. The set-up we developed provides the pressure drop and the volume variation of deflated spheres of known initial volume; we could then follow and discuss deformation paths observed in a $\Delta P\,-\,\frac{\Delta V}{V_0}$ diagram. We denote by $\wp(\frac{\Delta V}{V_0})$ the state equation between both quantities at equilibrium. This function $\wp$ is to be determined in this paper.

Set-up for deflation experiments
================================

We considered about 25 commercial hollow balls (beach balls, squash balls, juggle balls, balls for rhythmic gymnastics...) made of elastomers, of external radii $R+d/2$ ranging between 39.5 and 190 mm, and $d/R$ ratios between $6.5\thinspace10^{-3}$ and 0.25, plus a homemade ball of relative thickness 0.22 [@Djellouli_2017]. All Young moduli $Y_{3D}$ measured for small strains are between 0.5 and 7.5 MPa (see section \[sec:Traction\]). In order to easily measure volumes and pressures, the ball is filled with an incompressible fluid (water). It is then also immersed in water so as to avoid gradients of hydrostatic pressure along the ball (which amounts to studying shapes not deformed by gravity). The ball is connected to a U-shape manometer, a syringe, and a third tube connected to the tank of water so as to favor initial quick equilibration of all pressures (Fig. \[fig:Principle\]-a). In the initial state, the pressure difference $\Delta P$ is 0. Taps allow us to connect the ball either to the syringe or to the manometer. The manometer is made of a cylindrical tube of diameter ranging between 0.79 and 3.18 mm, thick enough to avoid tube buckling under the highest pressure differences (about 1 bar), which are met with thick squash balls. If required, the total height of the manometer could reach $10\,\mathrm{m}$ so as to measure such depressions.

The experiment is run as follows: an increasing amount of liquid $\Delta V_{w}$ is withdrawn from the ball through valve $\mathcal{V}_{s}$ (Fig. \[fig:Principle\]-b), via small volume intakes $\delta V_{i}$ ($\Delta V_{w}=\sum\delta V_{i}$). After each step the ball is put in contact with the sole manometer through valve $\mathcal{V}_{m}$.
The displacement $h>0$ of the liquid in the manometer from the initial equilibrium situation yields the pressure difference $P_{ext}-P_{int}=\rho gh$ across the ball membrane, where $\rho$ is the density of water. Because of the fluid volume variations in the manometer, the inner volume variation $\Delta V$ of the shell is slightly different from the volume $\Delta V_{w}$ set through the syringe : $$\Delta V=\Delta V_{w}-\pi r^{2}h,\label{eq:vcorr}$$ where $r$ is the internal radius of the (cylindrical) manometer tube. Even though this correction is systematically taken into account, the problem with large sections $S=\pi r^2$ would be that the volume withdrawn in the syringe has to be much larger than the targeted $\Delta V$, which may possibly make the system jump to another stability branch. This could impede full characterization of the branch of interest ; this is discussed in detail in subsection \[sub:Equilibrium-and-manometer-1\]. On the other hand, the limitation when decreasing $S$ lies in a possibly high equilibration time (see subsection \[sub:StabilsizationTimeExp\]). These experimental precautions being taken into account, for each ball the pressure difference $\Delta P$ at mechanical equilibrium can be plotted with respect to the relative volume variation $\frac{\Delta V}{V_{0}}$, giving insights on the state equation $\wp(\frac{\Delta V}{V_0})$ that is expected to depend on the relative thickness $d/R$ of the ball, and on its material’s properties ($Y_{3D}$, $\nu$). The two-step procedure ensures to work at almost imposed volume and to discuss the time evolution of the system from a known state. Had the valves $\mathcal{V}_{s}$ and $\mathcal{V}_{m}$ always been kept open so as to measure simultaneously volumes and pressures, the interpretation of the dynamics towards equilibrium would have been more tricky, since sucking out fluids in the manometer amounts to imposing pressure in the shell once the withdrawal step is stopped. The relative contribution of the volume withdrawal in the shell and in the manometer would depend on the whole set-up configuration, and in particular on the tubings resistance, as well as on the shell mechanical properties. Deflation of spherical shells ============================= Deflation essentially occurs within two regimes. In a first mode of deformation, the ball roughly keeps its sphericity. Then a sudden transition [@LandauBook; @Knoche2011; @Quilliet_2012; @Hutchinson_2017; @Church_1994; @Hutchinson_2016] transforms the sphere into an axisymmetric shape with a dimple (Fig. \[fig:spi\]). Further deflation makes the dimple size continuously increase (Fig. \[fig:photo-balles\]-a) [@Knoche2011; @Quilliet_2012; @Hutchinson_2017]. Note that quick deflation can lead to multi-dimple deformations, which were shown to correspond to branches of higher energy [@Quemeneur2012], but this was not observed thanks to our small-stepped-deflation. Linear regime before buckling {#sec:linreg} ----------------------------- The first regime corresponds to constraints with a spherical symmetry, which results in a “in-plane" compression of the shell (*i.e*. parallel to the free surfaces). For materials with nonzero Poisson’s ratio, this induces elongationnal shear in the thickness of the shell but, in the surface model that is used to describe thin shells, spherical shrinking can be modelled by a uniform in-plane compression of a spherical surface [@Quilliet_2012]. 
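As a concrete illustration of the data reduction described in this section, the minimal sketch below converts one withdrawal step (syringe volume $\Delta V_{w}$, manometer displacement $h$, tube radius $r$) into a point of the $\Delta P\,-\,\frac{\Delta V}{V_0}$ diagram, using $\Delta P=\rho g h$ and the correction of Eq. (\[eq:vcorr\]). The numerical values and variable names are placeholders, not data from our experiments.

```python
# Minimal sketch of the data reduction: each step gives a withdrawn volume Delta_V_w
# (syringe) and a manometer displacement h; the shell volume change follows
# Delta_V = Delta_V_w - pi r^2 h (eq:vcorr) and the pressure drop is rho g h.
# Numerical values below are placeholders, not data from the paper.
import numpy as np

RHO, G = 1000.0, 9.81          # water density (kg/m^3), gravity (m/s^2)

def reduce_step(delta_Vw_m3, h_m, r_tube_m, R_mid_m):
    """Return (Delta_V / V_0, Delta_P in Pa) for one deflation step."""
    V0 = 4.0 / 3.0 * np.pi * R_mid_m**3               # mid-surface volume, as chosen in the text
    dV = delta_Vw_m3 - np.pi * r_tube_m**2 * h_m      # correct for the water moved in the tube
    dP = RHO * G * h_m                                # hydrostatic reading of the U-tube
    return dV / V0, dP

# Example: a 5 mL withdrawal, a 30 cm water column, a 1 mm tube radius, a 5 cm shell radius
print(reduce_step(5e-6, 0.30, 1e-3, 0.05))
```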
In a $\Delta P$ versus $\frac{\Delta V}{V_{0}}$ diagram, quadratic compression energy corresponds to a linear evolution [@Quilliet_2012; @Marmottant_2011]: $\Delta P=\frac{4\chi_{2D}}{3R}\left(\frac{\Delta V}{V_{0}}\right)$, where $\chi_{2D}$ is the surface compression modulus. For a thin shell of an isotropic material, this 2D effective parameter can be linked to the 3D properties of the shell through $\chi_{2D}=\frac{Y_{3D}d}{2\left(1-\nu\right)}$, where $Y_{3D}$ is the Young modulus of the material, and $\nu$ its Poisson’s ratio ($\nu\lesssim0.5$ for most of the elastomeric materials, these latter being exclusively used for our experiments because they can undergo a 200% elongation without plastic deformation or fracturation). Hence: $$\Delta P=\frac{2Y_{3D}}{3\left(1-\nu\right)}\times\frac{d}{R}\left(\frac{\Delta V}{V_{0}}\right).\label{eq:LinTheo}$$ Experiments effectively show the expected linear behaviour, as exemplified in Fig. \[fig:PvsVimm\]. Values of the slope are used to nondimensionalise the characteristic post-buckling pressures in subsection \[sub:Plateau-values\], and are compared in section \[sec:Traction\] to traction experiments which provided independent measurements of $Y_{3D}$ and $\nu$. This linear regime persists up to the point where an instability causes a drastic change of shape (“buckling”) toward a configuration with a single axisymmetric dimple, together with a drop of $\Delta P$. The critical pressure at which buckling takes place was predicted from classical buckling theory [@Hutchinson; @1967; @Knoche2011] to be: $$\Delta P_{c}=\frac{2}{\sqrt{3\left(1-\nu^{2}\right)}}\times Y_{3D}\left(\frac{d}{R}\right)^{2}.\label{eq:Hutchinson}$$ In experiments, buckling often occurs before this threshold is reached, because of defects in the material [@Vella2011; @Reis2017], possibly down to 20% of the theoretical predictions for a perfect shell [@Hutchinson_2016]. According to numerical studies [@Knoche2011; @Quilliet_2012; @Marmottant_2011], proceeding with small deflation steps after this buckling hardly changes the value of $\Delta P$, which roughly plateaus during a substantial range of $\frac{\Delta V}{V_{0}}$. Plateauing, which is exemplified in Fig. \[fig:PvsVimm\]-b is specifically studied in the next section. For the thinnest shells, further deflation steps lead to a second, softer transition where radial folds progressively appear in the dimple (see Fig. \[fig:photo-balles\]-b and refs. [@Quilliet2008; @Quilliet2008err; @Quilliet_2012]) ; this aspect is not addressed in the present paper. Post-buckling plateau\[sub:Plateau\] ------------------------------------ ### Stabilization time\[sub:StabilsizationTimeExp\] During the spherical mode of deflation, the water level continuously falls (stabilizing within a few seconds) every time a small amount of water is sucked out from the ball. After buckling, it suddenly rises in the manometer. When deflation is performed further on, several behaviours may take place: - For most ball+manometer devices, the water level in the manometer stabilizes within a few seconds at each post-buckling deflation step. When recorded for a large range of relative volume variations, the slump $h$ under the reference equilibrium level in the manometer hardly varies with $\frac{\Delta V}{V_{0}}$ (“plateauing”). It shows indeed a very weak minimum at some intermediate value (as examplified in fig. \[fig:PvsVimm\]-b for $\frac{\Delta V}{V_{0}}$ above 0.015). 
Then, for the experiments carried at sufficiently large deflation, it re-increases which corresponds to an expected divergence when $\frac{\Delta V}{V_{0}}$ approaches 1 (ideally emptied ball)[@Knoche2011]. The minimum value $h{}_{min}$ of $h$ when it plateaus allows to determine the so-called “plateauing pressure” $\Delta P_{pl}=\rho gh_{min}$. This quantity underwent a specific study in the numerical simulations of ref. [@Quilliet_2012], which will be revisited hereafter. - Nevertheless, for some ball+manometer devices, at all deflation steps $h$ systematically shows a steep increase (*i.e*. the water level is suddenly sucked down for a few seconds) every time the ball is reconnected to the manometer after the sucking out of $\delta V_{i}$; then it decreases during minutes or more before stabilization, down to a new equilibrium value $h_{asympt}$. In the following, these experimental configurations are named “slow devices”. For most shells where post-buckling equilibrium is not immediately realized, it would have been too long to wait for $h$ reaching the $h_{asympt}$ value for each relative volume variation $\frac{\Delta V}{V_{0}}$ explored. Fortunately, we found out that the decrease of $h(t)$ was exponential for a few cases (fig. \[fig:FitBiExp\]-a), and that in the other cases it could be fitted using a biexponential of general formula: $$h\left(t\right)=h_{asympt}+\left(h_{init}-h_{asympt}\right)\left[p\,e^{-t/\tau_{1}}+\left(1-p\right)e^{-t/\tau_{2}}\right],\label{eq:BiExpRelation}$$ where $\tau_{1}$ and $\tau_{2}$ are respectively the short and long characteristic times, and $p$ the proportion of short-time exponential in the modelled signal (see Fig. \[fig:FitBiExp\]-b). Mechanical equilibrium is realized only when the water level in the manometer reaches its asymptotic value $h_{asympt}$. The $\left(\frac{\Delta V}{V_{0}},\Delta P=\rho gh_{asympt}\right)$ experimental graph shows plateauing as for balls without time delay. Results are presented and discussed in subsection \[sub:Plateau-values\]. ### Equilibrium and manometer\[sub:Equilibrium-and-manometer-1\] The equilibrium configurations, and the route toward them, are obtained while the shell is in contact with the manometer. In the two following subsections, we establish how this coupling influences the way the state diagram is explored and how the dynamical features intrinsic to the shell can be extracted. After closing of valve $\mathcal{V}_{s}$ and opening of valve $\mathcal{V}_{m}$ (see Fig. \[fig:Principle\]), pressure adaptation between the ball and the manometer occurs through water exchange, which in turn modifies (i) the pressure exerted by the water column in the ball (ii) the volume of the ball, hence the pressure exerted by the shell. The final state emerges from this feedback. Two characteristic situations are displayed in Fig. \[fig:TransientStab\]. After a volume $\delta V_{i}$ has been sucked out from a ball at equilibrium with state $\left(\Delta V_{i},\Delta P_{i}\right)$, the ball finds itself in a state $\left(\Delta V_{i}+\delta V_{i},\Delta P_{i+1,interm}\right)$, which we assume here to be an equilibrium state. Nevertheless, features of this new state are not known by the experimentalist, who has to open valve $\mathcal{V}_{m}$ in order to measure the pressure. Once the ball and manometer are in contact, the pressure difference $\Delta P_{i+1,interm}$ between both extremities of the manometer is not *a priori* equilibrated by the water withdrawal $h_{i}$ (that previously equilibrated $\Delta P_{i}$). 
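For the “slow devices”, extracting $h_{asympt}$, $\tau_{1}$ and $\tau_{2}$ amounts to a standard non-linear fit of Eq. (\[eq:BiExpRelation\]). The sketch below illustrates such a fit with `scipy.optimize.curve_fit` on a synthetic trace; it is not the exact fitting procedure applied to our data, only a minimal example of it.

```python
# Illustrative fit of the biexponential relaxation (eq:BiExpRelation) to a measured
# h(t) trace.  The synthetic "data" below only serve to demonstrate the procedure.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, h_asympt, h_init, p, tau1, tau2):
    return h_asympt + (h_init - h_asympt) * (p * np.exp(-t / tau1)
                                             + (1 - p) * np.exp(-t / tau2))

t = np.linspace(0, 5000, 400)                                     # seconds
h_data = biexp(t, 0.20, 0.35, 0.6, 80.0, 2000.0) \
         + 0.002 * np.random.default_rng(2).normal(size=t.size)   # fake noisy trace

p0 = (h_data[-1], h_data[0], 0.5, 50.0, 1000.0)                   # crude initial guesses
popt, _ = curve_fit(biexp, t, h_data, p0=p0,
                    bounds=([0, 0, 0, 1, 1], [1, 1, 1, 1e4, 1e5]))
h_asympt, h_init, p, tau1, tau2 = popt
print(f"h_asympt = {h_asympt:.3f} m, tau_1 = {tau1:.0f} s, tau_2 = {tau2:.0f} s")
```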
This leads to a flow in the manometer until the outside-inside pressure difference $\Delta P=P_{ext}-P_{int}$ is equilibrated by the hydrostatic pressure associated with withdrawal $h$: $\Delta P_{i}-\Delta P=\rho g\left(h_{i}-h\right)$. On an other hand, conservation of water volume implies that $\Delta V-\Delta V_{i}=\delta V_i-\pi r^{2}\left(h-h_{i}\right)$; hence: $$\Delta P=\Delta P_{i}+\frac{\rho g}{\pi r^{2}}\left(\Delta V_{i}+\delta V_{i}-\Delta V\right).\label{eq:DteFonctionnt}$$ In a $\Delta P-\Delta V$ diagram, this is the equation of the straight line (“operating curve”) of slope $\left(-\frac{\rho g}{\pi r^{2}}\right)$ that passes through the point $\left(\Delta V_{i}+\delta V_{i},\Delta P_{i}\right)$ (Fig. \[fig:TransientStab\]). The measured equilibrium state $\left(\Delta V_{i+1},\Delta P_{i+1}\right)$ is then found by following the state curve $\wp(\Delta V)$ from the intermediate equilibrium state (with valve $\mathcal{V}_{m}$ closed) $(\Delta V_{i}+\delta V_{i},\Delta P_{i+1,interm})$ up to its intersection with the straight line of equation (\[eq:DteFonctionnt\]). Of course, if several branches of the state function are intersected, the final state is expected to lie on the same branch as that reached by the intermediate state (see Fig. \[fig:TransientStab\]). Two limit cases for the operating curve are horizontality, which marks deformations at imposed pressure difference, and verticality (imposed volume). Comparing the slopes of the linear part of the $\Delta P-\Delta V$ diagram and of the operating curve provides a threshold value $r_{c}=\left(\frac{\rho g R^{4}}{dY_{3D}}\right)^{1/2}$ for the inner radius of the manometer, so that $r\ll r_{c}$ corresponds to deflation at imposed volume, and $r\gg r_{c}$ to deflation at constant pressure. For our experimental conditions, $r_{c}\approx1\,$mm: experiments are done in an intermediate regime where, in particular, the jump between the two states before and after buckling has a negative slope whose absolute value is comparable to the slope of the isotropic part of the deflation (see Figs. \[fig:PvsVimm\]-b and \[fig:PvsVdelay\]). The interplay between the shell and the manometer also sets a limitation for the determination of the state function $\wp(\frac{\Delta V}{V_0})$: only the part of the lower branch corresponding to $\Delta V>\Delta V_{C}$, where $C$ is the point where the tangent has a slope $\left(-\frac{\rho g}{\pi r^{2}}\right)$ (Fig. \[fig:TransientStab\]), can be explored. Also the access to the extremity of the linear part depends on $r$. Finally, a small internal radius $r$ of the manometer allows to explore a bigger part of both the lower and upper branches. The counterpart lies in the dynamics toward equilibrium, which is discussed in the following subsection. The situation is indeed more complex for some “slow devices” (ball+manometer) where, in the post-buckling state, the equilibrium takes more than a few seconds to stabilize after the opening of valve $\mathcal{V}_{m}$. In that case, water outtake generates a steep withdrawal of the water level in the manometer, followed by a slower increase toward a limit value (*via* an exponential or bi-exponential relaxation versus time, as exposed in subsection \[sub:StabilsizationTimeExp\]). We observed experimentally that the slope of the steep withdrawal (light blue in fig. \[fig:PvsVdelay\]) never overtakes the slope of the linear part (which corresponds to pure constriction of the shell). 
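The geometrical construction described above can be written explicitly when the state curve is locally linear. The sketch below (illustrative values only, not measurements) computes the intersection of the operating line of Eq. (\[eq:DteFonctionnt\]) with such a branch, together with the threshold radius $r_{c}=\left(\frac{\rho g R^{4}}{dY_{3D}}\right)^{1/2}$.

```python
# Sketch of the construction above: the new reading lies at the intersection of the
# operating line (eq:DteFonctionnt), of slope -rho*g/(pi r^2), with the state curve
# wp(Delta V), here taken locally linear for illustration.  Also evaluates r_c.
import numpy as np

RHO, G = 1000.0, 9.81

def r_critical(R, d, Y3D):
    return np.sqrt(RHO * G * R**4 / (d * Y3D))        # r_c = (rho g R^4 / (d Y3D))^(1/2)

def next_reading(dV_i, dP_i, delta_V, r, slope_state):
    """Intersect the operating line with a locally linear state curve of slope d(wp)/d(DeltaV)."""
    a = RHO * G / (np.pi * r**2)                      # |slope| of the operating line
    # wp(V) ~ dP_i + slope_state*(V - dV_i)  and  dP = dP_i + a*(dV_i + delta_V - V)
    dV_next = dV_i + a * delta_V / (a + slope_state)
    dP_next = dP_i + slope_state * (dV_next - dV_i)
    return dV_next, dP_next

# Illustrative shell: R = 4 cm, d/R = 0.2, Y3D = 5 MPa -> r_c of order 1 mm, as in the text
print("r_c =", r_critical(R=0.04, d=0.008, Y3D=5e6), "m")
```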
We then assume that the sucking out of $\delta V_{i}$ first generates a (rapid) uniform constriction of the surface (on the figure: green arrows with the same inclination as the linear part of $\wp(\frac{\Delta V}{V_0})$), which has enough time to partly relax via a rolling of the rim that encircles the depression (pink arrows) before the ball is reconnected to the manometer. The relaxation of $h$ observed afterwards then corresponds to the end of the rim rolling toward the $\left(\Delta V_{i+1},\Delta P_{i+1}\right)$ equilibrium configuration, possibly slowed down further by other phenomena discussed in the following subsection. A quantitative model for identifying the origin of the characteristic time(s) that are observed after connection to the manometer is proposed in the next subsection.

### Relaxation towards equilibrium\[sub:Model-for-the\]

As shown in figure \[fig:TempsEquilibrage\], the characteristic time is around $2-500\,s$ for an exponential decay, while, when a biexponential fit is necessary, it unveils a longer characteristic time of $\approx300-20000\,s$. Of course, for experiments where the water level stabilized “immediately”, we only have an upper bound for the characteristic time(s), which is the few seconds that are necessary to operate the valves before measuring $h$.

When valve $\mathcal{V}_{m}$ is turned open after a deflation step, the water level in the manometer has to move in order to adapt to the new pressure. Assuming a Stokes incompressible flow in the vertical tube due to the pressure difference $\Delta P$ between both extremities, this reads: $$8\eta\left(L-h(t)\right)\frac{dh(t)}{dt}- r^{2}\Delta P(t)+r^{2}\rho gh(t)=0,\label{eq:dyn-hp}$$ where $\eta$ is the viscosity of the water and $L$ the total length of the manometer, *i.e.* from the ball entry to the position of the meniscus in the initial state. We neglected the section variations at the level of the valves and connections, and in the following we will replace $L-h$ by $L$ because $h\ll L$. Rewriting Eq. (\[eq:dyn-hp\]) then leads to: $$\tau_{f}\frac{dh(t)}{dt}+h(t)=\frac{\Delta P(t)}{\rho g},\label{eq:dyn-h}$$ where $\tau_{f}$ is a characteristic time for the decay of the water level toward its equilibrium value, and depends on experimental parameters through: $$\tau_{f}=8\eta L/(\rho gr^{2}).\label{eq:TpsDissViscLiq}$$

However, this fluid viscous dissipation is not the only possible contribution to the water level dynamics. As explained at the end of the previous subsection, internal friction in the material that forms the shell may be of importance. Our assessment is that, because of dissipation in the shell’s material, the pressure difference between both sides of the shell may evolve with a characteristic time $\tau_s$ toward the equilibrium situation where $\Delta P=\wp(\Delta V)$: $$\tau_{s}\frac{d\Delta P(t)}{dt}+\Delta P(t)=\wp(\Delta V(t)).\label{eq:dptdvt}$$ Here, we assume that $\tau_{s}$ is independent of the shape along the equilibration process in the manometer, which is reasonable as soon as small volume variations are imposed at each measurement step.
When opening valve $\mathcal{V}_{m}$ in order to measure the pressure, the system evolves from the intermediate state $(\Delta V_{i},$ $\Delta P_{i+1,interm})$ to the state $\left(\Delta V_{i+1},\Delta P_{i+1}\right)$; equations (\[eq:dptdvt\]) and (\[eq:dyn-h\]) together with the relationship $\Delta V=\Delta V_{i}+\delta V_{i}+\pi r^{2}(h_{i}-h)$ eventually lead to the evolution equation for $h$: $$\begin{gathered} \tau_{f}\tau_{s}\frac{d^{2}h(t)}{dt^{2}}+(\tau_{f}+\tau_{s})\frac{dh(t)}{dt}+h(t)\\ =\frac{\wp(\Delta V_{i}+\delta V_{i}+\pi r^{2}(h_{i}-h(t)))}{\rho g}.\label{eq:h}\end{gathered}$$ Before going further in the study of the dynamics towards measurable equilibrium states, let us focus on the latter, which we denote with stars. These states are characterized by hydrostatic relationship $\wp(\Delta V^{*})=\rho gh^{*}$, with: $$\Delta V^{*}=\Delta V_{i}+\delta V_{i}+\pi r^{2}(h_{i}-h^{*}).\label{es:stat}$$ Because we explore the diagram step-by-step, the system is never far from its fixed point (except at the moment of exact buckling, that we do not consider here), so that we can expand the second term of equation (\[eq:h\]) around it : $\wp(\Delta V)=\wp(\Delta V^{*})+\frac{d\wp}{d\Delta V}(\Delta V^{*})\times(\Delta V-\Delta V^{*})$, and eventually: $$\begin{gathered} \tau_{f}\tau_{s}\frac{d^{2}h(t)}{dt^{2}}+(\tau_{f}+\tau_{s})\frac{dh(t)}{dt}\\ +\big[1+\frac{\pi r^{2}}{\rho g}\times\frac{d\wp}{d\Delta V}(\Delta V^{*})\big](h(t)-h^{*})\\ =\frac{\wp(\Delta V^{*})}{\rho g}-h^{*}.\label{eq:hfinal}\end{gathered}$$ Initial conditions at $t=0$ are $h=h_{i}$, and from Eq (\[eq:dyn-h\]), $\tau_{s}\frac{dh}{dt}=\frac{\Delta P}{\rho g}-h_{i}=\frac{\Delta P_{i+1,interm}}{\rho g}-h_i$, which depends on the moment at which the manometer was put in contact with the shell. One can easily show that the characteristic equation associated with the left part of Eq. \[eq:hfinal\] has two roots with negative real parts if $\left[\frac{d\wp}{d\Delta V}(\Delta V^{*})\right]>\left[-\frac{\rho g}{\pi r^{2}}\right]$. If this is not the case, the fixed point is not a stable point and cannot be reached, as already discussed in the geometrical construction of Fig. \[fig:TransientStab\]. This implies we cannot explore parts of the $\wp(\Delta V)$ state function where the slope is too strongly negative. Those are scarce in the diagram [@Knoche2011], which justifies the choice of a U-shape manometer with water below the air at the level of the interface. The strongest slopes are met in the isotropic phase. In that case, $\frac{d\wp}{d\Delta V}(\Delta V^{*})\sim Y_{3D}\times\frac{d}{R}\times\frac{1}{V_{0}}$. Considering $r=0.5$ mm, $Y_{3D}=7$ MPa, the highest value 0.25 for $d/R$ and the lowest value $40$ mm for the shell radius, we find that $\frac{\pi r^{2}}{\rho g}\times\frac{d\wp}{d\Delta V}(\Delta V^{*})$ never exceeds 0.5, so this term can be safely ignored in Eq. (\[eq:hfinal\]) when one studies post-buckling states. For the isotropic phase as for the plateau, the solution of Eq. \[eq:hfinal\] is therefore a biexponential function with characteristic times $\tau_{f}$ and $\tau_{s}$. The theoretical $\tau_{f}$, calculated using Eq. (\[eq:TpsDissViscLiq\]), was compared (Fig. \[fig:TempsEquilibrage\]) to the characteristic time(s) experimentally obtained, to which we incorporated data from the “instantaneous” experiments by estimating the upper bond for the characteristic time as $1\,$s. We observe that apart from one case, the characteristic times are much higher than the viscous time $\tau_{f}$. 
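As a consistency check of this analysis, the linearized dynamics of Eq. (\[eq:hfinal\]) can be integrated directly: when the small coupling term is dropped, as argued above, the decay rates are exactly $1/\tau_{f}$ and $1/\tau_{s}$. The sketch below uses illustrative parameter values and is not a fit to any measured trace.

```python
# Sketch: integrate the linearized dynamics of eq:hfinal around the fixed point h*,
# neglecting the small coupling term, so that
#   tau_f*tau_s*h'' + (tau_f+tau_s)*h' + (h - h*) = 0.
# The solution is biexponential with characteristic times tau_f and tau_s.
# Parameter values and the initial condition are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

tau_f, tau_s, h_star, h_i = 5.0, 800.0, 0.20, 0.35     # seconds, metres (illustrative)

def rhs(t, y):
    h, hdot = y
    return [hdot, -((tau_f + tau_s) * hdot + (h - h_star)) / (tau_f * tau_s)]

sol = solve_ivp(rhs, (0.0, 5000.0), [h_i, 0.0], dense_output=True)

# The decay rates are the roots of tau_f*tau_s*s^2 + (tau_f+tau_s)*s + 1 = 0,
# i.e. s = -1/tau_f and s = -1/tau_s exactly in this limit.
roots = np.roots([tau_f * tau_s, tau_f + tau_s, 1.0])
print("decay times:", sorted(-1.0 / roots))             # ~ [tau_f, tau_s]
print("h(t = 2000 s) =", float(sol.sol(2000.0)[0]))
```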
This suggests that the times experimentally determined are intrinsic to the shells themselves, which are made of commercial polymers. Eq. \[eq:dptdvt\] needs to be refined to account for this more complex relaxation scenario, which depends strongly on the ball under consideration as 0, 1 or 2 characteristic times larger than a few seconds can emerge. One may wonder why the viscous fluid characteristic time was not observed in more cases: the sampling was adapted to the slow relaxation dynamics of the shells, preventing data collection at times necessary to detect exponential contribution(s) with a characteristic time of a few seconds. Finally, the choice of an intermediate section for the manometer enables us to obtain fluid dissipation times well-separated from that associated with the dissipation in the shell material, without hindering our ability to explore the state diagram by the use of too large sections. ### Plateau values\[sub:Plateau-values\] We denominate by $\Delta P_{pl}$ (“plateau value”) the minimum value of the outside-inside pressure difference $\Delta P$, in the very flattened U-shaped part of the curve after buckling. This quantity was previously studied through numerical simulations in Ref. [@Quilliet_2012], and a heuristic dependance had been found between $\Delta P_{pl}$ and $\frac{\Delta V}{V_{0}}$. For the present paper, we extended the simulations range and we use a different formula to fit the simulations for the whole range of experimental $\frac{d}{R}$, *i.e.* from $5.10^{-3}$ to 0.3: $$\Delta P_{pl}=\frac{Y_{3D}}{\left(1-\nu^{2}\right)^{0.75}}\times\left(2.34\,10^{-6}+0.9\left(d/R\right)^{2.57}\right)\label{eq:Magic Formula}$$ In order to check the consistency of the deflation experiments with the theory, we determined for each ball the slope $p_{lin}$ of the linear part. Theoretically, $p_{lin}=\frac{2Y_{3D}}{3\left(1-\nu\right)}\times\frac{d}{R}$ (from Eq. (\[eq:LinTheo\])). We then focussed on the nondimensionalized value $\frac{\Delta P_{pl}}{p_{lin}}$ (which avoids concerns about an independent determination of $Y_{3D}$) with respect to $\frac{d}{R}$, as displayed in Fig. \[fig:Resum1\]. It shows that these experimental points are consistent with the theoretical curve obtained from Eq. (\[eq:Magic Formula\]) and the expression of $p_{lin}$: $$\frac{\Delta P_{pl}}{p_{lin}}=\frac{\left(1-\nu\right)^{0.25}}{\left(1+\nu\right)^{0.75}}\left[3.51\,10^{-6}+1.35\left(\frac{d}{R}\right)^{2.57}\right]\left(\frac{d}{R}\right)^{-1}\label{eq:magic2}$$ This result is new and of practical interest, since equation (\[eq:Magic Formula\]) had been established for thin shells. The experiments presented here show its validity for shells with relative thickness up to $\frac{d}{R}\approx0.3$. ### Towards folding\[sub:folds\] As the ball deflates along the postbuckling plateau, folds appear in the depression for the thinnest of the shells, as in Fig. \[fig:photo-balles\]-b. This secondary buckling transition is documented in literature for thin shells, both experimentally [@Carlson_1968] for very thin shells and theoretically [@Knoche2014; @Hutchinson_2017], but the only results for what concerns shells of medium thickness ($d/R>0.02$) were obtained numerically [@Quilliet2008; @Quilliet_2012]. Experimental domains of existence of emblematic non axisymmetric conformation are represented in Fig. \[fig:folds\] in the $(d/R,\Delta V/V_0)$ space. They show some discrepancies with the boundaries obtained from simulations [@Quilliet2008; @Quilliet_2012]. 
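Equations (\[eq:Magic Formula\]) and (\[eq:magic2\]) are straightforward to evaluate. The sketch below does so for a few relative thicknesses with illustrative material parameters, and also checks that the two expressions are mutually consistent through the theoretical slope $p_{lin}$ of Eq. (\[eq:LinTheo\]).

```python
# Sketch evaluating the heuristic plateau formulas quoted above (eq:Magic Formula and
# eq:magic2) for a few relative thicknesses.  Y3D and nu are illustrative inputs.
import numpy as np

def dP_plateau(Y3D, nu, d_over_R):
    return Y3D / (1 - nu**2) ** 0.75 * (2.34e-6 + 0.9 * d_over_R ** 2.57)

def p_lin(Y3D, nu, d_over_R):
    return 2 * Y3D / (3 * (1 - nu)) * d_over_R          # slope of the linear regime (eq:LinTheo)

def ratio(nu, d_over_R):                                 # eq:magic2, independent of Y3D
    return ((1 - nu) ** 0.25 / (1 + nu) ** 0.75
            * (3.51e-6 + 1.35 * d_over_R ** 2.57) / d_over_R)

Y3D, nu = 2.0e6, 0.48                                    # Pa, typical elastomer values
for dR in (0.01, 0.05, 0.1, 0.3):
    print(f"d/R={dR:5.2f}  dP_pl={dP_plateau(Y3D, nu, dR):9.1f} Pa  "
          f"dP_pl/p_lin={ratio(nu, dR):.2e}  "
          f"(check: {dP_plateau(Y3D, nu, dR) / p_lin(Y3D, nu, dR):.2e})")
```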
Primary buckling occurred for a volume loss much lower than that predicted in simulations; as in Sec. \[sec:linreg\], defects are expected to be the cause of this discrepancy. The secondary buckling towards non-axisymmetric shapes also occurred for values of the relative deflation significantly lower than in simulations. Such shapes present radial folds, the number of which is denominated by $N_F$. In our experiments, the transition out of an axisymmetric shape could happen by way of an elongation of the dimple (the shape is then characterized by $N_F=2$, as in Ref. [@Carlson_1968]) and could be continued by the development of a three-fold shape ($N_F=3$). Neither type of shape was obtained in the simulations of Ref. [@Quilliet_2012]. In these simulations, $N_F=4$ was seldom observed, whereas the $N_F=4$ zone covers a large area in the experimental diagram of Fig. \[fig:folds\]. Finally, the experimental domain of transition from $N_F=3$ to $N_F=4$ is crossed by the heuristic transition line found in Ref. [@Quilliet_2012] for the secondary buckling, which is characterized by a direct $N_F=1$ to $N_F\ge 4$ transition. It may indicate that, for some numerical reason, the energy minima corresponding to low numbers of folds were not found by the solver in the simulations, which then remained stuck at axisymmetric shapes. Note that the secondary buckling transition line found by Knoche and Kierfeld in Ref. [@Knoche2014] is close to that proposed in Ref. [@Quilliet_2012], which serves here as a reference for this discussion.

Notwithstanding this discrepancy in the boundaries of the axisymmetric zone, we aim here at checking the heuristic dependence on $d/R$ of the number of folds $N_F$ reached at the end of the plateau, proposed in Ref. [@Quilliet_2012]. For the thinnest of the shells, the number of folds clearly departs from this heuristic law, as shown in Fig. \[fig:NF\]. This discrepancy may be due to the intrinsic limitations of an elastic model, failing to describe microscopic phenomena at stake at the apex of the s-cones in thin shells, where sharp creases are likely to host plastic deformation [@Nasto2013]. Interestingly, for thick enough shells ($d/R > 0.01$) less prone to extreme deformations, the number of folds roughly follows the proposed law in $(d/R)^{-1/2}$, thus confirming the relevance of $\sqrt{dR}$ as the key length for the elastic deformations of shells [@Quilliet_2012].

Comparison with traction experiments\[sec:Traction\]
====================================================

Elastic properties (Young modulus and Poisson’s ratio) were directly measured with a Shimadzu Autograph AGS-X tensile tester equipped with a 100 N load cell. The tensile tests were performed at ambient temperature on dumbbell-shaped samples cut from the ball with a dogbone punch (gauge length $18$ mm $\times4$ mm), hence presenting a thickness $d$. Traction was performed at a maximum crosshead speed of 2 mm/min. For each ball, two different samples were submitted to two tractions at a maximum deformation of 3%, during which force and elongations (both longitudinal and transversal, using instant image treatment) were recorded. The true stress was plotted as a function of the nominal strain, and the Young modulus was determined from the initial slope of the stress/strain curves. Video recording of the sample during the deformation was performed in order to measure the Poisson’s ratio.
Non-linearity between longitudinal and transversal deformations prevented reliable measurement of $\nu$ for half of the samples. When we were able to unambiguously determine its values, we found $0.45\le \nu\le 0.5$, as is typical for elastomeric materials. Regardless, the Poisson’s ratio has a small effect on the values of interest, as shown by the theoretical curves of figures \[fig:Resum1\] and \[fig:Traction\].

Figure \[fig:Traction\] shows that there is a satisfactory agreement for most of the shells between the slope $p_{lin}$ of the $\wp(\Delta V)$ equilibrium diagram in the isotropic deflation regime, adimensionalised by $Y_{3D}$ measured by traction experiments, and its theoretical value computed from $d/R$ and $\nu$. We recall that most of the studied shells are low-cost toys obtained by rotational casting, with some variations of the thickness along the surface. These results indicate that for moderate deformations, in-plane compression (which operates in deflation experiments) and traction can be described using the same linear Young modulus.

Conclusion and discussion
=========================

Through theoretical and/or numerical studies, previous literature provided hints about the behaviour of a ball that buckles under pressure, according to its relative volume change, relative thickness and Poisson’s ratio. This was mostly obtained through the use of a model of elastic surface whose range of validity is, a priori, restricted to thin shells ($d/R < 0.02$). The experimental study conducted in this paper showed that thin shells deflate according to these models, with quantitative agreement for the relationships between volume and inside-outside pressure difference controlled by the Young modulus of the ball. More surprisingly, the agreement between the numerical deflation of elastic surfaces and the experimental results on shells of an isotropic material still holds for thicker shells (with large relative thicknesses, up to almost 0.3), when the correspondence between 3D features and the 2D properties of the model surface is kept unchanged. We also identified the dynamics for the rolling of the rim (which encloses the depression formed during the buckling), with 1 or 2 relaxation characteristic times, depending on the properties that are associated with the dissipation in the material. We plan to run dynamical simulations in the future with models for the shell membrane incorporating dissipation, so as to identify the source of these different times. These results bring essential clues to the deflation of shells, and quantitative insights into a range of parameters that has not yet been explored experimentally or theoretically.

Acknowledgements
================

We thank Pierre Saillé (CERMAV) for introducing us to traction experiments, and Guillaume Laurent and Antonin Borgnon for their involvement as students in the first experiments. A.D.’s position was funded by the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007–2013)/ERC Grant No. 614655 Bubbleboost.

Authors contributions
=====================

G.C. and C.Q. have designed the research and the experimental set-up. All the authors carried out the experiments. C.Q. has realised the additional numerical simulations. G.C. and C.Q. were involved in the preparation of the manuscript. All the authors have read and approved the final manuscript.

[50]{} J. W. Hutchinson, J. Appl. Mech. **34**, 49 (1967). L. Landau, E. M.
Lifschitz, *Theory of elasticity*, 3rd ed., Elsevier Butterworth-Heinemann, Oxford (1986). C. Quilliet, C. Zoldesi, C. Riera, A. van Blaaderen, A. Imhof, Eur. Phys. J. E **27**, 13 (2008). C. Quilliet, C. Zoldesi, C. Riera, A. van Blaaderen, A. Imhof, Eur. Phys. J. E **32**, 419 (2010). S. Knoche and J. Kierfeld, Phys. Rev. E **84**, 046608 (2011). G. A. Vliegenthart, G. Gompper, New J. Phys. **13**, 045020 (2011). C. Quilliet, Eur. Phys. J. E **35**, 48 (2012). J. W. Hutchinson, J. M. T. Thompson, Phil. Trans. R. Soc. A **375**, 20160154 (2017). M. Pezzulla, N. Stoop, M.P. Steranka, A.J. Bade, and D.P. Holmes, Phys. Rev. Lett. **120**, 048002, (2018). R. L. Carlson, R. L. Sendelbeck, N. J. Hoff, Exp. Mech. **7**, 281 (1967). L. Berke, R. L. Carlson, Exp. Mech. **8**, 548 (1968). J. Zhang, M. Zhang, W. Tang, W. Wang, M. Wang, Thin-Walled Str. **111**, 58 (2017). S. Knoche, J. Kierfeld, Eur. Phys. J. E **37**, 62 (2014). C. C. Church, J. Acoust. Soc. Am. **97**, 1510 (1994). A. Djellouli, P. Marmottant, H. Djeridi, C. Quilliet, G. Coupier, Phys. Rev. Lett. **119**, 224501 (2017). Y. Forterre, J. M. Skotheim, J. Dumais, L. Mahadevan, Nat. **433**, 421 (2005). O. Vincent, C. Weisskopf, S. Poppinga, T. Masselter, T. Speck, M. Joyeux, C. Quilliet, P. Marmottant, Proc. Roy. Soc. B: Biol. Sci. **278**, 2909 (2011). K. Son, J. S. Guasto, R. Stocker, Nat. Phys. **9**, 494 (2013). D. P. Holmes, A. J. Crosby, Adv. Mater. **19**, 3589, (2007). D. Yang, B. Mosadegh, A. Ainla, B. Lee, F. Khashai, Z. Suo, K. Bertoldi, G. M. Whitesides, Adv. Mater. **27** 6323 (2015). V. Ramachandran, M. D. Bartlett, J. Wissman, C. Majidi, Extreme Mech. Lett. **9**, 282 (2016). M. Gomez, D. E. Moulton, D. Vella, Phys. Rev. Lett. **119**, 144502 (2017). D. P. Holmes, Curr. Op. Coll. Interf. Science **40**, 118 - 137 (2019). A. V. Pogorelov, *Bending of surfaces and stability of shells* (American Mathematical Society, Providence, 1988). J. W. Hutchinson, Proc. Roy. Soc. A **472**, 20160577 (2016). F. Quéméneur, C. Quilliet, M. Faivre, A. Viallat, B. Pépin-Donat, Phys. Rev. Lett. **108**, 108303 (2012). P. Marmottant, A. Bouakaz, N. De Jong, C. Quilliet, J. Acoust. Soc. Am. **129**, 1231 (2011). D. Vella, A. Ajdari, A. Vaziri, A. Boudaoud, Phys. Rev. Lett. **107**, 174301 (2011). J. Marthelot, F. López Jiménez, A. Lee, J. Hutchinson, P.M. Reis, J. Appl. Mech. **84**, 121005 (2017). A.Nasto, A. Ajdari, A. Lazarus, P. Reis, Soft Matt. **9**, 6796 (2013). [^1]: *Present address:* Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA
{ "pile_set_name": "ArXiv" }
--- address: - '$^{\dagger}$T-8, Theoretical Division, MS B285, Los Alamos National Laboratory, Los Alamos, New Mexico 87545' - '$^{\star}$Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545' author: - 'Salman Habib$^{\dagger}$ and Grant Lythe$^{\star}$' title: 'Dynamics of Kinks: Nucleation, Diffusion and Annihilation' --- 2 Many extended systems have localized coherent structures that maintain their identity as they move, interact and are buffeted by local fluctuations. The statistical mechanics of these objects has diverse applications, [*e.g.*]{}, in condensed matter physics [@SS], biology [@PB], and particle physics [@KRS]. The model to be studied here is a kink-bearing $\phi^4$ field theory in $(1+1)$ dimensions, popular because its properties are representative of those found in many applications. Static equilibrium quantities of this theory, such as the kink density and spatial correlation functions, are now well understood and recent work has shown that theory and simulations are in good agreement [@AHK; @HKS; @bhl]. However, dynamical processes, both close to and far out of equilibrium, are much less well understood. Questions include: What is the nucleation rate of kink-antikink pairs? How is an equilibrium population maintained? How do these dynamical processes depend on the temperature and damping? These questions, among others, are the subject of this Letter. We introduce and analyze below a simple model of kink diffusion and annihilation that predicts the nucleation rate and provides a picture of the physical situation, including the existence of multiple time and length scales. We also carry out high resolution numerical simulations. As one consequence of our work, we are able to settle a recent controversy as to whether the nucleation rate of kinks in an overdamped system is proportional to $\exp(-2E_k\beta)$ [@BC] or $\exp(-3E_k\beta)$ [@HMS] in favor of the first result ($E_k$ is the kink energy and $\beta=1/k_BT$). We consider the dynamics of the $\phi^4$ field obeying the following dimensionless Langevin equation [@AHK]: $$\partial^2_{tt}\phi=\partial^2_{xx}\phi+\phi(1-\phi^2) -\eta\partial_t\phi + \xi(x,t), \label{spde}$$ with the fluctuation-dissipation relation enforced by ${{\big<\xi(x,t)\xi(x',t')\big>}}= 2\eta\beta^{-1}\delta (x-x')\delta (t-t')$. We perform simulations on lattices typically of $10^6$ sites, using a finite difference algorithm that has second-order convergence to the continuum [@bhl]. Typical values of the grid spacing and time step are $\Delta x = 0.4$ and $ \Delta t = 0.01$. At zero temperature, the static kink solution centered at $x=x_0$ is $\phi_k(x) = k(x-x_0)$ where $k(x)=\tanh(x/\sqrt{2})$; the corresponding antikink solution is $\phi_a(x) = -k(x-x_0)$. Because there are only two potential minima, kinks alternate with antikinks on the spatial lattice. Imposing periodic boundary conditions constrains the number of kinks and antikinks to be equal. During the time evolution, we identify kinks and antikinks individually and follow the “lifeline” of each kink or antikink (Fig. \[spacetime\]). Equilibrium properties of one-dimensional systems, such as the free energy density and the correlation function ${{\big<\phi(0)\phi(x)\big>}}$, can be calculated using the transfer integral method [@ssf]. The calculation is exact, although one typically must evaluate eigenvalues of the resulting Schrödinger equation numerically. 
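For illustration, a minimal lattice Langevin integration of Eq. (\[spde\]) can be written in a few lines. The sketch below uses a simple explicit update rather than the second-order finite-difference algorithm of Ref. [@bhl], and a crude zero-crossing count instead of the careful kink identification used in our simulations; it is meant only to show the ingredients (discrete Laplacian, damping, and noise satisfying the fluctuation-dissipation relation).

```python
# Illustrative explicit-update integration of the phi^4 Langevin equation (spde) on a
# periodic lattice.  This is NOT the second-order algorithm used in the text; it only
# shows the discrete Laplacian, the damping term, and the discretized noise.
import numpy as np

L, dx, dt = 2048, 0.4, 0.01
eta, beta = 1.0, 8.0
rng = np.random.default_rng(3)

x = np.arange(L) * dx
# kink-antikink pair, compatible with periodic boundary conditions
phi = np.tanh((x - 0.25 * L * dx) / np.sqrt(2)) * np.tanh(-(x - 0.75 * L * dx) / np.sqrt(2))
pi = np.zeros_like(phi)

def laplacian(f):
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

noise_amp = np.sqrt(2.0 * eta / (beta * dx * dt))      # discretized white-noise amplitude

for _ in range(20000):
    xi = noise_amp * rng.normal(size=L)
    pi += dt * (laplacian(phi) + phi * (1.0 - phi**2) - eta * pi + xi)
    phi += dt * pi

# crude count of kinks + antikinks from zero crossings of the field
print("zero crossings:", int(np.count_nonzero(np.diff(np.sign(phi)) != 0)))
```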
When the on-site potential has the double-well form, as is the case here, one part of the free energy density at low temperature can be interpreted as due to kinks, forming a dilute gas with density [@ssf]: $\rho_k\propto\sqrt{E_k\beta}\exp(-E_k\beta)$. This WKB approximation is consistent with recent simulations at $\beta > 6$, where unambiguous identification of kinks is possible [@AHK]. An equilibrium density of kinks is maintained by a dynamical balance of nucleation and annihilation of kink-antikink pairs (Fig. \[spacetime\]). The dependence of the nucleation rate $\Gamma$ on temperature and damping, however, is not directly calculable from the transfer integral; nor are unambiguous results for symmetric potentials available from saddle-point calculations [@land]. While analogy with the Kramers’ problem suggests $\Gamma \propto \exp(-2\beta E_k)$ [@kram], the relationship $\Gamma \propto \exp(-3\beta E_k)$ has also been suggested [@HMS]. Our direct counting of nucleation events establishes that their rate is proportional to the square of the equilibrium density, that is $\Gamma \propto \exp(-2\beta E_k)$ (Fig. 2). Below we show how this relation can be understood from a mesoscopic model of diffusing kinks with paired nucleation. At equilibrium, the nucleation rate is related to the mean kink lifetime $\tau$ by $\rho_k = \Gamma \tau$. Previous attempts to evaluate $\Gamma$ numerically [@numprev; @BD] have proceeded by counting the number of kinks $n_k(t)$ and assuming exponential decay of $\langle n_k(t+\tau)n_k(t)\rangle$. Unfortunately this approach provides no information on the underlying processes, and yields incorrect results if kinks are not properly identified on the lattice. In particular, results that appeared to support $\Gamma\propto \exp(-3\beta E_k)$ were performed at temperatures too high for accurate computation of $\langle n_k(t+\tau)n_k(t)\rangle$ [@numprev]. Because we identify individual nucleation events and follow individual kink lifelines, we can distinguish “paired” kinks (whose partner antikink is still alive) from “survivor” kinks (whose partner has been killed). We also distinguish and measure the contributions to the annihilation rate from the recombinant and various non-recombinant mechanisms (Fig. \[diags\]). The most frequent annihilation event is recombination of a recently-nucleated pair (designated I in Fig. \[diags\]) [@BC]. However, the “survivor” kinks that remain after a non-recombinant annihilation event (II or III) have a longer mean lifetime. At finite temperature, the mean-squared displacement of an isolated kink is given by ${{\big<{\bf X}_t^2\big>}}=2Dt$. The diffusivity $D$ can be estimated by using the zero-temperature kink as an ansatz in the equation of motion [(\[spde\])]{}, yielding $D\simeq(E_k\beta\eta)^{-1}$ [@kinkd], where $E_k=\sqrt{8/9}$ for a static kink. Corrections to $D$, arising because of fluctuations in the kink shape, are proportional to $\beta^{-2}$ and subdominant in the temperature range considered here. Our numerical observations, in particular that kink-antikink collisions at moderate to large damping always result in annihilation, motivate us to introduce the following mesoscopic model of kink dynamics: (i) kink-antikink pairs are nucleated at random times and positions with initial separation $b \ll \rho_k^{-1}$; (ii) once born, kinks and antikinks diffuse independently with diffusivity $D$; (iii) kinks and antikinks annihilate on collision. 
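The mesoscopic model (i)-(iii) can also be simulated directly. The sketch below is a deliberately simple (and slow) Monte Carlo version with illustrative parameters: pairs are nucleated at a fixed rate with separation $b$, every walker diffuses with diffusivity $D$, and nearby kink-antikink neighbours are removed. It is not the analysis used in the text, only an illustration of the model's ingredients.

```python
# Deliberately simple Monte Carlo of the mesoscopic model (i)-(iii); illustrative
# parameters, slow pure-Python implementation.  Pairs are nucleated with separation b,
# all walkers diffuse with diffusivity D, and close kink-antikink neighbours annihilate.
import numpy as np

rng = np.random.default_rng(4)
L_sys, D, b, Gamma, dt, steps, eps = 500.0, 0.1, 1.0, 1e-3, 0.1, 50000, 0.05

pos, charge = [], []                       # positions and +1 (kink) / -1 (antikink) labels

for _ in range(steps):
    for _ in range(rng.poisson(Gamma * L_sys * dt)):            # (i) paired nucleation
        x0 = rng.uniform(0.0, L_sys)
        pos += [x0, (x0 + b) % L_sys]
        charge += [+1, -1]
    sig = np.sqrt(2.0 * D * dt)                                 # (ii) independent diffusion
    pos = [(x + sig * rng.normal()) % L_sys for x in pos]
    if len(pos) > 1:                                            # (iii) annihilation on collision
        order = np.argsort(pos)
        keep = np.ones(len(pos), dtype=bool)
        for a, c in zip(order, np.roll(order, -1)):
            if keep[a] and keep[c] and charge[a] != charge[c] and \
               (pos[c] - pos[a]) % L_sys < eps:
                keep[a] = keep[c] = False
        pos = [p for p, k in zip(pos, keep) if k]
        charge = [q for q, k in zip(charge, keep) if k]

print("steady-state kink density:", len(pos) / (2.0 * L_sys))
```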
The separation between a kink and its partner performs Brownian motion with diffusivity $2D$. Thus, if only recombinate annihilation (I in Fig. \[diags\]) were allowed, the time ${{\bf t}_{0}}$ between nucleation and annihilation would have the density $ \frac{{{\rm d}}}{{{\rm d}}t}{{\cal P}[{{\bf t}_{0}}< t]} =bt^{-\frac32}(8\pi D)^{-\frac12}\exp(-\frac{b^2}{8Dt}) $ [@kands]. To analyse our model, we use the following approximation for non-recombinant annihilation: as long as both members of a pair are alive, there is a constant probability $\mu$ per unit time of a member being struck and “killed” by an outsider, [*i.e.*]{} of an event II or III. Thus, to each pair we assign a killing time ${{\bf t}_{\mu}}$, distributed according to ${{\cal P}[{{\bf t}_{\mu}}>t]}=\exp(-\mu t)$. Non-recombinant annihilation happens with probability ${{\cal P}[{{\bf t}_{\mu}}<{{\bf t}_{0}}]} = 1-\exp(-b\nu)$, where $\nu^2=\frac{\mu}{2D}$ [@dandj; @hllm]. The killing rate $\mu$ depends on the density of kinks; we estimate it as follows. A new-born pair finds itself in a domain between an existing kink and antikink of typical length $1/(2\rho_k)$. The mean time for a diffusing particle to exit the region is proportional to $(2D\rho_k^2)^{-1}$. Therefore, let $$\mu = 2D\alpha^2\rho_k^2. \label{mu}$$ The value of the dimensionless factor was obtained from numerical measurements of length and timescales (see Figures \[bubt\] and \[fx\] below): we estimate $\alpha \simeq 8$. Let $R(t) = \frac{{{\rm d}}}{{{\rm d}}t}{{\cal P}[{{\bf t}_{0}}<t|{{\bf t}_{0}}<{{\bf t}_{\mu}}]}$. Then $$R(t) = N(b)\exp(-\frac{b^2}{8Dt})t^{-\frac32}\exp(-\mu t), \label{gemlife}$$ where $N(b) = b(8\pi D)^{-\frac12}\exp(\nu b)$. In Fig. \[bubt\] we plot [(\[gemlife\])]{} and a histogram of values of ${{\bf t}_{0}}$ obtained from a large numerical solution of (\[spde\]). The behavior $R(t)\propto t^{-\frac32}$ is characteristic of Brownian excursions [@kands]. Our mesoscopic model has two timescales [@hllm]: $$\begin{aligned} \tau_0 &=& {{\big<{{\bf t}_{0}}|{{\bf t}_{0}}<{{\bf t}_{\mu}}\big>}}=\frac{b}{2\sqrt{\mu D}}~, \label{meant0}\\ \tau_{\mu} &=& {{\big<{{\bf t}_{\mu}}|{{\bf t}_{\mu}}<{{\bf t}_{0}}\big>}} =\frac1{\mu}\left(1-\frac{\nu b}{2}\frac{1}{{{\rm e}}^{\nu b}-1} \right)~. \label{meantmu}\end{aligned}$$ The mean recombination time [(\[meant0\])]{} depends on $b$; in contrast $\tau_{\mu}$ has a non-zero limit for $\nu b\to 0$: $\tau_{\mu}\to 1/(2\mu)$. With the approximation that a “survivor” kink/antikink has the same probability per unit time, $\mu$, of collision and death, the [*mean lifetime of a kink or antikink*]{} is given by $$\tau=\tau_0{{\rm e}}^{-\nu b}+\tau_{\mu}(1-{{\rm e}}^{-\nu b}) +\frac1{2\mu}(1-{{\rm e}}^{-\nu b}). \label{prev}$$ As $\rho b\to 0$, $ \tau\to (3/2){b}(2\mu D)^{-1/2} = (3/4){b}(\alpha D \rho)^{-1} $. In the same limit, combining [(\[mu\])]{} and [(\[prev\])]{} yields the prediction that the nucleation rate is proportional to the square of the equilibrium density: $$\Gamma = \rho_k/\tau = \frac{4}{3b} D \alpha \rho_k^2, \label{nucl2}$$ The relation $\Gamma\propto\rho_k^2$ is also found in the discrete-space Ising model [@racz]. In contrast, the nucleation rate is proportional to $\rho_k^3$ in systems where nucleation does not occur in pairs [@racz; @katja]. The latter scaling was incorrectly predicted for the $\phi^4$ system [@HMS], from an estimate of the annihilation rate that does not take into account paired nucleation. 
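The timescales and the resulting nucleation-rate relation, Eqs. (\[meant0\])-(\[nucl2\]), are easy to evaluate numerically. The sketch below does so for illustrative kink densities, using the estimates $D\simeq(E_k\beta\eta)^{-1}$ and $\alpha\simeq 8$ quoted above; the chosen densities are placeholders, not measured values.

```python
# Sketch evaluating the model timescales (meant0, meantmu, prev) and the nucleation
# rate relation (nucl2) for illustrative kink densities; alpha ~ 8 and
# D ~ 1/(E_k beta eta) follow the estimates given in the text.
import numpy as np

E_k, alpha, b = np.sqrt(8.0 / 9.0), 8.0, 1.0

def timescales(beta, eta, rho_k):
    D   = 1.0 / (E_k * beta * eta)                 # kink diffusivity estimate
    mu  = 2.0 * D * alpha**2 * rho_k**2            # killing rate (eq. mu)
    nu  = np.sqrt(mu / (2.0 * D))
    tau0   = b / (2.0 * np.sqrt(mu * D))           # mean recombination time (meant0)
    tau_mu = (1.0 - 0.5 * nu * b / (np.exp(nu * b) - 1.0)) / mu   # (meantmu)
    tau = (tau0 * np.exp(-nu * b)
           + tau_mu * (1.0 - np.exp(-nu * b))
           + (1.0 - np.exp(-nu * b)) / (2.0 * mu))  # mean kink lifetime (prev)
    Gamma = rho_k / tau
    return tau0, tau_mu, tau, Gamma, 4.0 * D * alpha * rho_k**2 / (3.0 * b)

for rho in (0.01, 0.03):
    t0, tmu, tau, Gam, Gam_small_rho = timescales(beta=8.0, eta=1.0, rho_k=rho)
    print(f"rho_k={rho}: tau0={t0:.1f}, tau_mu={tmu:.1f}, tau={tau:.1f}, "
          f"Gamma={Gam:.2e} (small-rho limit {Gam_small_rho:.2e})")
```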
(In the $\phi^4$ system one does, however, find that the rate of survivor-survivor annihilation events – IV in Fig. \[diags\] – is proportional to $\rho_k^3$.) In the $\phi^4$ SPDE, the parameters $D$, $\Gamma$ and $b$ have in general a weak (non-exponential) temperature dependence. The lengthscale $b$ is of the same order as the width of an isolated kink. In Fig. \[nucleta\] we plot the measured nucleation rate versus the damping coefficient at fixed temperature. The nucleation rate is proportional to $\eta^{-1}$ for $\eta\gg 1$ \[in agreement with (\[nucl2\])\] and appears to plateau for $\eta\to 0$. At low damping, however, direct measurement of the nucleation rate is problematic because kink-antikink collisions may result in single or multiple bounces rather than immediate annihilation [@csw]. We now turn to the lengthscales in the system. A histogram of distances between neighboring kinks and antikinks is well-approximated by an exponential with characteristic length $(2\rho_k)^{-1}$. This simple form results from the cancellation of the tendency of paired kinks to be closer together than $(2\rho_k)^{-1}$ with the opposite tendency of survivor kinks. In Fig. \[fx\] we plot $f(x)=$(number of occurrences of separation $\in (x,x+{{\rm d}}x)$)$/(L\,{{\rm d}}x) $. We also construct the histogram for the separations of [*only paired*]{} kinks and antikinks. The dashed curve is the probability of being at $x$, averaged over the lifetime, for a Brownian motion killed at $x=0$ and at rate $\mu$ [@dandj]: $$l(x)=N\exp(-\nu x)=N\exp(-\alpha\rho_k x). \label{fxsx}$$ The classification of kinks into paired kinks and survivors, with the approximation that kinks have a constant probability [(\[mu\])]{} per unit time of non-recombinant annihilation, allows us to construct a macroscopic rate theory for the two densities $n_p(t)$ (paired kinks) and $n_s(t)$ (survivor kinks). The equation for $n_p(t)$ has a positive term due to nucleation and a negative term inversely proportional to the lifetime of pairs, [(\[meant0\])]{}. The terms in the equation for $n_s(t)$ correspond to processes III and IV in Fig. \[diags\]. Note that process II does not change the number of survivor kinks. We obtain $$\begin{aligned} \dot n_p =& \Gamma - 2b^{-1}\alpha D (n_s+n_p)\,n_p\cr \dot n_s =& D\alpha^2(n_s+n_p)^2(n_p-2n_s). \label{ratee} \end{aligned}$$ The steady state solution of [(\[ratee\])]{} gives the relationship [(\[nucl2\])]{} between $\Gamma$ and the equilibrium kink density. Nonequilibrium dynamics are also correctly described: if $\Gamma=0$, the paired density quickly decays and, for late times, $\dot n_s \propto n_s^3$, in agreement with an exact result for the survival probability in the diffusion-limited reaction A$+$A$\to$0 [@tandc]. While not exact, [(\[ratee\])]{} illustrate that at least two coupled equations are necessary to capture the two timescales in the dynamics: no single rate equation can suffice. We have benefited from discussions with Kalvis Jansons, Eli Ben-Naim and Vincent Hakim. Computations were performed at the National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory. See, [*e.g.*]{}, A. Seeger and P. Schiller, in [*Physical Acoustics, Vol. III*]{} edited by W.P. Mason (Academic, New York, 1966); A.R. Bishop, J.A. Krumhansl, and S.E. Trullinger, [Physica D]{} [**1**]{}, 44 (1980). M. Peyrard and A.R. Bishop, Phys. Rev. Lett. [**62**]{}, 2755 (1989). V.A. Kuzmin, V.A. Rubakov, and M.S. Shaposhnikov, Phys. Lett. [**155B**]{}, 36 (1985). F.J. 
Alexander and S. Habib, Phys. Rev. Lett. [**71**]{}, 955 (1993); F.J. Alexander, S. Habib, and A. Kovner, Phys. Rev. E [**48**]{}, 4284 (1993). S. Habib, A. Khare, and A. Saxena, Physica D [**123**]{}, 341 (1998); A. Khare, S. Habib, and A. Saxena, Phys. Rev. Lett. [**79**]{}, 3797 (1998). L.M.A. Bettencourt, S. Habib, and G. Lythe, Phys. Rev. D (in press), hep-lat/9903007. M. Buttiker and T. Christen, Phys. Rev. Lett. [**75**]{}, 1895 (1995); [*ibid*]{} [**77**]{}, 788 (1996); Phys. Rev. E [**58**]{}, 1533 (1998). F. Marchesoni, Phys. Rev. B [**34**]{}, 6536 (1986); P. Hanggi, F. Marchesoni, and P. Sodano, Phys. Rev. Lett. [**60**]{}, 2563 (1988); F. Marchesoni, [*ibid*]{} [**73**]{}, 2394 (1994); P. Hanggi and F. Marchesoni, [*ibid*]{} [**77**]{}, 787 (1996); F. Marchesoni, C. Cattuto, and G. Constantini, Phys. Rev. B [**57**]{}, 7930 (1998). D.J. Scalapino, M. Sears, and R.A. Ferrell, Phys. Rev. B [**6**]{}, 3409 (1972); J.A. Krumhansl and J.R. Schrieffer, [*ibid*]{} [**11**]{}, 3535 (1975); J.F. Currie, J.A. Krumhansl, A.R. Bishop, and S.E. Trullinger, [*ibid*]{} [**22**]{}, 477 (1980). M. Buttiker and R. Landauer, Phys. Rev. Lett. [**43**]{}, 1453 (1979); Phys. Rev. A [**23**]{}, 1397 (1981). R. Landauer and J.A. Swanson, Phys. Rev. [**121**]{}, 1668 (1961); A. Seeger and P. Schiller, in Ref. [@SS]; W. Wonneberger, Physica A [**103**]{}, 543 (1980). A.I. Bochkarev and Ph. de Forcrand, Phys. Rev. Lett. [**63**]{}, 2337 (1989); Phys. Rev. D [**47**]{}, 3476 (1993); M. Alford, H. Feldman, and M. Gleiser, Phys. Rev. Lett. [**68**]{}, 1645 (1992). T.R. Koehler, A.R. Bishop, J.A. Krumhansl, and J.R. Schrieffer, Solid State Commun. [**17**]{}, 1515 (1975); D.Yu. Grigoriev and V.A. Rubakov, Nucl. Phys. B [**299**]{}, 67 (1988). T.R. Koehler [*et al*]{}, in Ref. [@BD]; W. Wonneberger, Physica A [**108**]{}, 257 (1981); D.J. Kaup, Phys. Rev. B [**27**]{}, 6787 (1983); Mario Salerno, E. Joergensen, and M.R. Samuelesen, Phys. Rev. B [**30**]{}, 2635 (1984); P.J. Pascual and L. Vázquez, Phys. Rev. B [**32**]{}, 8305 (1985); F. Marchesoni, Phys. Lett. A [**115**]{}, 29 (1986). I. Karatzas and S.E. Shreve, [*Brownian Motion and Stochastic Calculus*]{} (Springer, New York, 1988). D. Dean and K.M. Jansons, J. Stat. Phys. [**70**]{}, 1313 (1993); K. Jansons and G. Lythe, [*ibid*]{} [**90**]{}, 227 (1998). S. Habib, K. Lindenberg, G. Lythe and C. Molina-París, (in preparation). Z. Rácz, Phys. Rev. Lett. [**55**]{}, 1707 (1985). K. Lindenberg, P. Argyrakis, and R. Kopelman, J. Phys. Chem. [**99**]{}, 7542 (1995). A.E. Kudryavtsev, JETP Lett. [**22**]{} 82 (1975); M. Moshir, Nucl. Phys. B [**185**]{}, 318, (1981); M.J. Ablowitz, M.D. Kruskal, and J.F. Ladik, SIAM J. Appl. Math. [**36**]{}, 428 (1979); D.K. Campbell, J.F. Schonfeld, and C.A. Wingate, Physica D [**9**]{}, 1 (1983). D.C. Torney and H.M. McConnell, [J. Phys. Chem]{} [**87**]{}, 1941 (1983); D. Balding, J. Appl. Prob. [**25**]{}, 733 (1988).
{ "pile_set_name": "ArXiv" }
---
address: '$^a$ Physics Department, McGill University, 3600 University Street, Montreal, Quebec, H3A 2T8, Canada. $^b$ Instituto de Física, Universidad Nacional Autónoma de México, Apartado Postal 20-364, 01000 México D.F., México. $^c$ TRIUMF, 4004 Wesbrook Mall, Vancouver, BC, V6T 2A3, Canada.'
author:
- 'C.P. Burgess$^a$, A. de la Macorra$^b$, I. Maksymyk$^c$ and F. Quevedo$^b$'
title: 'CONSTANT VERSUS FIELD DEPENDENT GAUGE COUPLINGS IN SUPERSYMMETRIC THEORIES[^1]'
---

Thanks to the work of Seiberg and many others, the understanding of supersymmetric theories has improved considerably during the past three years. In particular, the nonperturbative dynamics determining the possible phases of these theories is now very well understood. This is important for understanding issues such as chiral symmetry breaking, supersymmetry breaking, the vacuum structure, etc. In the context of string theory, which has $N=1$ supersymmetric theories as its low energy effective theories, this progress should be reflected in the possibility of addressing the most important obstacles for the theory to make contact with low energy physics, namely lifting the vacuum degeneracy and breaking supersymmetry. Superstring theories always include in their spectrum a massless field called the dilaton $S$, which provides the bare gauge coupling. It is not only massless but also has an exactly flat potential in perturbation theory, and therefore it is one of the many ‘moduli’ of the theory. Nonperturbative effects generically lift this potential and the dilaton will get a mass. Depending on the nature of these effects, the mass of the dilaton is either determined by the supersymmetry breaking scale, and is then expected to be small, or is fixed at the Planck scale, in which case the dilaton does not appear in the low-energy spectrum of the theory. In the first scenario the nonperturbative effect responsible for breaking supersymmetry is the same one that fixes the dilaton, whereas the second scenario proceeds in two steps: a Planck-scale effect fixes the dilaton and a low-energy effect breaks supersymmetry. These two different scenarios will generally be distinguished in the low energy action by having the gauge coupling either constant or field dependent. The two-step scenario fits with the gauge-mediated supersymmetry breaking scenario recently revived, whereas the one-step scenario fits with the more standard gravity-mediated supersymmetry breaking scenario. The recent studies of supersymmetric gauge theories treat the gauge coupling as a constant and therefore apply directly to the two-step scenario. The search for models that break supersymmetry therefore has a direct application to string theory only in this scenario. It is then interesting to ask what the implications of the new understanding of supersymmetric gauge theories are for the more standard scenario, where the dilaton survives at low energies. Previous attempts to understand these issues were based on a limited knowledge of supersymmetric theories, and the typical models considered included a string hidden sector with several gauge group factors and matter charged under one of the factors.
Concentrating on the dilaton field, the standard superpotentials that emerge in this case are of the form $$W=\Sigma_i A_i \; e^{-a_i S} \label{W1}$$ These models generally do not break supersymmetry in the $S$ sector: they have a supersymmetric minimum at finite values of $S$, but also a runaway solution to zero coupling ($S\rightarrow \infty$). This behaviour at infinity was argued on very general grounds by Dine and Seiberg some time ago. They argue that at zero coupling the theory should be free and therefore the potential should vanish there. This is the source of a cosmological problem pointed out by Brustein and Steinhardt. Since the superpotential above is so steep, if the dilaton field starts at a generic value it may never end up at the local minimum with non-vanishing coupling, but will instead roll all the way to the runaway vacuum. On the other hand, besides the field $S$, there is usually another modulus, the field $T$ measuring the size of the compact space. This field also has a flat potential to all orders in perturbation theory that gets lifted by nonperturbative effects. The properties of $T$ and $S$ are very similar and this similarity was actually at the origin of the proposal of $S$ duality, given that there existed a better established $T$ duality. There are even some models that have the symmetry $S\leftrightarrow T$. In the same way that $S$ represents the string coupling, $T$ represents the coupling of the underlying $2D$ sigma model. Curiously, the potential for the $T$ field found in simple examples blows up for large values of $T$, unlike what was naively expected, namely that it should run away to the weak coupling limit $T\rightarrow\infty$. This may be understood in the following way: the gauge coupling in $10D$, $g_{10}$, is related to the gauge coupling in $4D$, $g_4$, by $1/g_4^2=R^6/g_{10}^2$, where $R$ is the size of the compact $6D$ space and is essentially given by the real part of the field $T$. A large value of $T$ means a large value of $R$, which, combined with a relatively small value of $g_4$, implies a very strong string coupling in $10D$; therefore the blowing-up of the potential is a strong string coupling effect, not controlled in the string perturbation theory where the calculation was performed. On the other hand, the potentials for the $S$ field seem to behave very differently from those for the $T$ field. It is then valid to question the general assumption that the potential for the $S$ field runs away to $\infty$ and study different alternatives to the sum of negative exponentials of the equation above. In this talk we will present several models illustrating the difference between constant and dilaton dependent gauge couplings as well as different examples where the dilaton potential does not run away to infinity. We also argue that the inclusion of field-dependent gauge couplings can qualitatively change whether or not a given model spontaneously breaks supersymmetry. The main difference is due to the additional requirement of extremizing the superpotential with respect to the coupling-constant field. For instance, it can happen that a supersymmetry-breaking ground state for fixed gauge coupling becomes supersymmetric once the coupling constant is allowed to relax to minimize the energy. In particular we show that most of the models with dynamical supersymmetry breaking, when the gauge coupling is field dependent, do not break supersymmetry.
Furthermore, we find that the opposite of this is also possible: supersymmetry can be unbroken for fixed gauge coupling, but breaks down once the gauge coupling is considered as a field. An example of the difference between field dependent and field independent gauge couplings is the simplest case of gaugino condensation for a pure gauge theory with a simple gauge group and no matter multiplets. In this case, for constant gauge couplings, gauginos condense without breaking supersymmetry. The reason is that the gaugino condensate is given as the lowest component of a chiral superfield $U=<\lambda \lambda>$ and a non-vanishing value for the lowest component does not break supersymmetry. On the other hand, once a field dependent coupling constant is introduced via a chiral field $S$, whose real part gives the coupling constant $Re\,S=1/g^2$, a non-vanishing gaugino condensate will break supersymmetry because it will be proportional to the $F$ term of the $S$ field, and a nonvanishing $F$ term breaks supersymmetry. However, the dynamics of the dilaton field for a single gaugino condensate in pure Yang-Mills theory has a runaway behaviour $S\rightarrow \infty$ and the gaugino condensate vanishes, $U={\rm const.}\; e^{- a S} \rightarrow 0$. Therefore in both cases, field independent and field dependent coupling constant, supersymmetry is not broken. But in the first case gauginos condense whereas in the second case they do not condense. We will now study potentials including matter fields and we will consider $N=1$ supersymmetric models with gauge group $SU(N_c)$. We represent the matter multiplets with chiral superfields, $Q^i_\alpha \in R$ (and $\tilde{Q}_i^\alpha \in \tilde{R}$), where ‘$i$’ is the flavour index, and ‘$\alpha$’ is the gauge index. The kinetic microscopic action for the model is given by $ \it{L}_{\rm kin} = \frac{1}{4} f \, Tr W_{\alpha}W_{\alpha} $, where $f$ is the gauge kinetic function and $W_{\alpha}$ the chiral gauge superfield, and we take standard kinetic terms for the matter supermultiplets. At tree level in string theory one has $f=k\, S$ with $k$ the Kac-Moody level. The microscopic superpotential relating the matter supermultiplets is taken to vanish identically, $W(Q,\tilde{Q}) = 0$. To determine the superpotential for the quantum ‘effective action’ which generates the irreducible correlation functions of the theory (as opposed, say, to the theory’s Wilson action) we study the operators whose correlations we wish to explore. Of particular interest, however, are those fields which can describe the very light scalar degrees of freedom of the model, since these describe the system’s vacuum moduli and symmetries. In the absence of a microscopic superpotential for the matter fields $Q$ and $\tilde{Q}$, these light degrees of freedom are described classically (and hence also to all orders of perturbation theory) by the $D$-flat directions, which parametrize the zeroes of the classical scalar potential. It is well known that these $D$-flat directions can be parametrized in terms of a suitably chosen set of gauge-invariant holomorphic polynomials. We take the arguments of the superpotential to be $W(U, M^i_j)$, where $M^i_j = <Q^i_\alpha \tilde{Q}_j^\alpha >$, $U = < Tr W_{\alpha}W_{\alpha} >$. Although the gaugino condensate field, $U$, does [*not*]{} similarly describe a $D$-flat direction, it is nonetheless convenient to keep it as an argument of the effective action.
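A small numerical check (added here as an illustration, with arbitrary toy sizes) makes the gauge invariance of the meson composites explicit: however the colour indices are rotated, $M^i_j = Q^i_\alpha \tilde{Q}_j^\alpha$ is unchanged, which is why the $D$-flat directions can be parametrized by such polynomials.

```python
import numpy as np

# Numerical check that M^i_j = Q^i_alpha * Qtilde_j^alpha is gauge invariant.
# Toy sizes (illustration only): N_c = 3 colours, N_f = 2 flavours.
rng = np.random.default_rng(1)
N_c, N_f = 3, 2
Q  = rng.standard_normal((N_c, N_f)) + 1j * rng.standard_normal((N_c, N_f))  # Q^i_alpha
Qt = rng.standard_normal((N_f, N_c)) + 1j * rng.standard_normal((N_f, N_c))  # Qtilde_j^alpha

g = np.linalg.qr(rng.standard_normal((N_c, N_c))
                 + 1j * rng.standard_normal((N_c, N_c)))[0]   # a unitary colour rotation

M_before = Qt @ Q                                   # N_f x N_f meson matrix
M_after  = (Qt @ np.linalg.inv(g)) @ (g @ Q)        # Q fundamental, Qtilde antifundamental
print(np.allclose(M_before, M_after))               # True: M is unchanged by colour rotations
```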
The superpotential is completely determined by the twin conditions of linearity and symmetry under the model’s global flavour symmetries. As was demonstrated in earlier work, the fact that $S$ only couples to the microscopic theory [*via*]{} the kinetic term implies, as an exact result, that the effective superpotential necessarily has the form $$W = \frac{1}{4}\,US + f(U, {M^i}_j).$$ That is, $S$ can only appear linearly, and moreover only in the term $\frac{1}{4} US$. Second, the function $f(U, M^i_j)$ is determined by the various global chiral symmetries of the underlying supersymmetric gauge theory. In the absence of a superpotential for the matter fields, $Q^i_\alpha$ and $\tilde Q^\alpha_i$, the underlying gauge theory admits the classical global symmetry $SU(N_f)_L \times SU(N_f)_R \times U(1)_A \times U(1)_B \times U(1)_R$, of which the factors $U(1)_A \times U(1)_R$ are anomalous. Invariance of the effective superpotential under the anomaly-free symmetries implies the fields $M^i_j$ can appear only through the invariant combination $\det M$. (For $N_c < N_f$ we imagine the expectation value of the baryon operator, $B^{i_1\cdots i_{N_c}} = \epsilon^{\alpha_1 \cdots \alpha_{N_c}} \, {Q^{i_1}}_{\alpha_1} \cdots {Q^{i_{N_c}}}_{\alpha_{N_c}}$, to be minimized by zero). The two anomalous symmetries, $U(1)_A$ and $U(1)_R$, then fix the form of the unknown function $f(U, \det M)$. From these considerations, it is clear that $W$ has the general structure $$W = \frac{1}{4}\,US + \frac{U}{32\pi^2}\left[\ln\!\left(\frac{U^{\,N_c-N_f}\,\det M}{\mu^{\,3N_c-N_f}}\right)+C_0\right]. \label{W2}$$ Symmetry arguments cannot determine the constants $\mu$ and $C_0$. Indeed $C_0$ may be chosen to vanish through an appropriate choice for $\mu$. Since $W$ is the superpotential for the effective action — as opposed to the Wilson action — the correct procedure for ‘integrating out’ fields is to remove them by solving their extremal equations for $W$, rather than by performing their path integral. Furthermore, for supersymmetric theories this should be done using the effective superpotential, $W$, rather than the effective scalar potential $V$. Performing this operation for the gaugino condensate $U$ one obtains $$W=c\,\left(\frac{\mu^{\,3N_c-N_f}\,e^{-8\pi^2 S}}{\det M}\right)^{1/(N_c-N_f)}= c'\left(\frac{\Lambda^{3N_c-N_f}}{\det M}\right)^{1/(N_c-N_f)}, \label{W}$$ where $c= - \; \frac{a}{32\pi^2} \,\exp\left(\frac{C_0+a}{a}\right)$, $c'=c\, \exp\left(-\frac{C_0}{a}\right)$, $a=N_c-N_f$, and the second equality defines the RG-invariant scale, $\Lambda=\mu^{3N_c-N_f} e^{-8\pi^2 S/(3N_c-N_f)}$. It is convenient to distinguish four different cases depending on the matter content: i) $N_f < N_c$, ii) $N_f=N_c$, iii) $N_f > N_c$ and iv) $N_f > 3 N_c$. In the first case, $N_f<N_c$, the only invariants are the meson fields and a non-vanishing superpotential $W= c (\frac{\Lambda ^{3N_c-N_f}}{\det M})^{1/(N_c-N_f)}$ is obtained. Since the scale $\Lambda$ in terms of the coupling constant is given by $\Lambda = \mu^{3N_c-N_f} e^{- 8\pi^2 S/(3N_c-N_f)}$, minimizing the superpotential with respect to $S$ gives a runaway behaviour $S \rightarrow \infty$ and $W_S \propto W \propto e^{- 8\pi^2 S/(N_c-N_f)} \rightarrow 0$. As in the pure Yang-Mills case, a superpotential is dynamically generated but its minimum is at vanishing potential and a supersymmetric vacuum is obtained. However, for a field independent gauge coupling we do not extremize the superpotential with respect to $S$ and we will get a non-vanishing vacuum for a finite value of $\det M$. So we have a runaway potential in the $M$ direction.
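Both runaway directions can be seen in a two-line symbolic check (an illustration with toy values, not part of the original analysis: $N_c=3$, $N_f=1$, $c=\mu=1$, and $\det M$ modelled by a single positive modulus $m$):

```python
import sympy as sp

# Toy check of case (i), N_f < N_c, of eq. (W): N_c = 3, N_f = 1 (so N_c-N_f = 2,
# 3N_c-N_f = 8), c = mu = 1, det M modelled by a single positive modulus m.
S, m = sp.symbols('S m', positive=True)
W = (sp.exp(-8 * sp.pi**2 * S) / m) ** sp.Rational(1, 2)

# Field-dependent coupling: W_S vanishes only as S -> oo, where W itself vanishes.
print(sp.limit(sp.diff(W, S), S, sp.oo), sp.limit(W, S, sp.oo))    # 0 0
# Fixed coupling: at fixed S there is no stationary point at finite m,
# i.e. the potential slides down in the M direction.
print(sp.solve(sp.diff(W, m), m))                                  # []
```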
Adding tree level terms as a function of $M$ cannot avoid the runaway potential for $S$ in the field dependent case but may avoid the runaway potential for $M$ in the constant case, fixing $M$ at a finite value. Another interesting case is when the matter content is $N_f=N_c$. In this case the second term in eq.(\[W2\]) vanishes and extremizing the superpotential with respect to $U$ gives the quantum constraint $\det M = \Lambda^{2N_c}$[^2], where we have taken the baryon v.e.v.s to vanish. If $<M_i^j> = 0$ then the quantum constraint will be satisfied only for a runaway dilaton field (i.e. vanishing $\Lambda$). This is always possible for a field dependent gauge coupling, but it will not be satisfied for a finite value of the gauge coupling or a constant gauge coupling and, therefore, supersymmetry will be broken. Furthermore, we can add to the superpotential eq.(\[W2\]) tree level terms like $M^a +M^b$ for the mesons that do not destroy the symmetries, yielding a finite value of $M$ and thus stabilizing the dilaton field through the quantum constraint. For $ N_f > N_c +1$ the exponent in eq.(\[W\]) is positive. In this case, a runaway behaviour for the dilaton is no longer favoured since $W\rightarrow \infty$. However, there is always a solution with $M \rightarrow 0$, and a runaway of the superpotential ($W \rightarrow 0$) in the $S-M$ plane is again not avoided. This includes the self-dual region $\frac{3N_c}{2}<N_f<3 N_c$. Finally, consider $N_f > 3N_c$ with all baryons minimized by zero. There are a number of criticisms which might be raised against using non-asymptotically free gauge theories and against the generation of a non-perturbative superpotential. However, the weakness in these arguments lies, in general, in their making an insufficient distinction between the effective action and the Wilson action. The Wilson action, $S_w$, describes the dynamics of the low-energy degrees of freedom of a given system, and is used in the path integral over these degrees of freedom in precisely the same way as is the classical action. The Wilson action for SQCD at scales for which quarks and gluons are the relevant degrees of freedom would therefore depend on the fields $W_{\alpha}$, ${Q^i}_\alpha$ and $\tilde{Q}_i^\alpha$. As a result, the vanishing of $\det ( Q \tilde{Q} )$ would indeed preclude the generation of a superpotential of the type $\Bigl[ e^{-8 \pi^2 S}/ \det(Q \tilde{Q})\Bigr]$ within the Wilson action. By contrast, it is the effective action, $\Gamma$, which is of interest when computing the [*v.e.v.*]{}s of various fields. And it is ${M^i}_j = <Q^i_\alpha \tilde {Q}_j^\alpha>$ which appears as an argument of $\Gamma$. Since the expectation of a product of operators is not equal to the product of the expectations of each operator, it need not follow that $\det M = 0$ when $N_c < N_f$. Let us introduce a mass term $Tr (\mu M) $ for the quark fields in eq.(\[W\]) and make the mass $\mu$ dynamical, as is always the case in string theory, by adding a trilinear term for the field $\mu$. Eq.(\[W\]) then becomes $ W(M,\mu,S) = Tr( \mu M) + {h \over 3} \, Tr (\mu^3 ) + k \; \left( {e^{- 8 \pi^2 S} \over \det M } \right)^{1 /( N_c - N_f)} , $ where $k=N_c-N_f$. Extremizing with respect to ${M^i}_j$, and substituting the result back into $W$, gives the superpotential $ W(\mu,S) = {h \over 3} \, Tr \Bigl( \mu^3 \Bigr) + k' \Bigl( e^{- 8 \pi^2 S} \; \det \mu \Bigr)^{1 / N_c}, $ where $k'=N_c$.
If ${\mu^i}_j$ were a constant mass matrix this last equation would give the superpotential for $S$ in SQCD. It is noteworthy that so long as $k' \ne 0$ the result has runaway behaviour to $S\rightarrow \infty$ [*regardless*]{} of the values of $N_c$ and $N_f$[^3]. We now extremize $W$ with respect to the field ${\mu^i}_j$, to obtain the overall superpotential for $S$. The extremum is obtained for ${\mu^i}_j=\left(-h\, e^{-8\pi^2 S/N_c} \right)^{N_c/(N_f-3N_c)} {\delta^i}_j$, and the superpotential is then given by $$W(S) = k'' \left( h^{N_f}\, e^{24 \pi^2 S} \right)^{1 / (N_f - 3N_c)} =k''\,\Lambda^3, \label{NAW}$$ with $k''=(-1)^{3N_c/(N_f-3N_c)}\left(N_c-N_f/3\right)$. Notice that eq.(\[NAW\]) takes the simple form $W \propto \Lambda^3$ when expressed in terms of the renormalization group invariant scale and it is valid for all values of $N_f$ and $N_c$. Eq.(\[NAW\]) gives a positive exponential of $S$ if $N_f > 3 N_c$, where the theory is not asymptotically free. When this is combined with the potential for another, asymptotically-free gauge group we obtain a superpotential of the form of eq. (\[W1\]) with positive and negative exponentials, and a non-trivial minimum can be found for $S$. The extremal condition for the dilaton $W_S = 0$ gives a runaway behaviour $S\rightarrow \infty$ for $3N_c > N_f$, but for a non-asymptotically free gauge group the equation $W_S=0$ is satisfied only if the mass field $\mu$ has a vanishing v.e.v., i.e. $<\mu>=0$. Minimizing the superpotential with respect to $\mu$, $W_\mu=0$, gives two solutions: $\mu=0$ and $\mu=(-h e^{- 8\pi^2 S/N_c})^{3N_c-N_f}$. For an asymptotically free gauge group, $3N_c >N_f$, both solutions are equivalent in the runaway limit $S\rightarrow \infty$. In this case both minima are continuously connected in the $S-\mu$ plane. On the other hand, if $3N_c < N_f$ then the solutions $\mu=0$ and $\mu=(-h e^{- 8\pi^2 S/N_c})^{3N_c-N_f}$ are driven apart by a large value of $S$ and one cannot continuously go from one minimum to the other one. The barrier between both minima increases exponentially with increasing $S$. This property can play an important role in the evolution of the dilaton field for cosmology. Notice that since it is the effective action which we use, rather than the Wilson action, one might worry whether our analysis is invalidated by the appearance of nonlocal terms or holomorphy anomalies. We argue that this is not the case for the solution where $\mu^i_j \ne 0$, since in this case the matter multiplets have masses and for scales below their mass the theory is a pure gauge theory, which has a gap due to confinement. Since holomorphy anomalies arise due to massless states, they cannot occur if the theory has a gap. The same need not be true for the potentially runaway solution, for which $\mu^i_j = 0$, since in this phase there are massless matter and gauge multiplets which can produce such anomalies. A canonical example of dynamical global supersymmetry breaking with constant gauge couplings, in which supersymmetry can be restored by the incorporation of the dilaton ([*i.e.*]{} by the field dependence of the gauge couplings), is the so-called 3-2 model of Affleck [*et al*]{}. In this example the gauge group is $SU(3)\times SU(2)$. The fundamental matter spectrum is such that the $SU(2)$ factor is quantum constrained. The quantum constraint is of the form $YZ=\Lambda_2^4$.
It is shown in [@adsone; @adstwo] that, if we suppose the condensation scale for the $SU(2)$ factor to be much greater than that for the $SU(3)$ factor, and if we suppose a certain superpotential in the microscopic theory, then the effective superpotential can be written as $$W=XY+\lambda\,(YZ-\Lambda_2^4).$$ One can easily see that the equation of motion for $X$ implies $Y=0$ and that the equation of motion for the Lagrange multiplier $\lambda$ implies $YZ=\Lambda^4_2$. For the case of constant gauge couplings ($\Lambda_i = $constant), the relations cannot be simultaneously satisfied and supersymmetry is said to be dynamically broken. However, for the case of field dependent gauge couplings ($\Lambda_i = \mu_i e^{-c_i S_i}$), the relations are satisfied by the runaway vacuum $S\rightarrow \infty$, for which $\Lambda_i=0$. Therefore we learn that in this model, supersymmetry is restored by a runaway dilaton if the gauge couplings are conceived to be field-dependent. The opposite can also happen. We can have broken supersymmetry for field dependent gauge coupling but unbroken supersymmetry for field independent gauge coupling. For instance, consider gaugino condensation for two gauge groups with gauge kinetic function $f=f(S,T)$, as in string models when one-loop corrections are included. Once $T$-duality is imposed on the theory the superpotential becomes a function of $S$ and $T$: $W= A_1(T)e^{-a_1 S}+A_2(T) e^{-a_2 S}$ is of the form given in eq.(\[W1\]), where the $A_i$ coefficients are now $T$ dependent. It is well known that this superpotential in local supersymmetry has a non-supersymmetric vacuum. Supersymmetry is broken through the auxiliary field of the modulus $T$, i.e. $F_T\neq 0$, and the dilaton gets a finite v.e.v. However, for a field independent gauge coupling, in this example, supersymmetry is not broken. In global supersymmetry, there may be cases where this also happens because the field equations of the field $S$ may turn out to be inconsistent with the other field equations, but so far we have not found explicit examples showing this property. Finally, we can also write down asymptotically free product-group models which can produce positive exponentials. As an example, let us consider the $SU(2)\times SU(2)$ model of Intriligator, Leigh and Seiberg [@ils], with invariants $X$ and $Y$, for which a nonperturbative superpotential $W_{\rm np}(X,Y)$ is generated. If we add to this superpotential a tree level one of the form $W_p=\lambda_1\, X+\lambda_2 (Y-A)$, where $\lambda_{1,2}$ are Lagrange multiplier fields and $A$ is a constant, we can see that integration of $\lambda_{1,2}$ implies $X=0, Y=A$ and so the superpotential becomes $W_{np}=A\, \Lambda_1^4/\Lambda_2^2$. In terms of the gauge couplings $k_1 S$ and $k_2 S$, where $k_{1,2}$ are the Kac-Moody levels of each of the two $SU(2)$ factors, this is proportional to $\exp(8\pi^2(k_2-k_1)S)$ and therefore, for $k_2>k_1$, we have a positive exponential. Since $S$ is always positive, this superpotential will always break supersymmetry. If we combine this model with a standard asymptotically free model, we will have a sum of positive and negative exponentials and the situation will be just like that of the non-asymptotically free models, for which $S$ can be fixed.
In this case, however, the limit $S\rightarrow \infty$ is never a minimum of the scalar potential, even though we can have $W\rightarrow 0$ in this limit, and so the Dine-Seiberg general argument still holds but in a very special way, because it would imply that the runaway minimum is not continuously connected to the finite dilaton minimum. This may also lead to very interesting cosmological features. Notice, however, that the tree-level terms were just chosen to do the job, as an illustration that this is possible. To summarize, we have illustrated with a few examples the difference between considering constant gauge couplings and field dependent gauge couplings. We have argued that the inclusion of field-dependent gauge couplings can qualitatively change whether or not a given model spontaneously breaks supersymmetry, in the sense that a model with broken supersymmetry may turn out to be supersymmetric if $S$ is included. Furthermore, we have found that the opposite is also possible, i.e. supersymmetry can be unbroken for fixed gauge coupling, but breaks down once the gauge coupling is considered as a field. Finally, the stabilization of the dilaton field may be achieved in models with product groups and non-asymptotically free factors, in a way that may not lead to the standard runaway solution. Product group models have been shown to provide a very rich structure and their study in more general cases than those considered here can lead to further surprises [@ustwo]. [^1]: Based on the talk given by F.Q. at the [*Phenomenological Aspects of Superstring Theories*]{} (PAST97) Conference, ICTP Trieste, Italy, October 2-4 1997. Preprint IFUNAM FT98-4. [^2]: Notice that the way this constraint is realized is different from the one assumed in reference [@Dvali] for instance, but this does not change any of the results of that paper. [^3]: We thank G. Dvali for interesting discussions on this point.
{ "pile_set_name": "ArXiv" }
---
abstract: 'Unsupervised hashing can desirably support scalable content-based image retrieval (SCBIR) for its appealing advantages of semantic label independence, memory and search efficiency. However, the learned hash codes are embedded with limited discriminative semantics due to the intrinsic limitation of image representation. To address the problem, in this paper, we propose a novel hashing approach, dubbed *Discrete Semantic Transfer Hashing* (DSTH). The key idea is to *directly* augment the semantics of discrete image hash codes by exploring auxiliary contextual modalities. To this end, a unified hashing framework is formulated to simultaneously preserve visual similarities of images and perform semantic transfer from contextual modalities. Further, to guarantee direct semantic transfer and avoid information loss, we explicitly impose the discrete constraint, bit-uncorrelation constraint and bit-balance constraint on hash codes. A novel and effective discrete optimization method based on the augmented Lagrangian multiplier is developed to iteratively solve the optimization problem. The whole learning process has linear computation complexity and desirable scalability. Experiments on three benchmark datasets demonstrate the superiority of DSTH compared with several state-of-the-art approaches.'
author:
- 'Lei Zhu, Zi Huang, Zhihui Li, Liang Xie, Heng Tao Shen [^1] [^2][^3] [^4] [^5]'
bibliography:
- 'IEEEabrv.bib'
- 'IEEETNN.bib'
title: 'Exploring Auxiliary Context: Discrete Semantic Transfer Hashing for Scalable Image Retrieval'
---

Unsupervised hashing, content-based image retrieval, visual similarities, semantic transfer, discrete optimization

Introduction
============

With the explosive growth in popularity of social networks and mobile devices, huge amounts of images are shared on the Web. There is an emerging need to retrieve relevant visual contents from such large-scale image databases with good scalability. Hence, scalable content-based image retrieval (SCBIR) has received substantial attention over the past decades [@CBIRSurvey].

![Basic flowchart of hash code learning in DSTH. Our approach *directly* augments the semantics of discrete image hash codes with auxiliary contextual modalities.[]{data-label="fig:framework"}](framework.pdf){width="85mm"}

Unsupervised hashing has been developed as one of the promising hashing techniques to support SCBIR [@SPH; @SKLSH; @DBLP:journals/sigpro/XieZPL16; @AGH; @CMFSIGMOD; @ITQ; @LCMH; @zhutkde; @zhutycb; @SGH; @TIP2016binary]. The key objective is to transform the high-dimensional image feature into compact binary codes with various advanced unsupervised learning techniques. By using binary codes as the new representation, the memory consumption can be significantly reduced and the search process can be quickly completed with simple but efficient bit operations. Moreover, the learning process is performed without any dependence on semantic labels. Motivated by these desirable advantages, unsupervised hashing has recently received increasing attention. However, due to the intrinsic semantic limitation of image representation, the hash codes learned from it may suffer from limited discriminative representation capability [@zhuhashingmm]. How to enrich the semantics of image hash codes for SCBIR is an important but challenging task.
Fortunately, the images to be retrieved by current Web search engines are generally accompanied by rich contextual modalities, such as text descriptions, GPS positions, audio, etc. [@zhuhashingmm; @DBLP:journals/mta/XieZC16]. These resources of various modalities are noisy but easy to obtain. More importantly, they are semantically relevant to the image data. It is promising to exploit them for semantic enrichment of image hash codes. Existing cross-modal hashing (CMH) (e.g. inter-media hashing (IMH) [@CMFSIGMOD] and linear cross-modal hashing (LCMH) [@LCMH]) can leverage contextual semantics. But their main objective is to discover the shared semantic space for cross-modal retrieval. Hence, the original visual information may be lost because of the mandatory heterogeneous modality correlation (validated in our experiments). Multi-modal hashing (MMH) (e.g. multiple feature hashing (MFH) [@MFH] and multi-view latent hashing (MVLH) [@MVLH]) can also enrich the semantics of hash codes. However, it requires both images and contextual modalities as the query, which impedes its application to SCBIR, where only the visual image is provided for online retrieval.

\[difference\]

| Method | Query Modality | Learning Modality | Learning Paradigm | Semantic Transfer | Discrete Optimization | SCBIR |
|--------|----------------|-------------------|-------------------|-------------------|-----------------------|-------|
| SGH | visual | visual | unsupervised | $\times$ | $\times$ | $\surd$ |
| DGH | visual | visual | unsupervised | $\times$ | $\surd$ | $\surd$ |
| SDH | visual | visual | supervised | $\surd$ | $\surd$ | $\surd$ |
| CMFH | visual/CM | visual+CM | unsupervised | $\times$ | $\times$ | $\surd$ |
| CDH | visual/CM | visual+CM | supervised | $\surd$ | $\surd$ | $\surd$ |
| MFH | visual+CM | visual+CM | unsupervised | $\surd$ | $\times$ | $\times$ |
| DSTH | visual | visual+CM | unsupervised | $\surd$ | $\surd$ | $\surd$ |

In this paper, we propose a novel hashing method, dubbed *Discrete Semantic Transfer Hashing* (DSTH). The key idea is to directly augment the semantics of discrete hash codes with auxiliary contextual modalities. To achieve this nontrivial aim, DSTH first aligns image hash codes with topic distributions of contextual modalities for semantic transfer. Then, a unified hashing learning framework is formulated to integrate semantic transfer with visual similarity preservation. These two parts interact with each other and guarantee that the valuable semantics can be transferred to image hash codes. Further, DSTH simultaneously imposes the discrete constraint, bit-uncorrelation constraint, and bit-balance constraint on hash codes. It can avoid the semantic loss brought by most existing hashing methods, which follow a two-step relaxing+rounding optimization framework. An efficient and effective optimization method based on the augmented Lagrangian multiplier (ALM) [@ALM] is proposed to iteratively solve for the discrete hash codes. The whole learning process has linear computation complexity and desirable scalability. Figure \[fig:framework\] illustrates the basic process of hash code learning in DSTH. It is worthwhile to highlight the main contributions of this paper as follows:

1. DSTH exploits the auxiliary contextual modalities to directly augment the semantics of discrete image hash codes. It can support image retrieval where only a visual query is provided. To the best of our knowledge, there does not exist any similar work.

2.
To ensure direct semantic transfer and avoid information loss, DSTH explicitly deals with discrete constraint, bit–uncorrelation constraint, and bit-balance constraint together. A novel and efficient optimization approach based on augmented Lagrangian multiplier is developed to directly learn discrete hash codes. The learning process has linear computation complexity and desirable scalability. 3. Extensive experiments demonstrate the state-of-the-art performance of DSTH, and also validate the effects of semantic transfer and discrete optimization. The rest of the paper is structured as follows. Section \[sec:2\] reviews the related work. Details about the proposed methodology are presented in Section \[sec:3\]. In Section \[sec:4\], we introduce the experiments. Section \[sec:5\] concludes the paper. Related Work {#sec:2} ============ Hashing is a quite hot research topic in recent literatures on image indexing. Various approaches are developed in this research field. For the limited space here, only the most related works of this paper are reviewed in this section. For more comprehensive introduction, please refer to [@hashingsurveytwo]. Data-independent Hashing ------------------------ Locality-sensitive hashing (LSH) [@SKLSH] and its extensions are typical data-independent hashing methods. They generate binary codes via random projection. For example, the hash functions of LSH are constructed with the random vectors from a standard Gaussian distribution. As their whole hash code generation process is performed without considering any semantics of underlying image data, data-independent hashing methods generally require more hashing bits and tables to achieve a satisfactory performance. It will result in longer search time and significant storage cost. To enrich the hash codes with semantics, advanced machine learning techniques are applied for hashing. With the trend, various data-dependent hashing methods (supervised and unsupervised hashing) are proposed to capture the data characteristics and embed them into binary hash codes. Supervised Hashing ------------------ Supervised hashing learns hash codes with explicit semantic labels. Via supervised learning, discriminative capability of hash codes can be enhanced by mining semantics in explicit labels. Typical examples include kernel-based supervised hashing (KSH) [@SKH], semantic correlation maximization (SCM) [@SCM], semantics-preserving hashing (SePH) [@SePH], linear subspace ranking hashing (LSRH) [@LSRH], and deep learning hashing (DLH) [@DHCB; @shen2018tpami]. In KSH, Hamming distances between hash codes of similar data pairs are minimized and that of dissimilar data pairs are maximized simultaneously. SCM solves the training time complexity of supervised multimodal hashing methods by avoiding explicit pairwise similarity matrix computing. SePH transforms the supervised semantic affinities of training data into a probability distribution and approximates it with hash codes in Hamming space. LSRH is a typical ranking-based cross-modal hashing with supervised learning. It considers a new class of hash functions that are closely related to rank correlation measures. DLH directly projects original images to binary hash codes via multiple hierarchical nonlinear transformation in a deep neural network [@Alexnet]. Principally, supervised hashing can indeed achieve better performance than unsupervised hashing. However, they require large amounts of high-quality semantic labels to achieve satisfactory performance. 
This requirement unfortunately limits the retrieval scalability of hashing in practical image retrieval, where high-quality semantic labels are hard and expensive to obtain. Unsupervised Hashing -------------------- Unsupervised hashing generates hash codes without any semantic labels. It has a better scalability. According to the exploited modalities, it can be further categorized into three sub-categories: unsupervised uni-modal hashing, unsupervised cross-modal hashing, and unsupervised multi-modal hashing. **Unsupervised Uni-modal Hashing**. Its learning process only relies on discriminative information in single modality. For image retrieval, only visual information is considered. Typical examples include: spectral hashing (SPH) [@SPH], anchor graph hashing (AGH) [@AGH], iterative quantization (ITQ) [@ITQ], scalable graph hashing (SGH) [@SGH], discrete proximal linearized minimization (DPLM) [@TIP2016binary], latent semantic minimal hashing (LSMH) [@LSMH], and Deepbit [@deepbit]. SPH preserves the image similarities into the projected hash codes with spectral graph. To reduce the training complexity of graph hashing, AGH approximates the image relations with a low-rank matrix, based on which hash functions are learned by binarizing the Nystorm eigen-functions [@SPH]. ITQ minimizes the quantization loss brought by dimension reduction based binary embedding. SGH applies feature transformation to solve large-scale graph hashing. DPLM reformulates the unsupervised discrete hashing learning problem as minimizing the sum of a smooth loss term. The transformed problem can be efficiently solved with an iterative procedure where each iteration admits an analytical discrete solution. LSMH combines minimum encoding and matrix decomposition to learn the hash codes based on the refined feature representation. Deepbit is one of the pioneering unsupervised deep hashing methods, where three criterions are enforced to learn hash codes from the top layer of the designed neural network. As the learning process is independent with semantic labels, unsupervised uni-modal hashing has well scalability when they process large-scale data. However, image features have the intrinsic limitation on representing high-level semantics. Hence, hash codes learned by this kind of method will inevitably suffer from semantic shortage. This disadvantage limits the retrieval performance of unsupervised uni-modal hashing. **Unsupervised Cross-modal Hashing**. This method can exploit contextual modalities to learn hash code supporting the retrieval tasks across different modalities [@DBLP:conf/aaai/XieSZ16]. Cross-view hashing (CVH) [@CMFIJCAI] extends spectral hashing into cross-modal retrieval by minimizing similarity-weighed Hamming distance of the learned codes. Inter-media hashing (IMH) [@CMFSIGMOD] simultaneously preserves inter-modality similarities and correlates heterogeneous modalities to learn cross-modal hash codes. Linear cross-modal hashing (LCMH) [@LCMH] further reduces training complexity of IMH by representing training samples as their distances to centroids of sample clusters. Collective matrix factorization hashing (CMFH) [@TIPCMFH] seeks to detect the shared latent structures of heterogeneous modalities with collective matrix factorization [@Singh:2008:RLV:1401890.1401969] for generating cross-modal hash codes. These cross-modal hashing methods can enhance the descriptive capability of image hash codes with shared semantic learning. 
However, the main objective of cross-modal hashing is discovering the shared hash codes in heterogeneous modalities to achieve cross-modal retrieval. The valuable semantics in original visual features may unfortunately lost as a result of mandatory heterogeneous modality correlation. For specific task of SCBIR, the lost visual information may deteriorate the image search performance. **Unsupervised Multi-modal Hashing**. Motivated by the success of multiple feature fusion on enhancing the performance [@DBLP:journals/tcyb/ZhuSJZX15; @DBLP:journals/tmm/ZhuSJXZ15; @DBLP:journals/tcyb/ChangMYZH17; @DBLP:journals/tnn/ChangY17; @DBLP:journals/pami/ChangYYX17], this method integrates semantics of contextual modalities and visual information into a unified hash code. Composite hashing with multiple information source (CHMIS) [@CHMIS] is one of the pioneering approaches. It simultaneously adjusts the weights of modalities to maximize the coding performance, and learns hash functions for fast query binary transformation. Multi-view spectral hashing (MVSH) [@MVSH] extends spectral hashing into a multi-view setting. Its key idea is to sequentially learn the integrated hash codes by solving the successive maximization of local variances. In [@6638233], an efficient multi-view anchor graph hashing (MVAGH) is proposed to learn the nonlinear integrated hash codes computed from the eigenvectors of an averaged similarity matrix. The training complexity of multi-view hashing is reduced with a low-rank from of the averaged similarity matrix calculated based on multi-view anchor graphs. Multi-view alignment hashing (MVAH) [@7006770] learns the relaxed hashing representation with a regularized kernel nonnegative matrix factorization, and hash functions via multivariable logistic regression. Multi-view latent hashing (MVLH) [@MVLH] incorporates multi-modal data into hash code learning. In MVLH, the hash codes are determined as the latent factors shared by multiple views from an unified kernel feature space. Compared with uni-modal hashing methods, multi-modal hashing can generate more discriminative hash codes. However, it needs all the involved modalities as input [@zhuijcai; @7984879]. This requirement cannot be satisfied as only visual image is provided in SCBIR. Discrete Hashing ---------------- Most existing hashing approaches exploit a two-step relaxing+rounding to solve hash codes. In these methods, the relaxed hash codes (continuous values) are first learned and further quantized into binary codes via thresholding. This solution, as indicated in recent literature [@DGH; @SDH], may lead to significant information loss. To address the problem, several approaches are proposed to directly solve hash codes within one step. Discrete graph hashing (DGH) [@DGH] aims to directly preserve the data similarity in a discrete Hamming space. It reformulates the unsupervised graph hashing with a discrete optimization framework and solves two subproblems via a tractable alternating maximization. Supervised discrete hashing (SDH) [@SDH] learns the supervised discrete hash codes with the optimal linear classification performance. SDH transforms this learning objective into sub-problems that can admit an analytical solution. A cyclic coordinate descent algorithm is applied to calculate discrete hashing bits in a closed form. Coordinate discrete hashing (CDH) [@CDOECVR] is designed for supervised cross-modal hashing, and its discrete optimization proceeds in a block coordinate descent manner. 
In each iterative learning step, a hash bit is sequentially updated while the others are clamped. CDH transforms the sub-problem into an equivalent and tractable quadratic form, so that hash codes can be directly solved with active-set based optimization. Column sampling based discrete supervised hashing (COSDISH) [@CSBDSH] and kernel-based supervised discrete hashing (KSDH) [@Shi2016] are also developed for supervised hashing. COSDISH operates in an iterative manner. In each iteration, several columns are first sampled from the semantic similarity matrix. Then, the hash code is decomposed into two parts, so that it can be alternately optimized. KSDH solves discrete hash codes via an asymmetric relaxation strategy that preserves the discrete constraint and reduces the accumulated quantization errors. Although these approaches achieve a certain degree of success, their discrete optimization solutions are specially designed for particular hashing types (supervised hashing, unsupervised graph hashing, etc.). Therefore, they cannot be directly applied to handle our problem.

Key Differences between Our Approach and Existing Works
-------------------------------------------------------

Our work is an advocate of discrete hashing optimization but focuses on the problem of exploiting contextual modalities to directly augment the semantics of discrete hash codes. Moreover, our hashing optimization strategy can not only explicitly deal with the discrete constraint of binary codes, but also consider the bit-uncorrelation constraint and bit-balance constraint together[^6]. The whole learning process has linear computation complexity and desirable scalability. The proposed approach can thus well support SCBIR. The main differences between DSTH and existing hashing techniques are summarized in Table \[difference\].

\[symtable\]

| Symbols | Explanations |
|---------|--------------|
| $X$ | feature representations of images |
| $Y$ | feature representations of contextual texts |
| $Z$ | hash codes of images |
| $H$ | projection matrix in hash functions |
| $d_x$ | feature dimension of image representation |
| $d_y$ | feature dimension of contextual text representation |
| $N$ | number of database images |
| $F$ | group of hash functions |
| $K$ | number of anchors |
| $L$ | hash code length |
| $S_x$ | affinity matrix of visual graph |
| $L_x$ | Laplacian matrix of visual graph |
| $V_x$ | data-to-anchor mapping matrix |
| $U$ | basis matrix of visual feature space |
| $T$ | latent image semantic topics |
| $W$ | semantic transfer matrix |
| $A_x, A_y, B$ | auxiliary variables |
| $E_x, E_y, E_z$ | differences between the target and auxiliary variables |

The Proposed Methodology {#sec:3}
========================

In this section, we detail the proposed methodology. First, we introduce the relevant notations used in this paper and the problem definition. Then, we formulate the overall objective function and present an efficient discrete solution. Finally, we analyse the convergence and time complexity of the proposed iterative optimization method.

Notations and Problem Definition
--------------------------------

In this paper, we exploit semantics from contextual texts for semantic transfer of discrete hash codes. Note that our approach can be easily extended when more contextual modalities are exploited.
We define $X=[x_1,...,x_N]\in \mathbb{R}^{d_x\times N}$ and $Y=[y_1,...,y_N]\in \mathbb{R}^{d_y\times N}$ as feature representations of images and the contextual texts respectively, $d_x$ and $d_y$ denote their corresponding feature dimensions, and $N$ is the number of images. The objective of DSTH is to learn $Z=[z_1, z_2,...,z_N]\in \mathbb{R}^{L\times N}$, where $z_n=[z_{1n}, z_{2n},$ $...,z_{Ln}]^{\texttt{T}}\in \mathbb{R}^{L\times 1}$ are the hash codes of the $n_{th}$ image, $L$ is hash code length. To generate hash codes for query images, DSTH learns a group of hash functions $F$, each of them defines a mapping: $\mathbb{R}^{d_x}\mapsto \{0,1\}$. Main notations used in the paper are listed in Table \[symtable\]. Objective Formulation --------------------- The formulated objective is composed of two parts: visual similarity preservation and semantic transfer. Visual similarity preservation preserves visual correlation of images into hash codes. Semantic transfer part discovers the potential semantics from contextual texts and transfers them into discrete hash codes. **Visual Similarity Preservation**. SCBIR retrieves similar images for the query [@DBLP:conf/mm/ZhuSX15]. Hence, the objective of hashing for SCBIR is visual similarity preservation. It indicates that similar images should be mapped to binary codes with short Hamming distances. In this paper, we seek to minimize the weighted Hamming distance of hash codes. $$\begin{aligned} \label{eq:sh} \min_{\{z_i\}_{i=1}^N} \sum_{i=1}^N \sum_{j=1}^N S_x(i,j) ||z_i-z_j||_F^2 \Rightarrow \min_{Z} \ Tr(ZL_xZ^{\texttt{T}})\\ \end{aligned}$$ where $L_x=D_x-S_x$ is the Laplacian matrix of visual graph, $S_x\in \mathbb{R}^{N\times N}$ characterizes the affinity similarities of images, $D_x=S_x\textbf{1}=I$, $\textbf{1}$ is column vector with ones, and $Tr(\cdot)$ is trace operator, $||\cdot||_F$ is Frobenius norm. The design principle of the Eq.(\[eq:sh\]) is to incur a heavy penalty if two similar images are projected far apart. Explicitly computing $L_x$ will consume $O(N^2)$, which is not scalable for large-scale image retrieval. In this paper, we exploit anchors to reduce the computation complexity. Similar to [@AGH], we approximate the affinity matrix $S_x$ with $S_x=V_x\Lambda V_x^\texttt{T}$, where $\Lambda=\texttt{diag}(V_x^\texttt{T}\textbf{1})$. $V_x=[v(x_1),..,v(x_{N})]^\texttt{T}$, $v(x)$ is data-to-anchor mapping $$\begin{aligned} v(x)=\frac{[\delta_1\texttt{exp}(\frac{-||x-r_1||_2^2)}{\sigma}), ..., \delta_K \texttt{exp}(\frac{-||x-r_K||^2_2}{\sigma})]^\texttt{T}}{{\sum_{k=1}^K \delta_k \texttt{exp}(\frac{-||x-r_k||_2^2}{\sigma})}} \end{aligned}$$ $r_1,...,r_K$ are $K$ anchors obtained by *k*-means, $\delta_k$ is set to 1 if $r_k$ belongs to the $s$ closest exemplars of $x$, and 0 vice versa, $\sigma>0$ is the bandwidth parameter. Accordingly, $L_x$ can be represented as $I-V_x\Lambda V_x^\texttt{T}$. As shown in the subsequent discrete hashing learning, keeping this low-rank form decomposition will avoid explicit Laplacian matrix computation, and reduce the computation complexity of optimization. **Semantic Transfer**. Images in the modern searching engines are generally associated with rich textual descriptions, such as tags, image captions and user comments. These images and the accompanied texts belong to heterogeneous modalities but may be highly correlated with each other. Moreover, contextual texts contain explicit semantics which are complementary to the latent image semantics. 
Hence, it is promising to exploit contextual modalities for the semantic enrichment of discrete image hash codes. To this end, in this paper, we first adopt matrix factorization to detect the latent semantic structure $T$ of image. Its formulation is $\min_{U, T} \ ||X-UT||_F^2$, where $U\in \mathbb{R}^{d_x\times L}$ is basis matrix of visual feature space, $T\in \mathbb{R}^{L\times N}$ represents latent image semantic topics. Then, for semantic transfer, we align latent image semantics $T$ to explicit textual semantic distribution $Y$ $$\begin{aligned} \label{eq:td} \min_{W, T} \ & ||WT-Y||_F^2\\ \end{aligned}$$ where $W\in \mathbb{R}^{d_y\times L}$ is the transfer matrix. With transfer, the detected $T$ can involve the explicit semantics of contextual text. In hashing learning, we directly force hash codes $Z$ to match the distribution of $T$. This design is reasonable because the hash codes can be understood as semantic topic distribution, if we consider each hashing bit as a latent semantic topic. **Imposing Constraints**. In our formulation, we explicitly consider three constraints on hash codes to ensure direct semantic transfer and avoid information quantization loss. $Z\in \{-1, 1\}^{L\times N}$ is discrete constraint. It guarantees any hash code to be $-1$ or 1. Via simple transformation $(Z+1)/2$, $Z$ will be binary code (0 or 1). With binary codes as image representation, the search process can be significantly accelerated and the storage cost of image database can be greatly reduced. The bit-uncorrelation constraint $ZZ^\texttt{T}=NI$ is to guarantee the learned hashing bits to be uncorrelated. It can reduce the information redundancy of different hash bits. $Z\textbf{1}=0$ is the bit-balance constraint, it requires each bit to occur in database with equal chance ($50\%$). This constraint forces the learned hash code to contain the largest information. **Overall Formulation**. After comprehensively considering visual similarity preservation, semantic transfer and constraints to be imposed, we obtain the overall objective function of DSTH. The formulation is $$\begin{aligned} \label{eq:objective} \min_{Z, U, W} \ & ||X-UZ||_F^2+\beta ||WZ-Y||_F^2 + \alpha Tr(Z(I-V_x\Lambda V_x^\texttt{T})Z^{\texttt{T}})\\ s.t. \ & \underbrace{Z\in \{-1, 1\}^{L\times N}}_{discrete}, \underbrace{ZZ^\texttt{T}=NI}_{bit-uncorrelation}, \underbrace{Z\textbf{1}=0}_{bit-balance} \end{aligned}$$ where $\alpha, \beta>0$ balances the regularization terms. We jointly consider visual similarity preservation and semantic transfer, so that visual similarity preservation can guide the semantic extraction and determine which part of semantics to transfer. Discrete Optimization --------------------- Solving Eq.(\[eq:objective\]) is essentially a non-trivial combinatorial optimization problem for three challenging constraints. Most existing hashing approaches apply relaxing+rounding optimization [@hashingsurveytwo]. They first relax discrete constraint to calculate continuous values, and then binarize them to hash codes via rounding. This two-step learning can simplify the solving process, but it may cause significant information loss. In recent literature, several discrete hashing solutions are proposed. However, they are developed for particular hashing types and formulations. For example, graph hashing [@DGH], supervised hashing [@SDH; @CSBDSH], cross-modal hashing [@CDOECVR]. Therefore, their learning approaches cannot be directly applied to solve our problem. 
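Before turning to the proposed solver, a small numerical sketch may help fix ideas about how the overall objective, and in particular the anchor-based graph term, can be evaluated without ever forming the $N\times N$ Laplacian. This is an illustration added here, not part of the method description: all sizes and the random "codes" are toy values, the anchors are sampled randomly instead of by *k*-means, the bandwidth is a heuristic, and $\Lambda$ is read as the usual anchor-graph normaliser $\texttt{diag}(V_x^\texttt{T}\textbf{1})^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy sizes (illustrative only): N images, d_x visual dims, d_y text dims,
# K anchors, s nearest anchors, L bits; alpha, beta as in the objective.
N, d_x, d_y, K, s, L = 500, 64, 32, 50, 3, 16
alpha, beta = 1.0, 1.0
X = rng.standard_normal((d_x, N))
Y = rng.standard_normal((d_y, N))

def anchor_mapping(X, anchors, s):
    """Data-to-anchor mapping v(x): keep the s closest anchors, Gaussian-weighted,
    with rows normalised to sum to one (bandwidth chosen heuristically here)."""
    d2 = ((X.T[:, None, :] - anchors.T[None, :, :]) ** 2).sum(-1)   # (N, K) squared dists
    idx = np.argsort(d2, axis=1)[:, :s]
    rows = np.arange(d2.shape[0])[:, None]
    sigma = d2[rows, idx].mean()
    V = np.zeros_like(d2)
    V[rows, idx] = np.exp(-d2[rows, idx] / sigma)
    return V / V.sum(axis=1, keepdims=True)

anchors = X[:, rng.choice(N, K, replace=False)]          # k-means anchors in practice
V = anchor_mapping(X, anchors, s)                        # plays the role of V_x
Lam = np.diag(1.0 / np.maximum(V.sum(axis=0), 1e-12))    # read as diag(V_x^T 1)^(-1)

# A feasible (random) code matrix and rough closed-form-style guesses for U and W,
# just to evaluate the objective; the ALM bookkeeping terms are dropped here.
Z = np.sign(rng.standard_normal((L, N)))
U = X @ Z.T / N
Wt = Y @ Z.T / N

# Tr(Z (I - V Lam V^T) Z^T) in low-rank form, never building an N x N matrix.
graph_term = np.trace(Z @ Z.T) - np.trace((Z @ V) @ Lam @ (V.T @ Z.T))
objective = (np.linalg.norm(X - U @ Z) ** 2
             + beta * np.linalg.norm(Wt @ Z - Y) ** 2
             + alpha * graph_term)
print("objective value for this Z:", objective)
```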
In this paper, we propose a new and effective optimization algorithm based on augmented Lagrangian multiplier (ALM) [@ALM]. Our idea is to introduce auxiliary variables to separate constraints, and transform the objective function to an equivalent one that is tractable. Formally, we introduce three auxiliary variables $A_x, A_y, B$, and set $A_x=X-UZ, A_y=Y-WZ, B=Z$. Eq.(\[eq:objective\]) is reformulated as $$\begin{aligned} \label{eq:tj} \min_{Z,U,W} & ||A_x||_F^2+||A_y||_F^2+ \frac{\mu}{2}(||X-UZ -A_x+\frac{E_x}{\mu}||_F^2 + \\ &\beta ||Y-WZ-A_y+\frac{E_y}{\mu}||_F^2) + \alpha Tr(Z(I-V_x\Lambda V_x^\texttt{T})B^{\texttt{T}}) \\ &+ \frac{\mu}{2} ||Z-B+\frac{E_z}{\mu}||_F^2 \\ & s.t. \quad B\in \{-1, 1\}^{L\times N}, ZZ^\texttt{T}=NI, Z\textbf{1}=0 \end{aligned}$$ where $E_x\in \mathbb{R}^{d_x\times N}$, $E_y\in \mathbb{R}^{d_y\times N}$, $E_z\in \mathbb{R}^{L\times N}$ measure the difference between the target and auxiliary variables, $\mu>0$ adjusts the balance between terms. We can adopt alternate optimization to iteratively solve Eq.(\[eq:tj\]). Specifically, we optimize the objective function with respective to one variable while fixing the other remaining variables. The iteration steps are detailed as follows. **Update $A_x, A_y$.** By fixing other variables, the optimization formulas for $A_x, A_y$ are $$\begin{aligned} \label{eq:tk} &\min_{A_x} \ ||A_x||_F^2+ \frac{\mu}{2}||X-UZ-A_x + \frac{E_x}{\mu}||_F^2 \\ &\min_{A_y} \ ||A_y||_F^2+ \frac{\mu}{2}||Y-WZ-A_y + \frac{E_y}{\mu}||_F^2 \\ \end{aligned}$$ By calculating the derivative of the objective function with respective to $A_x, A_y$, and setting it to 0, we can obtain that $$\begin{aligned} \label{eq:tl} A_x=\frac{\mu X-\mu UZ+E_x}{2+\mu}, A_y=\frac{\mu Y-\mu WZ+E_y}{2+\mu} \end{aligned}$$ **Update $U,W$**. By fixing other variables, the optimization formula for $U$ is $$\begin{aligned} \label{eq:tn} \min_{U} \ & ||X-UZ-A_x + \frac{E_x}{\mu}||_F^2 \\ \end{aligned}$$ By calculating the derivative of the objective function with respective to $U$, and setting it to 0, we can obtain that $$\begin{aligned} UZ = X-A_x+\frac{E_x}{\mu} \\ \end{aligned}$$ Since $ZZ^\texttt{T}=NI$, we can further derive that $$\begin{aligned} \label{eq:tp} U = \frac{1}{N}(X-A_x+\frac{E_x}{\mu})Z^\texttt{T} \end{aligned}$$ Similarly, we can obtain $W = \frac{1}{N}(Y-A_y+\frac{E_y}{\mu})Z^\texttt{T}$. **Update $B$**. By fixing other variables, the optimization formula for $B$ is $$\begin{aligned} \label{eq:tq} \min_{B} \ & \alpha Tr(Z(I-V_x\Lambda V_x^\texttt{T})B^\texttt{T})+\frac{\mu}{2}||Z-B+\frac{E_z}{\mu}||_F^2\\ s.t. \ & B\in \{-1, 1\}^{L\times N}\\ \end{aligned}$$ The objective function in Eq.(\[eq:tq\]) can be simplified as $$\begin{aligned} \label{eq:tre} \min_{B} & \ Tr((\frac{\alpha}{\mu}Z(I-V_x\Lambda V_x^\texttt{T})-Z-\frac{E_z}{\mu})B^\texttt{T})\\ = \min_{B} & \ ||B-(Z+\frac{E_z}{\mu}-\frac{\alpha}{\mu}Z(I-V_x\Lambda V_x^\texttt{T}))||_F^2\\ \end{aligned}$$ The discrete solution of $B$ can be directly represented as $$\begin{aligned} \label{eq:tz} B=\texttt{Sgn}(Z+\frac{E_z}{\mu}-\frac{\alpha}{\mu}ZI + \frac{\alpha}{\mu}ZV_x\Lambda V_x^\texttt{T}) \end{aligned}$$ where $\texttt{Sgn}(\cdot)$ is signum function which returns -1 if $x<0$, 1 if $x\geq 0$. **Update $Z$**. 
By fixing other variables, the optimization formula for $Z$ is $$\begin{aligned} \label{eq:tr} & \min_{Z} \ \frac{\mu}{2}(||X-UZ-A_x+\frac{E_x}{\mu}||_F^2 + \beta ||Y-WZ-A_y \\ & +\frac{E_y}{\mu}||_F^2)+\alpha Tr(Z(I-V_x\Lambda V_x^\texttt{T})B^{\texttt{T}})+\frac{\mu}{2} ||Z-B+\frac{E_z}{\mu}||_F^2 \\ & s.t. \ ZZ^\texttt{T}=NI, Z\textbf{1}=0 \end{aligned}$$ The objective function in Eq.(\[eq:tr\]) can be transformed as $$\begin{aligned} \label{eq:trrr} & \min_{Z} \ -\mu Tr(Z^\texttt{T}U^\texttt{T}(X-A_x+\frac{E_x}{\mu}))- \mu\beta Tr(Z^\texttt{T}W^\texttt{T}(Y-A_y \\ & + \frac{E_y}{\mu})) + \alpha Tr(Z^\texttt{T}B(I-V_x\Lambda V_x^\texttt{T}))-\mu Tr(Z^\texttt{T}(B-\frac{E_z}{\mu}))\\ & = \min_{Z} \ -Tr(Z^\texttt{T}C) \end{aligned}$$ where $C=B-\frac{E_z}{\mu}-\frac{\alpha}{\mu}BI+\frac{\alpha}{\mu}BV_x\Lambda V_x^\texttt{T}+U^\texttt{T}(X-A_x+ \frac{E_x}{\mu})+\beta W^\texttt{T}(Y-A_y+\frac{E_y}{\mu})$. Eq.(\[eq:tr\]) is equivalent to the following maximization problem $$\begin{aligned} \label{eq:ts} & \max_{Z} \ Tr(Z^\texttt{T}C) \\ s.t. & \ ZZ^\texttt{T}=NI, Z\textbf{1}=0 \end{aligned}$$ By mathematically solving the above equation with singular value decomposition (SVD) [@wall2003svd], $C$ can be decomposed as $C=P\Theta Q^\texttt{T}$, where the columns of $P$ and $Q$ are left-singular vectors and right-singular vectors of $C$ respectively, $\Theta$ is rectangular diagonal matrix and its diagonal entries are singular values of $C$. Then, the optimizing for $Z$ becomes $\max_{Z} \ Tr(Z^\texttt{T}P\Theta Q^\texttt{T}) \Leftrightarrow \max_{Z} \ Tr(\Theta Q^\texttt{T}Z^\texttt{T}P)$. \[calzthre\] Given a matrix $G$ which meets $GG^\texttt{T}=NI$ and diagonal matrix $\Theta \ge 0$, the solution of $\max_{G} Tr(\Theta G)$ is $\texttt{diag}(\sqrt{N})$. Let us assume $\theta_{ii}$ and $g_{ii}$ are the $i_{th}$ diagonal entry of $\Theta$ and $G$ respectively, $Tr(\Theta G)=\sum_{i}\theta_{ii}g_{ii}$. Since $GG^\texttt{T}=NI$, $g_{ii}\leq \sqrt{N}$. $Tr(\Theta G)=\sum_{i}\theta_{ii}g_{ii}\leq \sqrt{N}\sum_{i}\theta_{ii}$. The equality holds only when $g_{ii}=\sqrt{N}, g_{ij}=0, \forall i,j$. $Tr(\Theta G)$ achieves its maximum when $G=\texttt{diag}(\sqrt{N})$. $\Theta \ge 0$ as $\Theta$ is calculated by SVD. On the other hand, we can easily derive that $Q^\texttt{T}Z^\texttt{T}PP^\texttt{T}ZQ=NI$. Therefore, according to the **Theorem \[calzthre\]**, the optimal $Z$ can only be obtained when $Q^\texttt{T}Z^\texttt{T}P=\texttt{diag}(\sqrt{N})$. Hence, the solution of $Z$ is $$\begin{aligned} \label{eq:solutiony} Z=\sqrt{N}PQ^\texttt{T} \end{aligned}$$ Moreover, in order to satisfy the bit-balance constraint $Z\textbf{1}=0$, we apply Gram-Schmidt process as [@DGH] and construct matrices $\hat{P}$ and $\hat{Q}$, so that $\hat{P}^\texttt{T}\hat{P}=I_{L-R}$, $[P, 1]^\texttt{T}\hat{P}=0$, $\hat{Q}^\texttt{T}\hat{Q}=I_{L-R}$, $Q\hat{Q}^\texttt{T}=0$, $R$ is the rank of $C$. The close form solution for $Z$ is $$\begin{aligned} Z=\sqrt{N}[P, \hat{P}][Q, \hat{Q}]^\texttt{T} \end{aligned}$$ **Update $E_x$, $E_y$, $E_z$, $\mu$**. The update rules are ($\rho>1$ is learning rate that controls the convergence.) $$\begin{aligned} \label{eq:tu} &E_x=E_x+\mu(X-UZ-A_x)\\ &E_y=E_y+\mu(Y-WZ-A_y)\\ &E_z=E_z+\mu(Z-B), \mu=\rho\mu \\ \end{aligned}$$ **Convergence**. At each iteration, the updating of variables will monotonically decreases towards the lower bound of objective function in Eq.(\[eq:objective\]) . 
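The alternating updates above admit a compact implementation. The sketch below performs one ALM sweep, following the closed-form update rules as written in Eqs.(\[eq:tl\]), (\[eq:tp\]), (\[eq:tz\]), (\[eq:solutiony\]) and (\[eq:tu\]); the Gram-Schmidt bit-balance correction of $Z$ is omitted for brevity, `M` again denotes $I-V_x\Lambda V_x^{\texttt{T}}$, and the code is an illustrative sketch rather than the released implementation.

```python
import numpy as np

def alm_iteration(X, Y, Z, U, W, B, Ex, Ey, Ez, M, alpha, beta, mu, rho=2.0):
    """One sweep of the alternating ALM updates.

    Shapes: X (d_x, N), Y (d_y, N), Z/B/Ez (L, N), U (d_x, L), W (d_y, L),
    Ex (d_x, N), Ey (d_y, N); M = I - V_x Lambda V_x^T, shape (N, N).
    The bit-balance (Gram-Schmidt) correction of Z is omitted here.
    """
    N = X.shape[1]

    # auxiliary variables A_x, A_y (Eq. tl)
    Ax = (mu * (X - U @ Z) + Ex) / (2.0 + mu)
    Ay = (mu * (Y - W @ Z) + Ey) / (2.0 + mu)

    # basis / transfer matrices U, W, using Z Z^T = N I (Eq. tp)
    U = (X - Ax + Ex / mu) @ Z.T / N
    W = (Y - Ay + Ey / mu) @ Z.T / N

    # discrete auxiliary codes B via the signum function (Eq. tz)
    B = np.where(Z + Ez / mu - (alpha / mu) * (Z @ M) >= 0, 1.0, -1.0)

    # orthogonally constrained Z via SVD of C (Eqs. trrr-solutiony)
    C = (B - Ez / mu - (alpha / mu) * (B @ M)
         + U.T @ (X - Ax + Ex / mu)
         + beta * W.T @ (Y - Ay + Ey / mu))
    P, _, Qt = np.linalg.svd(C, full_matrices=False)   # C = P Theta Q^T
    Z = np.sqrt(N) * P @ Qt

    # Lagrange multipliers and penalty parameter (Eq. tu)
    Ex = Ex + mu * (X - U @ Z - Ax)
    Ey = Ey + mu * (Y - W @ Z - Ay)
    Ez = Ez + mu * (Z - B)
    mu = rho * mu

    return Z, U, W, B, Ex, Ey, Ez, mu
```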
As indicated by ALM optimization theory [@lin2010augmented], the iterations will make the optimization converge. Further, our empirical experiments on standard benchmarks also validate the convergence of the proposed method. **Hash Function Learning**. In this paper, we leverage linear projection to construct hash functions for its high online efficiency. The objective is to minimize the loss between the hash codes and the projected ones. The formulation is $\min_{H} ||Z-H^\texttt{T}X||_F^2 + \eta ||H||_F$, where $H\in \mathbb{R}^{d_x\times L}$ denotes the projection matrix. The optimal $H$ can be calculated as $H=\left(XX^\texttt{T}+\eta I\right)^{-1} XZ^\texttt{T}$. Hash functions can be constructed as $F(x)=\frac{\texttt{sgn}(H^\texttt{T}x)+1}{2}$. It should be noted that, since DSTH is a two-stage hashing framework, this hash function learning part can be substituted by other models, such as, linear SVM [@STH], kernel logistic regression [@SePH], decision tree [@DecisionTree], and neural network [@7006770]. [|p[16mm]{}&lt;|p[8mm]{}&lt;|p[8mm]{}&lt;|p[8mm]{}&lt;|p[11mm]{}&lt; |p[8mm]{}&lt;|p[8mm]{}&lt;|p[8mm]{}&lt;|p[11mm]{}&lt;|p[8mm]{}&lt;|p[8mm]{}&lt; |p[8mm]{}&lt;|p[11mm]{}&lt;|]{} & & &\ & 16 & 32 & 64 & 128 & 16 & 32 & 64 & 128 & 16 & 32 & 64 & 128\ SKLSH & 0.1440 & 0.1509 & 0.1562 & 0.1609 & 0.5624 & 0.5902 & 0.6023 & 0.6253 & 0.3685 & 0.3645 & 0.3684 & 0.3688\ SPH & 0.1625 & 0.1583 & 0.1628 & 0.1714 & 0.5976 & 0.6093 & 0.6269 & 0.6433 & 0.3430 & 0.3910 & 0.4415 & 0.4582\ ITQ & 0.1717 & 0.1780 & 0.1789 & 0.1810 & 0.6194 & 0.6339 & 0.6532 & 0.6612 & 0.4522 & 0.4676 & 0.4857 & 0.4911\ SGH & 0.1758 & 0.1754 & 0.1786 & 0.1838 & 0.6309 & 0.6465 & 0.6503 & 0.6586 & 0.4759 & 0.4842 & 0.4818 & 0.4862\ DPLM & 0.1609 & 0.1776 & 0.1807 & 0.1835 & 0.6099 & 0.6261 & 0.6350 & 0.6422 & 0.4518 & 0.4744 & 0.4810 & 0.4865\ LSMH & 0.1738 & 0.1780 & 0.1788 & 0.1872 & 0.6369 & 0.6300 & 0.6483 & 0.6602 & 0.4643 & 0.4722 & 0.4892 & 0.4877\ CVH & 0.1676 & 0.1630 & 0.1601 & 0.1757 & 0.6026 & 0.6010 & 0.6094 & 0.6229 & 0.4447 & 0.4300 & 0.4233 & 0.4148\ CHMIS & 0.1507 & 0.1671 & 0.1640 & 0.1787 & 0.5628 & 0.5643 & 0.5643 & 0.5768 & 0.4404 & 0.4394 & 0.4341 & 0.4259\ IMH & 0.1663 & 0.1709 & 0.1742 & 0.1775 & 0.6285 & 0.6338 & 0.6454 & 0.6586 & 0.4475 & 0.4619 & 0.4634 & 0.4890\ LCMH & 0.1752 & 0.1784 & 0.1837 & 0.1809 & 0.6250 & 0.6339 & 0.6346 & 0.6349 & 0.4641 & 0.4726 & 0.4764 & 0.4777\ CMFH & 0.1678 & 0.1688 & 0.1705 & 0.1738 & 0.5846 & 0.6000 & 0.5956 & 0.6106 & 0.4703 & 0.4893 & 0.4972 & 0.4888\ DSTH & **0.2055** & **0.2012** & **0.2041** & **0.2040** & **0.6458** & **0.6603** & **0.6642** & **0.6692** & **0.5074** & **0.5089** & **0.5208** & **0.5251**\ **Complexity Analysis**. The anchor graph construction includes anchor generation and distance computation between images and anchors. The time complexity of this process is $O(NKd_x)$. Solving discrete hash codes is conducted in an iterative process, the computational complexity is $O(\#iter(d_x N+d_yN + d_xL + d_yL + LN))$, where $\#iter$ denotes the number of iterations. Given $N\gg d_x(d_y)>L$, this process scales linearly with $N$. The computation of hash functions solves a linear system, whose time complexity is $O(N)$. Calculating hash codes of database images costs $O(N)$. Therefore, the whole offline learning consumes $O(N)$, which indicates the desirable scalability of the proposed DSTH. In online retrieval, generating hash codes for a query can be completed in $O((d_x+1)L)$. 
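Because the hash functions are simple regularized linear projections, the out-of-sample step described above is only a few lines. The sketch below solves the closed-form expression for $H$ and encodes a query feature; names are illustrative, and any of the alternative predictors mentioned above (linear SVM, kernel logistic regression, decision tree, neural network) could replace the linear model in this two-stage framework.

```python
import numpy as np

def learn_hash_functions(X, Z, eta):
    """Closed-form projection H = (X X^T + eta I)^{-1} X Z^T, shape (d_x, L)."""
    d_x = X.shape[0]
    return np.linalg.solve(X @ X.T + eta * np.eye(d_x), X @ Z.T)

def encode_query(H, x):
    """Map a (d_x,) query feature to an L-bit {0,1} code, F(x) = (sgn(H^T x)+1)/2."""
    return np.where(H.T @ x >= 0, 1, 0).astype(np.uint8)
```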
Experiments {#sec:4} =========== In this section, we first introduce the experimental settings, including experimental dataset, evaluation metric and baselines. Then, we present the comparison results with state-of-the-art approaches. Next, we evaluate the effects of discrete optimization and semantic transfer. Finally, we give experimental results on convergence and parameter sensitiveness. [|m[20mm]{}&lt;|m[14mm]{}&lt;|m[16mm]{}&lt;|m[17mm]{}&lt;|]{} Datasets & ***Wiki*** & ***MIR Flickr*** & ***NUS-WIDE***\ \#Database & 2,866 & 25,000 & 186,643\ \#Query & 574 & 250 & 1,867\ \#Training & 2292 & 750 & 5,540\ Visual Feature & BoVW(128-D) & BoVW(1000-D) & BoVW(500-D)\ Text Feature & BoW+LDA(10-D) & BoW(457-D) & BoW(1000-D)\ \[sdd\] Experimental Dataset -------------------- Experiments are conducted on three publicly available image datasets: **Wiki**[^7] [@MMWIKI], **MIR Flickr**[^8] [@MIRFLICKR] , and **NUS-WIDE**[^9] [@CIVRNUS]. All these datasets are comprised of images and their contextual texts. - **Wiki** is composed of 2,866 multimedia documents which belong to 10 semantic categories. All the data are collected from Wikipedia[^10]. Each document contains an image and at least 70 textual words. In experiments, we represent visual contents with 128 dimensional SIFT histogram [@IJCVSIFT] and contextual text contents by 10 dimensional topic vector generated by latent Dirichlet allocation (LDA) [@JMLRLDA]. - **MIR Flickr** consists of 25,000 images describing 38 semantic categories. This dataset is downloaded from the Flickr with its public API[^11]. Each image is associated with tags. The tags that appear less than 50 times are removed, and we finally obtain a vocabulary of 457 tags. In this work, visual contents of images in ****MIR Flickr**** are represented by 1000 dimensional dense SIFT histogram. Contents of contextual texts are represented by 457 dimensional binary vector, where each dimension indicates the presence of a tag. - **NUS-WIDE** is composed of 269,648 images labelled by 81 concepts. Each image is also associated with tags. In experiments, we preserve 10 most common concepts and the corresponding 186,643 pairs. On **NUS-WIDE** dataset, we extract 500 dimensional SIFT histogram to describe the visual contents of images, and 1000 dimensional binary vector to represent the contextual texts. Table \[sdd\] summarizes the key statistics of the test collections. For **Wiki**, as images are labelled by 10 independent categories, images in this dataset are considered to be relevant only if they belong to the same category. For **MIR Flickr** and **NUS-WIDE**, images are labelled by several tags, and therefore images are considered to be relevant if they share at least one tag. 
[|p[13.5mm]{}&lt;|p[8.5mm]{}&lt;|p[8.5mm]{}&lt;|p[8.5mm]{}&lt;|p[10mm]{}&lt; |p[8.5mm]{}&lt;|p[8.5mm]{}&lt;|p[8.5mm]{}&lt;|p[10mm]{}&lt;|p[8.5mm]{}&lt;|p[8.5mm]{}&lt; |p[8.5mm]{}&lt;|p[10mm]{}&lt;|]{} & & &\ & 16 & 32 & 64 & 128 & 16 & 32 & 64 & 128 & 16 & 32 & 64 & 128\ DSTH-I & 0.1675 & 0.1740 & 0.1725 & 0.1719 & 0.6326 & 0.6343 & 0.6554 & 0.6604 & 0.4963 & 0.5075 & 0.5150 & 0.5181\ DSTH-II & 0.2018 & 0.1997 & 0.2037 & 0.2018 & 0.6425 & 0.6450 & 0.6610 & 0.6634 & 0.4844 & 0.5015 & 0.5133 & 0.5231\ DSTH-III & 0.1981 & 0.1984 & 0.1924 & 0.1972 & 0.6349 & 0.6234 & 0.6540 & 0.6616 & 0.4685 & 0.4784 & 0.4922 & 0.4991\ DSTH-IV & 0.1998 & 0.1939 & 0.2001 & 0.1979 & 0.6301 & 0.6362 & 0.6323 & 0.6246 & 0.4765 & 0.4744 & 0.4650 & 0.4512\ DSTH & **0.2055** & **0.2012** & **0.2041** & **0.2040** & **0.6458** & **0.6603** & **0.6642** & **0.6692** & **0.5074** & **0.5089** & **0.5208** & **0.5251**\ [|p[19mm]{}&lt;|p[8mm]{}&lt;|p[8mm]{}&lt;|p[8mm]{}&lt;|p[8mm]{}&lt;|p[8mm]{}&lt;|]{} \#Training & 0.5K & 1.5K & 2K & 2.5K & 3K\ mAP & 0.5006 & 0.5010 & 0.5106 & 0.5126 & 0.5202\ \#Training & 3.5K & 4K & 4.5K & 5K & 5540\ mAP & 0.5131 & 0.5183 & 0.5241 & 0.5256 & 0.5251\ Evaluation Metric ----------------- In experiments, mean average precision (mAP) [@TIPCMFH; @SGH] is adopted as the evaluation metric. For a given query, average precision (AP) is calculated as $$\begin{aligned} AP=\frac{1}{NR}\sum_{r=1}^R Pre(r)\delta(r) \end{aligned}$$ where $R$ is the total number of retrieved images, $NR$ is the number of relevant images in the retrieved set, $Pre(r)$ denotes the precision of top $r$ retrieval images, which is defined as the ratio between the number of the relevant images and the number of retrieved images $r$, and $\delta(r)$ is indicator function which equals to 1 if the $r_{th}$ image is relevant to query, and vice versa. In experiments, we set the total number of retrieved images as 100 to report experimental results. Furthermore, *Precision-Scope* curve is also plotted to illustrate the retrieval performance variations with respect to the number of retrieved images. ![Variations of training time with training data size.[]{data-label="fig:trainsize"}](trainsize_nuswide.pdf){width="85mm"} Evaluation Baselines -------------------- We compare DSTH with several state-of-the-art uni-modal hashing approaches, which can be used to support SCBIR[^12]. They include: 1. **Shift-invariant kernel locality sensitive hashing** (**SKLSH**) [@SKLSH]. It is a representative data-independent hashing, which generates hash codes by random projections with distribution-free encoding. 2. **Spectral hashing** (**SPH**) [@SPH]. Hash codes are computed by eigenvalue decomposition on visual Laplacian matrix. Hash functions are constructed with an efficient Nystrom method. 3. **Iterative quantization** (**ITQ**) [@ITQ]. ITQ first learns relaxed hash codes with principal component analysis (PCA) [@jolliffe2002principal]. Then, it generates the hash codes by minimizing the quantization errors with optimal iterative rotation. 4. **Scalable graph hashing** (**SGH**) [@SGH]. SGH leverages feature transformation to approximate the visual graph, and thus avoids explicit similarity graph computing. In this method, hash functions are learned in a bit-wise manner with a sequential learning. 5. **Latent semantic minimal hashing** (**LSMH**) [@LSMH]. Minimum encoding and matrix factorization are combined together to simultaneously learn latent semantic feature which refines original features, and hash codes. 6. 
**Discrete proximal linearized minimization** (**DPLM**) [@TIP2016binary]. We use unsupervised setting of DPLM for comparison. This method directly handles with discrete constraint. The hash codes are solved by iterative procedures with each iteration admitting an analytical solution. Since cross-modal hashing can also be used for SCBIR, we also incorporate several state-of-the-art cross-modal hashing methods for comparison. They include[^13]. : 1. **Cross-view hashing** (**CVH**) [@CMFIJCAI]. CVH extends spectral hashing to learn hash functions by jointly minimizing Hamming distances of similar samples and maximizing that of dissimilar samples. 2. **Composite hashing with multiple information sources** (**CHMIS**) [@CHMIS]. It integrates discriminative information from several heterogeneous modalities into the hash codes with proper weights. For comparison fairness, text input is removed and only visual input is preserved in CHMIS. 3. **Inter-media hashing** (**IMH**) [@CMFSIGMOD]. IMH formulates hash function learning in a framework where intra-similarity of each individual modal and inter-correlations between different modalities are both preserved in hash codes. 4. **Linear cross-modal hashing** (**LCMH**) [@LCMH]. In this method, intra-modality similarity is approximately preserved with the new representations of samples which are calculated as the distances to several centroids of the clusters. The inter-modality correlation is preserved via the shared binary subspace learning. 5. **Collective matrix factorization hashing** (**CMFH**) [@TIPCMFH]. CMFH performs cross-modal similarity search in a latent shared semantic space by collective matrix factorization. All parameters in the compared approaches are adjusted according to the relevant literatures and report the best performance. For implementation of CVH, we kindly use the source code provided by [@MLBE]. For LCMH, we carefully implement the code according to relevant paper. For SPH, SKLSH, ITQ, SGH, DPLM, CHMIS, IMH, and CMFH, we directly download the implementation codes from authors’ websites. Implementation Details ---------------------- 5-fold cross validation is adopted to choose parameters. The best performance of DSTH is achieved when $k$ is set to 5, 5, 8 on three datasets respectively (Three datasets denote **Wiki**, **MIR Flickr**, and **NUS-WIDE** successively. Please find the same below.). Furthermore, DSTH has parameters: $\alpha$ and $\beta$. They control the processes of semantic discovery and transfer. The best performance is achieved when $\{\alpha=0.0001, \beta=10000\}$, $\{\alpha=100, \beta=10000\}$, and $\{\alpha=0.0001, \beta=100\}$ on three datasets respectively. The parameters $\mu$ and $\rho$ are used for ALM optimization. The optimal performance is obtained when $\{\mu=1, \rho=2\}$, $\{\mu=0.01, \rho=2\}$, and $\{\mu=0.0001, \rho=2\}$ on three datasets respectively. $\eta$ is used to learn hash functions. The best $\eta$ is set to 0.1, 1000, and 100 on three datasets, respectively. 
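For concreteness, the evaluation protocol used in this section (tag-overlap ground truth as described for **MIR Flickr** and **NUS-WIDE**, and the AP/mAP definition of the Evaluation Metric subsection with $R=100$) can be sketched as follows; function names are illustrative assumptions.

```python
import numpy as np

def is_relevant(query_tags, db_tags):
    """MIR Flickr / NUS-WIDE ground truth: relevant iff at least one shared tag."""
    return len(set(query_tags) & set(db_tags)) > 0

def average_precision(relevance, R=100):
    """AP over the top-R retrieved images; `relevance` is 0/1 in rank order."""
    rel = np.asarray(relevance[:R], dtype=float)
    NR = rel.sum()
    if NR == 0:
        return 0.0
    precision_at_r = np.cumsum(rel) / np.arange(1, rel.size + 1)
    return float((precision_at_r * rel).sum() / NR)

def mean_average_precision(relevance_lists, R=100):
    """mAP averaged over all queries."""
    return float(np.mean([average_precision(r, R) for r in relevance_lists]))
```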
[|p[16mm]{}&lt;|p[8mm]{}&lt;|p[8mm]{}&lt;|p[8mm]{}&lt;|p[11mm]{}&lt;|]{} &\ & 16 & 32 & 64 & 128\ SKLSH & 0.01 & 0.01 & 0.01 & 0.01\ SPH & 0.11 & 0.13 & 0.21 & 0.54\ ITQ & 0.22 & 0.27 & 0.38 & 0.82\ SGH & 2.84 & 2.85 & 2.85 & 2.94\ DPLM & 1.43 & 1.31 & 1.42 & 1.40\ LSMH & 0.28 & 0.37 & 0.67 & 1.27\ CVH & 0.37 & 0.38 & 0.42 & 0.72\ CHMIS & 119.77 & 157.74 & 219.48 & 302.90\ IMH & 43.95 & 43.85 & 43.97 & 44.23\ LCMH & 5.23 & 5.14 & 5.00 & 5.20\ CMFH & 12.96 & 14.05 & 16.01 & 20.63\ DSTH & 12.46 & 16.55 & 19.17 & 20.58\ Comparison Results ------------------ Table \[resulttable\] presents the main mAP results. Code length on all datasets is varied in the range of $\{16, 32, 64, 128\}$. Figure \[fig\_nuswide\] reports *Precision-Scope* curve on **NUS-WIDE**. The search scope is ranged from 500 to 5000 with stepsize 500. The presented results clearly demonstrate that DSTH consistently outperforms the compared approaches on all datasets and hashing bits. On **Wiki** and **NUS-WIDE**, DSTH outperforms the second best performance by more than 2%. Among the competitors, SKLSH achieves the worst performance in most cases. This is because SKLSH is data-independent hashing which generates hash codes without integrating any semantics from retrieved images. In addition, it is interesting to find that cross-modal hashing may not obtain better performance than the uni-modal hashing approaches in many cases. This experimental phenomenon can be explained as follows: cross-modal hashing is specially developed for cross-modal retrieval task. Hence, discovering the shared semantics is the main design target. This design may be propitious to the target of cross-modal retrieval. However, the shared semantic space may loss valuable visual discriminative information, which is essential for the task of SCBIR. Therefore, the retrieval performance of cross-modal hashing methods on SCBIR may be impaired. We also investigate the training time of all evaluated approaches. This experiment is conducted on **NUS-WIDE** when hash code length is fixed to 128. The running time is recoded on a PC with Intel(R) Xeon(R) CPU E5-1650@3.60GHz and 64GB RAM. Table \[runtimetable\] presents the main results. We can find that the time consumption of DSTH is on the same order of magnitude as that of CMFH. The time cost of DSTH is acceptable. It is much faster than IMH and CHMIS. Effects of Discrete Optimization -------------------------------- Our approach can directly deal with the discrete constraint, bit-uncorrelation constraint and bit-balance constraint imposed on hash codes. To evaluate the effects of three constraints, we report the performance of DSTH respectively by relaxing the discrete constraint, removing bit-balance constraint and bit-uncorrelation constraint in the Eq.(\[eq:objective\]). We denote DSTH-I as the approach that relaxes discrete constraint. In this experiment, we adopt conventional relaxing+rounding optimization in many existing hashing approaches to solve the hash codes. Specifically, the relaxed hashing values are first solved with ALM, but the final binary hash codes are generated by mean thresholding. We also compare the performance of DSTH with the variant approach DSTH-II that removes bit-balance constraint, and the variant approach DSTH-III that removes bit-−uncorrelation constraint. Table \[discretop\] summarizes the comparison results. We can clearly observe that DSTH achieves superior performance in almost all cases. 
These results validate the effects of discrete optimization on direct semantic transfer and alleviating information loss. All three constraints contribute positively to the retrieval performance. Effects of Semantic Transfer ---------------------------- Our approach explicitly exploits the latent semantics involved in contextual modalities to enhance the discriminative capability of discrete hash codes. In this subsection, we conduct experiment to investigate the effects of semantic transfer on the overall performance of DSTH. To this end, we compare the performance of DSTH with that of the approach variant which only considers visual similarity preservation (the first part of DSTH). We denote this variant as DSTH-IV. Table \[discretop\] presents the comparison results. It shows that DSTH consistently outperforms the competitor on all code lengths and datasets. On **MIR Flickr** and **NUS-WIDE**, the largest performance increase can reach about 4% and 7% respectively. The performance increase is mainly attributed to the effective semantic enrichment of hash codes by semantic transfer. It also validates the fact that semantics in contextual modalities are indeed complementary with visual contents of images. In addition, we find from the table that the performance gap is different on different code lengths and datasets. This is attributed to the different effects of contextual modalities on enriching semantics of hash codes. Performance Variations with Training Size ----------------------------------------- In this experiment, we evaluate the impact of training size on DSTH performance. We fix the hash code length to 128 and report the performance on **NUS-WIDE**. Table \[trainingsize\] illustrates the performance variations with training size. We can easily observe that the performance of DSTH first increases with training data size and then becomes stable after certain point (training size 4.5K). Specifically, the gap between the performance obtained on 0.5K and that on 4.5K is 0.0235. DSTH can achieve satisfactory performance with a reasonably small training set. This experimental phenomenon illustrates that, even with small training data, DSTH can already effectively capture the valuable semantics to enhance the discriminative capability of image hash codes. It also validates the well training efficiency of DSTH when obtaining promising retrieval performance. In addition, it is interesting to find that, even with smaller training data, DSTH can achieve better performance than several compared approaches trained with more data. The reason of performance improvement is that the discovered semantics in contextual modalities can effectively mitigate the semantic shortage of shorter hash codes. These results also validate the effects of semantic transfer on enhancing the representation capability of hash codes. In addition, we report the training time variations with training size. Figure \[fig:trainsize\] illustrates the main results. We can easily observe that the training time increases linearly with training size. It validates the linear scalability of DSTH and demonstrates that it is suitable for large-scale datasets. Convergence Analysis -------------------- As analysis in Section \[sec:3\], at each iteration, the updating of variables will monotonically decreases towards the lower-bounded objective function in Eq.(\[eq:objective\]). Theoretically, the iterations will make the proposed discrete optimization method converge. 
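As a usage illustration only, a convergence curve of the kind discussed here can be produced in outline by recording the objective value after every ALM sweep. The loop below reuses the hypothetical `alm_iteration` and `dsth_objective` sketches given earlier; the random initialization is an assumption of the sketch (the orthogonality constraint is only enforced from the first $Z$ update onwards) and makes no claim about the authors' actual setup.

```python
import numpy as np

def train_dsth(X, Y, M, L, alpha, beta, mu=1e-2, rho=2.0, n_iter=30, seed=0):
    """Run ALM sweeps and record the objective of Eq. (objective) per iteration."""
    rng = np.random.default_rng(seed)
    d_x, N = X.shape
    d_y = Y.shape[0]
    Z = np.where(rng.standard_normal((L, N)) >= 0, 1.0, -1.0)   # rough init
    B = Z.copy()
    U = 0.01 * rng.standard_normal((d_x, L))
    W = 0.01 * rng.standard_normal((d_y, L))
    Ex, Ey, Ez = np.zeros_like(X), np.zeros_like(Y), np.zeros_like(Z)

    history = []
    for _ in range(n_iter):
        Z, U, W, B, Ex, Ey, Ez, mu = alm_iteration(
            X, Y, Z, U, W, B, Ex, Ey, Ez, M, alpha, beta, mu, rho)
        history.append(dsth_objective(X, Y, B, U, W, M, alpha, beta))
    return B, history   # history is expected to flatten after ~10 iterations
```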
In this subsection, we conduct experiment on **NUS-WIDE** with fixed hash code length 128 to verify this claim. Similar results can be obtained on other datasets and hash code lengths. Figure \[nuswide\_conv\] illustrates the main experimental results. We can clearly find that, on three datasets, the objective function value decreases sharply first and does not change significantly after several iterations (about 10). This result empirically validates that the convergence of DSTH can be achieved with augmented Lagrangian multiplier approach. Parameter Sensitivity Experiment -------------------------------- In this subsection, we conduct empirical experiments to validate the parameter sensitivity of DSTH. More specifically, we observe the performance variations of DSTH with $\alpha$, $\beta$, $\mu$ and $\eta$. $\alpha$, $\beta$, and $\mu$ are used in discrete optimization (Eq.(\[eq:tj\])), $\eta$ is used in hash function learning. They are all designed to play the trade-off between regularization terms and empirical loss. In experiment, we report the experimental results when these parameters are varied from $\{10^{-4}, 10^{-2}, 1, 10^2, 10^4\}$. For $\alpha$, $\beta$, and $\mu$, as they are equipped in the same equation, we observe the performance variations with respect to two parameters while fixing the remaining one parameter. For $\eta$, we observe the performance variations by fixing $\alpha$, $\beta$, and $\mu$. Figure \[wiki\_para\], \[mirflickr\_para\], and \[nuswide\_para\] demonstrate the main experimental results. From these figures, we can find that the performance is relatively stable to a wide range of parameter variations ($\alpha$, $\beta$, $\mu$). And it can achieve the best result when $\eta$ is set to a certain value. The best performance is achieved when parameters are set as: **Wiki**$\{\alpha=0.0001, \beta=10000, \mu=1, \eta=1\}$, **MIR Flickr**$\{\alpha=100, \beta=10000, \mu=0.01, \eta=100\}$, **NUS-WIDE**$\{\alpha=0.0001, \beta=100, \mu=0.0001, \eta=100\}$. Conclusions and Future Work {#sec:5} =========================== Because of the intrinsic limitation of image representation on characterizing high-level semantics, existing hashing methods for scalable content-based image retrieval inevitably suffer from semantic shortage. In this paper, we propose the *Discrete Semantic Transfer Hashing* (DSTH) to tackle the problem. It *directly* exploits abundant auxiliary contextual modalities to augment the semantics of discrete image hash codes. We formulate a unified hashing framework to simultaneously preserve visual similarities and perform semantic transfer. Moreover, to guarantee direct semantic transfer and avoid information loss, we explicitly impose the discrete constraint, bit-uncorrelation constraint and bit-balance constraint on hash codes. A novel and effective discrete optimization method with favorable convergence is developed to iteratively solve the optimization problem. The discrete hashing optimization has linear computation complexity and desirable scalability. Experiments on three benchmarks demonstrate the superior performance of DSTH compared with several state-of-the-art hashing methods. In the future, inspired by the recent success of unsupervised deep hashing [@deepbit], our work will be extended to learn a non-linear deep neural network based image hash function while resorting to the semantic augment from contextual modalities. Acknowledgment {#acknowledgment .unnumbered} ============== Heng Tao Shen is corresponding author. 
The authors would like to thank the anonymous reviewers for their constructive and helpful suggestions. [Lei Zhu]{} received the B.S. degree (2009) at Wuhan University of Technology, the Ph.D. degree (2015) at Huazhong University of Science and Technology. He is currently a full Professor with the School of Information Science and Engineering, Shandong Normal University, China. He was a Research Fellow under the supervision of Prof. Heng Tao Shen at the University of Queensland (2016-2017), and Dr. Jialie Shen at the Singapore Management University (2015-2016). His research interests are in the area of large-scale multimedia content analysis and retrieval. [Zi Huang]{} received the B.Sc. degree from Tsinghua University, Beijing, China, in 2001, and the Ph.D. degree in computer science from the University of Queensland, Brisbane, QLD, Australia, in 2004. She is a Senior Lecturer and ARC Future Fellow with the School of Information Technology and Electrical Engineering, University of Queensland. Her research interests include multimedia search, social media analysis, database, and information retrieval. She has authored or coauthored papers that have been published in leading conferences and journals, including ACM Multimedia, ACM SIGMOD, IEEE ICDE, the IEEE Transactions ON Multimedia, the IEEE Transactions on Knowledge and Data Engineering, the ACM Transactions on Information Systems, and ACM Computing Surveys. [Zhihui Li]{} received the B.S. degree from Beijing University of Posts and Telecommunications in 2008. She is currently working as a research assistant at the School of Computer Science and Technology in Shandong University. After her graduation, she has worked as a Data Analyst in Beijing Etrol Technologies Co., Ltd until December 2017. Her research interests include artificial intelligence, machine learning, and computer vision. [Liang Xie]{} received the B.S. degree from Wuhan University of Technology, China, in 2009, the Ph.D. degree from Huazhong University of Science and Technology, China, in 2015. He is currently an lecturer in the School of Science at Wuhan University of Technology. His current research interests include image semantic learning, cross-modal and multi-modal multimedia retrieval. [Heng Tao Shen]{} is currently a Professor of National “Thousand Talents Plan”, the Dean of School of Computer Science and Engineering, and the Director of Center for Future Media at the University of Electronic Science and Technology of China. He is also an Honorary Professor at the University of Queensland. He obtained his BSc with 1st class Honours and PhD from Department of Computer Science, National University of Singapore in 2000 and 2004 respectively. He then joined the University of Queensland as a Lecturer, Senior Lecturer, Reader, and became a Professor in late 2011. His research interests mainly include Multimedia Search, Computer Vision, Artificial Intelligence, and Big Data Management. He has published 200+ peer-reviewed papers, most of which appeared in top ranked publication venues, such as ACM Multimedia, CVPR, ICCV, AAAI, IJCAI, SIGMOD, VLDB, ICDE, TOIS, TIP, TPAMI, TKDE, VLDB Journal, etc. He has received 6 Best Paper Awards from international conferences, including the Best Paper Award from ACM Multimedia 2017 and Best Paper Award - Honorable Mention from ACM SIGIR 2017. He has served as a PC Co-Chair for ACM Multimedia 2015 and currently is an Associate Editor of IEEE Transactions on Knowledge and Data Engineering. [^1]: L. 
Zhu is with the School of Information Science and Engineering, Shandong Normal University, Jinan 250358, China. [^2]: Z. Huang are with the School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD 4072, Australia. [^3]: Z. Li is with School of Computer Science and Technology, Shandong University, Jinan 250101, China. [^4]: L. Xie is with School of Sciences, Wuhan University of Technology, No. 122 Luoshi Road, Hongshan District, Wuhan 430070, China. [^5]: H. T. Shen (Corresponding author) is with the School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China (e-mail: shenhengtao@hotmail.com). [^6]: DPLM can cope with bit-uncorrelation and bit-balance constraints. However, it simply transfers two constraints to the objective function and avoids to directly solve the problem. [^7]: http://www.svcl.ucsd.edu/projects/crossmodal/ [^8]: http://lear.inrialpes.fr/people/guillaumin/data.php [^9]: http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm [^10]: https://www.wikipedia.org/ [^11]: https://www.flickr.com/services/api/ [^12]: Multi-modal hashing methods [@CHMIS; @MFH; @MVLH] are not used for comparison because they need both images and texts as query. Their retrieval scenarios are different from SCBIR. [^13]: For cross-modal hashing approaches, as we evaluate the performance of SCBIR, we only use hash codes of images.
{ "pile_set_name": "ArXiv" }
--- abstract: | Neutron scattering has been used to investigate the magnetic correlations in the the spin ice material dysprosium titanate, Dy$_2$Ti$_2$O$_7$. An isotopically enriched sample was used to minimise neutron absorption. In zero field no magnetic order was observed down to 50 mK but the magnetic diffuse scattering was in qualitative agreement with that expected for the disordered low temperature state of dipolar spin ice. Application of a field of $\approx 0.8$T in the $[100]$ direction led to long range order. With the field applied in the $[1\bar{1}0]$ direction a coexistence of long range ferromagnetic and short range antiferromagnetic order was observed. This is attributed to the pinning of only half the spins by the field. The hysteresis loops in both field orientations displayed unusual steps and plateaus. 75.25.+z,75.40.Gb,75.50.Lk author: - 'T. Fennell' - 'O.A. Petrenko[^1]' - 'G. Balakrishnan' - 'S.T. Bramwell' - 'J.D.M. Champion' - 'B. F[å]{}k' - 'M.J. Harris' - 'D. McK. Paul' date: 'Received:16$^{th}$ July 2001 / Revised version: 16$^{th}$ July 2001' title: 'Field Induced Partial Order in the Spin-Ice Dysprosium Titanate.' --- Introduction {#intro} ============ Ho$_2$Ti$_2$O$_7$ and Dy$_2$Ti$_2$O$_7$ are regarded as good examples of spin-ice materials [@prl1; @newprl; @art]. The magnetic rare earth ions occupy a pyrochlore lattice, a cubic array of corner-linked tetrahedra. In the near neighbour spin ice model the lattice is occupied by $\langle 111 \rangle$ Ising spins that are coupled ferromagnetically to nearest neighbours [@jpcm]. The result is a disordered, macroscopically degenerate, ground state that is determined only by the rule that two spins must point into and two out of each tetrahedron. This “two in, two out” rule is analogous to the “ice rules” that control the proton arrangement in water ice, and so the near neighbour spin ice model maps exactly onto Pauling’s model of the proton disorder in water ice [@jpcm; @pauling]. The near neighbour spin ice model qualitatively describes much of the behaviour of the real materials, but for an accurate description the dipolar spin ice model has been developed [@byron]. In this model, the near neighbour ferromagnetic coupling between $\langle 111 \rangle$ spins is dipolar in origin and it has been shown that the long range part of the dipolar coupling maintains a spin-ice like disordered ground state even in the presence of an antiferromagnetic near neighbour super-exchange [@byron]. The dipolar spin ice model has been shown to give an accurate description of the zero field magnetic neutron scattering of Ho$_2$Ti$_2$O$_7$ [@newprl] in the spin ice regime ($T < 2$ K) as well as the specific heat of Ho$_2$Ti$_2$O$_7$ and Dy$_2$Ti$_2$O$_7$ throughout the low temperature range (50 mK - 20 K) [@newprl; @byron]. The dipolar coupling in Ho$_2$Ti$_2$O$_7$ and Dy$_2$Ti$_2$O$_7$ is of similar magnitude ($D_{nn}\approx2.35$K for both Dy and Ho) while the antiferromagnetic exchange is weak in both cases (-1.2K for Dy and -0.52K for Ho) [@newprl; @byron]. The main difference between these two materials appears to arise from the nuclear spins. Ho$_2$Ti$_2$O$_7$ has a single isotope ($^{165}$Ho) and strong hyperfine coupling that complicates the spin ice freezing process below $\approx0.8$ K [@newprl]. Dy$_2$Ti$_2$O$_7$, on the other hand, has much weaker hyperfine coupling that does not affect the dominant spin ice behaviour in the temperature range of interest. It might therefore be considered the simpler spin ice material. 
However, the natural mixture of Dy isotopes leads to significant neutron absorption (994 barn for 2200ms$^{-1}$ neutrons). Thus, although neutron scattering should be the ideal microscopic probe of the spin correlations and dynamics of this system, the most significant experimental observations so far are bulk measurements [@art]. To minimise the absorption problem we have prepared a single crystal of $^{162}$Dy$_2$Ti$_2$O$_7$. In this paper we present the initial results of our neutron scattering study of this sample, in both zero and applied magnetic field. Experimental {#exp} ============ The absorption cross section of natural Dysprosium is $\sigma_a = 994$ barn, the large value being due mainly to the presence of $^{164}$Dy. Isotopically enriched Dy$_2$O$_3$ with composition $^{156}$Dy$ < 0.01 \%$, $^{158}$Dy$ < 0.01 \%$,$^{160}$Dy$ = 0.02 \%$,\ $^{161}$Dy$ = 0.47 \%$, $^{162}$Dy$= 96.8 \%$,$^{163}$Dy$ = 2.21 \%$ and $^{164}$Dy$\\ = 0.5 \%$ was prepared by Goss Scientific Instruments Ltd. Enrichment in $^{162}$Dy was considered optimum as it has a high abundance and the dominant impurity is $^{163}$Dy which has a slightly lower absorption cross section. Enrichment in $^{163}$Dy would have lead to $^{164}$Dy as the dominant impurity. Through enrichment, the absorption cross section was reduced by a factor of five to $208 \pm 13$ barn. A single crystal of Dy$_2$Ti$_2$O$_7$ was prepared from the isotopically enriched Dy$_2$O$_3$ and TiO$_2$ by the floating zone technique [@don]. Neutron scattering was carried out at the ISIS facility on the indirect geometry spectrometer PRISMA. It was configured in the diffraction mode so that sixteen $^3$He tube detectors are used simultaneously. Rotation of the crystal allows a rapid mapping of a large section of reciprocal space, making PRISMA ideal for observing magnetic diffuse scattering. In the first experiment the crystal was cooled by a $^3$He sorption refrigerator for measurements in zero applied magnetic field. It was aligned with $[1\bar{1}0]$ vertical, such that the scattering plane contained $(hhl)$ wavevectors. In the second experiment an Oxford instruments 7T vertical field cryomagnet, with dilution refrigerator insert, was used. For the applied field measurements two field orientations were studied: $[1\bar{1}0]$ as above and $[100]$ corresponding to a $(0kl)$ scattering plane. The data were normalised to monitor counts and vanadium to remove the characteristic flux profile of the spallation source. Since the sample is still significantly absorbing a correction for this was also applied. Results {#res} ======= With $[100]$ vertical, a map of reciprocal space was made at the base temperature ($\approx 70$ mK) in zero field. The diffuse scattering maxima observed agree with those predicted in the $(hhl)$ scattering plane for the dipolar spin ice model. For example there is a diffuse feature at $3,0,0$ in $(hk0)$ and $0,0,3$ in $(hhl)$ [@newprl]. On application of a field the diffuse scattering disappeared and was replaced by magnetic Bragg peaks at the $Q = 0$ positions. A field of $\approx0.7$T was sufficient to saturate these peaks. This behaviour is readily explained as the applied field breaks the degeneracy of the six “two in, two out” spin configurations of the elementary tetrahedron. 
The pyrochlore lattice can be described as a face centred cubic lattice with a tetrahedral basis, and the degeneracy breaking means that every tetrahedron adopts the same “two in, two out” state with a net moment in the direction of the applied field, $[100]$. This non-collinear ferromagnetic structure allows the observed magnetic Bragg peaks at the $Q = 0$ positions such as $2,0,0$. Interestingly the experimental magnetization did not develop smoothly, but in a series of steps. Hysteresis was observed on cycling the field. ![Diffuse scattering in the $(hhl)$ plane. In zero field at 270mK no magnetic Bragg peaks are observed. All resolution limited intense features are of nuclear origin.[]{data-label="add"}](icnsaddoj.eps){width="46.00000%"} ![Scattering in the $(hhl)$ plane with a field of 1.5T applied on $[1\bar{1}0]$ at $\approx 60$mK. Magnetic Bragg peaks have appeared at positions such as 2,0,0 and the diffuse features observed in zero field at positions such as 0,0,3 have sharpened into features elongated on $[00l]$.[]{data-label="tiger"}](dark2oj.eps){width="46.00000%"} ![Integrated intensity of the 0,0,2 magnetic Bragg peak and the 0,0,3 diffuse feature (scaled$\times 3$ for this plot) as the field is scanned at $\approx 70$mK.[]{data-label="loop"}](icnsxm.eps){width="46.00000%"} The $[1\bar{1}0]$ direction is a hard direction of magnetization [@Cornelius]. In zero field, the magnetic scattering was observed to be diffuse, and characteristic of the disordered low temperature state of dipolar spin ice [@newprl], see Figure \[add\]. Application of a small field ($\approx 0.1$ T) in this direction again caused the appearance of the $Q = 0$ Bragg peaks; however, unlike the $[100]$ direction, the diffuse scattering features did not disappear. As the field was raised up to $1.5$T the diffuse scattering sharpened around the $Q = X$ positions such as $0,0,1$ without becoming resolution limited. This is seen clearly in figure \[tiger\]. Similar features have been observed in the neutron scattering of Ho$_2$Ti$_2$O$_7$ [@prl1; @dten], adding strength to the idea that the true ground state of these systems is a $Q = X$ structure that is dynamically inhibited from being accessed on experimental timescales [@Michel]. Again the magnetization versus field curve was observed to have several sharp steps and plateaus (Figure \[loop\]). The formation of the $Q = X$ structure is consistent with the spin ice rules [@prl1]. In this field orientation, assuming perfect $\langle 111\rangle$ spins, only two of the spins of the tetrahedral basis have a component along the field direction. These form “in-out” $[1\bar{1}0]$ chains parallel to the field forcing the remaining two spins per tetrahedra into “in-out” $[110]$ chains perpendicular to the field, as illustrated in Figure \[xstr\]. The perpendicular chains are not coupled by the spin ice rules which, in the absence of any further neighbour coupling, would lead to two dimensional $[110]$ Bragg sheets of scattering extended along $[00l]$ in the scattering plane. The diffuse features in the experimental pattern are indeed extended along this direction, but the sharp build up of intensity around the $Q = X$ points indicates a strong tendency to prefer $Q = X$ short range ordering of the perpendicular rods (see Figure \[tiger\]). 
Neglecting interference between the scattering from the two spin sets (perpendicular and parallel to the field) one arrives at the conclusion that the $Q = 0$ Bragg scattering arises from the parallel rods and the $Q = X$ diffuse scattering arises from the perpendicular rods. However, it may be a crude approximation to separate the scattering in this way. In conclusion, we have obtained accurate neutron scattering data for Dy$_2$Ti$_2$O$_7$ that is in qualitative agreement with theoretical expectations for a spin ice material [@prl1; @newprl; @jpcm; @byron]. It is of interest that, even in a relatively strong field along $[1\bar{1}0]$, the system remains only partially ordered. It is also noteworthy that the magnetic hysteresis loop shows several steps and plateaus. An understanding of these effects awaits a detailed study of the static and dynamic properties of the dipolar spin ice model [@byron] in an applied magnetic field. ![A fragment of the pyrochlore lattice illustrating the $Q=X$ structure. The applied field is projected into the plane of the page and pins the unshaded spins into ferromagnetic chains. The shaded spins are not coupled to the field but are governed by the “ice rules”. The shaded spins may form two structures, either $Q=0$ with all shaded spin chains parallel or $Q=X$ with shaded spin chains antiparallel. This degeneracy is not raised by the field.[]{data-label="xstr"}](icnsqx3.eps){width="46.00000%"} Acknowledgements {#ack} ================ We acknowledge the EPSRC for funding of beam time and studentships (TF and JDMC). M. J. Harris et al, Phys. Rev. Lett., [**7**9]{}, 2554 (1997). S. T. Bramwell et al, cond-mat/0101114, to be published in Phys. Rev. Lett. A. P. Ramirez et al, Nature, [**399**]{}, 333 (1999). S. T. Bramwell and M. J. Harris, J. Phys.: Condens. Matter, [**10**]{}, L215 (1998). L. Pauling, J. Am. Chem. Soc.,[**57**]{}, 2680 (1935). B. C. den Hertog, M. J. P. Gingras, Phys. Rev. Lett., [**84**]{}, 3430-3433 (2000). T. Fennell et al, unpublished data. G. Balakrishnan et al, J. Phys:Condens. Matter, [**10**]{}, L723 (1998). A. L. Cornelius, J. S. Gardner, cond-mat/0105362. R. G. Melko et al, cond-mat/0009225. [^1]: *Present Address: Department of Physics, University of Warwick, Coventry, CV4 7AL.*
{ "pile_set_name": "ArXiv" }
--- abstract: | We present soft X-ray spectra of 74 BL Lacertae objects observed with the [ PSPC]{} detector on board of the [ ROSAT]{} satellite. The sample contains all BL Lac objects detected during the pointed observation phase as a target or serendipitously. We have investigated the soft X-ray and broad band spectral properties and discuss the consequences for the X-ray emission processes. For the first time a clear dependence of the X-ray spectral steepness on the radio to X-ray spectral energy distribution is found: and are [*correlated*]{} in the X-ray selected (XBL) subsample and [*anticorrelated*]{} in the radio selected (RBL) subsample. The objects with intermediate values thus do have the steepest soft X-ray spectra. Simulated [ PSPC]{} spectra based on a set of simple two component multifrequency spectra are in good agreement with the measurements and suggest a broad range of synchrotron cutoff energies. We have calculated synchrotron self-Compton beaming factors for a subsample of radio bright objects and find a correlation of the beaming factors with and . The most extreme RBL objects are very similar to flat spectrum radio quasars in all their broad band and X-ray properties. author: - 'G. Lamer' - 'H. Brunner' - 'R. Staubert' date: 'Received ; accepted ' title: ROSAT Observations of BL Lacertae Objects --- psfig.tex Introduction ============ BL Lac objects are active galactic nuclei which by definition do not have strong emission lines, are highly variable, and show strong polarization in the radio to optical emission. Commonly the properties of BL Lacs are explained by the concept that the emission in all spectral bands is dominated by relativistic jets. Relativistic electrons emit synchrotron radiation and may scatter up either synchrotron photons (synchrotron self-Compton emission) or photons from other regions (e.g. from an accretion disc) to higher energies. Originally most BL Lacs have been found as counterparts of flat spectrum radio sources (radio selected BL Lacs, RBLs), but an increasing number of less radio bright objects were discovered by the identification of X-ray sources (XBLs). Due to the lack of spectral features in the optical no complete optically selected samples exist. The total number of BL Lac objects in the catalogue of ([@veron]) is less than 200. A new catalogue consisting of 233 sources is being published by Padovani & Giommi ([@PG95b]). The ongoing search for new BL Lacs from [ ROSAT]{} sources (Kock et al. [@kock], Nass et al. [@nass]) will significantly increase the number of known objects. Recently BL Lacs have gained great interest due to the detection of several objects in high energy $\gamma$-rays by EGRET on the Compton Gamma Ray Observatory (von Montigny et al. [@montigny]). There is some evidence that RBLs and XBLs form distinct subclasses as they show a bimodal distribution in the plane of the broad band spectral indices versus (e.g. Giommi et al. [@giommi]). Regardless of the discovery waveband we will use these terms for a distinction of the subclasses based on the spectral energy distribution: RBL is used for [*radio*]{} bright objects having $>$0.75 and XBL for [*X-ray*]{} bright objects with $<$0.75. There have been several attempts to explain the physical differences of the XBL and RBL objects and their relation to flat spectrum radio quasars ([ FSRQs]{}). The concept that parts of the continuum emission from radio loud AGN arises from jets of radiogalaxies more or less aligned with the line of sight is widely accepted. 
Based on the study of number count relations and luminosity functions several authors (e.g. Padovani & Urry [@PU]) proposed that BL Lacs are the beamed subpopulation of FR I galaxies with RBLs having higher beaming factors than XBLs. Ghisellini & Maraschi ([@GM]) discussed an “accelerating jet” model with lower bulk Lorentz factors $\Gamma$ in the X-ray emitting regions resulting in broader beaming cones for the X-ray emission and narrow radio cones. Celotti et al. ([@celotti]) developed a “wide jet” model with geometrically wider opening angles in the inner, X-ray emitting, parts of the jet. Assuming that RBLs have smaller viewing angles than XBLs, both models are able to explain both the relative numbers and different spectral energy distributions of XBLs and RBLs with an intrinsically uniform population of objects. Maraschi & Rovetti ([@maraschi]) have extended these considerations on [ FSRQs]{} and propose that all radio loud AGN only essentially differ in viewing angle and intrinsic power of the central engine. An alternative approach to explain the differences between XBLs and RBLs was made by Padovani & Giommi ([@PG95a]) with a “different energy cutoff” hypothesis. They argue that both types form a uniform class of objects spanning a wide range in the intrinsic energy distribution caused by different cutoff frequencies of the synchrotron component. The X-ray spectra of RBLs in average were found to be significantly steeper than the spectra of (higher redshifted) [ FSRQs]{} in [*Einstein*]{} IPC (Worrall & Wilkes [@ww]) and [ ROSAT]{} [ PSPC]{} (Brunner et al. [@brunner]) investigations. Furthermore, the X-ray spectral indices of BL Lacs showed a broad distribution in both investigations. The mean X-ray spectra of XBLs and RBLs were not found to be significantly different in [*Einstein*]{} IPC (Worrall & Wilkes [@ww]), EXOSAT ME + LE (Sambruna et al. [@sambruna]), and [ ROSAT]{} [ PSPC]{} (Lamer et al. [@lamer]) observations. Ciliegi et al. ([@ciliegi]) found significant steepening of the mean spectrum of XBLs between the soft (0.2–4 keV) and medium (2–10 keV) energy X-ray band. The [ ROSAT]{} data archive presently comprises the largest X-ray database for BL Lac objects collected by a single instrument. In this paper we present the analysis of X-ray and broad band spectra of 74 BL Lac objects with the X-ray data obtained from pointed [ ROSAT]{} observations. We find a strong interdependence of the X-ray spectral index with the radio to X-ray energy index which we interprete as the signature of two spectral components intersecting each other at different frequencies. The sample ========== Due to the difficulties in the identification and classification of BL Lacs only relatively few objects form complete flux limited samples. Nearly complete samples were selected from radio sources (1 Jy sample, Stickel et al. [@stickel]; 34 objects) and with limited sky coverage from soft X-ray sources (EMSS sample, Morris et al. [@morris]; 22 objects). Our sample comprises all BL Lacs listed in the catalogue of Véron-Cetty & Véron ([@veron]) and of which [ ROSAT]{} [ PSPC]{} observations exist in the archives. Only sources which had been detected with more than 50 net counts have been analysed: 74 objects in total. In case of multiple [ ROSAT]{} observations of an object the longest observation available at the time of analysis has been selected. Objects in the catalogue which have meanwhile been classified as quasars or as radio galaxies have not been included (e.g. Stickel et al. [@stickel2]). 
The resulting coverage of various complete samples of BL Lac objects is listed in Table \[samples\]. According to their radio to X-ray energy distribution 40 objects have been assigned to the XBL subsample, 34 objects to the RBL subsample (see Sect. 3). [lll]{} sample & Reference & observed (total)\ 1 Jy (5 GHz) & Stickel et al. ([@stickel]) & 29 (34)\ EMSS & Morris et al. ([@morris]) & 19 (22)\ S5 ($\delta>70^{\circ})$&Eckart et al. ([@eckart])& 5 (5)\ Throughout this paper the catalogue designations according to ([@veron]) are used. ROSAT observations and data analysis ==================================== Archival data were taken from the [ ROSAT]{} data archives at MPE (Garching) and at GSFC (Greenbelt). Both the author’s proprietary data and archival data were reduced in the same way using the EXSAS software (Zimmermann et al. [@zimm]). Table \[obs\] list the objects, [ ROSAT]{} observation request (ROR) numbers, and dates of observations which have been analysed. The source photons were extracted within a circle of radius 100”–200” (depending on the signal to noise ratio) and the background determined in an annulus of radii 250” and 500”. We produced spectra in the energy range 0.1–2.4 keV of all objects by binning according to pulse height amplitude, yielding a SNR per spectral bin ranging from 4 for the weakest sources and 50 for the strongest. All spectra were background subtracted and corrected for telescope vignetting, dead time losses, and incomplete extraction of source photons with respect to the point spread function. The pulse height spectra were then fitted by power law spectra combined with the absorption model of Morrison and McCammon ([@mmcc] ). For sources with more than 250 detected counts both fits with an absorbing column density fixed to the galactic value (Stark et al. [@stark], Elvis et al. [@elvis]) and free were performed. For the weaker sources only fits with fixed were obtained. In general the latest version of the [ PSPC]{} detector response matrix (nr. 36) has been used for the fits. Except for observations carried out before fall 1991, when the gain setting of the [ PSPC]{} was different, an earlier version (nr. 6) was used. We combined the [ ROSAT]{} measurements with noncontemperaneous flux measurements at 5 GHz and in the optical V band (both taken from ) in order to calculate the broad band spectral indices , , and . Sources with $> 0.75$ were assigned to the RBL subsample and sources with $<0.75$ to the XBL subsample. Power law spectral indices $\alpha$ are given as energy indices ($f_{\nu}\propto \nu^{-\alpha}$) throughout the paper. We used a maximum likelihood (ML) method to deconvolve the measurement errors and the intrinsic distribution of the X-ray spectral indices and other measured parameters when calculating mean values and their errors (see Worrall [@worrall] for a description of the method and Brunner et al. [@brunner] as a recent application). Assuming that both the intrinsic distribution of a parameter $p$ and the distribution of measurement errors are Gaussian, confidence contours of the mean $\langle p \rangle$ and width $\sigma_{\rm G}$ of the intrinsic distribution of $p$ can be calculated. Results of spectral analysis ============================ The distribution of (Fig. \[arx\]) is double peaked with a gap at $=0.6$–0.8. As can been seen from Fig. \[arx\], all EMSS objects exhibit an XBL energy distribution. Two objects of the 1 Jy sample have $<0.75$, they therefore belong to the XBL sample. 
This confirms, based on homogeneus X-ray data, the relations given by Padovani & Giommi ([@PG95a]). Figure \[ro\] shows the interdependence of the calculated broad band spectral indices $\alpha_{\rm ox}$ and $\alpha_{\rm ro}$. Note that the gap near = 0.75 is dominated by objects which do neither belong to the EMSS sample nor to the 1 Jy sample. This indicates that the gap is due to selection effects and may be filled in the future when radio or X-ray selected objects at lower flux limits will be identified. Single power law spectra with photoelectric absorption due to the interstellar medium in general yielded acceptable fits if the absorbing column density was left free to vary. The results of the spectral fits with free and the broad band spectral indices are given in Table 3. If a source was detected with less than 250 counts, the entry $N_{\rm H,fit}$ is omitted and the results with fixed are given. In order to investigate whether deviations of the resulting values from galactic HI radio measurements are significant we calculated the difference $\Delta$for each object. The error of the difference was calculated by quadratic addition of the X-ray and radio measurement errors using $10^{20}$cm$^{-2}$ for the Stark et al. ([@stark]) values and $10^{19}$cm$^{-2}$ for the Elvis et al. ([@elvis]) values. A maximum likelihood analysis of the results yields a mean $N_{\rm H}$-excess of $(0.48\pm0.23)\cdot 10^{20}{\rm cm}^{-2}$ in the XBL sample. In the RBL sample the individual errors of the fitted values are generally large and therefore no statement about deviations from galactic can be made. Note that due to poor energy resolution in the soft band of the [ PSPC]{} an apparent excess of may also be caused by a steepening of the intrinsic spectrum. As for the fainter objects the statistical error of the measured is of the same order as the expected variations within the sample, we used the maximum likelihood method to deconvolve the measurement errors and the intrinsic distribution of the X-ray spectral indices. We find $ = 1.30\pm0.13$, $\sigma_{\rm{G}} = 0.32 \pm 0.12$ for the RBL sample (34 objects) and $ = 1.40\pm0.09$, $\sigma_{\rm{G}} = 0.32 \pm 0.07$ for the XBL sample (40 objects). Although no significant difference can be found between the mean spectra of RBLs and XBLs, there is a dependence of the spectral index on the characterising parameter . As can be read from Fig. \[banana\], objects with extreme values of on both sides tend to have flatter X-ray spectra than intermediate objects. For the RBLs ($ > 0.75$) a Spearman Rank test yields an [*anti*]{}correlation of and with 99.9% probability. For the XBLs ($< 0.75$) the significance of the positive correlation is 98%. The steepest X-ray spectra are therefore found for objects which fall into the gap between XBLs and RBLs ($0.6<$$<0.8$) for which X-ray spectral indices up to $\sim$2.0 are measured. In a previous paper (Brunner et al. [@brunner]) we stated that the mean X-ray spectral index $\langle$$\rangle$ of RBLs is similar to the mean optical to X-ray spectral slope $\langle$$\rangle$, whereas the mean X-ray spectrum of [ FSRQs]{} is significantly flatter than their optical to X-ray broad band spectrum. However, a relatively large dispersion of the differences $-$was found within the RBL sample. In the larger sample presented here a dependence of $-$on is visible (Fig. \[ox\_x\]). XBLs show a steepening of the spectrum ($-$$<0$), whereas RBLs can exhibit both steepening or flattening, depending on . 
This explains the mean $\langle$$-$$\rangle$ being zero with a large dispersion of the individual values in an RBL sample. The extreme RBLs ($\sim 0.9$) exhibit spectral flattening of the same amount as [ FSRQs]{} ($-$$=0.6$, Brunner et al. [@brunner]). The spectral results are available as a more detailed version of Tab. 3 via WWW (http://astro.uni-tuebingen.de/prepre/), where also an electronic version of this paper can be found. Simulated spectra ================= The dependence of the X-ray spectral index on the radio to X-ray broad band index can be explained qualitatively by a two component spectrum as resulting from synchrotron self-Compton jet models (e.g. Königl [@koenigl], Ghisellini et al. [@GMT]). The correlated variations in and are then caused by a varying high frequency cutoff of the synchrotron component. In order to test this hypothesis we simulated [ ROSAT]{} [ PSPC]{} spectra with a simple two component spectrum as a sum of a parabolically steepening soft component and power law hard component (see Fig. \[simul\]): $$C_{\rm S}(\nu) = 1 \quad \quad \mbox{ for }\; \nu<\nu_1$$ $${\rm log}_{10}C_{\rm S}(\nu) = \left(\frac{{\rm log}_{10}\nu_1-{\rm log}_{10}\nu} {{\rm log}_{10}a}\right)^{\eta} \quad \mbox{ for } \; \nu>\nu_1$$ $$C_{\rm H}(\nu) = N \cdot \nu^{-\alpha_{\rm H}}$$ Below $\nu_1$ the soft component mimics the flat radio spectrum which BL Lacs have in common; above $\nu_1$ is a parabola in the ${\rm log}\; \nu$ – ${\rm log}\; f_{\nu}$ plane. The normalization $N$ of the hard component was chosen so that (1keV) and (5 GHz) result in a given $\alpha_{\rm rx,max}$. The extent of the soft component can be varied with the parameter $a$. Figure \[simul\] shows the set of calculated spectra using the parameters from Tab. \[para\]. [cccccc]{} $\nu_1$ & $\eta$ & $\alpha_{\rm H}$ & $\alpha_{\rm rx,max}$ & &\ $ 5\cdot10^{10}$& 2.00 & 0.70 & 0.90 & &\ & & & &\ \ 100 & 200 &300 & 400 & 500 & 800\ 1000 & 2000 & 3000 & 4000 & 6000 &\ From the resulting spectra corresponding [ ROSAT]{} [ PSPC]{} pulse height spectra were determined by applying a galactic absorption model (Morrison & McCammon [@mmcc]) with $N_{\rm H} = 3\cdot10^{20}{\rm cm}^{-2}$ and folding the spectra with the [ PSPC]{} efficiency and detector response matrix. The resulting pulse height spectra were fitted with an absorbed power law model in the same way as the BL Lac spectra. Broad band spectral indices , , and were determined from the flux values at 5 GHz, 5517Å, and 1 keV of each spectrum. The locations of simulated and observed spectra in the – plane are plotted in Fig. \[comp\]. Mean values of with $1\sigma$ errors have been determined in each interval using maximum likelihood contours. We find that the two component model is able to reproduce the measured interdependence of and . As curvature of the incident photon spectra is able to cause deviations of the values resulting from single power law fits, we also compared the resulting in the simulated and measured spectra. The deviations in as derived from the simulations are small ($< 6 \cdot 10^{19}{\rm cm}^{-2}$) and thus are hard to detect in individual spectra. Averaging the $(N_{\rm H}-N_{\rm H,gal})$ values in bins of using the ML method results in an overall excess of the measured values (Fig. \[nh\]) over the simulations. Possibly this excess is caused by intrinsic absorption in individual sources. 
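The incident model spectra defined above are straightforward to evaluate. The following sketch is illustrative only: the default parameters are taken from Tab. \[para\], the sign of the exponent is our assumption about the intended convention (chosen so that the soft component steepens, i.e. declines, above $\nu_1$), and the folding through the PSPC response is not included:

```python
import numpy as np

def soft_component(nu, nu1=5e10, a=1000.0, eta=2.0):
    """Flat below nu1; above nu1 a parabola in the log nu - log f_nu plane."""
    nu = np.asarray(nu, dtype=float)
    curve = (np.abs(np.log10(nu / nu1)) / np.log10(a)) ** eta
    return np.where(nu < nu1, 1.0, 10.0 ** (-curve))      # minus sign: cutoff, not rise

def hard_component(nu, norm, alpha_h=0.7):
    """Power law hard component, f_nu proportional to nu^(-alpha_H)."""
    return norm * np.asarray(nu, dtype=float) ** (-alpha_h)

# Normalise the hard component so that its 5 GHz to 1 keV slope equals
# alpha_rx_max = 0.9, the limiting value of the simulated spectra.
nu_5ghz, nu_1kev = 5.0e9, 2.42e17                          # Hz
f_1kev = soft_component(nu_5ghz) * (nu_1kev / nu_5ghz) ** (-0.9)
norm = f_1kev / nu_1kev ** (-0.7)

nu = np.logspace(9.0, 18.5, 600)
model = soft_component(nu) + hard_component(nu, norm)      # one member of the set in Fig. [simul]
```

Varying $a$ over the tabulated values moves the intersection of the two components across the soft X-ray band and thus shifts the resulting $\alpha_{\rm rx}$, which is the behaviour exploited in the comparison with the data.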
The simulations are able to reproduce the measured dependence of the change in the spectral slope between the optical to X-ray broad band spectrum and the X-ray spectrum, ($-$) on (Fig. \[ox\_x\]). Note that the $\alpha_{\rm rx}$ values of the simulated spectra cannot exceed $\alpha_{\rm rx,max}$, which was set to 0.9. Therefore with this parameter set the model does not cover objects with higher (Figs. \[banana\] – \[ox\_x\]). However, the number of objects with significantly exceeding 0.9 is small (Fig. \[banana\]). In order to test whether the two component models do fit the whole radio to X-ray continuum, the broad band spectral indices $\alpha_{\rm ox}$ and $\alpha_{\rm ro}$ of simulated and object spectra were compared (Fig. \[ro\]). The path of the model spectra with varying parameter $a$ in the $\alpha_{\rm ox}$ – $\alpha_{\rm ro}$ plane reasonably well reproduces the distribution of the measured broad band spectra. Inverse Compton beaming factors =============================== As inverse Compton beaming factors are often used to estimate the viewing angles of radio sources, we investigated the dependency of on and in our sample. The Doppler beaming factor $\delta$ in a relativistic jet can be estimated by the condition that the inverse Compton flux from synchrotron self-Compton models must not exceed the observed X-ray flux. A method for the calculation of from radio brightness temperature and X-ray flux was given by Marscher ([@marscher]): $$\delta_{\rm IC}= f(\alpha)F_{\rm m}\left(\frac {{\rm ln}(\nu_{\rm b}/\nu_{\rm m})} {F_{\rm x}\theta_{\rm d}^{6+4\alpha} \nu_{\rm x}^\alpha\nu_{\rm m}^{5+3\alpha}}\right)^{1/(4+2\alpha)}\cdot (1+z)$$ $F_{\rm m}\;$\[Jy\]: Synchrotron flux at $\nu_{\rm m}$\[GHz\]\ $F_{\rm x}\;$\[Jy\]: X-ray flux at $\nu_{\rm x}$\[keV\]\ $\theta_{\rm d}\;$\[mas\]: VLBI core size\ $\nu_{\rm b}\;$\[Hz\]: Synchrotron high frequency cutoff\ $\alpha$: Optically thin synchrotron spectral index\ $f(\alpha)\simeq 0.08\alpha + 0.14$ (Ghisellini et al. [@ghisellini]) We calculated Doppler factors from the compilation of VLBI data by Ghisellini et al. ([@ghisellini]) and the [ ROSAT]{} [ PSPC]{} fluxes at 1 keV; $\alpha=0.75$ and $\nu_{\rm b}=10^{14}$Hz was assumed. The results are given in Table 3. The distributions of the objects in the – and – planes in Fig. \[delta\] show a strong correlation of with . As the subsample with available VLBI data contains predominantly RBLs, which show an anticorrelation of and , and are also anticorrelated. Looking at Eq. (1) two reasons may be responsible for the correlation of and : 1\. The values of must be considered as lower limits, as direct synchrotron emission may contribute to or even dominate the X-ray flux and thus the inverse Compton flux $F_{\rm IC}$ is overestimated. In this case will be underestimated by the factor $$\delta_{\rm true}/\delta_{{\rm IC}}=(F_{\rm x}/F_{\rm IC})^{-1/5.5}$$ Assuming that the diversity in $$\alpha_{\rm rx}=({\rm log}_{10}(F_{\rm 5GHz}) -{\rm log}_{10}(F_{\rm 1keV}))/7.68$$ is caused by a more or less energetic synchrotron component, $$\delta_{\rm true}/\delta_{\rm IC} = 10^{1.40\cdot\alpha_{\rm rx}}$$ results. This function is indicated in Fig. \[delta\] a) as solid line. 2\. Both and depend on the radio flux of the source. The total fluxes $F_{\rm 5GHz}$ determining and the VLBI core fluxes $F_m$ strongly correlate. 
Varying the core flux $F_{\rm m}$ and $F_{\rm 5GHz}$ by the same factor while leaving $F_{\rm x}$ constant results in $$\delta_{\rm IC} \propto 10^{7.68\cdot \alpha_{\rm rx}}$$ as indicated by the dashed line in Fig. \[delta\] a). Figure \[delta\] shows that the first possibility, overestimation of the Compton flux, cannot fully account for the measured correlation and viewing angle effects cannot be ruled out to cause the differences in . Ghisellini et al. (1993) noted that BL Lacs show a broader distribution of than flat spectrum radio quasars ([ FSRQs]{}) with a tail towards low values of . It is apparent from Fig. \[delta\] that one group of objects has properties very similar to flat spectrum radio quasars: flat X-ray spectra, $\sim 0.9$, and $\delta_{\rm IC}=1..10$. The corresponding properties of [ FSRQs]{} are: $\langle\alpha_{\rm x}\rangle = 0.59$, $\langle\alpha_{\rm rx}\rangle = 0.88$, (Brunner et al. [@brunner] ), $\delta_{\rm IC}=1..10$ (Ghisellini et al. [@ghisellini]). The remaining RBL objects have steeper X-ray spectra, lower  and lower . EGRET detected objects ====================== 8 objects of the sample have been detected in high energy $\gamma$-rays ($>$100 MeV) by EGRET on the Compton Gamma Ray Observatory: AO 0235+16, PKS 0537-44, S5 0716+71, S4 0954+65, MARK 421, ON 231, PKS 2005-48 (4–5$\sigma$), and PKS 2155-30 (von Montigny et al. [@montigny], Vestrand et al. [@vest]). The EGRET objects are indicated by filled symbols in Figs. \[banana\] and \[delta\]. All these objects except the low distance XBLs MARK 421 and PKS 2155-304 have $>$ 0.7. As can be read from Fig. \[banana\], the EGRET sources show the same – dependence as the remaining objects. Note that not only “FSRQ-like” BL Lac objects have been detected by EGRET, but also objects with steep X-ray spectra and moderate . Comparison with other ROSAT studies =================================== During the refereeing process of this paper we learned that a number of [ ROSAT]{} studies using different BL Lac samples are going to be published. In this section we will briefly discuss their results in comparison with ours. Based on an analysis of 12 objects from the 1 Jy sample Comastri et al. ([@comastri]) show that the more radio bright objects ($\alpha_{\rm rx} > 0.75$) in their radio selected sample on average have flatter X-ray spectra than the more X-ray bright ($\alpha_{\rm rx} < 0.75$) ones. As their sample covers only the radio bright part of the distribution, this finding fits well to our overall picture of the spectrum of BL Lacs, where objects of intermediate do have the steepest X-ray spectra. A detailed investigation of the [ ROSAT]{} observations of the 1 Jy sample has been undertaken by Urry et al. ([@urry]). The above authors both interprete steep X-ray spectra of BL Lacs as a sign for synchrotron emission, while flat X-ray spectra should be dominated by self-Compton emission. Perlman et al. ([@perlman]) performed a [ ROSAT]{} investigation of the EMSS XBL sample and found a distribution of X-ray spectra similar to the 1 Jy sample. By considering the whole sample of BL Lac objects we are able to verify the view, that BL Lacs except the extreme RBLs are dominated by synchrotron emission. We show that the extreme XBLs have flat X-ray spectra caused by synchrotron spectra with cutoff energies beyond the soft X-ray band. Our spectral simulations yield a good measure for the synchrotron cutoff energy and show that the range of cutoff energies is large. 
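As a reproducibility aside, the inverse Compton Doppler factors used above can be evaluated directly from the quantities listed with the Marscher formula. The sketch below transcribes that expression literally, keeps the mixed units as listed ($F_{\rm m}$ in Jy at $\nu_{\rm m}$ in GHz, $F_{\rm x}$ in Jy at $\nu_{\rm x}$ in keV, $\theta_{\rm d}$ in mas), and converts $\nu_{\rm m}$ to Hz inside the logarithm, which is our assumption:

```python
import math

def doppler_factor_ic(f_m_jy, nu_m_ghz, theta_d_mas, f_x_jy, nu_x_kev, z,
                      alpha=0.75, nu_b_hz=1e14):
    """Inverse Compton Doppler factor delta_IC after Marscher (1987)."""
    f_alpha = 0.08 * alpha + 0.14                       # Ghisellini et al. approximation
    log_term = math.log(nu_b_hz / (nu_m_ghz * 1.0e9))
    denom = (f_x_jy * theta_d_mas ** (6 + 4 * alpha)
             * nu_x_kev ** alpha * nu_m_ghz ** (5 + 3 * alpha))
    return f_alpha * f_m_jy * (log_term / denom) ** (1.0 / (4 + 2 * alpha)) * (1 + z)
```

The defaults reproduce the choices made above ($\alpha=0.75$, $\nu_{\rm b}=10^{14}$ Hz); the VLBI core sizes and fluxes themselves are taken from the compilation of Ghisellini et al. ([@ghisellini]).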
Padovani&Giommi ([@PG96]) have calculated X-ray spectral indices in a large sample of BL Lacs from hardness ratios provided by the [ ROSAT]{} WGA catalogue (White et al. [@WGA]) and with this method obtain a similar dependency of on the radio to X-ray flux ratio as we do. Discussion ========== We find that the broad distribution of spectral slopes in the soft X-ray spectra of BL Lacs is due to a strong dependence of on the broad band spectral index . Objects with extreme values of exhibit flat X-ray spectra, whereas intermediate objects have steeper spectra. The symmetry of this dependence prevented the detection of significant differences between the mean X-ray spectra of XBLs and RBLs in previous investigations (Worrall & Wilkes [@ww], Sambruna et al. [@sambruna], Lamer et al. [@lamer]). By comparison with simulated [ PSPC]{} spectra we showed that a two component model with a hard power law ($\alpha=0.7$) component and a steepening soft component is appropriate to explain the observed spectra. The frequency where the components intersect each other is below the soft X-ray band for extreme RBLs and crosses the energy band of the [ ROSAT]{} [ PSPC]{} with declining . In the framework of the SSC models this means that Compton emission causes the flat X-ray spectra of extreme RBLs, whereas the likewise flat X-ray spectra of extreme XBLs are due to synchrotron emission. The steep X-ray spectra of objects with intermediate spectral energy distribution ($0.6<$$<0.8$) represent the first direct evidence of the synchrotron high energy cutoff. Padovani & Giommi ([@PG95a]) explain the different spectral energy distributions of XBLs and RBLs by different energy cutoffs of the synchrotron spectra. They postulate the cutoff energy being intrinsic properties of the sources without discussing the physics of the emission processes. This scenario also is the most straightforward explanation for our findings, including the correlation of and for the XBL subsample. The wide range in synchrotron cutoff energies, and consequently the cutoff in the energy spectrum of the relativistic electrons, has to be explained. The SSC cooling of the jet electrons may be more efficient in the more powerful jets of [ FSRQs]{} and RBLs than in the jets of XBLs. Ghisellini & Maraschi ([@GM94]) proposed a more rapid cooling of jet electrons in [ FSRQs]{} by external UV photons compared to jets of BL Lac objects. It is conceivable that RBLs are intermediate objects between XBLs and [ FSRQs]{} regarding the ambient photon density. The more physically motivated beaming models, such as the “accelerating jet” model (Ghisellini & Maraschi [@GM]) and the “wide jet” model (Celotti et al. [@celotti]) do not as naturally satisfy our data. The crossover frequencies of soft and hard components in the spectra calculated by Ghisellini and Maraschi ([@GM]) do not span a sufficient range and do not move across the soft X-ray range when tilting the viewing angle, as required by the new data. Nevertheless, further tuning of the free parameters may provide spectra which are in accordance with the measurements. The [ ROSAT]{} spectra therefore are suitable to constrain the beaming models. This work was supported by DARA under grant 50 OR 90099 and has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, Caltech, under contract with the National Aeronautics and Space Administration. Brunner, H., Lamer, G., Worrall, D.M., and Staubert, R. 
1994, A&A, 287, 436 Celotti, A., Maraschi, L., Ghisellini, G., Caccianiga, A., and T. Maccacaro, 1993, ApJ, 416, 118 Ciliegi, P., Bassani, L., and Caroli, E. 1995, ApJ, 439, 80 Comastri, A., Molendi, S., and Ghisellini, G. 1995, MNRAS, in press Eckart, A., Witzel, A., Biermann, P.L., et al. 1986, A&A, 168, 17 Elvis, M., Lockman, F.J., and Wilkes, B.J, 1989, AJ, 97, 777 Ghisellini, G., Maraschi, L., and Treves, A., 1986, A&A, 146, 204 Ghisellini, G. and Maraschi, L., 1989, ApJ, 340, 181 Ghisellini, G., Padovani, P., Celotti, A., and Maraschi, L., 1993, ApJ, 407, 65 Ghisellini, G., and Maraschi, L., 1994, in [*The second Compton Symposium*]{}, AIP Proceedings 304, eds. C.E. Fichtel, N. Gehrels, and J.P. Norris, 616 Giommi, P., Barr, P., Garilli, B., Maccagni, D., and Pollock, A.M.T., 1990, ApJ 356, 432 Kock, A., Meisenheimer, K., Brinkmann, W., Neumann, M. and Siebert, J. 1996, A&A, in press Königl, A., 1981, ApJ, 243, 700 Lamer, G., Brunner, H., Staubert, R., 1994, in [*Multi-Wavelength Continuum Emission of AGN*]{}, eds. T.J.-L. Courvoisier & A. Blecha (Dordrecht: Kluwer), p. 377 Maraschi, L. and Rovetti, F., 1994, ApJ, 436, 79 Marscher, A.P., 1987, in [*Superluminal Radio Sources*]{}, eds. A. Zensus & T.J. Pearson (Cambridge Univ. Press), 280 Morris, S.L., Stocke, J.T., Gioia, I.M., et al. 1991, ApJ, 380, 49 Morrison, R., and McCammon, D., 1983, ApJ, 270, 119 Nass, P., Bade, N., Kollgaard, R.I. et al., 1996, A&A, in press Padovani, P. and Urry, C.M., 1990, ApJ, 356, 75 Padovani, P. and Giommi, P., 1995, ApJ, 444, 567 Padovani, P. and Giommi, P., 1996a, MNRAS, in press Padovani, P. and Giommi, P., 1996b, MNRAS, in press Perlman, E., Stocke, J. T., Wang, Q. D., et al. 1996, ApJ, in press Sambruna, R.M., Barr, P., Giommi, P., et al. 1994, ApJ, 434, 468 Stark, A.A., Gammie, C.F., Wilson, R.W. et al., 1992, ApJS, 79, 77 Stickel, M., Padovani, P., Urry, C.M., Fried, J.W., and Kühr, H., 1991, ApJ, 374, 431 Stickel, M., Meisenheimer, K., and Kühr, H., 1994, A&AS, 105, 211 Urry, C. M., Sambruna, R. M., Worrall, D. M., et al. 1996, ApJ, submitted Véron-Cetty, M.P., and Véron, P., 1993, ESO Scientific Report, 13 Vestrand, W.T., Stacy, J.G., and Sreekumar, P. 1995, IAU Circ. 6169 Von Montigny, C., Bertsch, D.L., Chiang, J., et al. 1995, ApJ, 440, 525 White, N. E., Giommi, P., and Angelini, L., 1994, IAU Circ. 6100 Worrall, D.M., 1989, Proc. 23rd ESLAB Symp., ESP SP-296, 719 Worrall, D.M., and Wilkes, B.J, 1990, ApJ, 360, 396 Zimmermann, H.U., Belloni, T., Izzo, C., Kahabka, P., Schwentker, O., 1993, EXSAS User’s Guide, MPE Report 244
--- author: - 'Yu-Ru Lin' - 'James P. Bagrow' - David Lazer date: 'April 4, 2011' title: 'More Voices Than Ever? Quantifying Media Bias in Networks' --- “In the end, we’ll have more voices and more options." – Dan Gillmor, *We the media* Introduction ============ Gillmor [[@gillmor2004we]]{} envisioned social media, powered by the growth of the Internet and related technologies, as a form of grassroots journalism that blurs the line between producers and consumers and changes how information and opinions are distributed. He argued that “the communication network itself will be a medium for everyone’s voice, not just the few who can afford to buy multimillion-dollar printing presses, launch satellites, or win the government’s permission to squat on the public airways." This view has been embraced by activists who consider social media as a balancing force to the conventionally assumed slanted or biased elite media. Indeed, social media can be used by underprivileged citizens, promising a profound impact and a healthy democracy. Many believe that the mainstream media is slanted, but disagree about the *direction* of slant. The conventional belief about media bias has held for decades, but attempts at developing objective measurement have only recently begun. The study by Groseclose and Milyo [[@groseclose2005measure]]{} showed the presence of bias in mass media (cable and print news) and new media (Internet websites, etc.). Their results, despite receiving criticism, are fairly consistent with conventional wisdom. On the other hand, researchers have observed an “echo chamber” effect within the new media – people select particular news to reinforce their existing beliefs and attitudes. Iyengar and Hahn [[@iyengar2009red]]{} argued that such selective exposure is especially likely in the new media environment due to information overload. With search, filtering, and communication technologies, people can easily discover and disseminate information that are supportive or consistent with their existing beliefs. Do social media exhibit more or less bias than mass media and, if so, to what extent? Identifying media bias is challenging for a number of reasons. First, bias is not easy to observe. It has been recognized that “bias is in the eyes of the beholder" meaning that, e.g., conservatives tend to believe that there is a liberal bias in the media while liberals tend to believe there is a conservative bias [@groseclose2005measure; @yano2010shedding]. Hence, finding textual indicators of bias is difficult, if not impossible. Second, the assessment of bias usually implies knowing what “fairness" would be, which may not be available or consistent across different viewpoints. Third, Internet-based communication promises easy, inexpensive, and instant information distribution, which not only increases the number of online media outlets, but also the amount and frequency of information and opinions delivered through these outlets. The scale and dynamic nature of today’s communication should be accounted for. In this paper, our major contribution is that we propose empirical measures to quantify the extent and dynamics of “bias” in mainstream and social media (hereafter referred to as ** and **, respectively). Our measurements are not normative judgment, but examine bias by looking at the attributes of those being mentioned, against a null model of “unbiased” coverage. 
We focus on the number of times a member of the 111th US congress was *referenced*, and study the distribution and dynamics of the references within a large set of media outlets. We consider “the unbiased" as a configurable baseline distribution and measure how the observed coverage deviates from this baseline, with the measurement uncertainty of observations taken into account. We demonstrate bias measures for slants in favor of specific political parties, popular front-runners, or certain geographical regions. Using these measures to examine newly collected data, we have observed distinct characteristics of how and cover the US congress. Our analysis of party and ideological bias indicates that are not significantly less slanted than . However, their slant orientations are more sensitive to exogenous factors such as national elections. In addition, blogs’ interests are less concentrated on particular front-runners or regions than news outlets. While our measures are independent of content, we further investigate two aspects of the content related to our measures: the hyperlinks embedded in articles and sentiments detected from the articles. The hyperlink patterns suggest that outlets with a Democrat-slant (D-slant for short) are more likely to cite each other than outlets with a Republican-slant (R-slant). The sentiment analysis suggests there is a weak correlation between negative sentiments and our measures. To better understand the distinctive slant structures between the two media, we propose to use a simple “wealth allotment” model to explain how legislators gain attention (references) from different media. The results about blog media’s inclination to a rich-get-richer mechanism indicates they are more likely to echo what others have mentioned. This observation does not contradict our measures of bias – compared with news media, blogs are weaker adherents to particular parties, front-runners or regions but are more susceptible to the network and exogenous factors. The rest of this paper is organized as follows. We first discuss related work, followed by the details of our collected data. We then detail the different types of coverage bias and how to quantify them and then examine the results, both structurally (via hyperlinking) and textually (via text-based sentiment analysis). Finally, we present a simple generative model of media coverage and conclude with a discussion of open issues and future work. Related Work ============ Concerns about mainstream media bias have been a controversial and critical subject in journalism due to the media’s power to shape a democratic society. Studies on media bias can involve surveys and interviews [@lichter1986media], and content analysis [@eldridge1995glasgow], as well as theoretical models such as structural economic causes. Apart from these qualitative arguments, Groseclose and Milyo [[@groseclose2005measure]]{} proposed a media bias measure that counts how often a particular media outlet cites various think tanks and policy groups. There have been controversial responses to prior studies, and the origin in part lies in the difficulty to separate the recognition of bias from the belief of bias. A dependence on viewers’ beliefs has been observed in studies [@groseclose2005measure; @yano2010shedding], which is relevant to the theories on how supply-side forces or profit-related factors cause slants in media [@mullainathan2005market; @gentzkow2010drives]. 
Because of such a dependency, computationally identifying bias from media content remains an emerging research topic, and requires insights from other language analysis studies such as sentiment analysis [@pang2008opinion] or partisan features in texts [@monroe2008fightin; @gentzkow2010drives]. While mass media have the ability to affect the public’s interests, social media represent large samples of expression from both influencers and those being influenced. Hence the “crowd voice” collected in social media has attracted considerable research. The viral behavior and predictive power of social media in response to politics, the economy and other areas has been examined in recent studies [@leskovec2009meme; @o2010tweets]. For example, Leskovec et al. [[@leskovec2009meme]]{} tracked the traversal of “memes” based on short distinctive phrases echoed by online news and blogs over time. Another work by O’Connor et al. [[@o2010tweets]]{} studied the relationship between tweet sentiments and polls in order to examine how the sentiments expressesed in the Twitter microblogging social media can be used as political or economic indicators. In this paper, we do not attempt to tackle the computationally difficult task of identifying bias in media text. Instead, we study the characteristics of the two media based on purely quantitative measures independent of media content. We are interested in studying the role of today’s social media, and we hope our analysis will contribute to the growing understanding of this subject. Data Model ========== ### Data Collection Our data is based on RSS feeds aggregated by OpenCongress[^1][^2]. OpenCongress is a non-profit, non-partisan public resource website that brings together official government data with timely information about what is happening in Congress. We continuously monitor and collect the OpenCongress RSS feeds for each individual member of Congress[^3]. This paper examines and coverage about the 111th US Congress, both Senators and Representatives. The dataset spans from September 1 to January 4, covering the 2010 mid-term election on November 2. Figure \[volume\_w\] shows the volume (total number of news articles or blog posts) over time in this dataset. The central peak corresponds to the mid-term election. In total, there are 57,221 news articles and 66,830 blog posts being collected in the four-month period. ### Networked Data Model We study the structure of the two media by constructing a modal network containing different types of nodes and edges. The network structure is illustrated in Fig. \[fig\_tripartite\]. More specifically, we have: Nodes : There are three sets of nodes: a news set, denoted by $\Vn$, that contains 5,149 news outlets, a blog set $\Vb$ of 19,693 blogs[^4], and a legislator set $\Vl$ that covers 530 lawmakers. Edges : Each edge $e_{ik}$ records when media outlet $i$ publishes an article referencing legislator $k$. We extract 64,222 such edges in 46,501 news articles, denoted as edge set $\Enl$, and 91,837 edges in 62,301 blog posts, denoted as $\Ebl$. Edges are associated with timestamps and texts. Node attributes : For legislators, we record attributes such as party, district, etc., based on the legislators’ profiles and external data sources. While we focus on “reference" or citation edges, this networked model can also include other types of edges, e.g. hyperlinks between outlets, voting preferences among legislators, etc. 
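A minimal sketch of one way to hold this networked data in code (the class and field names are ours, not the authors' implementation):

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ReferenceNetwork:
    """Tripartite reference network: news outlets and blogs pointing to legislators."""
    news: set = field(default_factory=set)
    blogs: set = field(default_factory=set)
    legislators: dict = field(default_factory=dict)   # id -> {"party": ..., "district": ...}
    edges: list = field(default_factory=list)         # (outlet, media_type, legislator, timestamp)

    def add_reference(self, outlet, media_type, legislator, timestamp):
        (self.news if media_type == "news" else self.blogs).add(outlet)
        self.edges.append((outlet, media_type, legislator, timestamp))

    def reference_counts(self, media_type):
        """n_ik: number of references from outlet i to legislator k within one media."""
        counts = defaultdict(lambda: defaultdict(int))
        for outlet, mtype, legislator, _ in self.edges:
            if mtype == media_type:
                counts[outlet][legislator] += 1
        return counts
```

The per-outlet counts $n_{ik}$ returned here are the only ingredients the slant measures below require; article texts and hyperlinks can be attached to the edges when needed.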
Types of Bias ============= In journalism, the term “media bias" refers to the selection of which events and stories are reported and how they are covered within the mass media. The most commonly discussed biases include reporting that supports (or attacks) particular political parties, candidates, ideologies, corporations, races, etc. In this paper, we begin with perhaps the simplest form of measurable bias – the distribution of coverage quantity, i.e. how many times an entity of interest is referenced by a media outlet. We argue that, regardless of a positive or negative stance towards an entity, an imbalanced *quantity* of coverage, if present, is itself a form of bias[^5]. An outlet’s references can be biased in a number of ways: Party : References are focused on a particular political party. Front-runner : References are concentrated on a few legislators who we term “front-runners", while the majority of legislators receive little or no attention. Region : References focus on certain geographical locations. Ideology : An ideology is a collection of ideas spanning the political spectrum. Ideological bias indicates that frequently referenced legislators favor certain ideological tendencies. Gender : The preference towards covering legislators of one gender. We discuss how to measure different types of bias in a unified model. Other types of bias, such as those in favor of a particular race or ethnic group, can also be measured through our method. Based on the measurements associated with individual media outlets, we derive system-wide bias measures that allow us to characterize and compare the bias structure between the news and blog media. Quantifying Bias ================ In this section, we describe our method for quantifying and comparing bias in and . ### Notation Let $n_{ik}^c$ be number of times media outlet $i$ references legislators in group $k$, where $c\in\{\mbox{\nn, \bb}\}$ is the media category ($c$ is omitted when there is no need to distinguish the categories). In the case of measuring party bias, $k\in\{\Dem, \Rep\}$ indicates the Democratic or Republican political parties. Let $n_i=\sum_k{n_{ik}}$ be the total number of references made by outlet $i$. We begin with a specific case – measuring the two-party bias, and then describe a more general model for measuring other types of bias. Party Slant ----------- A naive approach for measuring an outlet’s biased coverage of two political parties is to compare the number of times members in each party are referenced. The ratio of the reference counts of one party against the other may be used to compare outlets that reference different parties with different frequencies. There are two issues with this approach: (i) this ratio may lack statistical significance for some outlets, and (ii) it assumes that fair coverage of the two parties requires roughly equal quantities of references to each. To resolve these issues, we use the *log-odds-ratio* as follows. We define $\theta_{ik}$, the “slant score” of outlet $i$ to party $k$, as $$\label{eqn:logodds} \theta_{ik} = \log(\mbox{odds-ratio})=\log\left(\frac{n_{ik}/(n_i-n_{ik})}{p_k/(1-p_k)}\right),$$ where $p_k$ is the *baseline probability* that $i$ refers to $k$, and here we assume this variable is fixed for all $i$. The advantage of having such a baseline probability is that “fairness" become configurable. For example, one can consider fairness as a 50-50 chance to reference either party (i.e. $\pD=\pR=0.5$). 
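A minimal sketch of the per-outlet score (the function name is ours; the baseline $p_k$ is left configurable, here defaulting to the 50-50 convention just mentioned):

```python
import numpy as np

def party_slant(n_ik, n_i, p_k=0.5):
    """Log-odds-ratio slant score of an outlet toward party k.

    n_ik: references by outlet i to members of party k
    n_i:  total references by outlet i
    p_k:  baseline probability of referencing party k
    """
    return np.log((n_ik / (n_i - n_ik)) / (p_k / (1.0 - p_k)))

# An outlet with 70 of 100 references to Democratic legislators:
print(party_slant(70, 100))   # ~0.85: e^0.85 times more than under the 50-50 baseline
```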
One can also define $\pD=0.6$ since roughly 60% of the studied legislators are Democrats. No matter what baseline probability is given, we have a simple interpretation: $\theta = 0$ means no bias w.r.t that baseline. In this two-party case, we take $\theta_i\equiv\theta_{ik}$, with $k=\mathrm{D}$, and $\theta_{i}>0$ means outlet $i$ is more likely to be D-slanted. A slant score with value $\alpha$ can be interpreted as follows: the number of times outlet $i$ references Democratic legislators is $e^\alpha$ times more than if those references followed the baseline. The slant score’s variance is given by the Mantel-Haenszel estimator [[@mantel1959statistical]]{}: $$\var(\theta_i)=\frac1{n_{ik}}+\frac1{n_i-n_{ik}}+\frac1{n_ip_k}+\frac1{n_i(1-p_k)}.$$ The variance gives the significance of the slant score measure, which relies on the number of observations ($n_i$ and $n_{ik}$) we have for each outlet. Figure \[party-scatter\] (a) shows the number of references as a function of party slant scores for outlets with more than 20 articles in our dataset. The distribution of outlets’ slant scores appears to be roughly symmetric in both directions, and outlets making more references tend to be less slanted. Table \[top-outlets\] lists the slant scores for some major news outlets and the most slanted blogs. ![\[party-scatter\]The scatter plot of number of references (observations) against party (left) and front-runner (right) slant scores for and . Outlets with less than 20 articles are not shown.](figs/plot_slant_numrefs.pdf){width="1\columnwidth"} Party ($\theta$) Front-runner ($\theta$) Region ($\theta$) -- ----------------------------- ---------------------------------- ---------------------------------- nbc (0.51) washington post (1.03) los angeles times (1.30) new york times (0.07) cnn (1.02) nbc (1.19) washington post (-0.01) fox (0.91) cbs (1.12) abc (-0.03) wall street journal (0.86) cnn (1.04) cbs (-0.03) cbs (0.84) washington times (1.00) los angeles times (-0.07) nbc (0.83) u.s. news (0.98) newshour (-0.10) los angeles times (0.82) wall street journal (0.96) cnn (-0.11) msnbc (0.74) usa today (0.96) fox (-0.13) u.s. news (0.71) washington post (0.95) npr (-0.14) new york times (0.70) msnbc (0.92) wall street journal (-0.15) washington times (0.70) npr (0.92) u.s. news (-0.22) usa today (0.66) new york times (0.89) bbc (-0.38) npr (0.64) abc (0.87) usa today (-0.39) abc (0.61) fox (0.84) msnbc (-0.39) newshour (0.32) newshour (0.78) washington times (-0.96) bbc (0.00) bbc (0.20) dissenting times (5.22) arlnow.com (9.41) blue jersey (8.32) cool wicked stuff (3.89) janesville (9.05) \[...\] virginia politics (7.86) justicedenied13501 (3.58) take back idaho’s \[...\] (8.84) politics on the hudson (7.34) polifrog.com (3.54) moral science club (8.84) calwatchdog (7.23) dennis miller (3.46) murray for congress (8.67) staradvertiser \[...\] (7.19) : \[top-outlets\]Slant scores $\theta$ for major news outlets and most slanted blogs. For party slant, a positive (negative) score means the outlet is likely to be D-slanted (R-slanted). For front-runner and regional slant, a larger score indicates the outlet is more focused on few particular legislators or states. ### Summary statistics In order to characterize the overall bias within a media, we derive a system-wide bias measure based on the individual outlets’ measures. We use a *random effect* model, which assumes not only variation within each outlet, but also variation across different outlets in the system. 
More specifically, the model assumes that the slant scores for $n$ outlets $(\theta_1, \ldots,\theta_n)$ are sampled from $\mathcal{N}(\theta,\tau^2)$, and there are two sources of variation: the variance between outlets $\tau^2$ and the variance within outlets $\sigma^2$. Hence, the model is given by $$\hat\theta_i\sim \mathcal{N}(\theta,\sigma^2+\tau^2).$$ We use the DerSimonian-Laird estimator [[@dersimonian1986meta]]{} to obtain $\theta^*$ and $\var(\theta^*)$, where $\theta^*$ is the asymptotically unbiased estimator for $\theta$. The media-wide *collective party slant score*, $\Theta$, is defined as $\Theta\equiv\theta^*$ with a $\pm1.96\sqrt{\var(\theta^*)}$ confidence interval. Table \[slant-scores\] summarizes slants with respect to different baselines. The measure is based on the party composition of members in Congress, and is based on the fraction of the US population represented by the legislators (in each party). The statistical significance of each measure is represented by the variance. Note that in this two-party case, a different baseline can be obtained simply by shifting the score. For example, if one chooses to use $\pD=\pR=0.5$ as the baseline probability, the measure $\Theta_{0.5}$ can be calculated from by adding $\log(\frac{\pD}{1-\pD})\approx 0.405$ (where in terms of Congress composition $\pD\approx0.6$). ----------------------- ------- -------------- -------------- -------------- -------------- (r)[3-4]{} (r)[5-6]{} News -0.02 (0.02) -0.06 (0.02) -0.22 (0.03) -0.45 (0.04) Blogs -0.11 (0.02) -0.15 (0.02) -0.18 (0.04) -0.41 (0.04) News -0.05 (0.02) -0.08 (0.02) -0.19 (0.04) -0.45 (0.04) Blogs -0.16 (0.02) -0.19 (0.02) -0.12 (0.04) -0.39 (0.04) News -0.26 (0.04) 0.07 (0.03) -0.28 (0.06) 0.45 (0.05) Blogs -0.29 (0.04) 0.03 (0.04) -0.32 (0.07) 0.41 (0.06) Front- News 0.68 (0.01) 0.60 (0.01) 0.66 (0.02) 0.55 (0.03) runner Blogs 0.33 (0.01) 0.23 (0.01) 0.39 (0.02) 0.29 (0.03) News 0.97 (0.01) -0.13 (0.01) 0.76 (0.01) 0.45 (0.03) Blogs 0.61 (0.01) -0.21 (0.02) 0.44 (0.02) 0.18 (0.03) ----------------------- ------- -------------- -------------- -------------- -------------- : \[slant-scores\]The collective slant scores. Parenthetical values indicate standard deviation of the measured slant score. We also separate our measures for referencing members of the House and Senate to see if outlets exhibit different slants when covering the two chambers. Evaluated on the party percentage baseline, both media show R-slant when referencing Senators, and blogs are more R-slanted when referencing members of the House. Hence are overall more R-slanted than . This interpretation depends on what baseline is chosen, however. For example, if we choose to use the 50-50 convention, both media become D-slanted. However, it is important to note that the absolute difference between the bias measures for the two media do not change with baseline. Slant Dynamics -------------- To study how media bias may change over time, we calculate the slant scores using references made during running windows. We measure $\Theta(t,w)$ as a function of time $t$ and window length $w$. Figure \[party-slant-dynamics\] shows the temporal slant scores for the two media during the four-month period, based on a $w=\mbox{2-week}$ running window. The slant of both media changes slightly after the mid-term election: Compared with their pre-election slants, become slightly more R-slanted when referencing Senators and are more R-slanted when referencing Representatives. 
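Each point of these running-window curves is itself a pooled estimate. A minimal sketch of the pooling step, combining per-outlet scores and their Mantel-Haenszel variances with the standard DerSimonian-Laird moment estimator (our implementation, not the authors' code):

```python
import numpy as np

def mantel_haenszel_var(n_ik, n_i, p_k):
    """Within-outlet variance of the slant score."""
    return 1.0 / n_ik + 1.0 / (n_i - n_ik) + 1.0 / (n_i * p_k) + 1.0 / (n_i * (1.0 - p_k))

def collective_slant(theta, var_within):
    """DerSimonian-Laird random-effects pooling of per-outlet slant scores."""
    theta = np.asarray(theta, dtype=float)
    v = np.asarray(var_within, dtype=float)
    w = 1.0 / v
    theta_fixed = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - theta_fixed) ** 2)                # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(theta) - 1)) / c)               # between-outlet variance
    w_star = 1.0 / (v + tau2)
    theta_star = np.sum(w_star * theta) / np.sum(w_star)
    return theta_star, 1.0 / np.sum(w_star)                   # Theta and var(Theta)
```

Restricting the inputs to references published inside a window $[t, t+w]$ gives the windowed $\Theta(t,w)$ plotted above.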
Overall, the media, especially , become more R-slanted after election. This is reasonable due to the Republican victories. These results raise an important question: do the majority of outlets become more R-slanted after the election, or do R-slanted outlets become more active while D-slanted outlets become quieter? To examine what caused the slant change we plot in Fig. \[party-slant-diff\] the change in slant score $\Delta \theta_i = \theta_i(t_2) - \theta_i(t_1)$, where $t_1\in\mbox{[Sep.~1, Oct.~30]}$ and $t_2\in\mbox{[Nov.~7, Jan.~4]}$, for each outlet against its slant score before the election. (Point size indicates the amount of references observed after the election.) We use a linear regression to quantify the slant change. Surprisingly, we see media outlets shifted slightly toward the other side after the election regardless of their original slants, but overall the originally D-slanted outlets become more R-slanted. Front-Runner Slant ------------------ To evaluate whether or not the media pay excessive attention on popular front-runners, we extend the dichotomous-outcome measure used in the previous section. We consider a generalization of the odds ratio proposed by Agresti [[@agresti1980generalized]]{}. Let $n_{ik}^c$ now be the number of times outlet $i$ refers to the $k$-th legislator, where $c\in\{\mbox{\nn, \bb}\}$ as before, and $k\in\{1,2,...,L\}$ is the *rank index* for one of the $L$ legislators, ordered by the number of references received from outlet $i$. We can replace $n_{ik}$ by the sample proportion $p_{ik}=n_{ik} / n_i$. The slant score $\theta_i$ of outlet $i$ is defined by a generalized log-odds-ratio: $$\label{glor} \theta_i = \log\left(\frac{\sum_{j>k}{p_{ik}p_j}}{\sum_{j<k}{p_{ik}p_j}}\right),$$ where $p_j$ is, again, the baseline probability that $i$ refers to the $j$-th legislator, and the $\{p_j\}$ can be chosen to be uniform or any other distribution. For convenience we commonly fix the baseline distribution for all $i$. When $L=2$, Eq. \[glor\] reduces to a dichotomous-outcome log-odds-ratio measure similar to Eq. \[eqn:logodds\]. When $L>2$ and the $\{p_j\}$ are not uniform, changing to a different baseline is not a simple linear shift. With Eq. \[glor\], a slant score with value $\alpha$ can be interpreted as follows: the number of times outlet $i$ mentions high ranked legislators is $e^{\alpha}$ times more than if the legislators were ranked according to their baseline probabilities. The variance in the slant score is now given by [@agresti1980generalized]: $$\label{glor-var} \var(\theta_i)= \frac{\sum_j p_{ij}\left(\alpha_{ij} \right)^2 + \sum_j p_j\left(\beta_{ij}\right)^2} {n_i\left(\sum_{k>j}p_{ik}p_j\right)^2}$$ where $$\alpha_{ij}=\theta_i \sum_{k<j}p_k - \sum_{k>j}p_k, ~~~~ \beta_{ij}=\theta_i\sum_{k>j}p_{ik} - \sum_{k<j}p_{ik}.$$ Figure \[party-scatter\] (b) plots the number of references (observations) against front-runner slant scores for media and blog outlets with more than 20 posts in our dataset. We expect the frontrunner slant scores to be mostly positive since the legislators are already ranked by popularity ($n_{ik}$). The system-wide frontrunner slant score for both news and blog media can be calculated as before. Table \[slant-scores\] summarizes front-runner slants with respect to various baselines. Note that the two media show different biases when referencing the two chambers: Blogs are more front-slanted than news about Senators, while news outlets are more front-slanted when referencing Representatives. 
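A minimal sketch of the generalized log-odds-ratio just defined (summing over ranked pairs exactly as written in Eq. \[glor\]; the helper name and the uniform-baseline example are ours):

```python
import numpy as np

def generalized_log_odds(p_outlet, p_baseline):
    """Generalized log-odds-ratio (Agresti 1980) over ranked categories.

    p_outlet:   outlet proportions p_ik, ordered by the outlet's own ranking
    p_baseline: baseline proportions p_j in the same order
    """
    p_i = np.asarray(p_outlet, dtype=float)
    p_b = np.asarray(p_baseline, dtype=float)
    num = sum(p_i[k] * p_b[k + 1:].sum() for k in range(len(p_i)))   # pairs with j > k
    den = sum(p_i[k] * p_b[:k].sum() for k in range(len(p_i)))       # pairs with j < k
    return np.log(num / den)

# Front-runner example against a uniform baseline over ten legislators:
counts = np.array([40.0, 20, 10, 5, 5, 3, 2, 1, 1, 1])   # placeholder n_ik, rank-ordered
theta = generalized_log_odds(counts / counts.sum(), np.full(10, 0.1))
```

The same routine serves for the regional slant below, with states in place of legislators.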
Other Types of Slant -------------------- ### Ideology The concept of ideology is closely related to that of political party – members of the same party usually share similar or less contradictory ideologies. We study the ideological bias using a method similar to the party slant analysis. We first locate each legislator relative to an identifiable ideological orientation such as left or right, and then use the dichotomous-outcome measure to obtain ideological slant scores for individual outlets as well as system-wide scores for and . We use the DW-NOMINATE scores for the U.S. Congress [@lewis2004measuring] as measures of legislators’ ideological locations[^6]. The estimates are based on the history of roll call votes by the members of Congress and have been widely used in political science studies and related fields. We classify each legislator as either ideologically-left or -right, based on the sign of their estimates[^7]. We then calculate the ideological slant score $\theta_{ik}$, $k\in\{\mbox{Left, Right}\}$ for each outlet $i$ with $k=\mathrm{Left}$ so that $\theta_{i}>0$ indicates outlet $i$ is more likely to be Left-slanted. Our ideological slant measurements are also summarized in Table \[slant-scores\]. We find this measure is highly correlated with the party slant measurement (with Pearson correlation $r=0.958$ and $p<10^{-5}$). This suggests that, while party members may be found at different positions in the left-right spectrum, media outlets tend to pick legislators who are representatives of the two parties’ main ideologies, such as Left-wing Democrats or Right-wing Republicans. ### Gender Gender is also treated as a dichotomous variable, where $\theta_{i}>0$ indicates that the coverage of outlet $i$ favors male legislators. The results, summarized in Table \[slant-scores\], show that blogs have a slightly stronger female-slant than news. However, when considering the population baseline, the slant for both media is significant for the Senate but nearly insignificant for the House. The gender composition in both chambers is similar – 20% of the members are women. The differences in the estimates based on different baselines reflect a very different voter population represented by the female/male legislators in both chambers. ### Region We consider region as a categorical variable. For each legislator, the state or territory of his or her district is used. The region slant is calculated like the front-runner slant: the slant score $\theta_i$ is defined as per Eqs. \[glor\] and \[glor-var\], where $k\in\{1,2,...,S\}$ is the rank index for one of the $S$ states in the US, ordered by the number of references received from outlet $i$. The results are again summarized in Table \[slant-scores\]. Overall, news outlets show a much stronger regional bias than blogs. The negative slant scores in the House, based on the population baseline, indicate outlets’ favor those representatives from more populous states. Examining Coverage ================== As mentioned earlier, the slant scores of media outlets are calculated based only on the quantity of references to legislators, and are independent of the coverage content. In this section, we examine two intrinsic aspects of this coverage, the hyperlinks between outlets and the sentiments of the textual content, as related to the party slants. Links ----- We extract the hyperlinks embedded in each news article or blog post and study how media outlets with different slants link to one another. 
Using the sign of the party slant score $\theta_p$, we divide and into four sectors: D-slanted news, R-slanted news, D-slanted blogs, and R-slanted blogs. Table \[tbl:hyperlinks\] shows the prevalence of links among the four sectors. Each entry $(i,j)$ represents the total number of hyperlinks from outlets in category $i$ pointing to the articles of outlets in category $j$. The linking pattern exhibits interesting phenomena: first and the most obvious characteristic between the two media is that news outlets have far fewer hyperlinks in their articles compared with blog posts. Blogs with more hyperlinks can also be seen as second-hand reporters or commentators in response to some news articles and other blog posts. Second, articles in the D-slanted outlets, including news and blogs, are more likely to be cited, including by outlets with the opposite slant. For example, the R-slanted blogs have a large number of hyperlinks to the D-slanted news outlets. Third, the matrix shows a strong assortativity [@newman2003mixing] in the D-slanted community – the D-slanted blogs are more likely to cite articles from D-slanted news and blogs than the R-slanted blogs are to cite R-slanted news and blogs. In fact, linking patterns among the R-slanted community appear to be disassortative. It would be interesting to compare our results with those of Adamic, *et al.* [[@adamic2005political]]{}. News (R) News (D) Blogs (R) Blogs (D) ----------- ---------- ---------- ----------- ----------- News (R) 99 125 68 67 News (D) 84 234 69 152 Blogs (R) 256 500 287 293 Blogs (D) 298 895 299 623 : \[tbl:hyperlinks\]The strength of hyperlinks among and with Democrat or Republican slants. Each entry $(i,j)$ represents the total number of hyperlinks from category $i$ to $j$. Texts ----- Our slant estimation is based on how many times an outlet references a legislator, regardless of positive or negative attitude. Without any sentiment information, the estimated scores need to be interpreted carefully: a significant slant score only reflects the existence of bias, but not the polarity (if any) of such bias. This subsection describes our attempt to study sentiment information within the media. We employ the OpenAmplify APIs[^8] to extract the sentiment information of each reference. The APIs return, for each article, the detected name entities and the sentiment values associated with the entities. We derive sentiment information for (outlet, legislator) pairs by matching legislator names to the names detected in each article, then aggregate the sentiment scores associated with these legislators over all of the outlet’s articles. The sentiment scores for parties can be derived from the scores received by party members. Figure \[fig:senti\] shows the probability density of the resultant negative sentiment scores against the party slant scores. The results show a weak correlation between sentiment values and the party slant scores. Outlets’ sentiments for Democratic legislators are positively correlated to their slant scores, while sentiments for Republican legislators are negatively correlated. This suggests the outlets with slants to a particular party tend to mention that party less negatively. Then tendency is easier to discover in than in , but this can be caused by differences in the use of language rather than the level of bias. Modeling the reference-generating process ========================================= What are the underlying mechanisms governing how and choose to reference legislators? 
Are there similarities or differences between these two media? We propose to use a simple generative model [@bagrow2008phase] for the probability $P(n)$ that a legislator is referenced a total of $n$ times. Comparing the results of the model’s isolated mechanism with the actual data will give intuition about factors contributing to the observed $P(n)$. The model is as follows. Initially ($t=0$), we assume[^9] a single reference to some legislator $k'$ such that $n_k(0) = \delta(k,k')$, for all $k$. At each time step the media ( or ) selects a random legislator to reference in an article. With probability $q$, however, the media rejects that legislator and instead references a legislator with probability proportional to his or her current coverage. That is, at each time step $t$, $n_k(t+1) = n_k(t) +1$ occurs with probability $p_k(t)$: $$p_{k}(t)=\begin{cases} 1 / \left|\Vl \right| & \text{with prob.~$1-q$ }, \\ n_k(t) / \sum_{k'}n_{k'}(t) & \text{with prob.~$q$}. \end{cases}$$ This captures the intuitive “rich-get-richer” notion of fame, while the parameter $q$ tunes its relative strength. Those legislators lucky (or newsworthy) enough to be referenced early on are likely to become heavily referenced, since they have more opportunities to receive references, especially as $q$ increases. Since one reference is handed out at each timestep, the total number of references measured empirically fixes the timespan over which the model is run; $\left|\Vl\right|$ is also fixed, so the model has one parameter, $q$. Asymptotically ($\left|\Vl\right|\to\infty$), this model gives a pure power law $P(n) \sim n^{-1-1/q}$ for all $q>0$ [@bagrow2008phase]. The distribution of $n$ is more complex for finite $\left|\Vl\right|$, however, obtaining a gaussian-like form for $q<1/2$ and a heavy-tailed distribution for $q>1/2$. Figure \[fig:model\] compares the observed $P(n)$ with that generated using the model process. We observe good qualitative agreement, better than fitted poisson or log-normal distributions, although there is a slight tendency to overestimate popular legislators and underestimate unpopular legislators. The empirical distributions also exhibit a slight bimodality, perhaps due to the 2010 election, that is not captured by the model. The larger value of $q$ for than for provides evidence that collectively are more driven by a rich-get-richer selection process than , although this may not hold at the individual outlet level. The measures of front-runner slant indicate that have a stronger front-runner bias than . This seems to conflict with the reference generating model, which showed that blog behavior is more explainable by the rich-get-richer mechanism ($q$ is larger for than for ). However, we argue that the measures and the model are in fact consistent, since the model only treats the aggregate of the entire media class – the stronger front-runner bias in outlets means that each outlet is more likely to reference their own *intrinsic* set of front-runners, which may be different from others’; for , the “stickiness” of their individual set of front-runners is weaker and hence over time globally popular front-runners are more likely to emerge. Further examination of this argument would be to explicitly model the bias of individual outlets. This one-parameter model neglects a number of dynamical features that may be worth future pursuit. 
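A direct simulation of this process takes only a few lines; the $q$ values below are illustrative placeholders, not fitted values:

```python
import numpy as np

def simulate_references(n_legislators, total_refs, q, seed=0):
    """Simulate the one-parameter rich-get-richer reference process described above."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_legislators)
    counts[rng.integers(n_legislators)] = 1            # single initial reference
    for _ in range(total_refs - 1):
        if rng.random() < q:                           # rich-get-richer step
            k = rng.choice(n_legislators, p=counts / counts.sum())
        else:                                          # uniform step
            k = rng.integers(n_legislators)
        counts[k] += 1
    return counts

# 530 legislators; reference totals as in the data set; q values are illustrative.
news_like = simulate_references(530, 64222, q=0.55)
blog_like = simulate_references(530, 91837, q=0.70)    # larger q: stronger rich-get-richer
```

Histogramming the resulting counts gives the model $P(n)$ compared with the data in Fig. \[fig:model\]; the simulation also makes plain which dynamical features the one-parameter model leaves out.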
For example, generalizations may be able to explain temporal dynamics of the references, the joint distributions $n_{ik}$ between media outlet $i$ and legislator $k$, etc. Discussion and Open Issues ========================== Our results show that and , in aggregate, have only slightly different slants in terms of party and ideology. However, the dynamics of the party slant measures suggest blogs are more sensitive to exogenous shocks, such as the mid-term election. Our observations were made over a short, four-month timeframe, yet long-term, continuous tracking of slant dynamics would be necessary to reveal any consistently different dynamical behavior between the two media. Our measures and model are solely based on the quantity of coverage. We have conducted preliminary sentiment analysis using an off-the-shelf tool and compared the extracted sentiment results with our measures. The results suggest a weak connection between the quantity and semantics of referencing a subject. It would be worth investigating the accuracy of sentiment detection on different media content and how sentiment analysis can be used to identify bias from texts. In addition, critical content analysis (which examines not only the text but also the relationship with audience) and multivariate analysis (since multiple types of slants are inter-related) may be leveraged for further analysis. Conclusion ========== In this paper, we develop system-wide bias measures to quantify bias in mainstream and social media, based on the number of times media outlets reference to the members of the 111th US Congress. In addition to empirical measurements, we also present a generative model to explore how each media’s global distribution of the number of references per legislator evolves over time. We observe that social media are indeed more social, i.e. more affected by network and exogenous factors, resulting in a more heavily-skewed and uneven distribution of popularity. Perhaps, there are more voices than ever, but many are echoes. We plan to continue work along the lines discussed in the previous section, such as long-term tracking of slant dynamics in the two media, modeling individual outlets’ biases, and leveraging content analysis and multivariate analysis. Acknowledgments {#acknowledgments .unnumbered} --------------- We thank F. Simini and J. Menche for many useful discussions, and gratefully acknowledge support from NSF grant \# 0429452. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. [10]{} D. Gillmor. . O’Reilly, 2004. T. Groseclose and J. Milyo. A measure of media bias. , 120(4):1191–1237, 2005. S. Iyengar and K.S. Hahn. Red media, blue media: Evidence of ideological selectivity in media use. , 59(1):19–39, 2009. T. Yano, P. Resnik, and N.A. Smith. Shedding (a thousand points of) light on biased language. In [*NAACL Workshop on Creating Speech and Language Data With Amazon’s Mechanical Turk*]{}, 2010. S.R. Lichter, S. Rothman, and L.S. Lichter. . Adler & Adler Publishers, 1986. J. Eldridge and G. Philo. . Routledge, 1995. S. Mullainathan and A. Shleifer. The market for news. , 95(4):1031–1053, 2005. M. Gentzkow. What drives media slant? evidence from us daily newspapers. , 78(1):35–71, 2010. B. Pang and L. Lee. Opinion mining and sentiment analysis. , 2(1-2):1–135, 2008. B.L. Monroe, M.P. Colaresi, and K.M. Quinn. 
Fightin’words: Lexical feature selection and evaluation for identifying the content of political conflict. , 16(4):372, 2008. J. Leskovec, L. Backstrom, and J. Kleinberg. Meme-tracking and the dynamics of the news cycle. In [*SIGKDD 2009*]{}, pages 497–506. ACM, 2009. B. O’Connor, R. Balasubramanyan, B.R. Routledge, and N.A. Smith. From tweets to polls: Linking text sentiment to public opinion time series. In [*ICWSM 2010*]{}, pages 122–129. AAAI, 2010. N. Mantel and W. Haenszel. Statistical aspects of the analysis of data from retrospective studies. , 22(4):719–748, 1959. R. DerSimonian and N. Laird. Meta-analysis in clinical trials. , 7(3):177–188, 1986. A. Agresti. Generalized odds ratios for ordinal data. , 36(1):59–67, 1980. J.B. Lewis and K.T. Poole. Measuring bias and uncertainty in ideal point estimates via the parametric bootstrap. , 12(2):105, 2004. M.E.J. Newman. Mixing patterns in networks. , 67(2):26126, 2003. L.A. Adamic and N. Glance. . In [*Proceedings of the 3rd international workshop on Link discovery*]{}, pages 36–43. ACM, 2005. James P. Bagrow, Jie Sun, and Daniel [ben-Avraham]{}. Phase transition in the rich-get-richer mechanism due to finite-size effects. , 41(18):185001, 2008. [^1]: [www.opencongress.org](www.opencongress.org) [^2]: OpenCongress uses Daylife ([www.daylife.com](www.daylife.com)) and Technorati ([technorati.com](technorati.com)) to aggregate articles from these feeds. The possible selection biases in these filtering processes are not considered in this paper. [^3]: An example news/blog coverage feed can be found at <http://www.opencongress.org/people/news_blogs/300075_Lisa_Murkowski> [^4]: We also have a small number of blogs hosted by mass media news outlets, e.g. CNN (blog). This paper does not include analysis of such blogs. [^5]: Our view on the meaningfulness of a measurement based solely on quantity is similar to the study of Groseclose and Milyo [[@groseclose2005measure]]{}. [^6]: Based on their method, each member’s ideological point is estimated along two dimensions. Previous research has shown that – the first dimension reveals standard left-right or economic cleavages, and the second dimension reflects social and sectional divisions. In this paper we use only the first dimension. [^7]: Estimates for the 111th Congress are available at: <http://voteview.spia.uga.edu/dwnomin.htm> [^8]: <http://community.openamplify.com/> [^9]: This initial condition differs from the flat start of Bagrow, et al. [[@bagrow2008phase]]{}, with important consequences for finite-time models.
{ "pile_set_name": "ArXiv" }
--- abstract: 'From a rigorous multichannel quantum-defect formulation of bimolecular processes, we derive a fully quantal and analytic model for the total rate of exoergic bimolecular reactions and/or inelastic processes that is applicable over a wide range of temperatures including the ultracold regime. The theory establishes a connection between the ultracold chemistry and the regular chemistry by showing that the same theory that gives the quantum threshold behavior agrees with the classical Gorin model at higher temperatures. In between, it predicts that the rates for identical bosonic molecules and distinguishable molecules would first decrease with temperature outside of the Wigner threshold region, before rising after a minimum is reached.' author: - Bo Gao bibliography: - 'bgao.bib' - 'twobody.bib' - 'chem.bib' date: 'October 11, 2010' title: Universal model for exoergic bimolecular reactions and inelastic processes --- The recent experiment by the JILA group [@osp10] represents a milestone in studies of chemical reactions. For the first time, reactions are studied in a temperature regime where the quantum nature of the relative motion of the reactants becomes unequivocally important, as reflected in the quantum threshold behavior and in the importance of quantum statistics. More importantly, the experiment strongly suggests that bimolecular reactions in the ultracold regime follow universal behaviors determined by the long-range interaction, as spelled out in more detail in related theoretical works by Julienne and Idziaszek [@jul09; @idz10] and by Quéméner and Bohn [@que10]. The significance of such experiments goes beyond exploring chemical reactions in a new temperature regime with many unique characteristics, such as controllability via moderate external fields [@Krems2008; @ni10; @que10]. By forcing a new perspective on the quantum theory of reactions, as demanded by their interpretation, they have the potential to improve our understanding of reactions and inelastic processes at all temperatures. This paper is an illustration of such an outcome. From a rigorous multichannel quantum-defect formulation of bimolecular processes, we derive here a fully quantal model for the total rate of exoergic bimolecular reactions and/or inelastic processes that is applicable over a wide range of temperatures. The theory establishes a connection between the ultracold chemistry and the regular chemistry by showing that the same theory that gives the quantum threshold behavior [@osp10; @jul09; @idz10] agrees with the classical Gorin model [@gor38; @fer06] at higher temperatures. In between, it shows that the rates for identical bosonic molecules and distinguishable molecules would first decrease with temperature outside of the Wigner threshold region, before rising after a minimum is reached. The theory further illustrates explicitly how the quantum effects, including effects of quantum statistics, gradually diminish at higher temperatures, and establishes the van der Waals temperature scale as the one that separates the quantum and the semiclassical behaviors of reactions. The same formalism is applicable to ion-molecule reactions where our quantum model, with details to be presented in a separate publication, would approach the classical Langevin model [@lan05; @fer06] at high temperatures. Consider the collision of two distinguishable molecules $A$ and $B$ in the absence of any external fields.
The cross section for a transition from an entrance channel $i$ to an exit channel $f$ can be written in terms of the $S$ matrix as [@mot65] $$\begin{aligned} \sigma_{fi}(\epsilon) &=& \frac{\pi}{(2F_{Ai}+1)(2F_{Bi}+1)k_i^2} \nonumber\\ & &\times\sum_{F_t,F_i,l_i,\{q_f\}} (2F_t+1)|S^{(F_t)}_{fi}-\delta_{fi}|^2 \;.\end{aligned}$$ Here $\epsilon\equiv E-E_i=\hbar^2k_i^2/2\mu$ is the energy relative to the entrance channel $i$, with $\mu$ being the reduced mass. $F_{A}$ and $F_{B}$ are the total (internal) angular momenta of molecules $A$ and $B$, respectively. $\mathbf{F}=\mathbf{F}_{A}+\mathbf{F}_{B}$ is the total angular momentum excluding $l$, which is the relative angular momentum between $A$ and $B$. $F_t$ is the total angular momentum of the system, which is conserved in the absence of external fields. $\{q_f\}$ represents the quantum numbers, excluding $F_t$, that are required to characterize an exit channel $f$. The exit channels can be classified into elastic channels, labeled by $\{e\}$, inelastic channels, labeled by $\{u\}$, and reactive channels, labels by $\{r\}$. From the unitarity of the $S$ matrix [@mot65], the total cross section, $\sigma_{\mathrm{ur}}\equiv\sum_{f\in\{u,r\}}\sigma_{fi}$, for the combination of all inelastic and reactive processes, can be written as $$\begin{aligned} \sigma_{\mathrm{ur}}(\epsilon) &=& \frac{\pi}{(2F_{Ai}+1)(2F_{Bi}+1)k_i^2}\nonumber\\ & &\times\sum_{F_t,F_i,l_i} (2F_t+1)(1-\sum_{F_e,l_e}|S^{(F_t)}_{ei}|^2) \;. \label{eq:xsur}\end{aligned}$$ The implication is that such a total cross section is completely determined by the $S$ matrix elements for *elastic channels only*. The corresponding rate constant at temperature $T$ is given in terms of $\sigma_{\mathrm{ur}}$ by $$K(T) = \left(\frac{8k_BT}{\pi\mu}\right)^{1/2} \frac{1}{(k_BT)^2}\int_0^\infty \epsilon\sigma_{\mathrm{ur}}(\epsilon)\exp(-\epsilon/k_BT)d\epsilon \;, \label{eq:tave}$$ where $k_B$ is the Boltzmann constant. Considerably further understanding of bimolecular processes can be achieved through a multichannel quantum-defect theory (MQDT) (see Ref. [@gao05a] and references therein), especially through an $S$ matrix formulation in terms of quantum reflection and transmission amplitudes associated with the long-range potential [@gao08a]. The theory, which is a multichannel generalization of the $S$ matrix formulation of Ref. [@gao08a], gives $$\begin{aligned} S^{(F_t)} &=& -(-1)^{l}\left[r^{(oi)}_{oo} +t^{(io)}_{oo}S^{c}_{\mathrm{eff}} (I-r^{(io)}_{oo}S^{c}_{\mathrm{eff}})^{-1}t^{(oi)}_{oo}\right] \;, \label{eq:Smqdt1} \\ &=& -(-1)^{l}\left\{r^{(oi)}_{oo} +t^{(io)}_{oo}S^{c}_{\mathrm{eff}} \left[\sum_{m=0}^{\infty}(r^{(io)}_{oo}S^{c}_{\mathrm{eff}})^m \right]t^{(oi)}_{oo}\right\} \;. \label{eq:Smqdt1e}\end{aligned}$$ Here $(-1)^l$ is a diagonal matrix with elements $(-1)^{l_j}$ for channel $j$. The $r^{(oi)}_{oo}$ and $t^{(oi)}_{oo}$ are diagonal matrices for the open channels with elements $r^{(oi)}_{l_j}(\epsilon_{sj})$ and $t^{(oi)}_{l_j}(\epsilon_{sj})$ representing the (complex) quantum reflection and the quantum transmission amplitudes, respectively, for molecules going outside-in (approaching each other) [@gao08a]. They are universal functions of scaled energies, $\epsilon_{sj}\equiv (E-E_j)/s_{Ej}$, that are uniquely determined by the exponent, $\alpha_j$, of the long-range interaction, $-C_{\alpha j}/R^{\alpha_j}_j$, in channel $j$, and the $l_j$. 
Such long-range interactions have length scales $\beta_{\alpha j}=(2\mu_j C_{\alpha j}/\hbar^2)^{1/(\alpha_j-2)}$ and corresponding energy scales $s_{Ej}=(\hbar^2/2\mu_j)(1/\beta_{\alpha j}^2)$, associated with them. The $r^{(io)}_{oo}$ and $t^{(io)}_{oo}$ are similar, except that their elements are amplitudes for molecules going inside-out (moving away from each other) [@gao08a]. The $S^{c}_{\mathrm{eff}}$ is an effective short-range $S$ matrix [@gao08a], after the elimination of the closed channels [@gao05a]. It has the physical meaning of being an effective reflection amplitudes by the inner potential. Equation (\[eq:Smqdt1\]) for the $S$ matrix has a clear physical interpretation as discussed in Ref. [@gao08a]. In particular, the $m$-th term in its expansion, Eq. (\[eq:Smqdt1e\]), corresponds to the contribution from a path in which the fragments are reflected $m+1$ times by the inner potential. Further simplification can be achieved by recognizing that $r^{(io)}_{l_j}(\epsilon_{sj})\approx 0$ for $\epsilon_{sj}\gg s_{Ej}$ [@gao08a]. Dividing all open channels into elastic and near-degenerate channels with $\epsilon_{sj}<\sim s_{Ej}$, and other channels with $\epsilon_{sj}\gg s_{Ej}$, we have $$\begin{aligned} S^{(F_t)}_{ei} &\approx& -(-1)^{l_e}\left\{r^{(oi)}_{l_i} \delta_{ei}\right.\nonumber\\ & &+\left.t^{(io)}_{l_e}\left[\widetilde{S}^{c}_{\mathrm{eff}} (1-r^{(io)}\widetilde{S}^{c}_{\mathrm{eff}})^{-1}\right]_{ei}t^{(oi)}_{l_i}\right\} \;, \label{eq:Smqdt2}\end{aligned}$$ where $\widetilde{S}^{c}_{\mathrm{eff}}$ is a submatrix of the effective short-range $S^{c}_{\mathrm{eff}}$ that includes only the elastic and other near-degenerate channels for which the quantum reflection amplitude $r^{(io)}_l$ differs substantially from zero. A number of different theories and models, both exact and approximate, can be derived from Eq. (\[eq:xsur\]), and either Eq. (\[eq:Smqdt1\]) or (\[eq:Smqdt2\]). The universal model to be presented here, which we call the quantum Langevin (QL) model, results from the assumption of no reflection by the inner potential, namely, $$\widetilde{S}^{c}_{\mathrm{eff}}\approx 0 \;. \label{eq:LangAssump}$$ It is a rigorous mathematical representation of the Langevin assumption [@lan05; @fer06] in a quantum theory. In plain language, it assumes that whenever two molecules come sufficiently close to each other, so many “bad” things can happen that they can never get out of it in their initial configurations. It can be expected to be a good approximation whenever there are a large number of open exit channels that are strongly coupled to the entrance channel in the inner region. For it to be satisfied in the limit of zero energy, the reactions and inelastic processes under consideration have to be at least exoergic. Under the Langevin assumption, Eq. (\[eq:Smqdt2\]) gives $$S^{(F_t)}_{ei} \approx -(-1)^{l_i}r^{(oi)}_{l_i}\delta_{ei}\;. \label{eq:SQT}$$ It implies that the elastic $S$ matrix elements in the QL model, and therefore the total cross section and the corresponding total rate for inelastic and reactive processes, are all described by universal functions that are uniquely determined by the long-range interaction in the entrance channel. Substituting Eq. (\[eq:SQT\]) into Eq. (\[eq:xsur\]) and subsequently into Eq. 
(\[eq:tave\]), the total rate of inelastic and reactive processes in the QL model can be written as $$K(T) = s_K {\cal K}^{(\alpha)}(T_s) \;.$$ Here $s_K$ is the rate scale corresponding to the long-range, $-C_\alpha/R^\alpha$, interaction in the entrance channel. $$s_K = (\hbar/\mu\beta_{\alpha})\pi\beta_{\alpha}^2 = \pi\hbar\beta_{\alpha}/\mu \;,$$ in which $\hbar/\mu\beta_{\alpha}$ is the velocity scale corresponding to the length scale $\beta_{\alpha}$. ${\cal K}^{(\alpha)}(T_s)$ is a universal function of the scaled temperature, $T_s = T/(s_E/k_B)$, that is uniquely determined by the exponent $\alpha$. Specifically, $${\cal K}^{(\alpha)}(T_s) = \frac{2}{\sqrt{\pi}} \int_0^\infty dx\: x^{1/2} e^{-x} {\cal W}^{(\alpha)}(T_s x)\;,$$ where ${\cal W}^{(\alpha)}(\epsilon_s)$ is a scaled total rate before thermal averaging. It depends on energy only through the scaled energy $\epsilon_s = \epsilon/s_E$, and has contributions from all partial waves: $${\cal W}^{(\alpha)}(\epsilon_s) = \sum_l{\cal W}^{(\alpha)}_{l}(\epsilon_s) \;.$$ Here ${\cal W}^{(\alpha)}_{l}$ is a scaled partial rate given by $${\cal W}^{(\alpha)}_{l}(\epsilon_s) =(2l+1){\cal T}^{c(\alpha)}_l(\epsilon_s)/\epsilon_s^{1/2} \;,$$ in which ${\cal T}^{c(\alpha)}_l(\epsilon_s)=|t^{(oi)}_l(\epsilon_s)|^2$ is the quantum transmission probability through the long-range potential at the scaled energy $\epsilon_s$ and for partial wave $l$ [@gao08a]. This QL model for reactions and inelastic processes is applicable to both neutral-neutral systems, for which $\alpha=6$ corresponding to the van der Waals potential, and charge-neutral systems, for which $\alpha=4$ corresponding to the polarization potential. We focus here on the neutral-neutral case to make connection with existing theories and experiments [@osp10; @jul09; @idz10; @que10]. The results for charge-neutral systems will be presented elsewhere. For $\alpha=6$, the quantum transmission probability through the long-range potential, ${\cal T}^{c(\alpha)}_l(\epsilon_s)$, which is the only quantity required to determine the universal rate functions in the QL model, can be found analytically by substituting Eqs. (A1)-(A4) of Ref. [@gao08a] into the Eq. (52) of the same reference. The result is $${\mathcal T}^{c(6)}_l(\epsilon_s) = \frac{2M_{\epsilon_s l}[\cos(\pi\nu)-\cos(3\pi\nu)]} {1-2M_{\epsilon_s l}\cos(3\pi\nu)+M_{\epsilon_s l}^2}\;. \label{eq:tp6}$$ Here $\nu$ is the characteristic exponent for $-1/R^6$ potential [@gao98a], and $$\begin{aligned} M_{\epsilon_s l}(\nu) &=& |\Delta|^{2\nu}\left[\frac{\Gamma(1-\nu)}{\Gamma(1+\nu)}\right] \left[\frac{\Gamma(1+\nu_0-\nu)}{\Gamma(1+\nu_0+\nu)}\right] \nonumber\\ & &\times \left[\frac{\Gamma(1-\nu_0-\nu)}{\Gamma(1-\nu_0+\nu)}\right] \left[\frac{C_{\epsilon_s l}(-\nu)}{C_{\epsilon_s l}(\nu)}\right] \;, \label{eq:Mnu}\end{aligned}$$ where $\Delta = \epsilon_s/16$, $\nu_0 = (2l+1)/4$, and $$C_{\epsilon_s l}(\nu) = \prod_{j=0}^{\infty} Q(\nu+j) \;, \label{eq:cj}$$ in which $Q(\nu)$ is given by a continued fraction: $$Q(\nu) = \frac{1}{1-\Delta^2\frac{1}{(\nu+1) [(\nu+1)^2-\nu_0^2](\nu+2)[(\nu+2)^2-\nu_0^2]} Q(\nu+1)} \;. \label{eq:Qcf}$$ The resulting universal rate function, ${\cal K}^{(\alpha)}(T_s)$, applicable to neutral-neutral distinguishable molecules, is illustrated in Figure \[fig:urf6\]. Similar results can be obtained for neutral-neutral interactions of identical molecules, following considerations similar to those of Ref. [@gao96]. 
They are given generally by a combination of two universal rate functions defined by $${\cal K}^{S(\alpha)}(T_s) = \frac{2}{\sqrt{\pi}} \int_0^\infty dx\: x^{1/2} e^{-x} {\cal W}^{S(\alpha)}(T_s x)\;,$$ where $${\cal W}^{S(\alpha)}(\epsilon_s) = 2\sum_{l=\mathrm{even}}{\cal W}^{(\alpha)}_{l}(\epsilon_s) \;,$$ and $${\cal K}^{A(\alpha)}(T_s) = \frac{2}{\sqrt{\pi}} \int_0^\infty dx\: x^{1/2} e^{-x} {\cal W}^{A(\alpha)}(T_s x)\;,$$ where $${\cal W}^{A(\alpha)}(\epsilon_s) = 2\sum_{l=\mathrm{odd}}{\cal W}^{(\alpha)}_{l}(\epsilon_s) \;.$$ For example, in terms of ${\cal K}^{S(\alpha)}$ and ${\cal K}^{A(\alpha)}$, $K^S(T)=s_K{\cal K}^{S(\alpha)}$ gives the rate for identical bosonic molecules in the same internal ($M$) state, and $K^A(T)=s_K{\cal K}^{A(\alpha)}$ gives the rate for identical fermionic molecules in the same internal state. The three rate functions are related by ${\cal K}^{(\alpha)}=({\cal K}^{S(\alpha)}+{\cal K}^{A(\alpha)})/2$, and are all illustrated in Figure \[fig:urf6\] for $\alpha=6$. At ultracold temperatures such that $T_s \ll 1$, a QDT expansion [@gao09a] of ${\mathcal T}^{c(6)}_l(\epsilon_s)$ gives $${\cal K}^{S(6)}(T_s) = 8\bar{a}_{sl=0}\left[ 1-\frac{4\bar{a}_{sl=0}}{\sqrt{\pi}}T_s^{1/2} +3\bar{a}_{sl=0}^2T_s + O(T_s^{3/2})\right]\;, \label{eq:KSexp}$$ where $\bar{a}_{sl=0} = 2\pi/[\Gamma(1/4)]^{2} \approx 0.4779888$ is the scaled mean scattering length for $l=0$ [@gao09a], $${\cal K}^{A(6)}(T_s) = 36\bar{a}_{sl=1}T_s\left[ 1-\frac{16\bar{a}_{sl=1}}{\sqrt{\pi}}T_s^{3/2} + O(T_s^{2})\right]\;, \label{eq:KAexp}$$ where $\bar{a}_{sl=1} = [\Gamma(1/4)]^{2}/36\pi \approx 0.1162277$ is the scaled mean scattering length for $l=1$ [@gao09a], and $$\begin{aligned} {\cal K}^{(6)}(T_s) &=& 4\bar{a}_{sl=0} -\frac{(4\bar{a}_{sl=0})^2}{\sqrt{\pi}}T_s^{1/2} \nonumber\\ & &+\left(12\bar{a}_{sl=0}^3+18\bar{a}_{sl=1}\right)T_s + O(T_s^{3/2})\;. \label{eq:Kexp} \end{aligned}$$ At high temperatures as characterized by $T_s\gg 1$, it is straightforward to show, from the semiclassical limit of the transmission probabilities [@gao08a], that $${\cal K}^{S(6)}(T_s)\approx{\cal K}^{A(6)}(T_s) \approx {\cal K}^{(6)}(T_s)\sim \frac{2^{4/3}\Gamma(2/3)}{\sqrt{\pi}}T_s^{1/6} \;, \label{eq:Gorin}$$ in agreement with the classical Gorin model [@gor38; @fer06]. All scaled results can be put on absolute scales using a single parameter, the $C_6$ coefficient for the entrance channel, from which both the temperature scale $s_E/k_B$ and the rate scale $s_K$ can be determined [@sup]. In the Wigner threshold region, in which the rates are accurately characterized by the first terms of Eqs. (\[eq:KSexp\])-(\[eq:Kexp\]), our results are consistent with those of Julienne and Idziaszek [@jul09; @idz10]. Outside of this region, both ${\cal K}^{(6)}$ and ${\cal K}^{S(6)}$ are predicted to first decrease with temperature, a behavior that deviates strongly from the prediction of the classical Gorin model. Specifically, ${\cal K}^{(6)}$ is predicted to reach a minimum value of ${\cal K}^{(6)}_{\mathrm{min}}\approx 1.587$ at $T_{s\mathrm{min}}^{(6)}\approx 0.1154$, for a drop of about 17% from its value at zero temperature. The ${\cal K}^{S(6)}$ is predicted to reach a minimum value of ${\cal K}^{S(6)}_{\mathrm{min}}\approx 1.908$ at $T^{S(6)}_{s\mathrm{min}}\approx 1.114$, for a drop of about 50% from its value at zero temperature. 
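As a minimal numerical sketch, assuming Python with numpy and scipy, the thermal average defining ${\cal K}^{(\alpha)}(T_s)$ can be evaluated by generalized Gauss-Laguerre quadrature once a transmission probability is supplied. In place of the exact quantal ${\cal T}^{c(6)}_l$ of Eqs. (\[eq:tp6\])-(\[eq:Qcf\]), the sketch uses a classical-capture step function as a stand-in, with unit transmission above the height $\tfrac{2}{3\sqrt{3}}[l(l+1)]^{3/2}$ (in units of $s_E$) of the centrifugal barrier of the scaled $-1/r^6$ potential, so it reproduces only the semiclassical Gorin limit of Eq. (\[eq:Gorin\]) at large $T_s$ and not the quantum threshold behavior or the predicted minima.

```python
import numpy as np
from scipy.special import gamma, roots_genlaguerre

def capture_transmission(l, eps_s):
    """Classical-capture stand-in for the transmission probability at alpha = 6:
    unit transmission above the scaled centrifugal barrier (2/(3*sqrt(3)))*[l(l+1)]**1.5
    of the -1/r^6 potential, zero below.  This is NOT the quantal T^c(6)_l of
    Eqs. (tp6)-(Qcf); it only recovers the semiclassical (Gorin) behavior."""
    barrier = 2.0 * (l * (l + 1.0)) ** 1.5 / (3.0 * np.sqrt(3.0))
    return (eps_s > barrier).astype(float)

def scaled_total_rate(eps_s, transmission=capture_transmission, l_max=300):
    """W(eps_s) = sum_l (2l+1) T^c_l(eps_s) / sqrt(eps_s); restrict the sum to even
    (odd) l and multiply by 2 for identical bosons (fermions) in the same state."""
    l = np.arange(l_max + 1, dtype=float)
    return np.sum((2.0 * l + 1.0) * transmission(l, eps_s)) / np.sqrt(eps_s)

def universal_rate(T_s, transmission=capture_transmission, n_quad=80, l_max=300):
    """K(T_s) = (2/sqrt(pi)) int_0^inf x^(1/2) exp(-x) W(T_s x) dx, evaluated with
    generalized Gauss-Laguerre quadrature for the weight x^(1/2) exp(-x)."""
    x, w = roots_genlaguerre(n_quad, 0.5)
    vals = np.array([scaled_total_rate(T_s * xi, transmission, l_max) for xi in x])
    return 2.0 / np.sqrt(np.pi) * np.dot(w, vals)

gorin = 2.0 ** (4.0 / 3.0) * gamma(2.0 / 3.0) / np.sqrt(np.pi)
for T_s in (1.0, 10.0, 100.0):
    print(T_s, universal_rate(T_s), gorin * T_s ** (1.0 / 6.0))
```

With the exact ${\cal T}^{c(6)}_l(\epsilon_s)$ substituted for the stand-in, the same quadrature should also reproduce the threshold expansions and the minima discussed here.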
For the JILA experiment [@osp10], $T_{s\mathrm{min}}^{(6)}$ translates, using the $C_6$ coefficients of Kotochigova [@kot10; @sup], to $T_{\mathrm{min}}^{(6)}\approx 2.58$ $\mu$K for $^{40}$K$^{87}$Rb+$^{40}$K$^{87}$Rb in different internal states, and to $T_{\mathrm{min}}^{(6)}\approx 11.9$ $\mu$K for $^{40}$K+$^{40}$K$^{87}$Rb. It is worth noting that an experimental measurement of either $T_{\mathrm{min}}^{(6)}$ or $T^{S(6)}_{\mathrm{min}}$ would constitute a measurement of the $C_6$ coefficient, a fact that can be valuable especially for more complex molecules for which theoretical calculations of $C_6$ [@der99; @kot10] become increasingly difficult and unreliable. At higher temperatures, our results show how the quantum effects, including those of quantum statistics, gradually diminish, and all rates approach that of the classical Gorin model [@gor38; @fer06]. As illustrated in Fig. \[fig:urf6\], such a transition from quantum to semiclassical behavior occurs over a range of the van der Waals temperature scale $s_E/k_B$. The QL model gives the total rate that includes both reactive and inelastic processes. For experiments with only reactive channels open [@osp10], it gives the total rate of reactions. For experiments with no open reactive channels, it gives the total rate of inelastic processes. In all cases, the requirement for its validity is that there are many open channels that are strongly coupled to the entrance channel by the short-range interactions. Of interest in the context of cold-atom physics, the QL model serves to unify theories of ultracold chemistry [@jul09; @idz10; @que10; @kot10] with theories for atom-atom [@orz99], atom-molecule [@hud08] and molecule-molecule inelastic processes. For example, for a vibrationally highly excited molecule, except for Feshbach molecules with very small binding energies [@kno10], the theory predicts that its collisional lifetime is approximately independent of its initial state, and that the rates for atom-molecule and molecule-molecule inelastic processes are related. More specifically, for Cs$_2$ in a highly excited rovibrational state [@dan08], it predicts that the Cs$_2$-Cs$_2$ inelastic rate should have a minimum around 6.31 $\mu$K (assuming that they are prepared in the same state), and the Cs$_2$-Cs inelastic rate has a minimum around 1.70 $\mu$K. More detailed discussion of such applications will be presented elsewhere. In conclusion, we have presented a universal model of exoergic bimolecular reactions and/or inelastic processes that is applicable over a wide range of temperatures, illustrating the evolution from quantum behavior to semiclassical behavior. It is an important baseline model in which rates for different systems differ from each other only in scaling, and has an intriguing and useful property of being more accurate for more complex systems. Simple analytic formulas, to higher orders than those of Julienne and Idziaszek [@jul09; @idz10], and applicable over a substantially wider range of temperatures, are also presented. Equally important, we believe, is that the underlying MQDT formulation, such as Eq. (\[eq:Smqdt2\]), lays a solid foundation for new types of theories of reactions and inelastic processes, either rigorous or approximate, that go further beyond the QL model. I thank Jun Ye and Timur Tscherbul for motivations and helpful discussions. This work was supported by NSF under the Grant number PHY-0758042.
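As a minimal sketch of the conversion between scaled and absolute temperatures discussed above, assuming Python with scipy.constants and purely illustrative input values (the $C_6$ coefficient and reduced mass below are placeholders, not the Kotochigova values used for the numbers quoted in the text), the van der Waals scales $\beta_6=(2\mu C_6/\hbar^2)^{1/4}$, $s_E=\hbar^2/(2\mu\beta_6^2)$ and $s_K=\pi\hbar\beta_6/\mu$, and the absolute temperatures corresponding to $T^{(6)}_{s\mathrm{min}}$ and $T^{S(6)}_{s\mathrm{min}}$, can be evaluated as follows.

```python
import numpy as np
from scipy import constants as const

def vdw_scales(C6_au, mu_amu):
    """Length, energy and rate scales for a -C6/R^6 entrance channel:
    beta_6 = (2 mu C6 / hbar^2)^(1/4), s_E = hbar^2/(2 mu beta_6^2), s_K = pi hbar beta_6 / mu.
    C6 in atomic units (E_h a_0^6), reduced mass in atomic mass units."""
    Eh = const.physical_constants['Hartree energy'][0]   # J
    a0 = const.physical_constants['Bohr radius'][0]      # m
    C6 = C6_au * Eh * a0 ** 6                            # J m^6
    mu = mu_amu * const.u                                # kg
    beta6 = (2.0 * mu * C6 / const.hbar ** 2) ** 0.25    # m
    s_E = const.hbar ** 2 / (2.0 * mu * beta6 ** 2)      # J
    s_K = np.pi * const.hbar * beta6 / mu                # m^3 s^-1
    return beta6, s_E, s_K

# placeholder inputs: reduced mass of two KRb molecules (127/2 amu) and an assumed C6;
# the actual Kotochigova C6 coefficients are needed to reproduce the numbers in the text
beta6, s_E, s_K = vdw_scales(C6_au=1.6e4, mu_amu=127.0 / 2.0)
print("s_E/k_B =", s_E / const.k, "K")
print("s_K     =", s_K * 1e6, "cm^3/s")
print("T_min^(6)  ~", 0.1154 * s_E / const.k, "K")  # distinguishable molecules
print("T_min^S(6) ~", 1.114 * s_E / const.k, "K")   # identical bosons in the same state
```

Inverting the same relations, a measured $T_{\mathrm{min}}$ would return the corresponding $C_6$, as noted above.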
{ "pile_set_name": "ArXiv" }
--- abstract: | In this paper, we provide the Rényi entropy and complexity measure for a novel, flexible class of skew-gaussian distributions and their related families, as a characteristic form of the skew-gaussian Shannon entropy. We give closed expressions considering a more general class of closed skew-gaussian distributions and the weighted moments estimation method. In addition, closed expressions of Rényi entropy are presented for extended skew-gaussian and truncated skew-gaussian distributions. Finally, additional inequalities for skew-gaussian and extended skew-gaussian Rényi and Shannon entropies are reported.\ address: - 'Division of Fisheries Research, Fisheries Development Institute, Blanco 839, Valparaíso, Chile' - 'Department of Mathematics, Universidad Técnica Federico Santa María, Valparaíso, Chile' author: - 'Javier E. Contreras-Reyes' title: | Rényi entropy and complexity measure for\ skew-gaussian distributions and related families --- skew-gaussian; Rényi entropy; complexity; weighted moments; Jensen’s inequality Introduction ============ The family of skew-gaussian distributions has been popularized by @Azzalini_1985 and ever since it has been discussed extensively in the literature. Such discussions include a wide variety of skewed models in addition to having gaussian distribution as a special case and flexibility in capturing skewness in the data [@Azzalini_Dalla-Valle_1996; @Azzalini_Capitanio_1999; @Azzalini_2013]. In this sense, @Gonzalez-Farias_et_al_2004 present the closed skew-gaussian distribution as an extension of the skew-gaussian case, but closed under operations such as sums, marginalization, and linear conditioning [@Rezaie_et_al_2014]. Another generalization of the skew-gaussian distribution is the extended skew-gaussian distribution [@Capitanio_et_al_2003] that adds a fourth real parameter to accommodate both skewness and heavy tails. In some cases where observed variables can be simultaneously skewed and restricted to a fixed interval, the truncated skew-gaussian distribution is a good choice for those applications, especially for environmental and biological variables in which the observations are positives [@Flecher_et_al_2010]. In many applications, the empirical distribution of some observed variables was modeled by a skew-gaussian distribution. For example, the closed skew-gaussian distribution is used by @Rezaie_et_al_2014 to simulate seismic amplitude variations. @Contreras-Reyes_Arellano-Valle_2012 consider the skew-gaussian distribution for seismic magnitudes of aftershocks catalogue of the 2010 Maule earthquake in Chile; @Arellano-Valle_et_al_2013 for the optimization of ozone’s monitoring network; and @Figiel_2014 for a digital reconstruction of nanocomposite morphologies from TEM (Transmission Electron Microscopy) images. An implementation of the extended skew-gaussian (in logarithmic form) can be found in @Zhou_Wang_2008 for pricing of both Asian and basket options. As mentioned above, @Flecher_et_al_2010 considers the truncated skew-gaussian distribution to fit the daily relative humidity measurements. See more applications in @Genton_2004. More recently, @Contreras-Reyes_Arellano-Valle_2012 and @Arellano-Valle_et_al_2013 compute the Kullback-Leibler divergence measure for skew-gaussian distribution and Shannon entropy for the full class of skew-elliptical distributions, respectively. They highlight that the Kullback-Leibler information measure should be represented in quadratic form, including a non-analytical expected value. 
In addition, they gave the Kullback-Leibler divergence of a multivariate skew-gaussian distribution with respect to multivariate gaussian distribution. Information measure applications dealing with skewed data have been performed by @Contreras-Reyes_Arellano-Valle_2012, @Arellano-Valle_et_al_2013, @Contreras-Reyes_2014 and references therein. In this work, we focus on the Rényi entropy [@Renyi_1970] as a characteristic form of the Shannon entropy to give a closed expression of skew-gaussian densities. Additionally, the LMC complexity measure [@Lopez-Ruiz_et_al_1995] is derived by the difference between the extensive Rényi entropy and Shannon entropy [@Yamano_2004]. To do this, we briefly describe the main properties of closed skew-gaussian distributions. Finally, we compute the Rényi entropy and complexity measure for the extended skew-gaussian and univariate truncated skew-gaussian densities.\ Rényi entropy and complexity measure ==================================== Consider the $\alpha$th-order Rényi entropy [@Renyi_1970] of probability density $f(\bx)$ on a variable $\bx\in\Delta\subset\mathds{R}^d$: $$\label{red} R_{\alpha}[f]=\frac{1}{1-\alpha}\ln\int [f(\bx)]^{\alpha}d\bx,$$ where normalization to unity as given by $\int f(\bx)d\bx=1$ [@Sanchez-Moreno_et_al_2014]. @Golshani_Pasha_2010 provide some important properties of the Rényi entropy: 1. $R_{\alpha}[f]$ can be negative, 2. $R_{\alpha}[f]$ is invariant under a location transformation, 3. $R_{\alpha}[f]$ is not invariant under a scale transformation, and 4. for any $\alpha_1<\alpha_2$, $\bx\in\Delta$, we have $R_{\alpha_1}[f]\geq R_{\alpha_2}[f]$, which are equal if and only if $\bx$ is uniformly distributed. From (\[red\]), the Shannon entropy is obtained by the limit $$\label{sha} S[f]=\lim_{\alpha\rightarrow 1}R_{\alpha}[f]=-\int f(\bx)\ln f(\bx)d\bx$$ by applying l’Hôpital’s rule to $R_{\alpha}[f]$ with respect to $\alpha$ [@Renyi_1970]. This measure is the expected value of $g(\bx)=-\ln f(\bx)$ with respect to $f(\bx)$, i.e., $S[f]=\langle g(\bx)\rangle$ [@Liu_et_al_2012]. Hereafter, we will refer to this as the expected information of $g(\bx)$ in $\bx$. See @Cover_Thomas_2006 for additional properties of the Shannon entropy. [@Dembo_et_al_1991; @Cover_Thomas_2006]. Let $\bx$ be gaussian with mean vector $\bmu\in\mathds{R}^d$ and $\bJ$ is a $d\times d$ variance matrix (with determinant $|\bJ|>0$). Then, the Rényi and Shannon entropies of $\bx$ are given by $$\begin{aligned} R_{\alpha}[f]&=&\frac{1}{2}\ln[(2\pi)^d|\bJ|] + \frac{d\ln\alpha}{2(\alpha-1)},\quad 1<\alpha<\infty,\label{rn}\\ S[f]&=&\frac{1}{2}\ln[(2\pi e)^d|\bJ|],\label{sn}\end{aligned}$$ respectively. Another important concept is the statistical complexity that measures the randomness and structural correlations of a known system [@Carpi_et_al_2011]. @Lopez-Ruiz_et_al_1995 proposed a measure of statistical complexity (LMC) in order to determine the [*disequilibrium*]{} of the system attributed to entropy measure [@Anteneodo_Plastino_1996; @Sanchez-Moreno_et_al_2014]. LMC measure is defined as the product $$\label{LMC} C_{LMC}[f]=e^{S[f]-R_2[f]},$$ where $R_2[f]$ is the quadratic Rényi entropy of $\bx$ ($\alpha=2$). 
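As a minimal numerical sketch of Eqs. (\[rn\]), (\[sn\]) and (\[LMC\]), assuming Python with numpy and an arbitrary example covariance matrix, the gaussian Rényi and Shannon entropies and the resulting LMC measure can be evaluated directly; in the gaussian case $S[f]-R_{2}[f]=\tfrac{d}{2}(1-\ln 2)$, so $C_{LMC}[f]=(e/2)^{d/2}$ independently of $\bJ$, which the last line checks.

```python
import numpy as np

def renyi_gaussian(Sigma, alpha):
    """R_alpha of a d-variate Gaussian with covariance Sigma, Eq. (rn), alpha > 1."""
    d = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (d * np.log(2.0 * np.pi) + logdet) + d * np.log(alpha) / (2.0 * (alpha - 1.0))

def shannon_gaussian(Sigma):
    """S of a d-variate Gaussian, Eq. (sn), i.e. the alpha -> 1 limit of Eq. (rn)."""
    d = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + logdet)

def lmc_gaussian(Sigma):
    """C_LMC = exp(S - R_2), Eq. (LMC), specialized to the Gaussian case."""
    return np.exp(shannon_gaussian(Sigma) - renyi_gaussian(Sigma, 2.0))

Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
print(lmc_gaussian(Sigma), (np.e / 2.0) ** (Sigma.shape[0] / 2.0))  # both ~ 1.3591
```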
@Yamano_2004 provide an extensive entropy instead of an additive Shannon entropy in (\[LMC\]), characterised as a difference between the $\alpha$th-order Rényi entropy and quadratic Rényi entropy as $$\label{CM} C_{\alpha}[f]=e^{R_{\alpha}[f]-R_2[f]}.$$ Note that $C_{\alpha}[f]$ reflects the shape of the distribution of $\bx$ and takes unity for all distributions when $\alpha=2$. In addition, $\mathcal{C}_{\alpha}$ satisfies a great variety of interesting mathematical and physical properties. Let us just recall here the following properties: 1. $C_{\alpha}[f]>1$, $\forall\,\alpha\leq2$, and, $0<C_{\alpha}[f]\leq1$, $\forall\,\alpha>2$; 2. $C_{\alpha}[f]$ is invariant under a location and scale transformation in the distribution of $\bx$; and 3. is invariant under replications of the original distribution of $\bx$.\ Skew-gaussian distribution and related families =============================================== The closed skew-gaussian distribution has interesting properties inherited from the Gaussian distribution and corresponds to a generalization of the skew-gaussian distribution. We briefly describe some of its inferential properties and present the weighted moments method in Proposition 2 [@Flecher_et_al_2009], necessary to calculate Rényi entropy of skew-gaussian random vectors.\ Closed skew-gaussian distributions ---------------------------------- Concerning the definition of @Flecher_et_al_2009 and @Gonzalez-Farias_et_al_2004, let $\by\in\Delta\subset\mathds{R}^d$ be a random vector with closed skew-gaussian distribution denoted as $CSN_{d,s}(\bmu,\bJ,\bD,\nu,\bA)$ and with density function $$\label{csn} f_{d,s}(\by)=\phi_d(\by;\bmu,\bJ)\frac{\Phi_s(\bD^{\top}(\by-\bmu);\bnu,\bA)}{\Phi_s(\bzero;\bnu,\bA+\bD^{\top}\bJ\bD)},$$ where $\bmu\in\mathds{R}^d$, $\bnu\in\mathds{R}^s$, $\bJ\in\mathds{R}^{d\times d}$ and $\bA\in\mathds{R}^{s\times s}$ are both covariance matrices, $\bD\in\mathds{R}^{d\times s}$, $\bD^{\top}$ denotes the transposed $\bD$ matrix, $$\phi_d(\by;\bmu,\bJ)=\frac{1}{(2\pi)^{d/2}|\bJ|^{1/2}}\exp\left(-\frac{1}{2}(\by-\bmu)^{\top}\bJ^{-1}(\by-\bmu)\right)$$ and $\Phi_d(\by;\bmu,\bJ)$ are the probability function (pdf) and cumulative distribution function, respectively, of the $d$-dimensional gaussian distribution with mean vector $\bmu$ and variance matrix $\bJ$. The closed skew-gaussian distribution is closed under translations, scalar multiplications, and full, row rank linear transformations [@Gonzalez-Farias_et_al_2004; @Genton_2004]. Let $\bT\in\mathds{R}^{n\times d}$ be a matrix with rank $n$ such that $d\leq n$, then $$\label{acsn} \bT\by=CSN_{n,s}(\bT\bmu,\tilde{\bJ},\tilde{\bD},\bnu,\tilde{\bT})$$ where $\tilde{\bJ}=\bT^{\top}\bJ\bT$, $\tilde{\bD}=\bD^{\top}\bJ \bT\tilde{\bJ}^{-1}$, and $\tilde{\bT}=\bA + \bD^{\top}\bJ\bD-\tilde{\bD}^{\top}\tilde{\bJ}\tilde{\bD}$ [@Genton_2004 see Proposition 2.3.1]. A particular case of (\[acsn\]), is the standardised random vector $\bz_0=\bJ^{-1}(\by-\bmu)$. In this case, Eq. (\[csn\]) is rewritten as $$\label{scsn} f_{d,s}(\bz_0)=\phi_d(\bz_0)\frac{\Phi_s(\bD^{\top}\bJ^{1/2}\bz_0;\bnu,\bA)}{\Phi_s(\bzero;\bnu,\bA + \bD^{\top}\bJ\bD)}.$$ Given that the closed skew-gaussian distribution is closed under translations and by property (\[acsn\]), the standardised random vector $\bZ_0$ follows $CSN_{d,s}(\bzero,\bI_d,\bD^{\top}\bJ^{1/2},\bnu,\bA)$, where $\bI_d$ denotes the $d$-dimensional identity matrix. For more details, see @Flecher_et_al_2009 and @Genton_2004. \[Pf\] [@Flecher_et_al_2009]. 
Let $\bY$ be a $CSN_{d,s}(\bmu,\bJ,\bD,\bzero,\bA)$, $r$ a positive integer and $h(\by)=h(y_1,\ldots,y_d)$ be any real valued function such that $\langle h(\bY)\rangle$ is finite, then $$\label{ew} \langle h(\bY)[\Phi_d(\bY;\bzero,\bI_d)]^r\rangle=\langle h(\tilde{\bY})\rangle \frac{\Phi_{rd+s}(\bzero;\tilde{\bnu},\tilde{\bA}+\tilde{\bD}^{\top}\bJ\tilde{\bD})}{\Phi_s(\bzero;\bzero,\bA+\bD^{\top}\bJ\bD)},$$ where $\tilde{\bY}\sim CSN_{d,rd+s}(\bmu,\bJ,\tilde{\bD},\tilde{\bnu},\tilde{\bA})$ with $\tilde{\bD}=(\bE^{\top},\,\bD^{\top})$, $\bE$ a $d\times rd$ matrix defined by $\bE=(\bI_d,\ldots,\bI_d)$, $\tilde{\bnu}=(-\bmu,\ldots,-\bmu,\bzero_s)$ a $(rd+s)$ vector and $$\tilde{\bA}=\left( \begin{array}{cc} \bI_{rd} & \bzero \\ \bzero & \bA \\ \end{array} \right).$$ Skew-gaussian distribution -------------------------- A special case of closed skew-gaussian is the gaussian density when $\bD=\bzero$. When $s=1$, the skew-gaussian density function is obtained [@Azzalini_Dalla-Valle_1996; @Azzalini_Capitanio_1999; @Azzalini_2013]. For simplicity, a slight variant of the original definition is considered here. In this work it is posited that a random vector $\bZ\in\Delta\subset\mathbb{R}^d$ has a skew-gaussian distribution with mean vector $\bmu\in\mathbb{R}^d$, variance matrix $\bJ\in\mathbb{R}^{d\times d}$ and shape/skewness parameter ${\mbox{\protect\boldmath $\eta$}}\in\mathbb{R}^d$, denoted by $\bZ\sim SN_d(\bmu,\bJ,{\mbox{\protect\boldmath $\eta$}})$, if its probability density function is $$\begin{aligned} f(\bz)=2\phi_d(\bz;\bmu,\bJ)\Phi_1[{\mbox{\protect\boldmath $\eta$}}^{\top}(\bz-\bmu)].\label{SN-pdf}\end{aligned}$$ The mean vector and the variance matrix of $\bZ$ are $$\begin{aligned} \langle \bz\rangle&=&\bmu+\sqrt{\frac{2}{\pi}}\,\bdelta,\nonumber\\ \langle \bz^2\rangle&=&\bJ-\frac{2}{\pi}\bdelta\bdelta^{\top},\nonumber\end{aligned}$$ respectively, where $\bdelta=\bJ{\mbox{\protect\boldmath $\eta$}}/\sqrt{1+{\mbox{\protect\boldmath $\eta$}}^{\top}\bJ{\mbox{\protect\boldmath $\eta$}}}$ [@Azzalini_Capitanio_1999; @Contreras-Reyes_Arellano-Valle_2012]. \[T0\] Let $\bZ$ be a $SN_d(\bmu,\bJ,{\mbox{\protect\boldmath $\eta$}})$. Then: $$\label{re1} \int [f(\bz)]^{\alpha}d\bz=\psi_{\alpha,d}(\bJ)\, \frac{\Phi_{\alpha+1}(\bzero;\bzero,\tilde{\bJ})}{\Phi_1(0;0,\sigma^2)},\quad\alpha\in\mathbb{N},\,\alpha>1,$$ where $$\psi_{\alpha,d}(\bJ)=\frac{2^{\alpha}}{\alpha^{d/2}}[(2\pi)^d|\bJ|]^{(1-\alpha)/2},$$ $\tilde{\bJ}=\bI_{\alpha+1}+\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|^2\tilde{\bD}^{\top}\tilde{\bD}$, $\tilde{\bD}=({\bf 1}_{\alpha},\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|)^{\top}$, ${\bf 1}_{\alpha}$ is the $\alpha$-dimensional vector of ones, $\sigma^2=1+\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|^4$, $\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|=\tilde{{\mbox{\protect\boldmath $\eta$}}}^{\top}\tilde{{\mbox{\protect\boldmath $\eta$}}}$ and $\tilde{{\mbox{\protect\boldmath $\eta$}}}=\alpha^{-1/2}\bJ^{1/2}{\mbox{\protect\boldmath $\eta$}}$. By (\[red\]) and (\[re1\]), the Rényi entropy of a random variable $\bZ\sim SN_d(\bmu,\bJ,{\mbox{\protect\boldmath $\eta$}})$ is retrieved. Taking ${\mbox{\protect\boldmath $\eta$}}=\bzero$ in (\[re1\]), the Rényi entropy of the gaussian distribution given by (\[rn\]) is obtained. Lemma \[Pf\] allows the computing of the expected value of the cumulative density function of a gaussian density. 
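For integer orders $\alpha>1$, Proposition \[T0\] reduces $R_{\alpha}[f]$ to a single $(\alpha+1)$-dimensional gaussian orthant probability. A minimal sketch is given below, assuming Python with scipy and reading the notation as above (where $\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|$ denotes the squared norm $\tilde{{\mbox{\protect\boldmath $\eta$}}}^{\top}\tilde{{\mbox{\protect\boldmath $\eta$}}}$, and $\tilde{\bD}^{\top}\tilde{\bD}$ is taken as the outer product so that $\tilde{\bJ}$ is an $(\alpha+1)\times(\alpha+1)$ matrix); setting the skewness parameter to zero recovers the gaussian value of Eq. (\[rn\]).

```python
import numpy as np
from scipy.stats import multivariate_normal

def renyi_skew_normal(Sigma, eta, alpha):
    """R_alpha[f] for SN_d(mu, Sigma, eta) via Eqs. (red) and (re1); alpha is an integer > 1.
    The location mu drops out; only q = eta^T Sigma eta / alpha (the paper's ||eta_tilde||,
    i.e. the squared norm of eta_tilde) enters."""
    Sigma = np.asarray(Sigma, dtype=float)
    eta = np.asarray(eta, dtype=float)
    d = Sigma.shape[0]
    q = float(eta @ Sigma @ eta) / alpha
    v = np.append(np.ones(alpha), q)                       # D_tilde = (1_alpha, ||eta_tilde||)
    J_tilde = np.eye(alpha + 1) + q ** 2 * np.outer(v, v)  # I + ||eta_tilde||^2 * outer(D_tilde)
    num = multivariate_normal(np.zeros(alpha + 1), J_tilde).cdf(np.zeros(alpha + 1))
    den = 0.5                                              # Phi_1(0; 0, sigma^2) = 1/2
    _, logdet = np.linalg.slogdet(Sigma)
    log_psi = (alpha * np.log(2.0) - 0.5 * d * np.log(alpha)
               + 0.5 * (1.0 - alpha) * (d * np.log(2.0 * np.pi) + logdet))
    return (log_psi + np.log(num / den)) / (1.0 - alpha)

# quadratic Renyi entropy (alpha = 2) of a bivariate skew-normal; eta = 0 gives Eq. (rn)
print(renyi_skew_normal(np.eye(2), [2.0, 0.0], 2))
print(renyi_skew_normal(np.eye(2), [0.0, 0.0], 2), np.log(4.0 * np.pi))
```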
Considering the standarised closed skew-gaussian variable in (\[scsn\]), the Proposition \[T0\] is solved by (\[ew\]), by setting $\bnu=\bzero$ and $\bA=\bI_d$, with $d=s=1$. However, the case $\bnu\neq\bzero$ and $\bA\neq\bI_d$, $d>1$, is still an open problem and, it is useful to find the Rényi entropy for closed skew-gaussian distributions. By (\[red\]) and (\[scsn\]), the Shannon entropy for closed skew-gaussian distributions is rewritten as $$\begin{aligned} S[f]&=&-\langle\ln[f_{d,s}(\bY)]\rangle\nonumber\\ &=&\frac{1}{2}\ln|\bJ|-\ln[\Phi_s(\bzero;\bnu,\bA + \bD^{\top}\bJ\bD)] - \langle\ln[\phi_d(\bZ_0)\Phi_s(\tilde{\bD}^{\top}\bZ_0;\bnu,\bA)]\rangle\nonumber\\ &=&S[f_0]-\ln[\Phi_s(\bzero;\bnu,\bA + \bD^{\top}\bJ\bD)] - \langle\ln[\Phi_s(\tilde{\bD}^{\top}\bZ_0;\bnu,\bA)]\rangle\label{ShCSN},\end{aligned}$$ where $f_0$ is the standardised gaussian distribution and $S[f_0]=(1/2)\ln(2\pi e)$. \[NRH\] Let $\bZ\sim SN_d(\bmu,\bJ,{\mbox{\protect\boldmath $\eta$}})$, $\bZ_N\sim N_d(\bmu,\bJ)$, $\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|=\tilde{{\mbox{\protect\boldmath $\eta$}}}^{\top}\tilde{{\mbox{\protect\boldmath $\eta$}}}$ and $\tilde{{\mbox{\protect\boldmath $\eta$}}}=\bJ^{1/2}{\mbox{\protect\boldmath $\eta$}}$. Then, - $\displaystyle\begin{aligned}[t] R_{\alpha}[f]&=R_{\alpha}[f_0]-N_{\alpha}[f],\quad\alpha\in\mathbb{N},\,\alpha>1,\,\mbox{where}\\ \end{aligned}$ $$N_{\alpha}[f]=\frac{1}{\alpha-1}{\rm ln}\left[2^{\alpha}\frac{\Phi_{\alpha+1}(\bzero;\bzero,\tilde{\bJ})}{\Phi_1(0;0,\sigma^2)}\right]$$ is the so-called [*Negentropy*]{}, $R_{\alpha}[f_0]$ is given by (\[rn\]), and $\tilde{\bJ}$ and $\sigma^2$ are defined as in Proposition \[T0\]. - $\displaystyle\begin{aligned}[t] \lim_{\alpha\rightarrow 1}N_{\alpha}[f]=\langle{\rm ln}[2\Phi_1(\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|W)]\rangle. \end{aligned}$ - $\displaystyle\begin{aligned}[t] S[f]&=S[f_0]-\langle{\rm ln}[2\Phi_1(\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|W)]\rangle, \end{aligned}$ where $S[f_0]$ is given by (\[sn\]) and $W\sim SN_1(0,1,\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|)$. - $\displaystyle\begin{aligned}[t] S[f_0] - {\rm ln}(4e)&\leq S[f] \leq S[f_0],\,\forall\,{\mbox{\protect\boldmath $\eta$}}.\\ \end{aligned}$ @Contreras-Reyes_Arellano-Valle_2012 define the negentropy as the departure from gaussianity of the distribution of $\bZ$. Therefore, the skew-gaussian Rényi entropy corresponds to the difference between gaussian Rényi entropy and negentropy, that depends on the skewness parameter ${\mbox{\protect\boldmath $\eta$}}$. On the another hand, by setting $\bnu=\bzero$ and $\bA=\bI_d$ in (\[ShCSN\]) with $d=s=1$, we obtain the property (ii) of Corollary \[NRH\]. By properties (iii) and (iv), $$-0.967\leq S[f_0]-{\rm log}\,(4e)\leq S[f]$$ because, the minimum value of normal Shannon entropy is obtained for $d=1$ and, $$0\leq\langle\ln[2\Phi_1(\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|W)]\rangle\leq 2.386,$$ for all ${\mbox{\protect\boldmath $\eta$}}$. In addition, @Contreras-Reyes_Arellano-Valle_2012 reported a maximum value of this expected value equal to 2.339, using numerical approximations. Considering (\[red\]), (\[CM\]) and (\[re1\]); the complexity measure for skew-gaussian distribution is obtained.\ Extended skew-gaussian distributions ------------------------------------ Consider a slight variant of the extended skew-gaussian distribution proposed by @Capitanio_et_al_2003. 
Let $\bZ\sim ESN_d(\bmu,\bJ,{\mbox{\protect\boldmath $\eta$}},\tau)$, $\bZ\in\Delta\subset\mathbb{R}^d$, with mean vector $\bmu\in\mathbb{R}^d$, variance matrix $\bJ\in\mathbb{R}^{d\times d}$, shape/skewness parameter ${\mbox{\protect\boldmath $\eta$}}\in\mathbb{R}^d$, extended parameter $\tau\in\mathbb{R}$, and with pdf given by: $$\label{dmesn} p(\bz)=\frac{1}{\Phi_1(\tau)}\phi_d(\bz;\bmu,\bJ)\Phi_1[{\mbox{\protect\boldmath $\eta$}}^{\top}(\bz-\bmu)+\tilde{\tau}],$$ where $\bz\in\mathbb{R}^d$ and $\tilde{\tau}=\tau\,\sqrt{1+{\mbox{\protect\boldmath $\eta$}}^{\top}\bJ{\mbox{\protect\boldmath $\eta$}}}$. The mean vector and the variance matrix of $\bZ$ are $$\begin{aligned} \langle\bz\rangle&=&\bmu+\bdelta\zeta_1(\tau),\label{esn-mom1}\\ \langle\bz^2\rangle&=&\bJ-\zeta_1(\tau)[\tau+\zeta_1(\tau)]\bdelta\bdelta^\top,\label{esn-mom2}\end{aligned}$$ respectively; where $\zeta_1(\bz)=\phi(\bz)/\Phi_1(\bz)$ is the [*zeta*]{} function [@Azzalini_Capitanio_1999; @Capitanio_et_al_2003]. \[T2\] Let $\bZ$ be a $ESN_d(\bmu,\bJ,{\mbox{\protect\boldmath $\eta$}},\tau)$, $\bz\in\mathbb{R}^d$. Then: $$\label{re3} \int [f(\bz)]^{\alpha}d\bz=\psi_{\alpha,d}(\bJ)\langle\left[\frac{\Phi_1(W)}{2\Phi_1(\tau)}\right]^{\alpha}\rangle,\quad\alpha\in\mathbb{N},\,\alpha>1,$$ where $\psi_{\alpha,d}(\bJ)$ is defined as in Proposition \[T0\] and $W={\tilde{{\mbox{\protect\boldmath $\eta$}}}}^{\top}\bZ_0+\tilde{\tau}\sim ESN_1(\tilde{\tau},\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|^2,\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|,\tau)$, $\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|=\tilde{{\mbox{\protect\boldmath $\eta$}}}^{\top}\tilde{{\mbox{\protect\boldmath $\eta$}}}$, and $\tilde{{\mbox{\protect\boldmath $\eta$}}}=\bJ^{1/2}{\mbox{\protect\boldmath $\eta$}}$. \[NHESN\] Let $\bZ\sim ESN_d(\bmu,\bJ,{\mbox{\protect\boldmath $\eta$}},\tau)$, $\bZ_N\sim N_d(\bmu,\bJ)$ and $W$ are defined as in Proposition \[T2\]. Then, - $\displaystyle\begin{aligned}[t] R_{\alpha}[f]&=R_{\alpha}[f_0]-N_{\alpha}[f],\quad\alpha\in\mathbb{N},\,\alpha>1,\\ \end{aligned}$ where $$N_{\alpha}[f]=\frac{1}{\alpha-1}{\rm ln}\langle\left[\frac{\Phi_1(W)}{\Phi_1(\tau)}\right]^{\alpha}\rangle,$$ and $R_{\alpha}[f_0]$ is given by (\[rn\]). - $\displaystyle\begin{aligned}[t] R_{\alpha}[f]&\leq R_{\alpha}[f_0] + \frac{\alpha}{1-\alpha}{\rm ln}\left[\frac{\Phi_1(\tilde{\tau} + \tilde{\delta}\zeta_1(\tau))}{\Phi_1(\tau)}\right],\\ \end{aligned}$ where $\tilde{\delta}=\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|^3/\sqrt{1+\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|^4}$. - $\displaystyle\begin{aligned}[t] S[f]&=S[f_0]-\langle{\rm ln}\left[\frac{\Phi_1(W)}{\Phi_1(\tau)}\right]\rangle.\\ \end{aligned}$ - $\displaystyle\begin{aligned}[t] S[f_0]+{\rm ln}[\Phi_1(\tau)]-\Phi_1\left(\frac{\tilde{\tau}}{\sqrt{1+{\mbox{\protect\boldmath $\eta$}}{\mbox{\protect\boldmath $\eta$}}^\top}}\right)&\leq S[f]\leq \frac{1}{2}{\rm ln}\left[(2\pi e)^d\left|\bJ-\zeta_1(\tau)[\tau+\zeta_1(\tau)]\bdelta\bdelta^\top\right|\right],\, \forall\,{\mbox{\protect\boldmath $\eta$}}.\\ \end{aligned}$ - $\displaystyle\begin{aligned}[t] \lim_{\alpha\rightarrow 1}N_{\alpha}[f]=\langle{\rm ln}[\frac{\Phi_1(W)}{\Phi_1(\tau)}]\rangle. \end{aligned}$ @Pourahmadi_2007 illustrated the behaviour of $\zeta_1(\tau)$, $\tau\in\mathbb{R}$. This function is strictly decreasing for any $\tau\in\mathbb{R}$, tends to 0 when $\tau\rightarrow +\infty$, and diverge when $\tau\rightarrow -\infty$. For $\tau=0$, the property (iv) of Corollary \[NHESN\] becomes property (iii) of Corollary \[NRH\]. 
By properties (iii) of Corollary \[NHESN\] and (ii) of Corollary \[NRH\], the negentropy of an extended skew-gaussian random vector is always larger than the negentropy of a skew-gaussian random vector. Therefore, we obtain the following relationship among the Shannon entropies of gaussian ($f_0(\bz)$), skew-gaussian ($g(\by)$), and extended skew-gaussian ($f(\bx)$) distributions: $S[f_0]\geq S[g]\geq S[f]$. Considering (\[red\]), (\[CM\]) and (\[re3\]); the complexity measure for extended skew-gaussian distribution is obtained.\ Truncated skew-gaussian distributions ------------------------------------- The truncated skew-gaussian pdf given by @Flecher_et_al_2010, consider the random variable $Z\sim SN_1(\mu,\omega,\lambda)$, $\bZ\in\Delta\subset\mathbb{R}$, and the definition given in (\[SN-pdf\]) for the case $d=1$. @Flecher_et_al_2010 gives the expressions of the higher order and weighted moments of truncated skew-gaussian distributions. We also consider the following definition based on (\[SN-pdf\]) for a truncated skew-gaussian random variable $W\in[a,b]\subset\mathbb{R}$, denoted by $W\sim TSN(\mu,J,\lambda)$, and with density $$\label{TSN} g(w)=\frac{f(w)}{[F(w)]_a^b}, \quad\mbox{$a<w\leq b$},$$ where $f(z)$ is defined in (\[SN-pdf\]) for $d=1$ with $\bJ=J$, ${\mbox{\protect\boldmath $\eta$}}=\lambda$; and $F(z)$ is the cumulative density function of $Z$ with $$[F(w)]_a^b=F(b)-F(a)=\int_{a}^{b}f(u)du.$$ The following Remark allows the computation of $[F(w)]_a^b$ in terms of the gaussian cumulative density function and a bivariate integral term. Let $Z\sim SN_1(\mu,J,\lambda)$, @Owen_1956 and @Azzalini_1985 gives the expressions to compute $F(z)$ as follows $$\label{SNcdf} F(z)=2\int_{z}^{-\infty}\int_{-\infty}^{\lambda s}\phi(s)\phi(t)\,dt\,ds=\Phi_1(z)-2\int_{z}^{\infty}\int_{0}^{\lambda s}\phi(s)\phi(t)\,dt\,ds.$$ Then, by replacing (\[SNcdf\]) in $[F(w)]_a^b$ we obtain $$[F(w)]_a^b=\Phi_1(b)-\Phi_1(a)-2\int_{a}^{b}\int_{0}^{\lambda s}\phi(s)\phi(t)dtds.$$ \[T1\] Let $Z,\,W$ be a $SN_1(\mu,J,\lambda)$ and $TSN_1(\mu,J,\lambda)$, respectively, $\lambda\neq0$. Then: $$\label{re2} \int_a^b [g(w)]^{\alpha}dw=2\psi_{\alpha,1}(J)\,\Phi_{\alpha+1}(\bzero;\bzero,\tilde{\bJ})\frac{[H(v)]_{a_0}^{b_0}}{([F(z)]_{a}^{b})^{\alpha}},$$ where $\psi_{\alpha,1}(J)$ is defined as in Proposition \[T0\] with $d=1$ and $\bJ=J$; $\tilde{\bJ}=\bI_{\alpha+1}+\tilde{\lambda}^2\tilde{\bD}^{\top}\tilde{\bD}$, $\tilde{\lambda}^2=\omega\lambda^2/\alpha$, $\tilde{\bD}=({\bf 1}_{\alpha},\tilde{\lambda})^{\top}$ and $V\sim CSN_{1,2}(0,\tilde{\lambda}^2,\tilde{\bB},\bzero,\bI_2)$ with cumulative density function $H(v)$, $\tilde{\bB}=(1,\tilde{\lambda})^{\top}$, $a_0=\lambda(a-\mu)/\omega$ and $b_0=\lambda(b-\mu)/\omega$. By Lemma 2.2.1 of @Genton_2004, $H(v)$ is easily computable by a tri-variate gaussian cumulative density function as $$\begin{aligned} H(v)&=&\frac{\Phi_3\left[\left( \begin{array}{c} v \\ {\bf 0} \\ \end{array} \right); \left( \begin{array}{c} 0 \\ \bzero \\ \end{array} \right), \left( \begin{array}{ccc} \tilde{\lambda}^2 & - \tilde{\lambda}^2\tilde{\bB} \\ - \tilde{\lambda}^2\tilde{\bB}^{\top} & \bI_2 + \tilde{\lambda}^2\tilde{\bB}^{\top}\tilde{\bB} \\ \end{array} \right)\right]}{\Phi_2({\bf 0};{\bf 0},\bI_2 + \tilde{\lambda}^2\tilde{\bB}^{\top}\tilde{\bB})},\end{aligned}$$ where $\tilde{\lambda}$ and $\tilde{\bB}$ are defined as in Proposition \[T1\]. 
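Since the double integral in (\[SNcdf\]) is Owen's $T$ function [@Owen_1956], the normalizing constant $[F(w)]_a^b$ in (\[TSN\]) can be evaluated without numerical double integration; note that after standardizing $z=(w-\mu)/\sqrt{J}$, the effective slant in this parametrization is $\lambda\sqrt{J}$. A minimal sketch follows, assuming Python with scipy and arbitrary example parameter values; the last line also evaluates the left-hand side of Eq. (\[re2\]) by direct quadrature, against which the closed form can be checked.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import owens_t
from scipy.integrate import quad

def sn_cdf(w, mu, J, lam):
    """F(w) for SN_1(mu, J, lam) with density f(w) = 2*phi(w; mu, J)*Phi_1[lam*(w - mu)]:
    F(w) = Phi(z) - 2*T(z, lam*sqrt(J)), z = (w - mu)/sqrt(J) (Owen 1956, Azzalini 1985)."""
    z = (w - mu) / np.sqrt(J)
    return norm.cdf(z) - 2.0 * owens_t(z, lam * np.sqrt(J))

def tsn_pdf(w, mu, J, lam, a, b):
    """Truncated skew-normal density g(w) = f(w) / [F(b) - F(a)] on (a, b], Eq. (TSN)."""
    w = np.asarray(w, dtype=float)
    f = 2.0 * norm.pdf(w, loc=mu, scale=np.sqrt(J)) * norm.cdf(lam * (w - mu))
    g = f / (sn_cdf(b, mu, J, lam) - sn_cdf(a, mu, J, lam))
    return np.where((w > a) & (w <= b), g, 0.0)

# arbitrary example parameters; the integral of g over (a, b] should be 1,
# and int_a^b g^alpha dw is the left-hand side of Eq. (re2)
mu, J, lam, a, b, alpha = 0.0, 1.0, 3.0, -1.0, 2.0, 2
print(quad(lambda w: float(tsn_pdf(w, mu, J, lam, a, b)), a, b)[0])
print(quad(lambda w: float(tsn_pdf(w, mu, J, lam, a, b)) ** alpha, a, b)[0])
```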
Considering (\[red\]), (\[CM\]) and (\[re2\]); the complexity measure for extended skew-gaussian distribution is obtained.\ Conclusions =========== In this paper, we have presented some solutions to compute the Rényi entropy with discrete $\alpha$-order and for a wide range of asymmetric distributions. Specifically, we find a closed expression for skew-gaussian, extended skew-gaussian, and truncated skew-gaussian distributions. Finally, additional inequalities for skew-gaussian and extended skew-gaussian entropies were reported.\ Appendix {#appendix .unnumbered} ======== [**Proof of Proposition \[T0\].**]{} To compute the integral $\int [f(\bz)]^{\alpha}d\bz$, we use the change of variables $\bJ_{\alpha}=\alpha^{-1}\bJ$ and $\bZ_0=\bJ_{\alpha}^{-1/2}(\bZ-\bmu)$, $\bZ_0\sim SN_d(\bzero,\bI_d,\tilde{{\mbox{\protect\boldmath $\eta$}}})$, $\tilde{{\mbox{\protect\boldmath $\eta$}}}=\bJ_{\alpha}^{1/2}{\mbox{\protect\boldmath $\eta$}}$. We shall use the fact that $|\bJ_{\alpha}|=\alpha^{-d}|\bJ|$ for $d$-dimensional matrices [@Nielsen_Nock_2012]. Then, according to Lemma 2 of @Arellano-Valle_et_al_2013, the integral $\int [f(\bz)]^{\alpha}d\bz$ should be rewritten in terms of an expected value with respect to a standardized gaussian density as $$\begin{aligned} \int [f(\bz)]^{\alpha}d\bz&=&\frac{2^{\alpha}}{|\bJ|^{\alpha/2}}|\bJ_{\alpha}|^{1/2}(2\pi)^{(1-\alpha)d/2} \langle[\Phi_1(\tilde{{\mbox{\protect\boldmath $\eta$}}}^{\top}\bZ_{0})]^{\alpha}\rangle\\ &=&\frac{2^{\alpha}}{\alpha^{d/2}}(2\pi)^{(1-\alpha)d/2}|\bJ|^{(1-\alpha)/2}\langle[\Phi_1(W)]^{\alpha}\rangle.\end{aligned}$$ where $W\sim SN_1(\bzero,\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|^2,\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|)$ with $\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|=\tilde{{\mbox{\protect\boldmath $\eta$}}}^{\top}\tilde{{\mbox{\protect\boldmath $\eta$}}}$ [@Contreras-Reyes_Arellano-Valle_2012; @Arellano-Valle_et_al_2013], i.e., the expected value $\langle[\Phi_1(\tilde{{\mbox{\protect\boldmath $\eta$}}}^{\top}\bZ_{0})]^{\alpha}\rangle$ is reduced from $d$ dimensions to one dimension [@Arellano-Valle_et_al_2013; @Contreras-Reyes_2014]. By Lemma \[Pf\] and setting $\bmu=\bzero$, $\bJ=\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|^2$, $\bD=\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|$, $r=\alpha$, $\bA=s=h(w)=1$; we obtain $\tilde{\bA}=\bI_{\alpha+1}$ and $\tilde{\bD}=({\bf 1}_{\alpha},\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|)^{\top}$. Therefore, the expected value of the integral is reduced to $$\langle[\Phi_1(W)]^{\alpha}\rangle=\frac{\Phi_{\alpha+1}(\bzero;\bzero,\bI_{\alpha+1}+ \|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|^2\tilde{\bD}^{\top}\tilde{\bD})}{\Phi_1(0;0,1+\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|^4)}.\,\,\,\Box$$ [**Proof of Corollary \[NRH\]**]{} - Follows from (\[rn\]) and Proposition \[T0\]. - See Proposition 2 of @Arellano-Valle_et_al_2013. - Right side: see @Contreras-Reyes_Arellano-Valle_2012. Left side: consider the nonsymmetrical entropy of @Liu_2009 given by $$S(\bu)=-\int f(\bu)\ln[\beta(\bu)f(\bu)]\,d\bu,$$ where $f(\bu)$ is the probability density function of a gaussian variable $\bu$. By choosing $\beta(\bu)=2\Phi_1({\mbox{\protect\boldmath $\eta$}}^{\top} \bJ^{-1/2}(\bu-\bmu))$, $\bu=\bZ_N$, it follows that $\langle{\rm\ln}\beta(\bZ_N)\rangle={\rm\ln}[2]+\Phi_1(0)=(1/2)\,{\rm\ln}(4e)$ [see Proposition 4 of @Azzalini_Dalla-Valle_1996]. Then, as $\langle{\rm\ln}\beta(\bZ)\rangle\leq 2\langle{\rm\ln}\beta(\bZ_N)\rangle$, the result is obtained. 
- Follows from properties (i), (ii) and (\[red\]). $\Box$\ [**Proof of Proposition \[T2\]**]{} By (\[dmesn\]), $\phi_d(\by;\bmu,\bJ)=|\bJ|^{-1/2}\phi_d\left(\bJ^{-1/2}(\by-\bmu)\right)$, where $\phi_d(\bz)$ is the probability density function of $N_d({\bf 0},\bI_d)$. Then, as in (\[T0\]), to compute the integral $\int [f(\bz)]^{\alpha}d\bz$ we use the change of variables $\bJ_{\alpha}=\alpha^{-1}\bJ$ and $\bZ_0=\bJ_{\alpha}^{-1/2}(\bZ-\bmu)$. In this case, $\bZ_0\sim ESN_d(\bzero,\bI_d,\tilde{{\mbox{\protect\boldmath $\eta$}}},\tau)$ with $\tilde{{\mbox{\protect\boldmath $\eta$}}}= \bJ_{\alpha}^{1/2}{\mbox{\protect\boldmath $\eta$}}$. We shall use the fact that $|\bJ_{\alpha}|=\alpha^{-d} |\bJ|$ for $d$-dimensional matrices [@Nielsen_Nock_2012]. Then, according to Lemma 2 of @Arellano-Valle_et_al_2013, the integral $\int [f(\bz)]^{\alpha}d\bz$ should be rewritten in terms of an expected value with respect to a standardized gaussian density as $$\begin{aligned} \int [f(\bz)]^{\alpha}d\bz&=&\frac{1}{[\Phi_1(\tau)]^{\alpha}}|\bJ|^{-\frac{\alpha}{2}}|\bJ_{\alpha}|^{1/2} (2\pi)^{(1-\alpha)\frac{d}{2}}\langle[\Phi_1(\tilde{{\mbox{\protect\boldmath $\eta$}}}^{\top}\bz_{0}+\tilde{\tau})]^{\alpha}\rangle\\ &=&\frac{1}{[\Phi_1(\tau)]^{\alpha}}\alpha^{-d}(2\pi)^{(1-\alpha)d/2}|\bJ|^{(1-\alpha)/2}\langle[\Phi_1(W)]^{\alpha}\rangle.\end{aligned}$$ where $W=\tilde{{\mbox{\protect\boldmath $\eta$}}}^{\top}\bZ_0+\tilde{\tau}\sim ESN_1(\tilde{\tau},\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|^2,\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|,\tau)$ with $\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|=\tilde{{\mbox{\protect\boldmath $\eta$}}}^{\top}\tilde{{\mbox{\protect\boldmath $\eta$}}}$ [@Contreras-Reyes_Arellano-Valle_2012; @Arellano-Valle_et_al_2013], i.e., the expected value $\langle[\Phi_1(\tilde{{\mbox{\protect\boldmath $\eta$}}}^{\top}\bz_{0}+\tilde{\tau})]^{\alpha}\rangle$ is reduced from $d$ dimensions to one dimension [@Arellano-Valle_et_al_2013; @Contreras-Reyes_2014]. $\Box$\ [**Proof of Corollary \[NHESN\]**]{} - From Proposition \[T2\], we obtain directly $$\begin{aligned} R_{\alpha}[f]&=&\frac{1}{1-\alpha}\left(\ln[\psi_{\alpha,d}(\bJ)]-\alpha\ln[2\Phi_1(\tau)] + \ln[\langle[\Phi_1(W)]^{\alpha}\rangle]\right),\\ &=&R_{\alpha}[f_0] + \frac{\alpha}{1-\alpha}\ln\left[\frac{1}{\Phi_1(\tau)}\right] + \frac{1}{1-\alpha}\ln[\langle[\Phi_1(W)]^{\alpha}\rangle].\end{aligned}$$ - Considering Jensen’s inequality, we obtain $\langle[\Phi_1(W)]^{\alpha}\rangle\geq[\Phi_1(\langle W\rangle)]^{\alpha}$. Then, (ii) is straightforward from (\[esn-mom1\]). - By (\[red\]), it follows that $$S[f]=-\langle\ln\left[\phi_d(\bZ_0)\frac{\Phi_1(\tilde{{\mbox{\protect\boldmath $\eta$}}}^{\top}\bZ_0 + \tilde{\tau})}{\Phi_1(\tau)}\right]\rangle =S[f_0] - \langle{\rm \ln}\left[\frac{\Phi_1(W)}{\Phi_1(\tau)}\right]\rangle,$$ where, as in Proposition \[T2\], $\bZ_0=\bJ^{-1/2}(\bZ-\bmu)\sim ESN_d(\bzero,\bI_d,\tilde{{\mbox{\protect\boldmath $\eta$}}},\tau)$ and $W={\tilde{{\mbox{\protect\boldmath $\eta$}}}}^{\top}\bZ_0+\tilde{\tau}\sim ESN_1(\tilde{\tau},\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|^2,\|\tilde{{\mbox{\protect\boldmath $\eta$}}}\|,\tau)$. - Right side: by @Cover_Thomas_2006, for any density $g(\bx)$ of a random vector $\bx\in\Delta\subset\mathbb{R}^d$ (not necessary gaussian) with zero mean and variance $\bJ=\langle\bX\bX^{\top}\rangle$, the Shannon entropy of $\bx$ is maximized under gaussianity as $S[g] \leq (1/2)\ln[(2\pi e)^d|\bJ|]$. Then, the result is obtained from (\[esn-mom2\]). 
Left side: as in Corollary \[NRH\] (iii), by choosing $\beta(\bu)=\Phi_1({\mbox{\protect\boldmath $\eta$}}^{\top}\bJ^{-1/2} (\bu-\bmu)+\tilde{\tau})/\Phi_1(\tau)$ in the nonsymmetrical entropy, it follows that $$\langle{\rm\ln}\beta(\bZ_N)\rangle=\Phi_1\left(\frac{\tilde{\tau}}{\sqrt{1+\|{\mbox{\protect\boldmath $\eta$}}\|}}\right)-{\rm\ln}[\Phi_1(\tau)]$$ [see Proposition 4 of @Azzalini_Dalla-Valle_1996]. Then, as $\langle{\rm \ln}\beta(\bZ)\rangle\leq\langle{\rm\ln}\beta(\bZ_N)\rangle/\Phi_1(\tau)$, the result is obtained. - Follows from properties (i), (iii) and (\[red\]). $\Box$\ [**Proof of Proposition \[T1\]**]{} By (\[TSN\]), it follows that $$\int_a^b [g(w)]^{\alpha}dw=\frac{1}{([F(z)]_a^b)^{\alpha}}\int_a^b [f(w)]^{\alpha}dw$$ and, by Proposition \[T0\], the integral $\int_{a}^{b} [f(w)]^{\alpha}dw$ should be rewritten in terms of an expected value as $$\int_a^b [f(w)]^{\alpha}dw=\psi_{\alpha,1}(J) \langle[\Phi_1(u)]^{\alpha}|a_0<u\leq b_0\rangle,$$ where $U\sim SN_1(0,\tilde{\lambda}^2,\tilde{\lambda})$, $\tilde{\lambda}^2=\omega\lambda^2/\alpha$, $a_0=\lambda(a-\mu)/\omega$ and $b_0=\lambda(b-\mu)/\omega$. Again, by Lemma \[Pf\] and setting $\bmu=0$, $J=\tilde{\lambda}^2$, $r=\alpha$, $d=s=\bA=h(u)=1$; we obtain $\tilde{\bA}=\bI_{\alpha+1}$, $\tilde{\bD}=({\bf 1}_{\alpha},\tilde{\lambda})^{\top}$ and $\tilde{\bJ}=\bI_{\alpha+1}+\tilde{\lambda}^2\tilde{\bD}^{\top}\tilde{\bD}$. Then, the expected value is $$\begin{aligned} \langle[\Phi_1(u)]^{\alpha}|a_0<u\leq b_0\rangle&=&2\Phi_{\alpha+1}(\bzero;\bzero,\tilde{\bJ})[H(v)]_{a_0}^{b_0},\end{aligned}$$ where $H(v)$ is the cumulative density function of a closed skew-gaussian variable $V\sim CSN_{1,2}(0,\tilde{\lambda}^2,\tilde{\bB},\bzero,\bI_2)$ with $\tilde{\bB}=(1,\tilde{\lambda})^{\top}$ [see Proposition 3 of @Flecher_et_al_2010]. $\Box$\ Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported by Instituto de Fomento Pesquero (IFOP, [http://www.ifop.cl/]{}), Valparaíso, Chile. The author would like to thank the editor and an anonymous referee for their helpful comments and suggestions.\ [100]{} Azzalini, A., 1985. A Class of Distributions which includes the Normal Ones. Scand. J. Stat. 12, 171-178. Azzalini, A., Dalla-Valle, A., 1996. The multivariate skew-normal distribution. Biometrika 83, 715-726. Azzalini, A., Capitanio, A., 1999. Statistical applications of the multivariate skew normal distributions. J. Roy. Stat. Soc. Ser. B 61, 579-602. Azzalini, A., 2013. The Skew-Normal and Related Families. Vol. 3, Cambridge University Press. González-Farías, G., Domínguez-Molina, J., Gupta, A., 2004. Additive properties of skew normal random vectors. J. Stat. Plann. Inference 126, 521-534. Rezaie, J., Eidsvik, J., Mukerji, T., 2014. Value of information analysis and Bayesian inversion for closed skew-normal distributions: Applications to seismic amplitude variation with offset data. Geophys. 79, R151-R163. Capitanio, A., Azzalini, A., Stanghellini, E., 2003. Graphical models for skew-normal variates. Scand. J. Stat. 30, 129-144. Flecher, C., Allard, D., Naveau, P., 2010. Truncated skew-normal distributions: moments, estimation by weighted moments and application to climatic data. Metron 68, 265-279. Contreras-Reyes, J.E., Arellano-Valle, R.B., 2012. Kullback-Leibler divergence measure for Multivariate Skew-Normal Distributions. Entropy 14, 1606-1626. Arellano-Valle, R.B., Contreras-Reyes, J.E., Genton, M.G., 2013. Shannon entropy and mutual information for multivariate skew-elliptical distributions. 
Scand. J. Stat. 40, 42-62. Figiel, [Ł]{}., 2014. Effect of the interphase on large deformation behaviour of polymer–clay nanocomposites near the glass transition: 2D RVE computational modelling. Comput. Mater. Sci. 84, 244-254. Zhou, J., Wang, X., 2008. Accurate closed-form approximation for pricing Asian and basket options. Appl. Stochastic Models Bus. Ind. 24, 343-358. Genton, M.G., 2004. Skew-elliptical distributions and their applications: A journey beyond normality. Chapman & Hall/CRC, Boca Raton, FL. Contreras-Reyes, J.E., 2014. Asymptotic form of the Kullback-Leibler divergence for multivariate asymmetric heavy-tailed distributions. Physica A 395, 200-208. Rényi, A., 1970. Probability theory. North-Holland, Amsterdam. López-Ruiz, R., Mancini, H.L., Calbet, X., 1995. A statistical measure of complexity. Phys. Lett. A 209, 321-326. Anteneodo, C., Plastino, A.R., 1996. Some features of the López-Ruiz-Mancini-Calbet (LMC) statistical measure of complexity. Phys. Lett. A 223, 348-354. Yamano, T., 2004. A statistical measure of complexity with nonextensive entropy. Physica A 340, 131-137. Carpi, L.C., Rosso, O.A., Saco, P.M., Ravetti, M.G., 2011. Analyzing complex networks evolution through Information Theory quantifiers. Phys. Lett. A 375, 801-804. Sánchez-Moreno, P., Angulo, J.C., Dehesa, J.S., 2014. A generalized complexity measure based on Rényi entropy. Eur. Phys. J. D 68, 212. Golshani, L., Pasha, E., 2010. Rényi entropy rate for Gaussian processes. Inform. Sci. 180, 1486-1491. Liu, T., Zhang, P., Dai, W-.S., Xie, M., 2012. An intermediate distribution between Gaussian and Cauchy distributions. Physica A 391, 5411-5421. Cover, T.M., Thomas, J.A., 2006. Elements of information theory. Wiley & Son, Inc., New York, NY, USA. Dembo, A., Cover, T.M., Thomas, J.A., 1991. Information Theoretic Inequalities. IEEE Trans. Inform. Theory 37, 1501-1518. Flecher, C., Naveau, P., Allard, D., 2009. Estimating the Closed Skew-Normal distributions parameters using weighted moments. Stat. Prob. Lett. 79, 1977-1984. Pourahmadi, M., 2007. Skew-Normal ARMA Models with Nonlinear Heteroscedastic Predictors. Commun. Stat. A-Theor. 36, 1803-1819. Owen, D.B., 1956. Tables for computing bivariate normal probabilities. Ann. Math. Stat. 27, 1075-1090. Nielsen, F., Nock, R., 2012. A closed-form expression for the Sharma–Mittal entropy of exponential families. J. Phys. A: Math. Theor. 45, 032003. Liu, C.-S., 2009. Nonsymmetric entropy and maximum nonsymmetric entropy principle. Chaos Soliton. Fract. 40, 2469-2474.
{ "pile_set_name": "ArXiv" }
--- author: - 'J. Forbrich' - 'Th. Stanke' - 'R. Klein' - 'Th. Henning' - 'K. M. Menten' - 'K. Schreyer' - 'B. Posselt' bibliography: - '0598.bib' date: 'Received / Accepted' title: 'A multi-wavelength study of a double intermediate-mass protostar – from large-scale structure to collimated jets' --- Introduction ============ In the investigation of the earliest stages of intermediate- and high-mass star formation, the relative importance of formation processes as different as disk accretion and coalescence is still an open question (for a recent review, see e.g. [@beu07]). Detailed multi-wavelength studies are needed to illuminate the often confusing observational picture. Since high-mass protostars are less ubiquitous than their low-mass counterparts, they are on average more distant. The resulting low linear resolution often leads to an oversimplified picture. This is well illustrated by the study of @bsg02, which reported a case where a single bipolar molecular outflow discovered previously with single-dish radio telescopes in a massive-star–forming (MSF) region was resolved into a multiple outflow system in interferometric data (see also [@bss03]). @dav04 analyzed near-infrared data of two collimated jets in a similar MSF region. They conclude that these jets are very similar to their low-mass counterparts. We report the results of a comprehensive multi-wavelength follow-up study of a candidate massive protostellar source that was discovered close to IRAS 07029–1215 during a millimeter continuum survey of the surroundings of luminous IRAS sources in the outer galaxy [@kle05]. UYSO1 is a deeply embedded, very young source at a distance of only 1 kpc, powering a high-velocity bipolar CO outflow; @jan04 derived the mass of the UYSO1 cloud core from the CO(3-2) line emission to be 44 $M_{\odot}$, inferring the hydrogen column density from the integrated CO emission, while the mass derived from optically thin dust continuum emission ($\lambda=850{\mbox{\,$\mu$m}}$), tracing high column densities, is 15 $M_{\odot}$. The mass of the outflow was estimated to be 5.4 $M_{\odot}$. For details on these earlier mass estimates as well as for a discussion of the distance, we refer to . Compared to other massive molecular outflows, this mass is at the lower end of the range reported by @bss02, but most of their sources are much more distant, at an average distance of $4.7\pm3.4$ kpc. No plausible driving source could be identified in IRAS and MSX mid- and far-infrared data. In single-dish observations, @beu08 detected C$_2$H submillimeter emission towards UYSO1, a transition that may be a tracer of the earliest stages of (massive) star formation. The aim of our follow-up observations of UYSO1 was to study the region at higher angular resolution and to search for the driving source of the enormous CO outflow. For these purposes, we conducted observations in the infrared regime as well as in the millimetric to centimetric wavelength ranges. We note that for very early evolutionary stages, outflow mass entrainment rates and column densities are more conclusive in determining whether a massive star forms than a mass determination since material is still rapidly accreted. In Section \[sec\_obse\], we describe the variety of observations carried out before presenting the results in Section \[sec\_resu\]. We discuss our findings in Section \[sec\_disc\] and conclude in Section \[sec\_conc\]. 
![image](0598_fig3.eps){width="\linewidth"}

Observations and data analysis {#sec_obse}
==============================

#### Near- and mid-infrared observations

In the near infrared, observations were first performed with the 3.5m telescope on Calar Alto, Spain, using the OmegaPrime camera on March 5, 2004, both in $K'$ broadband and H$_2$ S(1) narrowband (NB2122). After a tentative discovery of *two* jets, intersecting at the position of UYSO1, follow-up observations were conducted on several occasions between January 10 and March 1, 2005 with the Infrared Spectrometer and Array Camera (ISAAC) at the Very Large Telescope (VLT) of the European Southern Observatory (ESO). These observations included $K_s$, NB2.09 and NB2.13 as well as *L*-band imaging. In January and February 2006, UYSO1 was observed with the mid-infrared instrument VISIR at the VLT in the PAH2 filter centered on $\lambda = 11.3{\mbox{\,$\mu$m}}$, with an estimated $3\sigma$ point-source sensitivity of 9 mJy (using the VISIR Exposure Time Estimator v3.2.1). The Calar Alto data were reduced using IRAF while we used the ESO Eclipse 5.0 software for the VLT data.

#### Far-infrared observations

We used the Multiband Imaging Photometer for Spitzer [MIPS, @rie04] onboard the *Spitzer* Space Telescope to obtain maps at 24 and 70 $\mu$m as well as a low-resolution spectrum from 52 to 105 $\mu$m in the SED mode. The observations were executed on November 11 and 12, 2005. The imaging data presented here are post-BCD maps processed with version S16.1 of the pipeline. Inspection of the post-BCD data products and manual reduction with the mopex software package showed that the pipeline reduction of the data is reliable in our case. The widespread extended emission made estimating the background difficult. Some striping along the detector columns remains in the 70 $\mu$m maps. For display, the stripes were reduced by smoothing with a three-pixel median filter aligned perpendicularly to them. Measurements, however, were done on the unsmoothed maps. The MIPS-SED data were reprocessed with mopex, starting with the BCD products created by the pipeline (S16.1.1) to create the spectra along the slit.

  Name                  RA (J2000)         Dec (J2000)
  --------------------- ------------------ ------------------
  UYSO1a                07:05:10.940(1)    -12:19:00.64(3)
  UYSO1b                07:05:10.811(3)    -12:18:56.84(9)
  H$_2$O maser          07:05:10.8105(1)   -12:18:56.807(3)
  1st 70 $\mu$m peak    07:05:10.96        -12:19:09.9
  2nd 70 $\mu$m peak    07:05:11.09        -12:19:33.1
  24 $\mu$m peak        07:05:11.56        -12:19:19.4
  IRAS 07029–1215       07:05:16.9         -12:20:02

  : Table of relevant positions$^{\rm a}$ (Epoch 2000) \[tab:pos\]

#### Millimeter and submillimeter observations

Since the discovery observation of the CO outflow, two considerably larger and fully sampled CO(3-2) maps of the region were taken on June 4 and 6, 2005, with the facility receiver B3 at the James Clerk Maxwell Telescope (JCMT). In December 2003, a map of UYSO1 and its surroundings in CO(2-1) was taken with the Heterodyne Receiver Array HERA at the IRAM 30m telescope. However, since the CO(3-2) data cover a larger area at a better signal-to-noise ratio, we focus our analysis on these data. In July 2005, N$_2$H$^+$(1-0), N$_2$D$^+$(1-0), HCO$^+$(3-2) and DCO$^+$(2-1) were observed towards UYSO1 with the IRAM 30m telescope and its facility receivers in frequency switching mode. After first, very short observations with the IRAM Plateau de Bure Interferometer (PdBI) in its low-angular resolution D configuration in September/October 2003, UYSO1 was reobserved in the newly extended high-resolution A configuration in February 2006.
While the synthesized beam size in the former observations was $13\farcs9\times5\farcs9$ for the 3mm continuum, this improved to $2.2'' \times 0.6''$ in the latter ($0.85'' \times 0.23''$ at 1 mm). Besides the continuum, the CS(2-1) and CO(2-1) transitions were observed. In order to be able to account for short-spacings in the $uv$ plane, the source was previously mapped in both transitions with the IRAM 30m telescope. In November 2005, UYSO1 was observed with the Caltech Submillimeter Observatory (CSO) on four occasions (Nov 17/18, Nov 19, Nov 20, Nov 22), looking for several submillimeter transitions in the 345 GHz and 220 GHz ranges. The beam sizes[^1] (FWHM) of the CSO are $\sim22''$ and $\sim31''$ in the 345 GHz and 220 GHz bands, respectively. In the same month, the source was also observed with the Atacama Pathfinder Experiment (APEX) telescope [@gus06] to obtain a better CO(3-2) spectrum at the source position. The FWHM beam size[^2] of the APEX telescope at 345 GHz is $\sim18''$. All data from the 30m, APEX, and the PdBI were reduced using the GILDAS software developed by IRAM and Observatoire de Grenoble. #### Centimeter radio observations In November 2003, H$_2$O maser emission towards UYSO1 was discovered with the Effelsberg 100m radio telescope. The discovery was confirmed with the same telescope in March 2004. At the same time, as well as in January 2004, the source was searched for NH$_3$(1,1) emission with the Effelsberg 100m telescope. On April 13, 2005, it was searched for signs of CH$_3$OH maser emission, using the Toruń 32m radio telescope. In March 2006, high-resolution radio observations were carried out with the NRAO Very Large Array (VLA) in A configuration. Both the 8.4 GHz radio continuum emission as well as the newly discovered H$_2$O maser emission were studied. The Effelsberg data were reduced using the IRAM GILDAS software, and for the VLA data, we used the NRAO AIPS software. ![image](0598_fig4.eps){width="\linewidth"} \[uyso1mips2\] Results {#sec_resu} ======= Based on the new multi-wavelength data, we report the discovery of two collimated jets that appear to be connected to two previously unresolved protostars at the location of UYSO1. In spite of deep searches, we did not detect any infrared counterparts of these protostars. Towards one of the two protostars, water maser emission was discovered. Observations of molecular transitions reveal that UYSO1 does not show typical hot-core chemistry. In the following, we present these results in more detail, differentiating large-scale structure (the clump in which UYSO1 is embedded, the outflow, and the jets) and small-scale structure (the protostars UYSO1a and UYSO1b) before discussing the spectral energy distribution (SED). Relevant positions are listed in Table \[tab:pos\]. Large-scale structure {#ssec:largescale} --------------------- #### Near- and mid-infrared observations The initial discovery of two collimated jets, obtained with the Calar Alto 3.5m telescope (Fig. \[calar\_vlt\]), was clearly confirmed by the VLT observations (Fig. \[jcmtisaac\]). The larger field of view of the Calar Alto narrowband image additionally shows H$_2$ S(1) emission in the outer parts of the H[II]{} region Sh2-297 which is powered by HD 53623. The two jets intersect with an angle of 75$^\circ$ at about the submillimeter continuum position of UYSO1, a first indication that UYSO1 may harbor more than one source. The larger north-south jet has an apparent size of 1.3’, corresponding to 0.4 pc at a distance of 1 kpc. 
It also has multiple bow shocks, best seen in the continuum-subtracted image in Fig. \[nircont\]. This jet deviates considerably from a straight line in its northern lobe, possibly a sign of interaction with the surrounding medium. The smaller jet, terminating in a bright, very compact bow shock at its eastern end, is at least 0.2 pc long (it might be even longer, as a barely visible chain of features extends well beyond the bright section of its western lobe). Both jets have a knotty structure. A quantitative estimate of the inclination angles of the two jets with respect to the line of sight is difficult. However, the fact that we clearly see both lobes of both jets in spite of the surrounding material suggests that they lie close to the plane of the sky. The narrowband luminosities of the two jets, as determined from the 2.12 $\mu$m data, are $L_{\rm 2.12\mu m}=0.02L_\odot$ for the larger, north-south jet and $L_{\rm 2.12\mu m}=0.0007L_\odot$ for the smaller, east-west jet. The luminosity of the larger flow is at the upper end of the range of values found in a survey of Orion A [@sta02], a first suggestion that the driving source is at least of intermediate mass. Another prominent feature is the ridge of strong H$_2$ emission east of the core, extending from the north to the south, also visible in the Calar Alto image. This ridge appears to be the edge of the molecular clump in which UYSO1 is embedded, since the CO and millimeter continuum emission also drop dramatically in that region. The emission ridge indicates where the H[II]{} region is interfacing the molecular cloud. Whether the H$_2$ emission is excited by shock fronts in the ionisation front or by UV pumping in the photon-dominated region (PDR) between the H[II]{} region and the molecular cloud, we cannot decide, although a PDR appears more likely (see the discussion of oxygen fine structure lines in Section \[ssec:photo\]). This would require the comparison of the relative intensities of several H$_2$ lines [e.g., @hol77], data that are not available at present. The H$_2$ emission in jets is usually shock-excited.

#### Far-infrared observations

Neither 24$\mu$m nor 70$\mu$m mid-infrared sources were found to be in direct relation to the millimetre continuum peak (Fig. \[uyso1mips\]), even though the 70$\mu$m observations may show some emission at that position (see discussion of the SED in Section \[ssec:photo\]). Instead, the 24$\mu$m emission closely follows the H$_2$ S(1) emission ridge as the PDR is also heating the dust at the surface of the clump. The 70$\mu$m emission also starts at this rim, but extends further into the cloud as it traces colder material deeper in the cloud. The emission peak in the 70$\mu$m image is separated by $13''$ to the north-west from the 24$\mu$m peak. It is still another $11''$ away from the millimeter sources. In Sect. 3.3, we discuss the possibility of UYSO1 being swamped by surrounding extended emission and try to determine its potential contribution to the FIR emission.

#### (Sub-)millimeter and centimeter radio observations

The new CO(3-2) single-dish map, shown in Fig. \[jcmtisaac\], corroborates the results presented in Paper I. There is a strong gradient in emission towards the H[II]{} region east of the position of UYSO1. The APEX CO(3-2) spectrum taken at the position of UYSO1 shows the line wings most clearly (Fig. \[apex\_uyso1\_sum\]). The outflow is prominent in both the CO(3-2) and the CO(2-1) maps. Within the uncertainties, the molecular outflow coincides with the larger of the two NIR jets.
Thus, in the single-dish CO maps, one massive outflow already discussed in Paper I is detected, but the new, larger CO(3-2) map additionally shows an indication of an east-west outflow, at least at its western end. Possibly, the millimeter outflow related to the smaller jet is weaker than the large previously known outflow and as such difficult to detect when both systems are superimposed. From the CO(3-2) map, the mass of the outflow can be estimated in direct comparison to Paper I, following the procedure outlined there as well as in @hen00, using the proportionality of the integrated CO main-beam temperature and H$_2$ column density. The new, larger map has a slightly smaller SNR than the data used in Paper I, affecting the line wings. For the entire line profile, the 40% intensity contour (which is not entirely within the map) traces 33 $M_\odot$, the red- and blueshifted line wings trace 2 $M_\odot$ and 1 $M_\odot$, respectively (down to the 10% contour). These numbers are slightly lower than those derived from the deeper data in Paper I, where the cloud mass was estimated to be 40 $M_\odot$. While it is difficult to give quantitative uncertainties for these mass estimates, we note that the masses are uncertain by at least a factor of a few, i.e., they are order-of-magnitude estimates. While for the extent and the maximum velocities of the outflows, the original JCMT data in the outflow lobes are still the best due to their SNR, we note that our deeper APEX pointing on UYSO1 shows line wings that are comparable to what was previously only detectable in the outflow lobes. The outflow velocity $v_{\rm proj}=30$ kms$^{-1}$ from Paper I compares to $v_{\rm proj}=26.5$ kms$^{-1}$ derived at the position of UYSO1 (Fig. \[apex\_uyso1\_sum\], compare to Fig. 4 in Paper I). In the new, large CO(3-2) map, the 10% contours of the line wings indicate a total size of the molecular outflow of 52”, or 0.25 pc. Compared to Paper I, the inferred size of the outflow is basically unchanged. Assuming that we have an edge-on disk geometry rather than a pole-on view (as is suggested by the clear detection of both near-infrared jet lobes) allows us to better constrain the outflow and its mass entrainment rate. An inclination angle of 80$^\circ$ with respect to the line of sight yields an outflow dynamical timescale ($t_d(i)=R_{\rm out}(i)/v_{\rm out}(i)$) of only a few hundred years and an outflow mass entrainment rate of $\dot{M}=4\times10^{-3}\,M_\odot$yr$^{-1}$. Even for an inclination of $i=57.3^\circ$ (see Paper I), the dynamical timescale is still only 3000 years with a mass entrainment rate of $\dot{M}=1\times10^{-3}\,M_\odot$yr$^{-1}$. We can conservatively constrain the dynamical timescale to less than $10^4$ yr except for very small inclination angles ($< 25\degr$) and the mass entrainment rate correspondingly to $>3\times10^{-4}\,M_\odot\rm yr^{-1}$: as discussed above, also the NIR observations of the jet lobes suggest large inclination angles with respect to the line of sight. Also, in a pole-on configuration with small inclination angles, we should see the driving sources in the NIR/MIR when looking into the outflow cavities, which is not the case. Based on these outflow properties, we estimate and discuss the accretion luminosity in Section \[sec\_disc\]. New molecular line observations beyond those in Paper I are summarized in Table \[linetab\]. To better trace the overall mass, we also studied UYSO1 in $^{13}$CO(2-1).
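The inclination scaling quoted above for the dynamical timescale and the entrainment rate can be reproduced with a few lines of arithmetic. The sketch below is only a plausibility check, not the actual analysis: it takes the 52” (0.25 pc) total extent, the projected velocity of 26.5 kms$^{-1}$ and the $\sim$3 $M_\odot$ traced by the line wings from the text, and deprojects with $R_{\rm out}=R_{\rm proj}/\sin i$ and $v_{\rm out}=v_{\rm proj}/\cos i$:

```python
import math

PC, KMS, YR = 3.086e18, 1.0e5, 3.156e7   # cgs conversion factors

r_proj = 0.5 * 0.25 * PC    # one lobe: half of the 52" (0.25 pc) total extent
v_proj = 26.5 * KMS         # projected outflow velocity at the UYSO1 position
m_wings = 3.0               # M_sun traced by the red- and blueshifted line wings

for i_deg in (80.0, 57.3, 25.0):
    i = math.radians(i_deg)
    t_dyn = (r_proj / math.sin(i)) / (v_proj / math.cos(i)) / YR   # years
    m_dot = m_wings / t_dyn                                        # M_sun / yr
    print(f"i = {i_deg:4.1f} deg: t_dyn ~ {t_dyn:6.0f} yr, Mdot ~ {m_dot:.0e} Msun/yr")

# -> roughly 800 yr and 4e-3 Msun/yr, 3000 yr and 1e-3 Msun/yr,
#    and 1e4 yr and 3e-4 Msun/yr, consistent with the values quoted above
```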
Following @sco86, the $^{13}$CO(2-1) emission peak at the position of UYSO1 corresponds to a mass of 8 $M_\odot$ in the CSO beam size of $\sim33$” (FWHM). Interestingly, neither NH$_3$ nor N$_2$H$^+$(1-0) emission was found towards the source, with upper limits of $<$0.15 K and $<0.1$ K, respectively. HCO$^+$(3-2) and DCO$^+$(2-1) were detected. The HCO$^+$(3-2) line may consist of several velocity components. Several submillimeter molecular lines that we searched for using the CSO were not detected (see footnote in Table \[linetab\]). These results indicate that UYSO1, with only very few detectable molecular transition lines, does not show typical hot-core chemistry. Based on the upper limit of N$_2$H$^+$(1-0) emission, we can estimate corresponding upper limits for the column density and the abundance. For the column density, $N_{N_2H^+}\approx 8\times10^{11}\Delta v T_R$ cm$^{-2}$ [@ben98], we derive an upper limit of $N_{N_2H^+}\approx1.6\times10^{11}$ cm$^{-2}$, assuming a line width of 1 kms$^{-1}$. An upper limit for the abundance follows when relating this to the hydrogen column density derived by @kle05 from the submillimeter radio data at 850 $\mu$m, $N_H = 5.8\times10^{22}$ cm$^{-2}$ (for $T=20$ K). The result, an abundance of only $N_{N_2H^+}/N_H\le2.6\times10^{-12}$, is surprisingly low compared to values of $\approx 10^{-10}$ found in starless cores by @taf04b.

  Transition                         Telescope         $T_a^*$ \[K\]
  ---------------------------------- ----------------- ---------------
  NH$_3$(1,1)                        Effelsberg 100m   $<0.15$ K
  N$_2$H$^+$(1-0)                    IRAM 30m          $<0.1$ K
  HCO$^+$(3-2)                       IRAM 30m          3.5 K
  DCO$^+$(2-1)                       IRAM 30m          $0.5$ K
  $^{13}$CO                          CSO$^{a}$         1.8 K
  HCN(4-3)                           CSO               0.7 K
  CCH$^{b}$                          CSO               $<0.4$ K
  H$_2$CO(5$_{(1,5)}$-4$_{(1,4)}$)   CSO               0.3 K

  : Molecular transition lines observed towards UYSO1[]{data-label="linetab"}

$^{a}$ notable *undetected* molecules in our 345 GHz-range CSO observations: CH$_3$CN, CH$_3$OH, SO, SO$_2$, HNCO, HCOOCH$_3$, HCCCN, CH$_3$CH$_2$CN, and additionally in the 220 GHz range: SiO\
$^{b}$ blends at 349.338 GHz and 349.400 GHz

Small-scale structure {#ssec:smallscale}
---------------------

#### Millimeter observations

The millimeter continuum data from the two PdBI observations are ideal to study the small-scale dust continuum emission. In the low-resolution (D configuration) data, only a single source is detected at 3mm. In the high-resolution data (extended A configuration), this source is resolved into two protostars, UYSO1a and UYSO1b (Table \[tab:pos\]); both sources remain unresolved. At 1mm, only the brighter southern source, UYSO1a, is clearly detected due to limited sensitivity. Results of the two 3mm continuum datasets are shown in Fig. \[uyso1pdbi1\]. We follow @bss03 [@beu05err] in determining the mass from the 3mm continuum data, as traced by the optically thin dust emission of the protostellar envelopes. We assume a temperature of $T= 45$ K while leaving the gas-to-dust ratio (100:1) and other quantities the same as in @bss03 who estimate the results to be accurate within a factor of five. The single source detected in the low-resolution data corresponds to a gas mass of 9.5 $M_\odot$. In the high-resolution data, two continuum sources are detected in the 3mm band. These are at roughly the positions of two faint near-infrared continuum sources, but do not fit exactly (see discussion of the NIR observations below). Both objects appear to be intermediate-mass protostars with gas masses of 3.5 $M_\odot$ for the more massive component, UYSO1a, and 1.2 $M_\odot$ for the second component, UYSO1b.
The linear separation between the two sources is $4\farcs17$, or 4200 AU. The hydrogen column densities corresponding to the derived gas masses are $N_{\rm H}=6.2\times10^{24}$ cm$^{-2}$ and $N_{\rm H}=2.2\times10^{24}$ cm$^{-2}$ for UYSO1a and UYSO1b, respectively. As a crude approximation, we convert these to visual extinction according to the empirical relation $N_{\rm H}\,\mathrm{[cm^{-2}]}\approx 2\times10^{21}\times A_{\rm V}\,\mathrm{[mag]}$ [@ryt96; @vuo03], resulting in visual extinctions of $A_V>$1000 mag. We note that the derived column densities are well above typical values for low-mass protostars (e.g., [@mot01]) and also above the minimum column density of $3\times10^{23}$ cm$^{-2}$ (or 1 g cm$^{-2}$) that @kru08 derived for the formation of massive stars. The molecular line data collected with the PdBI suffer from missing flux, and the two extreme configurations are difficult to combine due to virtually no overlap in the *uv* plane. Thus, we only briefly discuss the CS(2-1) data here. While in the D configuration, again only a single source is detected with barely noticeable velocity structure, the A configuration data show emission that is largely resolved out.

#### Centimeter radio observations

H$_2$O maser emission, a signpost of low- and high-mass star formation [@hen92], was discovered in single-dish observations towards UYSO1 in two velocity components symmetrically spaced around the system velocity of $v_{\rm lsr}=12$ kms$^{-1}$ (Fig. \[uyso1eberg\]). One of the two components has a double-peaked substructure. In high-resolution VLA A-array observations, however, only a single maser spot was found at the position of the north-western millimeter continuum source, UYSO1b, close in velocity to the brighter component in the Effelsberg maser spectrum. No CH$_3$OH maser emission was detected ($<0.3$ Jy). In the centimeter continuum at wavelengths of 1.3 cm and 3.5 cm, only 3$\sigma$ upper limits of 5.6 mJy and 0.21 mJy for the flux densities could be determined, respectively. Thus, no detectable H[II]{} region has formed yet.

#### Near- and mid-infrared observations

In deep NIR imaging, two sources were detected in the NB2.09 continuum and the *L* band which are close to the positions of the millimeter protostars, but they do not coincide with them (Figs. \[jcmtisaac\] and \[nircont\]). The astrometric accuracy of the near-infrared data, refined with positions from the 2MASS catalogue, is estimated to be $<0.2''$. Given the enormous visual extinction towards the two protostars (see above), we probably only see scattered light from their vicinities [e.g., @lin05; @wei06]. Since UYSO1a lies in the large north-south jet, and UYSO1b lies in the east-west jet, the two millimeter sources probably are their driving sources. Then, UYSO1a would also power the dominant molecular outflow coinciding with its jet while UYSO1b excites the water maser. Notably, the NB2.09 NIR continuum source just to the south-east of UYSO1a and the source just to the east of UYSO1b appear not to be point sources; it is also noteworthy that both appear on the sides of the blueshifted outflow lobes, which would be tilted towards us and thus suffer less extinction. In the VISIR observations at 11 $\mu$m, no source was detected at the position of UYSO1. As an upper limit, we use the expected $3\sigma$ point source sensitivity of 9 mJy.
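The unit conversions behind these numbers are elementary; the following minimal sketch (assuming the 1 kpc distance adopted throughout and the $N_{\rm H}$–$A_V$ relation quoted above) reproduces the projected separation and the extinction estimates:

```python
# small-angle approximation: 1'' at 1 pc corresponds to 1 AU
d_pc = 1000.0                         # adopted distance in pc
sep_arcsec = 4.17                     # UYSO1a - UYSO1b angular separation
print("separation:", sep_arcsec * d_pc, "AU")          # ~4200 AU

N_H = {"UYSO1a": 6.2e24, "UYSO1b": 2.2e24}              # cm^-2, from the 3 mm continuum
for name, nh in N_H.items():
    print(name, "A_V ~", nh / 2.0e21, "mag")            # ~3100 and ~1100 mag, i.e. > 1000 mag
```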
Spectral Energy Distribution {#ssec:photo}
----------------------------

Compared to the spectral energy distribution of UYSO1 in Paper I, which was based on the submillimeter detections and upper limits derived from IRAS data, we now have much more comprehensive information to constrain the combined luminosity of UYSO1a and 1b.

#### Broad-band Spitzer observations

As noted in Sect. \[ssec:largescale\], UYSO1 remains undetected in the MIPS scan maps. Bright extended emission from the clump around UYSO1 is detected but no compact emission peak that could be ascribed to the core containing UYSO1a and 1b is seen. The question is how to estimate upper limits for the flux density of UYSO1 at 24$\mu$m and 70$\mu$m. As a conservative estimate, we perform aperture photometry at the position of UYSO1, using apertures of $14''$ and $32''$ in diameter with background annuli from $80''$ to $100''$ and $78''$ to $130''$, for 24 and 70$\mu$m, respectively. These apertures include UYSO1 but do not cut into the PDR, whereas the background annuli lie beyond the main extended emission. We applied the appropriate aperture and color corrections as described in the MIPS Data Handbook v3.3, derived from the theoretical PSF. This method yields conservative upper limits of 0.3 Jy and 70 Jy for UYSO1 at 24$\mu$m and 70$\mu$m, respectively. However, these values mainly measure the large-scale emission of the externally heated envelope at the position of UYSO1. Inspired by unsharp masking, we estimate the extended emission by convolving the maps with a Gaussian profile and then subtracting it to retain the small-scale structure. The FWHM of the Gaussian was $40''$. The unsharp-masked maps are displayed in the lower panels of Fig. \[uyso1mips\]. The small-scale structure is now much more pronounced, but no peak at the position of UYSO1 becomes apparent. Repeating the aperture photometry should now result in values much less affected by the extended emission. Virtually no flux is left in the 24$\mu$m map at the position of UYSO1. Only an upper limit of 0.1 Jy can be derived. At 70$\mu$m, the aperture photometry retains $25\pm2$ Jy at the position of UYSO1. This number is a second upper limit since there is no compact emission apparent at this position (although there is a conspicuous extension of the 70$\mu$m emission towards the positions of UYSO1a and b, possibly indicating that these protostars start to become visible at about this wavelength). There is considerable uncertainty as to how well the extended emission and the cloud background are subtracted.

![MIPS Spectrum of UYSO1 with $3\sigma$ error bars; in grey a 40 K black body spectrum; inset: Location, size, and integrated intensity of the MIPS slit on the 70 $\mu$m map with 850 $\mu$m contours[]{data-label="fig:sed"}](0598_fig8.eps){width="\linewidth"}

#### Low resolution Spitzer spectroscopy (SED mode)

Even though UYSO1 remains undetected in the MIPS imaging data, the low-resolution spectroscopy can help characterize the surroundings. The inset of Fig. \[fig:sed\] shows the location, the size, and the pixels of the MIPS slit relative to the 70 $\mu$m imaging data; the resulting spectrum is shown in the main part of the figure. This spectrum was extracted from the three pixels closest to the position of UYSO1. The cloud background was removed by subtracting the average of the three pixels east and three pixels west of the UYSO1 spectrum. All the spectra are derived by averaging over three pixels along the slit and applying the proper aperture correction.
The $3\sigma$ error bars are the uncertainties propagated from those reported in the post-BCD products. The spectrum derived for the position of UYSO1 is compatible with the flux derived from the 70 $\mu$m map. It does not show any strong features and the slope can be described by a black body of temperature 40 K. Note again that UYSO1 itself is probably not detected.

![The \[O[i]{}\] and \[O[iii]{}\] lines: the \[O[i]{}\] 63 $\mu$m line intensities in the MIPS slit on top of the 70 $\mu$m map (850 $\mu$m contours). The inset displays again the variation of the \[O[i]{}\] 63 $\mu$m line across the slit plus the \[O[iii]{}\] 88 $\mu$m line (grey). Error bars are $3\sigma$.[]{data-label="fig:oi"}](0598_fig9.eps){width="\linewidth"}

#### Oxygen fine structure lines

A feature that is detected in the *Spitzer* SED-mode observations despite their low spectral resolution is emission in the oxygen fine structure lines at the eastern rim of the molecular core. Fig. \[fig:oi\] again shows the 70 $\mu$m map with the MIPS slit, this time with the intensities of the \[O[i]{}\] line at 63 $\mu$m. The inset shows the variation of the \[O[i]{}\] line and the \[O[iii]{}\] (88 $\mu$m) line along the slit. The subtracted continuum was obtained by averaging the two pixels neighboring the line-containing pixel in wavelength. We clearly see here a PDR radiating in two important cooling lines on the eastern side of the core where the radiation from IRAS 07029-1215 impacts the molecular cloud. The emission comes only from a small area as atomic oxygen is only present up to optical depths of $A_V\approx10$ mag, and the PDR must be seen edge on. The \[O[i]{}\] line’s intensity of a few $10^{-3}\rm\,erg\,cm^{-2}s^{-1}sr^{-1}$ is expected for PDRs with densities of $10^3$ to $10^4\rm\,cm^{-3}$ [@hol99].

Discussion {#sec_disc}
==========

In view of our previous knowledge on UYSO1, the main new result that warrants discussion is its newly constrained SED. Also, the nondetections in N$_2$H$^{+}$ and NH$_3$ are of special interest. The non-detection of UYSO1 at mid-infrared wavelengths is surprising given the earlier estimate of the SED in Paper I. The mid-infrared part of that SED was only weakly constrained by upper limits from IRAS data, clearly contaminated by the nearby bright source IRAS 07029-1215. The previous estimate includes the entire cloud core, heated also externally. The previous luminosity upper limit of $<1900\,L_\odot$ corresponds to the large-scale emission in the *Spitzer* maps. Between 24 and 70 $\mu$m, the luminosity of the filtered-out large-scale emission is $\sim1500\,L_\odot$. We can estimate the amount of energy that is intercepted and re-radiated by the PDR neighboring UYSO1. The luminosity estimate in Paper I includes any such emission inside the diameter of 80% encircled energy of IRAS at 100 $\mu$m. Our MIPS data indicate that extended emission is indeed present around UYSO1 on these scales. The projected distance between UYSO1 and HD 53623 is 0.46 pc. Since the cloud is illuminated from the side, the real distance should not be much different, i.e., $<$1 pc (if it protrudes less than $60^\circ$ out of the plane of the sky). When seen from HD 53623, the size of the extended emission subtends an angle of 28–60$^\circ$, or a solid angle of 0.18–0.85 sr, assuming that the cloud is at a distance of 1–0.46 pc and appears circular. The remaining uncertainty is the luminosity class of HD 53623 for which spectral types of B1V [@cla74] and B1II/III [@hou88] are given in the literature.
Correspondingly assuming luminosities of $16000\,L_\odot$ or $39000\,L_\odot$ [@sch82], the cloud core intercepts and eventually re-radiates 230–2640 $L_\odot$. With the uncertainties involved, it seems reasonable to assume that the previously estimated upper limit for the luminosity was dominated by externally heated extended emission. In spite of the new data, the SED is only constrained by upper limits in the infrared regime. Since the two newly discovered components are not resolved in the submillimeter continuum, we use the unresolved PdBI continuum data at 3 mm (from the D configuration) and do not take into account the high-resolution PdBI data for the SED fit. Assuming an isotropic radiation field, a new modified-blackbody fit (with dust opacities of $\kappa_\nu \propto \nu^\beta$ and $\beta=2$) yields a luminosity upper limit of $\sim50\,L_\odot$ for both components combined (see Fig. \[uyso1SEDN.eps\]), assuming a distance of 1 kpc. This value may nevertheless be a severe lower bound to the true source luminosity if most of the luminosity escapes along the outflow cavities and is not re-processed into infrared radiation. In spite of the uncertainties involved, it is interesting to compare this luminosity to an estimate of the accretion luminosity. The outflow mass entrainment rate translates into a mass loss rate of the driving jet which in turn translates into the actual accretion rate onto the star. @bss02 estimate that this actual accretion rate is lower than the outflow mass entrainment rate by a factor of about 6. The above-mentioned conservative lower limit for the mass entrainment rate thus translates into a continuous accretion rate of $>5\times10^{-5}\,M_\odot\rm yr^{-1}$. A mass entrainment rate of $\dot{M}=4\times10^{-3}\,M_\odot$yr$^{-1}$ translates into an accretion rate of $7\times10^{-4}\,M_\odot$yr$^{-1}$. Based on the accretion rate, it is possible to estimate the accretion luminosity, $L_{\rm acc} = G \cdot M_\star \cdot \dot{M}_{\rm acc} \cdot r_\star^{-1}$. Of course, the stellar parameters are barely constrained in this case, and the situation is further complicated by the assumption of isotropic radiation, but we can carry out an order-of-magnitude check. For a 3 $M_\odot$ star and a corresponding radius of 5–8 $R_\odot$ [@pal92; @yor08], the lower limit of the accretion rate corresponds to an accretion luminosity of 600–900 $L_\odot$. For comparison, a 10 $M_\odot$ star with a corresponding radius of 7–12 $R_\odot$ would have an accretion luminosity of 1300–2200 $L_\odot$. The estimated accretion luminosity thus appears to be about an order of magnitude larger than the luminosity upper limit deduced from the SED with the assumption of isotropic radiation. However, we note again that, given the likely presence of accretion disks, the radiation field is highly anisotropic. In particular, such disks would have the highest extinction, poorly determinable in the mid-infrared, in the direction towards the observer. Therefore, and due to the fact that the accretion luminosity is but a simple estimate with uncertain assumptions, the above discussion should be regarded as very tentative. It remains unclear whether the empirical relation between the mass entrainment rate of the outflow and the luminosity of the central object that was used in Paper I is reliable ([@shc96; @hen00; @bss02], see also [@wu05]). We note nevertheless that the mass entrainment rate appears to be higher than what would be expected when applying this relation for a driving source with a luminosity of $\sim$50 $L_\odot$.
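Both of the order-of-magnitude estimates above can be reproduced with a short calculation. The sketch below is only a plausibility check under the stated assumptions: the intercepted fraction is $\Omega/4\pi$ for the quoted opening angles, and $L_{\rm acc}=G M_\star \dot{M}_{\rm acc}/r_\star$ is evaluated for the accretion rate of $5\times10^{-5}\,M_\odot$yr$^{-1}$.

```python
import math

G, MSUN, RSUN = 6.674e-8, 1.989e33, 6.957e10     # cgs
LSUN, YR = 3.828e33, 3.156e7

# (1) luminosity intercepted and re-radiated by the externally heated clump
for full_angle_deg, l_star in ((28.0, 16000.0), (60.0, 39000.0)):
    omega = 2 * math.pi * (1 - math.cos(math.radians(full_angle_deg / 2)))
    print(f"{omega:.2f} sr -> {l_star * omega / (4 * math.pi):.0f} Lsun intercepted")
# -> ~0.19 sr / ~240 Lsun and ~0.84 sr / ~2600 Lsun, bracketing the quoted range

# (2) accretion luminosity for the lower limit on the accretion rate
m_acc = 5e-5 * MSUN / YR                          # g/s
for m_star, radii in ((3.0, (5.0, 8.0)), (10.0, (7.0, 12.0))):
    l_acc = [G * m_star * MSUN * m_acc / (r * RSUN) / LSUN for r in radii]
    print(f"{m_star:.0f} Msun star: L_acc ~ {min(l_acc):.0f}-{max(l_acc):.0f} Lsun")
# -> roughly 600-900 Lsun and 1300-2200 Lsun, as quoted above
```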
UYSO1 was not observed to show centimetric NH$_3$ emission or millimetric N$_2$H$^+$ emission although we detect HCO$^+$(3-2) at the same position. @wom91 find a similar effect towards Orion-IRc2 and speculate that N$_2$H$^+$ may be depleted by shocks or winds close to that source. On the other hand, @taf04 found an extremely young starless core with constant C$^{18}$O and an unusually low N$_2$H$^+$ abundance. The fact that we do not see typical hot-core chemistry towards UYSO1, combined with the still low luminosity of the two sources suggests that we see very young sources. While it remains unclear how exactly the outflow fits into this picture due to the unknown inclination angle, a sufficiently large inclination angle would explain the extinction towards the central source and lead to a high mass entrainment rate of the outflow. Conclusions {#sec_conc} =========== We report the detection of two highly collimated jets intersecting close to the position of the previously identified candidate massive protostar UYSO1. Follow-up high-resolution millimeter radio observations show two continuum sources with masses of 3.5 $M_\odot$ (UYSO1a) and 1.2 $M_\odot$ (UYSO1b) and high column densities ($>10^{24}$ cm$^{-2}$) close to the geometric centers of the jets which probably are their powering sources. The projected distance between the two sources is 4200 AU. UYSO1a appears to power the energetic molecular outflow that was previously discovered towards UYSO1. UYSO1b contains a previously unknown H$_2$O maser source while a search for CH$_3$OH maser emission remained unsuccessful. The two millimeter protostars are not detected in deep near-infrared imaging and remain undetected also at mid- to far-infrared wavelengths. Also, no developing H[II]{} regions were found. Submillimeter observations show that the region is rather cold and does not show typical hot-core chemistry. Curiously, not even emission in NH$_3$ or N$_2$H$^+$ was detected. An attempt to constrain the combined SED of the two sources yields an estimated luminosity of roughly $L\approx50\,L_\odot$, which is about an order of magnitude lower than a tentative estimate of the accretion luminosity. The key to combining these new results into a coherent picture probably lies in the unknown outflow inclination angle with respect to the line of sight. A large inclination angle would lead to configurations in which the circumstellar disk is seen nearly edge-on, obscuring the central object and helping to explain their infrared non-detections. For moderately large inclination angles of about $>25\,^\circ$, where a thick circumstellar disk and a dense envelope would obscure the central object, the molecular outflow powered by UYSO1a has a dynamical timescale of $<10000$ years with enormous outflow mass entrainment rates of $>\dot{M}=3\times10^{-4}\,M_\odot$yr$^{-1}$. The luminosity of the corresponding near-infrared jet is comparable to the most luminous jets found in a survey of Orion A. In addition to the peculiar chemistry and the still relatively low luminosity, this indicates a very early evolutionary stage. We wish to thank Sandra Bruenken for additional observations at Effelsberg, Jens Kauffmann for help with the 30m observations, Marian Szymczak for trying to detect CH$_3$OH emission towards UYSO1 at Toruń, Philippe Salomé and Robert Zylka, both at IRAM Grenoble, for support in analyzing the PdBI data, as well as Henrik Beuther for helpful discussions. We would like to thank the referee, John Bally, for his helpful comments. T.S. 
is grateful for support through the A. v. Humboldt Foundation during his stay at the University of Hawaii. R.K. acknowledges support through Spitzer grant JPL no.1276999. Partly based on Director’s Discretionary Time observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC). The National Radio Astronomy Observatory (NRAO) is operated by Associated Universities, Inc., under a cooperative agreement with the National Science Foundation. Partly based on observations carried out with the IRAM Plateau de Bure Interferometer and the IRAM 30m telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). Partly based on observations made with ESO Telescopes at the Paranal Observatory under programme IDs 074.C-0648(A) and 076.C-0773(B). The James Clerk Maxwell Telescope is operated by The Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the Netherlands Organisation for Scientific Research, and the National Research Council of Canada. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Partly based on observations with the 100-m telescope of the MPIfR (Max-Planck-Institut für Radioastronomie) at Effelsberg. [^1]: http://www.submm.caltech.edu/cso/receivers/beams.html [^2]: http://www.apex-telescope.org/telescope/
{ "pile_set_name": "ArXiv" }
--- abstract: 'We prove that every spherical football is a branched cover, branched only in the vertices, of the standard football made up of $12$ pentagons and $20$ hexagons. We also give examples showing that the corresponding result is not true for footballs of higher genera. Moreover, we classify the possible pairs $(k,l)$ for which football patterns on the sphere exist satisfying a natural generalisation of the usual incidence relation between pentagons and hexagons to $k$-gons and $l$-gons.' address: - 'Mathematisches Institut, Ludwig-Maximilians-Universität München, Theresienstr. 39, 80333 München, Germany' - 'Mathematisches Institut, Ludwig-Maximilians-Universität München, Theresienstr. 39, 80333 München, Germany' author: - 'V. Braungardt' - 'D. Kotschick' date: 'April 5, 2006; MSC 2000 classification: primary 52B05, 57M12, 57M15; secondary 05C25, 20E08' title: The classification of football patterns --- Introduction {#introduction .unnumbered} ============ A football pattern[^1] is a graph embedded in the two-sphere in such a way that all faces are pentagons and hexagons, satisfying the conditions that the edges of each pentagon meet only edges of hexagons, and that the edges of each hexagon alternately meet edges of pentagons and of hexagons. If one requires that there are exactly three faces meeting at each vertex, then Euler’s formula implies that the pattern consists of $12$ pentagons and $20$ hexagons. Moreover, in this case the combinatorics of the pattern is uniquely determined. This pattern, which we shall refer to as the standard football, has a particularly symmetric realisation with all polygons regular, which can be thought of as a truncated icosahedron. If one drops the requirement that there are exactly three polygons meeting at each vertex, then one can exhibit infinitely many distinct football patterns by lifting to branched covers of the standard football branched only in vertices of the pattern. In the first part of this paper we shall prove that these are the only football patterns on the two-sphere. For the proof we consider the dual graph of a football pattern as a coloured ribbon graph. The dual of the standard football with $12$ pentagons and $20$ hexagons is shown in the picture on the first page of this paper. To make the picture symmetric, a black vertex is spread out at infinity and is depicted by the shaded ring around the rest of the graph. Of course we could consider the dual graph as a map, but in order to make clear the distinction between football patterns and their duals, we will always use the language of maps, cf. [@Maps], for the football patterns themselves, and the language of ribbon graphs, cf. [@RibbonGraphs], for their duals. All the football graphs, that is ribbon graphs dual to football patterns, have the same universal covering, which is a certain tree $T$. We shall determine the automorphism group of $T$ and prove that every cofinite subgroup of the automorphism group whose quotient gives rise to the dual of a spherical pattern is a subgroup of the group giving rise to the standard football. We also consider football patterns on surfaces of higher genera and show that the classification theorem does not hold for them; in other words, not all of them can be obtained by taking branched covers of the standard football. However, football patterns on surfaces of higher genera always admit branched covers of degree at most $60$ which in turn are also branched covers of the standard spherical football. 
One may wonder what special rôle pentagons and hexagons play in this discussion. We shall address this question in the second part of this paper, where we determine all the possibilities for triples $(k,l,n)$ that can be realised by maps on the two-sphere whose faces are $k$-gons and $l$-gons satisfying the conditions that the edges of each $k$-gon meet only edges of $l$-gons, and that every $n^{\textrm{th}}$ edge of each $l$-gon meets an edge of a $k$-gon, and its other edges meet $l$-gons. Not surprisingly, the determination of these triples is a generalisation of the topological argument determining the Platonic solids. In most cases the generalised football patterns have realisations dual to very symmetric triangulations of the sphere that have been known and studied since the $19^{\textrm{th}}$ century. That purely topological or combinatorial considerations lead to a list that contains almost only the usual symmetric patterns and their degenerations is a kind of rigidity phenomenon associated with these spherical triangulations. We shall see that the classification theorem for spherical football patterns proved in the first part of this paper for the triple $(5,6,2)$ actually holds for all generalised football patterns with $n=2$: each generalized football with a pattern of type $(k,l,2)$ is a branched cover of the corresponding minimal pattern. We shall also see that this result does not extend to $n>2$. ### Acknowledgement {#acknowledgement .unnumbered} We are grateful to B. Hanke and B. Leeb for helpful comments, and to the [*Deutsche Forschungsgemeinschaft*]{} for support. Ribbon graphs and branched covers ================================= Football graphs --------------- A football pattern is a map in the sense of [@Maps] on the two-sphere satisfying the usual conditions that at least three edges meet at every vertex, that all faces are pentagons and hexagons, that the edges of each pentagon meet only edges of hexagons, and that the edges of each hexagon alternately meet edges of pentagons and of hexagons. We make no regularity assumption, so that a football pattern is not a geometric, but a combinatorial-topological object. A football pattern determines, and is determined by, its dual graph. This graph has a vertex for every polygon in the pattern, and the vertices are coloured, say black for the vertices corresponding to pentagons and white for the vertices corresponding to hexagons. Two vertices are connected by an edge if the corresponding polygons share an edge. The edges meeting at a vertex are cyclically ordered (with respect to this endpoint) by remembering that the sides of a polygon are cyclically ordered. Therefore the dual graph is a fatgraph or ribbon graph in the sense of [@RibbonGraphs], leading to the following definition: \[d:basic\] A football graph is a ribbon graph with black and white vertices satisfying the following conditions: 1. each black vertex has valence five, and all five edges connect the given vertex to white vertices, and 2. each white vertex has valence six, and the six edges alternately[^2] connect the given vertex to black and white vertices. The picture at the beginning of this paper shows the dual ribbon graph of the standard football pattern on $S^2$ consisting of $12$ pentagons and $20$ hexagons. To make the picture symmetric, a black vertex is spread out at infinity and is depicted by the shaded ring around the rest of the graph. 
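To make the combinatorial conditions of the definition concrete, the following minimal sketch (not taken from the paper; all names are illustrative) encodes a finite coloured ribbon graph by its darts and checks the two conditions of Definition \[d:basic\]:

```python
def is_football_graph(vertices, colour, edges):
    """vertices: {v: [darts in cyclic order]}, colour: {v: 'black' or 'white'},
    edges: iterable of dart pairs (a, b); every dart belongs to one vertex and one edge."""
    other = {}                                   # dart -> dart at the far end of its edge
    for a, b in edges:
        other[a], other[b] = b, a
    owner = {d: v for v, darts in vertices.items() for d in darts}

    for v, darts in vertices.items():
        ends = [colour[owner[other[d]]] for d in darts]    # colours of the neighbouring vertices
        if colour[v] == 'black':
            # a black vertex has valence five and only white neighbours
            if len(darts) != 5 or any(c != 'white' for c in ends):
                return False
        else:
            # a white vertex has valence six and its edges lead alternately
            # to black and white vertices (no two cyclically consecutive equal)
            if len(darts) != 6:
                return False
            if any(ends[i] == ends[(i + 1) % 6] for i in range(6)):
                return False
    return True
```

The standard football graph, with its 12 black and 20 white vertices, would pass this check; any graph violating the alternation at a white vertex would not.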
As every finite ribbon graph corresponds to a unique closed oriented surface, we have a natural bijection between football graphs and football patterns on arbitrary closed oriented surfaces. A covering map between football graphs corresponds to a possibly branched covering map between surfaces, with any branching restricted to the centers of the faces of the decompositions given by the football graphs. As the football graphs are dual to actual football patterns, branching can only occur at the vertices of patterns. Let $b$ and $w$ be the numbers of black and white vertices in a football graph $\Gamma$. Then the total number of vertices is $v=b+w$, and the number of edges is $e=\frac{1}{2}(5b+6w)$. Of these edges, $e_1=\frac{3}{2}w$ have white endpoints, and $e_2=5b=3w$ have a black and a white endpoint. It follows that there is a natural number $d$ such that $b=6d$ and $w=10d$. \[l:triangle\] The only football graph giving a triangulation of a closed surface is the dual graph $\Gamma_0$ of the standard spherical football with $d=2$. Let $\Sigma$ be the closed oriented surface defined by a football graph $\Gamma$. By Euler’s formula, the number of faces in the cell decomposition of $\Sigma$ determined by $\Gamma$ is $$f = \chi (\Sigma ) + e - v = \chi (\Sigma ) + 45d - 16d = \chi (\Sigma ) + 29d \ .$$ If $\Gamma$ defines a triangulation of $\Sigma$, then we must have $2e = 3f$, which, rewritten in terms of $d$, means $90d = 3\chi (\Sigma )+ 87d$, or $d=\chi (\Sigma )$. Thus $d=2$, and $\Sigma$ is $S^2$. The combinatorics of the corresponding football pattern is uniquely determined in this case, as can be seen from the proof of Theorem \[t:main\] below. The football tree ----------------- There is precisely one connected and simply connected football graph, which we shall call the football tree $T$. It is the universal cover of any football graph. As $T$ thought of as a ribbon graph is an orientable surface, it makes sense to speak of orientation-preserving automorphisms, and we shall denote the group of all such automorphisms by ${\operatorname{Aut}}(T)$. This can be determined explicitly: The automorphism group ${\operatorname{Aut}}(T)$ of the football tree is isomorphic to the free product ${\mathbb{ Z}}_2\star{\mathbb{ Z}}_3\star{\mathbb{ Z}}_5$. This is a straightforward application of the Bass–Serre theory of groups acting on trees, cf. [@Trees]. This theory is usually formulated for groups acting on trees without inverting edges. In our situation, there are automorphisms inverting edges that connect a pair of white vertices. Therefore, we subdivide each of these edges by introducing red vertices in the middle of each edge connecting two white edges of $T$. We obtain a new tree $T'$, which has three kinds of edges: the white and black ones of valence $6$ and $5$ respectively, and red ones of valence $2$. The red and black vertices are only connected to white ones, and the edges meeting at a white vertex lead alternately to red and black vertices. Now ${\operatorname{Aut}}(T)$ acts on $T'$ without inverting any edges. The action is simply transitive on the two kinds of edges, black-white and red-white. The action is also transitive on the vertices of a given colour, with isotropy groups of orders $2$, $3$ and $5$ for the red, white and black vertices. The quotient graph $T'/{\operatorname{Aut}}(T)$ is a tree with three vertices, one for each colour, and with one edge connecting the white vertex to each of the other vertices. 
We think of this as a graph of groups by labeling the vertices with the isotropy groups. As the edges have trivial isotropy, the fundamental group of this graph of groups is the free product of the labels of the vertices. By the structure theorem of Section I.5.4 in [@Trees], this fundamental group is isomorphic to ${\operatorname{Aut}}(T)$. The classification of spherical footballs ----------------------------------------- We now want to prove that every football pattern on the sphere is obtained as a branched cover of the standard football branched only in the vertices. Equivalently, we prove that the spherical dual ribbon graphs are all obtained as covering spaces of the dual graph of the standard football. The first step is the following: \[l:prep\] Let $\Gamma$ be a football graph with universal covering $\pi\colon T\longrightarrow\Gamma$. Suppose that $\gamma$ is an oriented path in $T$ consisting of a sequence of edges without backtracking. If $\pi$ maps $\gamma$ to a closed path that corresponds to a boundary component of $\Gamma$ thought of as a surface with boundary, then $\gamma$ consists of $3n$ edges for some natural number $n$, and the image of $\gamma$ in the standard football graph $\Gamma_0$ is a loop that is the $n^{\textrm{th}}$ power of the loop formed by a triangle in the triangulation defined by $\Gamma_0$, cf. Lemma \[l:triangle\]. We think of $T$ as a surface with boundary. Choose a boundary component $C$ covering the boundary component $\pi(C)$ of $\Gamma$ to which $\gamma$ is mapped. As a boundary component of $T$, $C$ runs right along a sequence $\{e_i\}_{i\in{\mathbb Z}}$ of oriented edges in $T$ such that, of course, the origin $o(e_i)$ of each edge coincides with the endpoint of the previous edge, and, in addition, with respect to the cyclic order of edges emanating from $o(e_i)=o(\bar e_{i-1})$, $e_i$ is the successor of $\bar e_{i-1}$. (The bar denotes edge inversion.) It follows from Definition \[d:basic\] that the sequence of vertices $o(e_i)$ is of the form *black, white, white, black, white, white*, etc. The setwise stabilizer of $C$ in ${\operatorname{Aut}}(T)$ is the infinite cyclic group generated by the translation $\tau$, which maps $e_i$ to $e_{i+3}$. Now $\pi(C)=C/G$, for some subgroup $G {\leqslant}{\mathop{\textrm{Stab}}\nolimits}(C)$, i.e. $G = \langle \tau^n \rangle$ for some $n\in{\mathbb N}$. Thus $\pi(C)$ runs along a $3n$-gon, which must be $\pi(\gamma)$. By assumption, the path $\gamma$ is a piece $e_{j+1},\ldots,e_{j+k}$ of the sequence $\{e_i\}$ and projects to $\pi(\gamma)$. Therefore $k=3n$. The images of $e_i$ in the standard football still satisfy the condition that consecutive edges be related by the cyclic ordering defining the ribbon graph structure. But this means that the image of $e_{j+1}e_{j+2}e_{j+3}$ is a triangle. Here is the classification theorem for spherical football patterns: \[t:main\] Every football graph dual to a football pattern on $S^2$ is a finite covering space of the standard football graph $\Gamma_0$. Equivalently every football pattern on $S^2$ is obtained from the standard one by passing to a branched cover branched only in vertices of the pattern. Given a spherical football graph $\Gamma$, fix a universal covering $\pi\colon T\rightarrow\Gamma$ of $\Gamma$ and a universal covering $\pi_0\colon T \rightarrow \Gamma_0$ of the standard football graph $\Gamma_0$. We are going to show that the group of deck transformations ${\operatorname{Aut}}_\Gamma T$ is a subgroup of ${\operatorname{Aut}}_{\Gamma_0}T$. 
This implies that $\pi_0$ factors through $\pi$. Choose a point $p$ on an edge of $T$ that is not an endpoint or a midpoint, so that it has trivial stabilizer in ${\operatorname{Aut}}(T)$. Covering space theory identifies the group of deck transformations ${\operatorname{Aut}}_{\Gamma}T$ with the fundamental group $\pi_1(\Gamma;\pi(p))$. Since $\Gamma$ is spherical, this fundamental group is generated by paths of the form $\beta\gamma\beta^{-1}$, where $\gamma$ is a loop along a boundary component, $\beta$ runs from the base point $\pi(p)$ to the origin of $\gamma$ and $\beta^{-1}$ is the way back. Lifting $\beta$ and $\gamma$ to $T$ we obtain a path $\tilde\beta\tilde\gamma\tau^n(\tilde\beta^{-1})$ leading from $p$ to $\tau^n(p)$, with $\tau$ from the proof of the previous lemma. Hence $\tau^n$ is the deck transformation corresponding to the given generator of $\pi_1(\Gamma;\pi(p))$. This proves the result, because $\tau$ is a deck transformation over $\Gamma_0$. This proof also shows: The subgroup $\pi_1(\Gamma_0)\subset{\operatorname{Aut}}(T)$ is normal. The quotient ${\operatorname{Aut}}(T)/\pi_1(\Gamma_0)$ is the icosahedral group of order $60$. Footballs of positive genera ---------------------------- Every ribbon graph corresponds to a unique closed oriented surface, and of course every such surface does indeed arise from a football graph, for example because it is a branched covering of the two-sphere, which we can arrange to be branched only in the vertices of a suitable football pattern. We now want to show that there are other football patterns on surfaces of positive genera, that are not lifted from the two-sphere. The proof of Theorem \[t:main\] does not extend, because for a ribbon graph corresponding to a surface of positive genus there are generators in the fundamental group that arise from handles, rather than from the punctures. Recall that the parameter $d$ for a finite football graph specifies the number of black and white vertices by the formulae $b=6d$ and $w=10d$. Passing to a $D$-fold covering multiplies $d$ by $D$. As the standard football graph $\Gamma_0$ has $d=2$, all its non-trivial coverings have $d{\geqslant}4$. Therefore, to exhibit football graphs that are not coverings of $\Gamma_0$, it suffices to find examples of positive genus with $d<4$. Performing certain cut-and-paste operations on $\Gamma_0$, we can actually produce examples with $d=2$ and rather large genera. The simplest example is the following. \[ex:sum\] Pick two disjoint edges in the standard football pattern, that are of the same type, so that they both separate hexagons from each other, or they both separate a pentagon from a hexagon. Open up the two-sphere along these edges to obtain a cylinder whose two boundary circles each have two vertices and two edges. As the two edges along which we opened the sphere were of the same type, the two boundary circles of the cylinder can be identified in such a way that the resulting torus carries an induced football pattern with $d=2$. In this example there are $58$ vertices instead of the $60$ in the standard spherical football. All but two of them are $3$-valent, and the exceptional two are $6$-valent. In the language of ribbon graphs, the surgery performed in the above example amounts to cutting two ribbons and regluing the resulting ends in a different pairing. 
This can also be done with ribbons corresponding to edges that share a vertex, in which case instead of cutting and pasting, the surgery can be described through the reordering of edges: \[ex:surgery\] Let $e_1,\ldots,e_5$ be the edges emanating from a black vertex in the standard football graph $\Gamma_0$. Define a new football graph $\Gamma$ by reordering the edges as $e_1$, $e_3$, $e_2$, $e_4$, $e_5$. This procedure glues the three triangles whose edges include $e_2$ or $e_3$ into a single $9$-gon boundary component of $\Gamma$. This new graph still has $d=2$, but the underlying surface is a torus. In the dual football pattern there are $58$ vertices, of which $57$ are $3$-valent and one is $9$-valent. \[ex:symmetric-genus-24\] Let $e_1,\ldots,e_6$ be the edges emanating from a white vertex in the standard football graph $\Gamma_0$, enumerated in their cyclic order and labelled such that $e_1$, $e_3$ and $e_5$ have black ends. Define a new ribbon graph by reordering the edges cyclically as $e_1,e_4,e_3,e_6,e_5,e_2$. This means that the edges leading to white vertices are cut and reattached after a cyclic permutation given geometrically by a rotation by angle $\frac{2\pi}{3}$ around the vertex in the realisation of $\Gamma_0$ with icosahedral symmetry. We apply this procedure to every white vertex of $\Gamma_0$. The resulting football graph $\Gamma$ is symmetric in the sense that it admits rotations of order $2$, $3$ and $5$ around an edge, a white vertex and a black vertex, respectively. Hence the full football group ${\operatorname{Aut}}(T)$ acts on $\Gamma$. In particular all faces are conjugate. One can verify by inspection that the faces are $15$-gons. Hence the Euler characteristic of the underlying surface is $-46$, and its genus is $24$. This is a football graph with $d=2$ and is therefore not a covering of $\Gamma_0$. Although football graphs of positive genera are not in general coverings of $\Gamma_0$, we have the following: Every football pattern on a closed oriented surface admits a branched cover of degree $D{\leqslant}60$ that is also a branched cover of the standard minimal pattern on $S^2$. The bound for $D$ is sharp. Let $\Gamma$ be a finite football graph. As $\pi_1(\Gamma_0)\subset{\operatorname{Aut}}(T)$ is a subgroup of index $60$, the intersection $\pi_1(\Gamma_0)\cap\pi_1(\Gamma)$ has index at most $60$ in $\pi_1(\Gamma)$. The intersection corresponds to a covering of $\Gamma$ of degree $D{\leqslant}60$ that is also a covering of $\Gamma_0$. To prove that coverings of degree strictly less than $60$ do not always suffice, recall that ${\operatorname{Aut}}(T)$ acts on the genus $24$ football graph $\Gamma$ from Example \[ex:symmetric-genus-24\]. This is equivalent to the fundamental group $\pi_1(\Gamma)$ being a normal subgroup of ${\operatorname{Aut}}(T)$. The covering of $\Gamma$ corresponding to the subgroup $N=\pi_1(\Gamma_0)\cap\pi_1(\Gamma)$ of ${\operatorname{Aut}}(T)$ is a Galois covering with Galois group $\pi_1(\Gamma)/N$. Now the injection $\pi_1(\Gamma){\rightarrow}{\operatorname{Aut}}(T)$ induces an embedding of $\pi_1(\Gamma)/N$ as a normal subgroup of the orientation-preserving icosahedral group ${\operatorname{Aut}}(T)/\pi_1(\Gamma_0)$, isomorphic to the alternating group $A_5$. Since this is a simple group we must have $\pi_1(\Gamma)/N \cong A_5$ or $\{1\}$. The second case is excluded because $\Gamma$ is not a covering of $\Gamma_0$. 
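For the reader's convenience, the genus computation in Example \[ex:symmetric-genus-24\] amounts to the following count for the ribbon graph $\Gamma$: it has $12 + 20 = 32$ vertices and $\tfrac{1}{2}(12\cdot 5 + 20\cdot 6) = 90$ edges, and since all faces are $15$-gons there are $\tfrac{2\cdot 90}{15} = 12$ of them, so that $$\chi = 32 - 90 + 12 = -46 \ , \qquad g = \tfrac{1}{2}(2-\chi) = 24 \ .$$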
Non-orientable footballs ------------------------ Although we have modelled football patterns as ribbon graphs, we can also consider them on non-orientable surfaces, because the condition that every other edge emanating from a white vertex should connect to a black vertex is preserved by inversion of the cyclic order. Here are the simplest examples for the projective plane. A football pattern on the real projective plane is readily constructed from the standard football. In the dual ribbon graph $\Gamma_0$ cut a single ribbon and reglue it with a half-twist so that the surface becomes non-orientable. This gives a football pattern with $d=2$ that, instead of the $60$ vertices of valence $3$ in the standard football, has $58$ vertices of valence $3$ and a unique vertex of valence $6$. Therefore the Euler number of the surface is $1$. If we lift the pattern in this example to the universal covering of the projective plane, we obtain a football pattern with $d=4$ on the two-sphere, which, by Theorem \[t:main\], is a $2$-fold branched cover of the standard pattern. Of course in this case the branched covering structure can be seen directly, by focussing on the two vertices of valence $6$. The usual symmetric realisation of the standard football pattern on the sphere as a truncated icosahedron is symmetric under the antipodal involution. Thus it descends to a pattern on ${\mathbb{R}}P^2$ with $d=1$. Generalised football patterns ============================= In this section we consider generalisations of the traditional football patterns. A generalised football pattern is a map on the two-sphere whose faces are $k$-gons and $l$-gons satisfying the conditions that the edges of each $k$-gon meet only edges of $l$-gons, and that every $n^{\textrm{th}}$ edge of each $l$-gon meets an edge of a $k$-gon, and its other edges meet $l$-gons. To avoid degenerate cases we always assume $k{\geqslant}3$, $l{\geqslant}3$ and $l=n\cdot m$ with positive integers $m$ and $n$. As usual, at least three edges meet at every vertex. If a given triple $(k,l,n)$ can be realised by a generalised football pattern, then it can be realised in infinitely many ways, for example by taking branched covers branched only in the vertices of a given pattern. We will determine all possible triples, and we will find a minimal realisation for each of them. We will also see that in some cases there are realisations that are not branched covers of the minimal one. Before proceeding to the classification, we list some examples for future reference. Some examples ------------- The standard football realising the triple $(5,6,2)$ can be thought of as a truncated icosahedron. More generally, we have: \[ex:tP\] The truncated Platonic solids realise the triples $(3,6,2)$, $(3,8,2)$, $(3,10,2)$, $(4,6,2)$ and $(5,6,2)$ as generalised football patterns. There are also infinite series of examples obtained by truncating the degenerate Platonic solids: \[ex:Amfoot\] Start with a subdivision of the sphere along $k{\geqslant}3$ halves of great circles running from the north to the south poles. We shall call this an American football. If we now truncate at one of the poles, we obtain a realisation of the triple $(k,3,3)$. (For $k=3$ this is a tetrahedron.) If we truncate at both poles, we obtain a tin can pattern realising $(k,4,2)$. (For $k=4$ it is a cube.) This is also known as a $k$-prism. If we add $k$ edges along the equator to this last example, we obtain a double tin can realising $(k,4,4)$, for any $k{\geqslant}3$.
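Note, for later comparison with the table in Figure \[fig:table\], the face counts of these patterns (a remark we add for convenience): the partially truncated American football consists of one $k$-gon and $k$ triangles, the tin can of two $k$-gons and $k$ quadrilaterals, and the double tin can of two $k$-gons and $2k$ quadrilaterals, so that in the notation introduced below these examples have $(b,w) = (1,k)$, $(2,k)$ and $(2,2k)$ respectively.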
Here is a variation on the above tin can pattern: \[ex:zz\] Take a $k$-gon, with $k{\geqslant}3$ arbitrary, and surround it by pentagons in such a way that the two pentagons meeting a pair of adjacent sides of the $k$-gon share a side. We can fit together two such rings made up of a $k$-gon and $k$ pentagons each along a zigzag curve to obtain a realisation of the triple $(k,5,5)$ by a generalised football pattern. (For $k=5$ we obtain a dodecahedron.) The next two examples need to be visualised using the accompanying figures. \[ex:3\] Take a Platonic solid, and subdivide each face as follows. If the face is a $k$-gon, put a smaller $k$-gon in its interior, and radially connect each corner of this smaller $k$-gon to the corresponding corner of the original face. In this way each face of the original polyhedron is divided into a $k$-gon and $k$ quadrilaterals. The cases $k=3$ and $4$ are shown in Figure \[figure1\]. Next we erase the edges of the original polyhedron, so that the two quadrilaterals of the subdivison meeting along an edge are joined to form a hexagon. In this way we obtain a realisation of the triple $(k,6,3)$. We shall refer to this construction as a variation on the original Platonic solid. \[ex:6\] Again we start with a Platonic solid whose faces are $k$-gons. We subdivide each face into a smaller $k$-gon and $k$ hexagons as shown in Figure \[figure2\]. This gives a realisation of the triple $(k,6,6)$. The classification of generalised football patterns --------------------------------------------------- Now we want to prove that the previous examples exhaust all possible generalised football patterns with $n{\geqslant}2$. We shall also treat the case $n=1$. The results of this classification are summarised in the table in Figure \[fig:table\]. If $S^2$ is endowed with a generalised football pattern of type $(k,l,n)$, we shall think of the $k$-gons as being black and the $l$-gons as being white. Their numbers are denoted by $b$ and $w$ respectively. The pattern then has $f=b+w$ many faces, it has $e=\frac{1}{2}(bk+wl)$ many edges, and the number $v$ of vertices is bounded by $v{\leqslant}\frac{1}{3}(bk+wl)$ as there have to be at least $3$ faces meeting at every vertex. Counting the number of edges at which a $k$-gon meets an $l$-gon in two different ways, we find $$\label{eq:fund} k\cdot b=\frac{1}{n}\cdot l\cdot w = m\cdot w \ .$$ Computing the Euler characteristic and using  leads to: $$2=f-e+v{\leqslant}b+w-\frac{1}{6}(k\cdot b+l\cdot w)=b+w-\frac{1}{6}k\cdot b\cdot (n+1) \ .$$ Dividing by $2k\cdot b$ we obtain the key inequality $$\label{eq:key} \frac{1}{k\cdot b}+\frac{n+1}{12}{\leqslant}\frac{1}{2k}+\frac{1}{2m} \ .$$ ### The case $n{\geqslant}2$ If $n{\geqslant}2$, then the left hand side of  is strictly larger than $\frac{1}{4}$. If both $k$ and $m$ are at least $4$, then the right hand side is at most $\frac{1}{4}$. Thus $n{\geqslant}2$ implies $k=3$ or $m\in\{1,2,3\}$. We now discuss these cases separately. For every possible triple $(k,l,n)$ the inequality  gives a lower bound for $b$. With  we then have a lower bound for $w$. Checking these values against the examples of the previous section, one sees for most of the examples that they actually give minimal realisations. There are only a few cases when this simple check does not suffice, because the minimal realisations have vertices of valence strictly larger than $3$. \[l:k=3\] Suppose the triple $(k,l,n)$ with $k=3$ and $n{\geqslant}2$ is realised by a generalised football pattern. 
Then $(k,l,n)$ is one of the triples $(3,4,2)$, $(3,6,2)$, $(3,8,2)$, $(3,10,2)$, $(3,3,3)$, $(3,6,3)$, $(3,4,4)$, $(3,5,5)$, $(3,6,6)$. Minimal realisations are given by the Examples \[ex:tP\], \[ex:Amfoot\], \[ex:zz\], \[ex:3\] and \[ex:6\]. Putting $k=3$ in  we obtain $$\label{eq:3key} \frac{1}{3b}+\frac{n}{12}{\leqslant}\frac{1}{12}+\frac{1}{2m} \ .$$ Together with $n{\geqslant}2$, this implies $m{\leqslant}5$. If $m=5$, then we obtain $n=2$ from . Thus $l=10$. The truncated dodecahedron from Example \[ex:tP\] is a minimal realisation. If $m=4$, then again $n=2$ from . Thus $l=8$. The truncated cube from Example \[ex:tP\] is a minimal realisation. If $m=3$, then again $n=2$ from . Thus $l=6$. The truncated tetrahedron from Example \[ex:tP\] is a minimal realisation. If $m=2$, then  gives $n{\leqslant}3$. For $n=3$, equivalently $l=6$, a minimal realisation is the variation on the tetrahedron from Example \[ex:3\]. For $n=2$, equivalently $l=4$, we get the case $k=3$ of the truncated American football in Example \[ex:Amfoot\]. If $m=1$, then  only gives $n{\leqslant}6$. For $n=6$, therefore $l=6$, the subdivision of the tetrahedron from Example \[ex:6\] gives a minimal realisation. For $n=l=5$, the case $k=3$ in Example \[ex:zz\] is a minimal realisation. For $n=l=4$, the case $k=3$ of the double tin can in Example \[ex:Amfoot\] is a realisation. As it has vertices of valence $4$, it is not immediately obvious that it is a minimal realisation. In this case  gives $b{\geqslant}1$, but one can easily check that $b=1$ is not possible. Thus $b$ is at least $2$, and the double tin can is indeed a minimal realisation. For $n=l=3$, the case $k=3$ of the partially truncated American football in Example \[ex:Amfoot\] is a minimal realisation. Note that $m=1$ implies $n{\geqslant}3$, so that we have now exhausted all cases with $k=3$ and $n{\geqslant}2$. \[l:m=3\] Suppose the triple $(k,l,n)$ with $l/n=m=3$ and $n{\geqslant}2$ is realised by a generalised football pattern. Then $(k,l,n)$ is one of the triples $(3,6,2)$, $(4,6,2)$, $(5,6,2)$. Minimal realisations are given by the truncated Platonic solids in Example \[ex:tP\]. \[l:m=2\] Suppose the triple $(k,l,n)$ with $l/n=m=2$ and $n{\geqslant}2$ is realised by a generalised football pattern. Then $(k,l,n)$ is one of the triples $(3,6,3)$, $(4,6,3)$, $(5,6,3)$, or $(k,4,2)$ with $k{\geqslant}3$. Minimal realisations are given by the variations on the Platonic solids in Example \[ex:3\], respectively by the truncated American football in Example \[ex:Amfoot\]. We omit the proofs of Lemmas \[l:m=3\] and \[l:m=2\], because they are completely analogous to, and even simpler than, the proof of Lemma \[l:k=3\]. \[l:m=1\] Suppose the triple $(k,l,n)$ with $l/n=m=1$ and $n{\geqslant}2$ is realised by a generalised football pattern. Then $(k,l,n)$ is one of the triples $(3,6,6)$, $(4,6,6)$, $(5,6,6)$, or $(k,3,3)$, $(k,4,4)$, $(k,5,5)$ with $k{\geqslant}3$. Minimal realisations are given by the subdivisions on the Platonic solids in Example \[ex:6\], respectively by the infinite sequences in Examples \[ex:Amfoot\] and \[ex:zz\]. For $m=1$, equivalently $l=n$, we obtain $n{\leqslant}6$ for all $k{\geqslant}3$ from . If $l=n=6$, then we also have $k{\leqslant}5$ from . Thus $k$ is $3$, $4$ or $5$, and minimal realisations are given in Example \[ex:6\]. If $l=n=5$, then all $k{\geqslant}3$ are possible, and minimal realisations are given in Example \[ex:zz\]. 
If $l=n=4$, then all $k{\geqslant}3$ are possible, and realisations are given by the double tin cans in Example \[ex:Amfoot\]. To see that these realisations are minimal, it suffices to exclude the case $b=1$, which is easily done by contradiction. If $l=n=3$, then again all $k{\geqslant}3$ are possible, and minimal realisations are given by the partially truncated American football in Example \[ex:Amfoot\]. This completes the classification of generalised football patterns with $n{\geqslant}2$ on the two-sphere. ### The case $n=1$ A generalised football pattern with $n=1$ consists of $b$ black $k$-gons and $w$ white $l$-gons so that the two polygons meeting along an edge always have different colours. Note that here the situation is completely symmetric in $k$ and $l$. \[l:n=1\] Suppose the triple $(k,l,1)$ is realised by a generalised football pattern on $S^2$. Then, up to changing the rôles of $k$ and $l$, $(k,l)$ is one of the pairs $(3,3)$, $(3,4)$ or $(3,5)$. Minimal realisations are obtained by painting the faces of an octahedron, a cuboctahedron, respectively an icosidodecahedron, in a suitable manner. Counting the edges in two different ways leads to $b\cdot k = w\cdot l$. As every edge separates a black from a white polygon, there must be an even number of faces meeting at every vertex. Therefore the valence of every vertex is ${\geqslant}4$, giving rise to $$v{\leqslant}\frac{1}{4}(b\cdot k + w\cdot l) \ ,$$ which is of course stronger than what we had before, when the valence of a vertex was only ${\geqslant}3$. Computing the Euler characteristic as before, we obtain instead of \[eq:key\] the stronger $$\label{eq:key1} \frac{1}{k\cdot b}+\frac{1}{4}{\leqslant}\frac{1}{2k}+\frac{1}{2l} \ .$$ If both $k$ and $l$ are ${\geqslant}4$, then the right hand side is ${\leqslant}\frac{1}{4}$, which is impossible. Thus $k$ or $l$ is $=3$. By the symmetry between $k$ and $l$ we may assume that $k=3$. Substituting this into \[eq:key1\], we find $$\label{eq:key2} \frac{1}{3b}+\frac{1}{12}{\leqslant}\frac{1}{2l} \ ,$$ which implies $l{\leqslant}5$. If $l=3$, then $b=w{\geqslant}4$. A realisation is obtained by painting the faces of an octahedron in black and white, so that each edge separates a black face from a white one. If $l=4$, then \[eq:key2\] implies $b{\geqslant}8$, so that $w{\geqslant}6$. The cuboctahedron, cf. [@S], is a realisation. If $l=5$, then \[eq:key2\] implies $b{\geqslant}20$, so that $w{\geqslant}12$. The icosidodecahedron, cf. [@S], is a realisation. All these realisations are minimal, because they have precisely four faces meeting at every vertex, so that \[eq:key1\] becomes an equality. We summarise the above classification of the generalised football patterns as follows: \[t:table\] Suppose that $S^2$ admits a map whose faces are $k$-gons and $l$-gons with $k,l{\geqslant}3$, such that the edges of each $k$-gon meet only edges of $l$-gons, and so that every $n^{\textrm{th}}$ edge of each $l$-gon meets an edge of a $k$-gon, and its other edges meet $l$-gons. Then $l{\leqslant}10$ and $n{\leqslant}6$. There are $16$ different sporadic triples $(k,l,n)$ with $k{\leqslant}5$ that occur, together with $4$ infinite sequences with variable $k$ and fixed $l$ and $n$. All the possibilities are listed in the table in Figure \[fig:table\], which also gives minimal realisations for all cases.
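As an optional cross-check of this list (a small computational sketch added here, not part of the proof), one can enumerate the triples allowed by the necessary conditions derived above: the strict form of \[eq:key\] obtained by letting $b$ grow when $n{\geqslant}2$, and the strict form of \[eq:key1\] when $n=1$, together with $k{\geqslant}3$ and $l=n\cdot m{\geqslant}3$. The bound on $k$ and the search ranges below are assumptions made purely for display; the capped $k$ stands in for the infinite families. The enumeration returns exactly the $(k,m,n)$ combinations of the table in Figure \[fig:table\]; sufficiency is of course provided by the explicit realisations above.

```python
from fractions import Fraction as F

K_MAX = 7  # display bound only: rows with unbounded k are the infinite families

triples = []
for n in range(1, 8):
    for m in range(1, 13):
        l = n * m
        if l < 3:
            continue
        for k in range(3, K_MAX + 1):
            if n == 1:
                # every vertex has valence >= 4: strict form of the n = 1 inequality
                ok = F(1, 4) < F(1, 2 * k) + F(1, 2 * l)
            else:
                # strict form of the key inequality, obtained by letting b grow
                ok = F(n + 1, 12) < F(1, 2 * k) + F(1, 2 * m)
            if ok:
                triples.append((k, m, n))

for k, m, n in sorted(triples, key=lambda t: (t[2], t[1], t[0])):
    print(k, m, n)
```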
$$\vbox{\tabskip=.5em\offinterlineskip \halign{\strut#&\hfil#\hfil & \vrule# & \hfil#\hfil & \vrule# & \hfil#\hfil & \vrule# & \hfil#\hfil & \vrule# & \hfil#\hfil & \vrule# & \hfil#\hfil & \vrule# & \hfil#\hfil\cr & && k && m && n && {\bf minimal realisation} && b && w \cr \noalign{\hrule} \noalign{\hrule} & 1. && 3 && 3 && 1 && octahedron && 4 && 4 \cr & 2. && 3 && 4 && 1 && cuboctahedron && 8 && 6 \cr & 3. && 4 && 3 && 1 && cuboctahedron && 6 && 8 \cr & 4. && 3 && 5 && 1 && icosidodecahedron && 20 && 12 \cr & 5. && 5 && 3 && 1 && icosidodecahedron && 12 && 20 \cr & 6. && 3 && 3 && 2 && truncated tetrahedron && 4 && 4 \cr & 7. && 3 && 4 && 2 && truncated cube && 8 && 6 \cr & 8. && 4 && 3 && 2 && truncated octahedron && 6 && 8 \cr & 9. && 3 && 5 && 2 && truncated dodecahedron && 20 && 12 \cr & 10. && 5 && 3 && 2 && truncated icosahedron = {\bf football} && 12 && 20 \cr & 11. && ${\geqslant}$ 3 && 2 && 2 && truncated American football && 2 && k \cr & 12. && 3 && 2 && 3 && variation on the tetrahedron && 4 && 6 \cr & 13. && 4 && 2 && 3 && variation on the cube && 6 && 12 \cr & 14. && 5 && 2 && 3 && variation on the dodecahedron && 12 && 30 \cr & 15. && ${\geqslant}$ 3 && 1 && 3 && partially truncated American football && 1 && k \cr & 16. && ${\geqslant}$ 3 && 1 && 4 && double tin can && 2 && 2k \cr & 17. && ${\geqslant}$ 3 && 1 && 5 && zigzag tin can && 2 && 2k \cr & 18. && 3 && 1 && 6 && subdivision of the tetrahedron && 4 && 12 \cr & 19. && 4 && 1 && 6 && subdivision of the cube && 6 && 24 \cr & 20. && 5 && 1 && 6 && subdivision of the dodecahedron && 12 && 60 \cr }}$$ The different minimal realisations were described in our earlier examples, and in the course of the proof. There are several alternative descriptions of items 12.–14. The variation on the tetrahedron is nothing but a partially truncated cube, truncated at $4$ of its $8$ vertices, chosen so that each face is truncated at two diagonally opposite corners. Similarly, the variations on the cube and the dodecahedron are partial truncations of the rhombic dodecahedron and of the rhombic triacontrahedron respectively. The subdivisions in items 18.–20. can also be thought of as partial truncations of the dodecahedron, the pentagonal icositetrahedron, respectively the pentagonal hexecontrahedron. In order to make the symmetries more obvious, the table does not list triples $(k,l,n)$, but rather $(k,m,n)$ with $m=l/n$. Note that this makes no difference when $n=1$, in which case there is a complete symmetry between $k$ and $l=m$. Therefore, the cases 2. and 3., respectively 4. and 5., are dual to each other, obtained by switching the roles of $k$ and $l$. Case 1. is self-dual. Similarly, cases 7. and 8., respectively 9. and 10., are dual to each other with the duality induced by the duality of Platonic solids, and case 6. is self-dual. Branched covers for generalised football patterns ------------------------------------------------- Now that we have an overview of all the generalised football patterns, one may ask whether Theorem \[t:main\] can be extended, to prove that every generalised spherical football is a branched cover of the corresponding minimal example, branched only in the vertices. It is not hard to see that for $n=2$ this is indeed the case: \[t:cover\] If $(k,l,2)$ is realised by a generalised football pattern on $S^2$, then every spherical realisation is a branched cover, branched only in the vertices, of the minimal realisation. In particular the minimal realisation is unique. 
The minimal realisations for $n=2$ have all vertices of valence $3$. Equivalently, the dual graph triangulates the sphere. Moreover, every realisation has the property that the valence of every vertex is a multiple of $3$, with every third face a black $k$-gon. Therefore the proofs of Lemma \[l:prep\] and of Theorem \[t:main\] go through. For larger values of $n$ one loses control of the structure of the vertices, and the proof breaks down. \[ex:cO\] Consider the case $k=l=n=3$. The minimal realisation is a partially truncated American football with $k=3$, which we can also think of as a painted tetrahedron, in which one face is black and the others are white. Another, non-minimal, realisation is obtained by painting the faces of an octahedron so that two opposite faces are black, and the remaining six faces are white. This is not a branched cover of the painted tetrahedron. Note that in this example the minimal realisation has two kinds of vertices: one at which all faces meeting there are white, and three at which one black and two white faces meet. For the non-minimal realisation described above one has one black and three white faces meeting at every vertex. It is a general feature of the generalised football patterns with $n{\geqslant}3$ that the combinatorial definition of the pattern does not imply any control over the local structure at a vertex. For $n=2$ we do have such control, leading to Theorem \[t:cover\]. [10]{} H. S. M. Coxeter, [*Regular Polytopes*]{}, Methuen & Co. Ltd. London 1948. G. A. Jones and D. Singerman, [*Theory of maps on orientable surfaces*]{}, Proc. London Math. Soc. [**37**]{} (1978), 273–307. E. Looijenga, [*Cellular decompositions of compactified moduli spaces of pointed curves*]{}, in [*The moduli space of curves*]{}, ed. R. Dijkgraaf, C. Faber and G. van der Geer, Progr. Math. [**129**]{}, Birkhäuser-Verlag Boston 1995. J.-P. Serre, [*Trees*]{}, Springer-Verlag Berlin, Heidelberg, New York 1980. [^1]: We use English terminology. American readers might want to call our football patterns “soccer ball patterns”. [^2]: The alternating condition is with respect to the cyclic order of the edges.
--- abstract: 'We construct an explicit, multiplicative Chow–Künneth decomposition for the Hilbert scheme of points of a K3 surface. We further refine this decomposition with respect to the action of the Looijenga–Lunts–Verbitsky Lie algebra.' address: - 'MIT, Department of Mathematics, Cambridge, MA, USA' - 'Simion Stoilow Institute of Mathematics, Bucharest, Romania' - 'University of Bonn, Institut für Mathematik, Bonn, Germany' - 'Peking University, BICMR, Beijing, China' author: - Andrei Neguț - Georg Oberdieck - Qizheng Yin title: Motivic decompositions for the Hilbert scheme of points of a K3 surface --- Introduction ============ This note is a continuation of [@Ob] by the second author. We study the motivic aspects of the Looijenga–Lunts–Verbitsky ([@LL; @V], LLV for short) Lie algebra action on the Chow ring of the Hilbert scheme of points of a K3 surface. Using a special element of the LLV algebra and formulas of [@MN] by Maulik and the first author, we construct an explicit Chow–Künneth decomposition for the Hilbert scheme, prove its multiplicativity, and show that all divisor classes and Chern classes lie in the correct component of the decomposition. This confirms expectations of Beauville [@Be] and Voisin [@Voi]. We also obtain a refined motivic decomposition for the Hilbert scheme by taking into account the LLV algebra action. Both results parallel the case of an abelian variety, which we shall briefly review. Abelian varieties ----------------- Let $X$ be an abelian variety of dimension $g$. Recall the classical result of Deninger–Murre on the decomposition of the Chow motive ${{\mathfrak{h}}}(X)$. There is a unique, multiplicative Chow–Künneth decomposition $$\label{dmdec} \mathfrak{h}(X) = \bigoplus_{i = 0}^{2g} {{\mathfrak{h}}}^i(X)$$ such that for all $N \in {{\mathbb{Z}}}$, the multiplication $[N]: X \to X$ acts on ${{\mathfrak{h}}}^i(X)$ by $[N]^*\! = N^i$. The decomposition specializes to the Künneth decomposition in cohomology (hence the name Chow–Künneth), and to the Beauville decomposition [@Be0] in Chow. The latter takes the form $$\label{bdec} A^*(X) = \bigoplus_{i, s} A^i(X)_s$$ with $$A^i(X)_s = A^i({{\mathfrak{h}}}^{2i - s}(X)) = \{\alpha \in A^i(X) \,|\, [N]^*\alpha = N^{2i - s} \alpha, \text{ for all } N \in {{\mathbb{Z}}}\}.$$ The *multiplicativity* of \[dmdec\] stands for the fact that the cup product $$\cup: {{\mathfrak{h}}}(X) \otimes {{\mathfrak{h}}}(X) \to {{\mathfrak{h}}}(X)$$ respects the grading, in the sense that $$\cup: {{\mathfrak{h}}}^i(X) \otimes {{\mathfrak{h}}}^j(X) \to {{\mathfrak{h}}}^{i + j}(X)$$ for all $i, j \in \{0, ..., 2g\}$. This can be seen by simply comparing the actions of $[N]^*$: since $[N]^*$ is a ring homomorphism acting by $N^i$ and $N^j$ on the two factors, the product must land in the $N^{i+j}$-eigenspace. As a result, the bigrading in \[bdec\] is multiplicative, *i.e.*, compatible with the ring structure of $A^*(X)$. The Beauville decomposition is expected to provide a multiplicative *splitting* of the conjectural Bloch–Beilinson filtration on $A^*(X)$. A difficult conjecture of Beauville (and a consequence of the Bloch–Beilinson conjecture) predicts the vanishing $A^*(X)_s = 0$ for $s < 0$ and the injectivity of the cycle class map $$\mathrm{cl}: A^*(X)_0 \to H^*(X).$$ Further, any symmetric ample class $\alpha \in A^1(X)_0$ induces an $\mathfrak{sl}_2$-triple $(e_\alpha, f_\alpha, h)$ acting on $A^*(X)$. A Lefschetz decomposition of ${{\mathfrak{h}}}(X)$ with respect to the $\mathfrak{sl}_2$-action was obtained by Künnemann [@Ku], refining \[dmdec\]. 
More generally, Moonen [@Mon] constructed an action of the Néron–Severi part of the Looijenga–Lunts [@LL] Lie algebra ${{\mathfrak{g}}}_{\mathrm{NS}}$ on $A^*(X)$, which contains all possible $\mathfrak{sl}_2$-triples above (he actually considered the slightly larger Lie algebra $\mathfrak{sp}(X \times X^\vee)$; see [@Mon Section 6]). He then obtained a refined motivic decomposition with respect to the ${{\mathfrak{g}}}_{\mathrm{NS}}$-action. There is a unique decomposition $$\label{modec} {{\mathfrak{h}}}(X) = \bigoplus_{\psi \in \mathrm{Irrep}({{\mathfrak{g}}}_{\mathrm{NS}})}{{\mathfrak{h}}}_\psi(X)$$ where $\psi$ runs through all isomorphism classes of finite-dimensional irreducible representations of ${{\mathfrak{g}}}_{\mathrm{NS}}$, and ${{\mathfrak{h}}}_\psi(X)$ is $\psi$-isotypic under $\gNS$. Here being *$\psi$-isotypic* means that ${{\mathfrak{h}}}_\psi(X)$ is stable under $\gNS$ and that for any Chow motive $M$, the $\gNS$-representation ${\operatorname{Hom}}(M, {{\mathfrak{h}}}_\psi(X))$ is isomorphic to a direct sum of copies of $\psi$. Again \[modec\] specializes to refined decompositions in cohomology and in Chow. Chow–Künneth ------------ We switch to the Hilbert scheme case. Let $S$ be a projective K3 surface and let $X = \Hilb_n(S)$ be the Hilbert scheme of $n$ points on $S$. In [@Ob], the action of the Néron–Severi part of the LLV algebra ${{\mathfrak{g}}}_{\mathrm{NS}}$ was lifted from cohomology to Chow. In particular, there is an explicit grading operator $$h \in A^{2n}(X \times X)$$ which appears in every $\mathfrak{sl}_2$-triple $(e_\alpha, f_\alpha, h)$ in ${{\mathfrak{g}}}_{\mathrm{NS}}$. We normalize $h$ so that it acts on $H^{2i}(X)$ by multiplication by $i-n$. We regard $h$ as a natural replacement for the operator $[N]^*$ in the abelian variety case. Our first result decomposes the Chow motive ${{\mathfrak{h}}}(X)$ into eigen-submotives of $h$. \[thm:Decomposition\] There is a unique Chow–Künneth decomposition $$\label{ckdec} \mathfrak{h}(X) = \bigoplus_{i = 0}^{2n} {{\mathfrak{h}}}^{2i}(X)$$ such that $h$ acts on ${{\mathfrak{h}}}^{2i}(X)$ by multiplication by $i - n$. The mutually orthogonal projectors in the decomposition are written explicitly in terms of the Heisenberg algebra action [@Groj; @Nak]. We also show that \[ckdec\] agrees with the Chow–Künneth decomposition obtained by de Cataldo–Migliorini [@dCM] and Vial [@Vial]. As before, \[ckdec\] specializes to a decomposition in Chow $$\label{chowdec} A^*(X) = \bigoplus_{i, s} A^i(X)_{2s}$$ with $$A^i(X)_{2s} = A^i({{\mathfrak{h}}}^{2i - 2s}(X)) = \{\alpha \in A^i(X) \,|\, h(\alpha) = (i - s - n) \alpha\}.$$ Multiplicativity ---------------- In the seminal paper [@Be], Beauville raised the question whether hyper-Kähler varieties behave similarly to abelian varieties in the sense that the conjectural Bloch–Beilinson filtration also admits a multiplicative splitting. As a test case, he conjectured that for a hyper-Kähler variety, the cycle class map is injective on the subring generated by divisor classes. For the Hilbert scheme of points of a K3 surface, Beauville’s conjecture was recently proven in [@MN]; see also [@Ob] for a shorter proof. But the ultimate goal remains to find the multiplicative splitting. Meanwhile, Shen and Vial [@SV; @SV2] introduced the notion of a *multiplicative Chow–Künneth decomposition*, upgrading Beauville’s question from Chow groups to the level of correspondences/Chow motives. The main result of this paper confirms that \[ckdec\] provides a multiplicative Chow–Künneth decomposition for the Hilbert scheme. 
\[main\] Let $S$ be a projective K3 surface and let $X = \eHilb_n(S)$. 1. The Chow–Künneth decomposition \[ckdec\] is multiplicative, *i.e.*, the cup product $$\cup: {{\mathfrak{h}}}(X) \otimes {{\mathfrak{h}}}(X) \to {{\mathfrak{h}}}(X)$$ respects the grading, in the sense that $$\cup: {{\mathfrak{h}}}^{2i}(X) \otimes {{\mathfrak{h}}}^{2j}(X) \to {{\mathfrak{h}}}^{2i + 2j}(X)$$ for all $i, j \in \{0, ..., 2n\}$. As a result, the bigrading in \[chowdec\] is multiplicative. 2. All divisor classes and Chern classes of $X$ belong to $A^*(X)_0$. Theorem \[main\] (ii) is related to the Beauville–Voisin conjecture [@Voi], which predicts that for a hyper-Kähler variety, the cycle class map is injective on the subring generated by divisor classes and Chern classes. In the Hilbert scheme case, one may further ask the vanishing $A^*(X)_{2s} = 0$ for $s < 0$ and the injectivity of the cycle class map $$\mathrm{cl}: A^*(X)_0 \to H^*(X).$$ We do not tackle these questions in the present paper. The key to the proof of Theorem \[main\] is the compatibility between the grading operator $h$ and the cup product. For example, at the level of Chow groups, we show that the operator $$\widetilde{h} = h + n \Delta_X \in A^{2n}(X \times X)$$ acts on $A^*(X)$ by derivation, *i.e.*, $$\label{eqn:derivation} \widetilde{h}(x \cdot x') = \widetilde{h}(x) \cdot x' + x \cdot \widetilde{h}(x')$$ for all $x, x' \in A^*(X)$. We achieve this by explicit calculations using the Chow lifts [@MN] of the well-known machinery for the Heisenberg algebra action [@Lehn; @LQW]. In fact, our argument yields \[eqn:derivation\] at the level of correspondences; see Section \[sec:mult\]. Once the compatibility is established, Theorem \[main\] is deduced by simply comparing the eigenvalues. Previous work ------------- Theorem \[main\] was previously obtained by Vial [@Vial] based on Voisin’s announced result [@Voi2 Theorem 5.12] on *universally defined cycles*. A second proof, also relying on Voisin’s theorem, was given by Fu and Tian [@FT]. They interpreted Theorem \[main\] (i) as the motivic incarnation of Ruan’s crepant resolution conjecture [@Ru]. Our proof has the advantage of being explicit and unconditional at the moment. We note that multiplicative Chow–Künneth decompositions, for both hyper-Kähler and non-hyper-Kähler varieties, have been studied in [@FLV; @FLV2; @FT0; @FTV; @FV; @FV2; @LV]. Refined decomposition --------------------- We further obtain a refined decomposition of the Chow motive ${{\mathfrak{h}}}(X)$ with respect to the action of the Néron–Severi part of the LLV algebra ${{\mathfrak{g}}}_{\mathrm{NS}}$. Both the statement and the proof parallel the abelian variety case. \[thm:redec\] Let $S$ be a projective K3 surface and let $X = \eHilb_n(S)$. There is a unique decomposition $$\label{redec} {{\mathfrak{h}}}(X) = \bigoplus_{\psi \in \mathrm{Irrep}({{\mathfrak{g}}}_{\mathrm{NS}})}{{\mathfrak{h}}}_\psi(X)$$ where $\psi$ runs through all isomorphism classes of finite-dimensional irreducible representations of ${{\mathfrak{g}}}_{\mathrm{NS}}$, and ${{\mathfrak{h}}}_\psi(X)$ is $\psi$-isotypic under $\gNS$. As before, \[redec\] specializes to refined decompositions in cohomology and in Chow. We may also consider a Cartan subalgebra ${{\mathfrak{t}}}\subset {{\mathfrak{g}}}_{\mathrm{NS}}$ and note that the decomposition in terms of the irreducible representations of ${{\mathfrak{g}}}_{\mathrm{NS}}$ implies a similar decomposition in terms of the irreducible representations (*i.e.*, characters) of ${{\mathfrak{t}}}$. 
More precisely, recall the weight decomposition $${{\mathfrak{g}}}_{\mathrm{NS}} = {{\mathfrak{g}}}_{\mathrm{NS}, -2} \oplus {{\mathfrak{g}}}_{\mathrm{NS}, 0} \oplus {{\mathfrak{g}}}_{\mathrm{NS}, 2}, \quad {{\mathfrak{g}}}_{\mathrm{NS}, 0} = \overline{{{\mathfrak{g}}}}_{\mathrm{NS}} \oplus {{\mathbb{Q}}}\cdot h$$ where $\overline{{{\mathfrak{g}}}}_{\mathrm{NS}}$ is the Néron–Severi part of the *reduced LLV algebra* (terminology taken from [@GKLR]). Let $\overline{{{\mathfrak{t}}}} \subset \overline{{{\mathfrak{g}}}}_{\mathrm{NS}}$ be a Cartan subalgebra and write $$\widetilde{{{\mathfrak{t}}}} = \overline{{{\mathfrak{t}}}} \oplus {{\mathbb{Q}}}\cdot \widetilde{h}.$$ (so $\widetilde{{{\mathfrak{t}}}}$ differs from ${{\mathfrak{t}}}$ in that the element $h$ was replaced by $\widetilde{h} = h + n \Delta_X$). Hence we obtain a motivic decomposition $$\label{redec cartan} {{\mathfrak{h}}}(X) = \bigoplus_{\lambda \in \widetilde{{{\mathfrak{t}}}}^*} {{\mathfrak{h}}}_\lambda(X).$$ We expect \[redec cartan\] to also be multiplicative, in the sense that the cup product sends $$\label{eqn:cup} \cup: {{\mathfrak{h}}}_\lambda(X) \otimes {{\mathfrak{h}}}_\mu(X) \to {{\mathfrak{h}}}_{\lambda + \mu}(X).$$ To this end, it would suffice to prove the analogue of \[eqn:derivation\] when $\widetilde{h}$ is replaced by any element in $\overline{{{\mathfrak{g}}}}_{\mathrm{NS}}$. The generators of $\overline{{{\mathfrak{g}}}}_{\mathrm{NS}}$ are denoted by $$h_{\alpha \beta} \in A^{2n}(X \times X)$$ and indexed by $\alpha \wedge \beta$ in $\wedge^2(A^1(X))$. For $n >1$, we have $$\label{eqn:pic} A^1(X) \cong A^1(S) \oplus {{\mathbb{Q}}}\cdot \delta$$ where $\delta$ corresponds to the exceptional divisor of the Hilbert scheme. Toward the goal of proving \[eqn:cup\], we obtain the following result. \[thm:ref\] For all $\alpha, \beta \in A^1(S) \subset A^1(X)$, we have $$h_{\alpha\beta}(x \cdot x') = h_{\alpha\beta}(x) \cdot x' + x \cdot h_{\alpha\beta}(x')$$ for all $x, x' \in A^*(X)$. We believe that the statement below can be proved with the machinery described in Section \[sec:mult\], but the combinatorics is quite involved, and we therefore do not tackle it. \[conj:ref\] For all $\alpha \in A^1(S) \subset A^1(X)$, we have $$h_{\alpha\delta}(x \cdot x') = h_{\alpha\delta}(x) \cdot x' + x \cdot h_{\alpha\delta}(x')$$ for all $x, x' \in A^*(X)$. An equivalent formulation of Conjecture \[conj:ref\] is that $\overline{{{\mathfrak{g}}}}_{\mathrm{NS}}$ acts on $A^*(X)$ by derivations. Since the modified Cartan subalgebra $\widetilde{{{\mathfrak{t}}}}$ is contained in $\overline{{{\mathfrak{g}}}}_{\mathrm{NS}} \oplus {{\mathbb{Q}}}\cdot \widetilde{h}$, Theorem \[thm:ref\] and Conjecture \[conj:ref\] (both at the level of correspondences) would imply the multiplicativity of the refined decomposition \[redec cartan\]. On its own, Theorem \[thm:ref\] can only imply the weaker statement where the direct sum in \[redec cartan\] goes over the dual of a modified Cartan subalgebra of dimension one smaller than $\widetilde{{{\mathfrak{t}}}}$. Conventions {#sub:conv} ----------- Throughout the present paper, Chow groups and Chow motives will be taken with ${{\mathbb{Q}}}$-coefficients. We refer to [@MNP] for the definitions and conventions of Chow motives. We will often switch between the languages of correspondences and operators on Chow groups, in the following sense. 
Every operator $f : A^*(X) \rightarrow A^*(Y)$ will arise from a correspondence $F \in A^*(X \times Y)$ by the usual construction $$\xymatrix{& X \times Y \ar[ld]_{\pi_1} \ar[rd]^{\pi_2} & \\ X & & Y} \qquad f = \pi_{2*}(F \cdot \pi_1^*)$$ and any compositions and equalities of operators implicitly entail compositions and equalities of correspondences. For example, the operator $$\mult_\tau : A^*(X) \rightarrow A^*(X)$$ of cup product with a fixed element $\tau \in A^*(X)$ is associated to the correspondence $\Delta_*(\tau) \in A^*(X \times X)$, where $\Delta : X \hookrightarrow X \times X$ is the diagonal embedding. Moreover, a family of operators $f_\gamma : A^*(X) \rightarrow A^*(Y)$ labeled by $\gamma \in A^*(Z)$ will arise from a correspondence $F \in A^*(X \times Y \times Z)$, by the assignment $$f_\gamma \text{ arises from } \pi_{12*}(F \cdot \pi_3^*(\gamma)) \in A^*(X \times Y)$$ for all $\gamma \in A^*(Z)$. We employ the language of “operators indexed by $\gamma \in A^*(Z)$" instead of cycles on $X \times Y \times Z$ because it makes manifest the fact that $\gamma$ does not play any role in taking compositions. For instance, the family of operators $$\mult_\gamma : A^*(X) \rightarrow A^*(X)$$ labeled by $\gamma \in A^*(X)$ is associated to the small diagonal $\Delta_{123} \subset X \times X \times X$. We will often be concerned with cycles on a variety of the form $S^n = S \times ... \times S$ for a smooth algebraic variety $S$ (most often an algebraic surface). We let $$\Delta_{a_1 ... a_k} \in A^*(S^n)$$ denote the diagonal $\{(x_1, ..., x_n) \,|\, x_{a_1} = ... = x_{a_k}\}$, for all collections of distinct indices $a_1, ..., a_k \in \{1, ..., n\}$. Moreover, given a class $\Gamma \in A^*(S^k)$, we may choose to write it as $\Gamma_{1 ... k}$ in order to indicate the power of $S$ where this class lives. Then for any collection of distinct indices $a_1,...,a_k \in \{1,...,n\}$, we define $$\Gamma_{a_1 ... a_k} = p_{a_1 ... a_k}^{\ast}(\Gamma) \in A^*(S^n)$$ where we let $p_{a_1 ... a_k} = (p_{a_1}, ..., p_{a_k}): S^n \to S^k$ with $p_i : S^n \to S$ the projection to the $i$-th factor. Finally, if $\bullet$ denotes any index from $1$ to $k+1$, we write $$\int_\bullet : A^*(S^{k+1}) \rightarrow A^*(S^k)$$ for the push-forward map which forgets the factor labeled by $\bullet$. Acknowledgements ---------------- We would like to thank Lie Fu, Alina Marian, Davesh Maulik, Junliang Shen, Catharina Stroppel, and Zhiyu Tian for useful discussions. A. N. gratefully acknowledges the NSF grants DMS–1760264 and DMS–1845034, as well as support from the Alfred P. Sloan Foundation. Q. Y. was supported by the NSFC grants 11701014, 11831013, and 11890661. Hilbert schemes =============== Throughout the present paper, $S$ will denote a projective K3 surface over ${{\mathbb{C}}}$, and $A^*(S)$ will denote its Chow ring with coefficients in ${{\mathbb{Q}}}$, graded by codimension. Beauville and Voisin [@BV] have studied the class $c \in A^2(S)$ of any closed point on a rational curve in $S$, and they proved the following formulas in $A^*(S)$: $$\begin{gathered} \label{eqn:bv 1} c_2(\Tan_S) = 24c \\ \label{eqn:bv 2} \alpha \cdot \beta = \langle \alpha, \beta \rangle c\end{gathered}$$ for all $\alpha, \beta \in A^1(S)$ (above, we write $\langle \cdot, \cdot \rangle : A^*(S) \otimes A^*(S) \rightarrow {{\mathbb{Q}}}$ for the intersection pairing). 
Moreover, we have the following identities in $A^*(S^2)$: $$\begin{gathered} \label{eqn:bv 3} \Delta \cdot c_1 = \Delta \cdot c_2 = c_1 \cdot c_2 \\ \label{eqn:bv 4} \Delta \cdot \alpha_1 = \Delta \cdot \alpha_2 = \alpha_1 \cdot c_2 + \alpha_2 \cdot c_1 \end{gathered}$$ where $\Delta \in A^*(S^2)$ is the class of the diagonal, and the following identity in $A^*(S^3)$: $$\label{eqn:bv 5} \Delta_{123} = \Delta_{12} \cdot c_3 + \Delta_{13} \cdot c_2 + \Delta_{23} \cdot c_1 - c_1 \cdot c_2 - c_1 \cdot c_3 - c_2 \cdot c_3 \,.$$ As a corollary of and , one can prove by induction the following identity: $$\label{eqn:bv 6} \sum_{i=1}^k \Delta_{1...i-1,i+1...k} c_i = (k-2) \Delta_{1...k} + \sum_{i=1}^k c_1...c_{i-1} c_{i+1} ... c_k$$ in $A^*(S^k)$ for all $k$. \[formulas\] For any $\Gamma_{12...k} \in A^*(S^k)$, we have the following identity: $$\begin{gathered} \label{eqn:formula 1} (c_0 - c_1)(\Gamma_{02...k} - \Gamma_{12...k}) = \Delta_{01} \left( \int_\bullet \Gamma_{\bullet 2 ... k} (c_\bullet - c_1) - \Gamma_{1...k} \right)\\ \label{eqn:formula 2} (\alpha_0 \beta_1 - \alpha_1 \beta_0)(\Gamma_{02...k} - \Gamma_{12...k}) = \Delta_{01} \int_\bullet \Gamma_{\bullet 2...k} (\alpha_\bullet \beta_0 - \alpha_0 \beta_\bullet )\end{gathered}$$ in $A^*(S^{k+1})$, for any $\alpha, \beta \in A^1(S)$. To prove , let us consider the identity on $S^3$: $$\Delta_{01\bullet} = \Delta_{0\bullet} \cdot c_1 + \Delta_{1\bullet} \cdot c_0 + \Delta_{01} \cdot c_\bullet - c_0 \cdot c_1 - c_0 \cdot c_\bullet - c_1 \cdot c_\bullet \,.$$ Let us pull this identity back to $S^{k+2}$ with indices $\bullet, 0, ...,k$, multiply it by $\Gamma_{\bullet 2...k}$, and then push forward by forgetting the factor represented by the index $\bullet$: $$\begin{gathered} \label{123} \Delta_{01} \Gamma_{12...k} = c_1 \Gamma_{02...k} + c_0 \Gamma_{12...k} \\ + \int_\bullet \left( \Delta_{01} c_\bullet \Gamma_{\bullet 2...k} - c_0 c_1 \Gamma_{\bullet 2...k} - c_0 c_\bullet \Gamma_{\bullet 2...k} - c_1 c_\bullet \Gamma_{\bullet 2...k} \right).\end{gathered}$$ Using the identity we have: $$\int_{\bullet} c_0 c_\bullet \Gamma_{\bullet 2...k} = \int_{\bullet} c_0 \Delta_{0 \bullet} \Gamma_{\bullet 2 ... k} = c_0 \Gamma_{02 ... k}$$ and similarly: $$\int_{\bullet} c_0 c_1 \Gamma_{\bullet 2...k} = \Delta_{01} \int_{\bullet} \Gamma_{\bullet 2 ... k}\,, \quad \int_{\bullet} c_1 c_\bullet \Gamma_{\bullet 2...k} = c_1 \Gamma_{1 ... k}\,.$$ Inserting these into the formula then follows from rearranging the terms. As for , we start from the identity: $$(\alpha_0 \beta_1 - \alpha_1 \beta_0) (\Delta_{0\bullet} - \Delta_{1\bullet}) = \Delta_{01}(\alpha_\bullet \beta_0 - \alpha_0 \beta_\bullet )$$ which is a straightforward application of . If we pull this identity back to $S^{k+2}$ with indices $\bullet, 0, ...,k$, multiply it by $\Gamma_{\bullet 2...k}$, then we obtain the following: $$(\alpha_0 \beta_1 - \alpha_1 \beta_0) (\Delta_{0\bullet} \Gamma_{0 2...k} - \Delta_{1\bullet} \Gamma_{1 2...k}) = \Delta_{01}(\alpha_\bullet \beta_0 - \alpha_0 \beta_\bullet ) \Gamma_{\bullet 2...k}\,.$$ If we push forward by forgetting the factor represented by $\bullet$, we obtain . Consider the Hilbert scheme $\Hilb_n$ of $n$ points on $S$ and the Chow rings: $$\Hilb = \bigsqcup_{n=0}^\infty \Hilb_n, \quad A^*(\Hilb) = \bigoplus_{n=0}^\infty A^*(\Hilb_n)$$ always with rational coefficients. We will consider two types of elements of the Chow rings above. 
The first of these are defined by considering the universal subscheme: $${{\mathcal Z}}_n \subset \Hilb_n \times S$$ (${{\mathcal Z}}_n$ is flat over $\Hilb_n$, and its fibers have the property that ${{\mathcal Z}}_n |_{\{Z\} \times S} \cong Z$ for any point $\{Z\} \in \Hilb_n$ corresponding to a closed subscheme $Z \subset S$). For any $k \in {{\mathbb{N}}}$, let $\pi : \Hilb_n \times S^k \rightarrow \Hilb_n$ denote the usual projection, and ${{\mathcal Z}}_n^{(i)} \subset \Hilb_n \times S^k$ denote the pull-back of ${{\mathcal Z}}_n \subset \Hilb_n \times S$ via the $i$-th projection map $S^k \rightarrow S$. \[def:universal\] A **universal class** is any element of $A^*(\Hilb_n)$ of the form: $$\label{eqn:tautological} \pi_{*} \Big[ P(...,{{\mathrm{ch}}}_{j}({{\mathcal O}}_{{{\mathcal Z}}^{(i)}_n}),...)^{1 \leq i \leq k}_{j \in {{\mathbb{N}}}} \Big]$$ for all $k \in {{\mathbb{N}}}$ and for all polynomials $P$ with coefficients pulled back from $A^*(S^k)$. The following theorem holds for every smooth quasi-projective surface (see [@N; @taut]), but we only prove it here in the case where $S$ is a K3 surface (the argument herein easily generalizes to any smooth projective surface using the results of [@GT]). \[thm:taut\] Any class in $A^*(\eHilb_n)$ is universal, *i.e.*, of the form \[eqn:tautological\]. Consider the product $\Hilb_n \times S^k \times \Hilb_n$, and we will write $\pi_1$, $\pi_2$, $\pi_3$, $\pi_{12}$, $\pi_{23}$ and $\pi_{13}$ for the various projections to its factors. As a consequence of [@M] (see also [@GT]), the diagonal $\Delta \subset \Hilb_n \times \Hilb_n$ can be written as follows: $$[\Delta] = \pi_{13*} \left[\sum_a \pi_2^*(\gamma_a) \prod_{(i,j)} {{\mathrm{ch}}}_j \left( {{\mathcal O}}_{{{\mathcal Z}}_n^{(i)}} \right) \prod_{(\widetilde{i}, \widetilde{j})} {{\mathrm{ch}}}_{\widetilde{j}} \left( {{\mathcal O}}_{\widetilde{{{\mathcal Z}}}_n^{(\widetilde{i})}} \right) \right]$$ for suitably chosen $k \in {{\mathbb{N}}}$, where we do not care much about the specific coefficients $\gamma_a$ and indices $i,j,\widetilde{i}, \widetilde{j}$ which appear in the sum above (we write ${{\mathcal Z}}_n$ and $\widetilde{{{\mathcal Z}}}_n$ for the universal subschemes in $\Hilb_n \times S \times \Hilb_n$ corresponding to the first and second, respectively, copies of $\Hilb_n$). Since the diagonal corresponds to the identity operator, the equality above implies that: $$\begin{gathered} \nonumber \text{Id}_{\Hilb_n} = \pi_{1*} \left[\sum_a \pi_2^*(\gamma_a) \prod_{(i,j)} {{\mathrm{ch}}}_j \left( {{\mathcal O}}_{{{\mathcal Z}}_n^{(i)}} \right) \prod_{(\widetilde{i}, \widetilde{j})} {{\mathrm{ch}}}_{\widetilde{j}} \left( {{\mathcal O}}_{\widetilde{{{\mathcal Z}}}_n^{(\widetilde{i})}} \right) \pi_3^* \right] \\ \label{eqn:last} = \sum_a \pi_* \left[ \prod_{(i,j)} {{\mathrm{ch}}}_j \left( {{\mathcal O}}_{{{\mathcal Z}}_n^{(i)}} \right) \rho^*\left( \gamma_a \cdot \rho_* \left( \prod_{(\widetilde{i}, \widetilde{j})} {{\mathrm{ch}}}_{\widetilde{j}} \left( {{\mathcal O}}_{\widetilde{{{\mathcal Z}}}_n^{(\widetilde{i})}} \right) \cdot \pi^* \right) \right) \right]\end{gathered}$$ where in \[eqn:last\], $\pi, \rho : \Hilb_n \times S^k \rightarrow \Hilb_n, S^k$ denote the standard projections. 
Formula implies the existence of a surjective homomorphism: $$\label{eqn:surj} \begin{gathered} \bigoplus_a A^*(S^k) \twoheadrightarrow A^*(\Hilb_n) \\ \sum_a \Gamma_a \leadsto \sum_a \pi_* \left[ \prod_{(i,j)} {{\mathrm{ch}}}_j \left( {{\mathcal O}}_{{{\mathcal Z}}_n^{(i)}} \right) \rho^*(\Gamma_a) \right] \end{gathered}$$ where the sums over $a$ are in one-to-one correspondence with the sums in . {#sub:nakajima} Let us present another important source of elements of $A^*(\Hilb_n)$, based on the following construction independently due to Grojnowski [@Groj] and Nakajima [@Nak] (in the present paper, we will mostly use the presentation by Nakajima). For any $n,k \in {{\mathbb{N}}}$, consider the closed subscheme: $$\Hilb_{n,n+k} = \Big\{(I \supset I') \,|\, I/I' \text{ is supported at a single }x \in S \Big\} \subset \Hilb_n \times \Hilb_{n+k}$$ endowed with projection maps: $$\label{eqn:diagram zk} \xymatrix{& \Hilb_{n,n+k} \ar[ld]_{p_-} \ar[d]^{p_S} \ar[rd]^{p_+} & \\ \Hilb_{n} & S & \Hilb_{n+k}}$$ that remember $I$, $x$, $I'$, respectively. One may use $\Hilb_{n,n+k}$ as a correspondence: $$\label{eqn:nakajima} A^*(\Hilb_n) \xrightarrow{{{\mathfrak{q}}}_{\pm k}} A^*(\Hilb_{n \pm k} \times S)$$ given by: $$\label{eqn:nak def} {{\mathfrak{q}}}_{\pm k} = (\pm 1)^{k} \cdot (p_\pm \times p_S)_* \circ p_\mp^*.$$ Because the correspondences above are defined for all $n$, it makes sense to set: $$A^*(\Hilb) \xrightarrow{{{\mathfrak{q}}}_{\pm k}} A^*(\Hilb \times S).$$ We also set ${{\mathfrak{q}}}_0 = 0$. The main result of [@Nak] is that the operators ${{\mathfrak{q}}}_k$ obey the commutation relations in the Heisenberg algebra, namely: $$\label{eqn:heis} [{{\mathfrak{q}}}_k, {{\mathfrak{q}}}_l] = k \delta_{k+l}^0 \left( \text{Id}_{\Hilb} \times \Delta \right)$$ as correspondences $A^*(\Hilb) \rightarrow A^*(\Hilb \times S^2)$. In terms of self-correspondences $A^*(\Hilb) \rightarrow A^*(\Hilb)$, the identity reads, for all $\alpha, \beta \in A^*(S)$: $$\label{eqn:heis op} [{{\mathfrak{q}}}_k(\alpha), {{\mathfrak{q}}}_l(\beta)] = k ( \alpha, \beta ) {\operatorname{Id}}_\Hilb. $$ {#section-2} More generally, we may consider: $$\label{eqn:composition 1} {{\mathfrak{q}}}_{n_1}...{{\mathfrak{q}}}_{n_t} : A^*(\Hilb) \rightarrow A^*(\Hilb \times S^t)$$ where the convention is that the operator ${{\mathfrak{q}}}_{n_i}$ acts in the $i$-th factor of $S^t = S \times ... \times S$. Then associated to any $\Gamma \in A^*(S^t)$, one obtains an endomorphism of $A^*(\Hilb)$: $$\label{eqn:composition 2} {{\mathfrak{q}}}_{n_1}...{{\mathfrak{q}}}_{n_t}(\Gamma) = \pi_{*} (\rho^*(\Gamma) \cdot {{\mathfrak{q}}}_{n_1}...{{\mathfrak{q}}}_{n_t})$$ where $\pi , \rho : \Hilb \times S^t \rightarrow \Hilb, S^t$ denote the standard projections. \[thm:dcm\] We have a decomposition: $$\label{eqn:decomp} A^*(\eHilb) = \bigoplus^{n_1 \geq ... \geq n_t \in {{\mathbb{N}}}}_{\Gamma \in A^*(S^t)^{\emph{sym}}} {{\mathfrak{q}}}_{n_1}... {{\mathfrak{q}}}_{n_t}(\Gamma) \cdot v$$ where “*sym*" refers to the part of $A^*(S^t)$ which is symmetric with respect to those transpositions $(ij) \in \mathfrak{S}_t$ for which $n_i = n_j$, and $v$ is a generator of $A^*(\eHilb_0) \cong {{\mathbb{Q}}}$. Since we will need it later, we recall the precise relationship between Nakajima operators and the correspondences studied in [@dCM]. Let $\lambda$ be a partition of $n$ with $k$ parts, let $S^{\lambda} = S^k$ and let $S^{\lambda} \to S^{(n)}$ be the map that sends $(x_1, ..., x_k)$ to the cycle $\lambda_1 x_1 + ... 
+ \lambda_k x_k$ in the $n$-th symmetric product of the surface $S$. We consider the correspondence: $$\begin{aligned} \Gamma_{\lambda} & = (\Hilb_n \times_{S^{(n)}} S^{\lambda})_{\text{red}} \\ & = \{ (I,x_1, ..., x_{k}) \,|\, \sigma(I) = \lambda_1 x_1 + ... + \lambda_{k} x_{k} \} \end{aligned}$$ where $\sigma : S^{[n]} \to S^{(n)}$ is the Hilbert–Chow morphism sending the subscheme $I$ to its underlying support. The subscheme $\Gamma_{\lambda}$ is irreducible of dimension $n+k$ and the locus $\Gamma_{\lambda}^{\text{reg}} \subset \Gamma_{\lambda}$, where the points $x_i$ are distinct, is open and dense; see [@dCM Remark 2.0.1]. Similarly, the Nakajima correspondence ${{\mathfrak{q}}}_{\lambda_1} ... {{\mathfrak{q}}}_{\lambda_k}$ is a cycle in $\Hilb_n \times S^k$ of dimension $n+k$ supported on a subscheme that contains $\Gamma_{\lambda}^{\text{reg}}$ as an open subset and whose complement is of smaller dimension [@Nak 4(i)]. Moreover the multiplicity of the cycle on $\Gamma_{\lambda}^{\text{reg}}$ is $1$. Hence we have the equality of correspondences: $$\Gamma_{\lambda} = {{\mathfrak{q}}}_{\lambda_1} ... {{\mathfrak{q}}}_{\lambda_k} \in A^{\ast}(\Hilb_n \times S^{\lambda}). \label{nak=dCM}$$ The result follows now from [@dCM Proposition 6.1.5], which says that: $$\Delta_{\Hilb_n} = \sum_{\lambda \vdash n} \frac{(-1)^{n-l(\lambda)}}{|{\operatorname{Aut}}(\lambda)|\prod_i \lambda_i} (\Gamma_{\lambda})^t \circ \Gamma_{\lambda} \label{diagonal_eqn}$$ where $\lambda$ runs over all partitions of size $n$, and we let $l(\lambda)$ and $\lambda_i$ denote the length and the parts of $\lambda$. As shown in [@N; @taut], there is an explicit way to go between the descriptions  and of $A^*(\Hilb)$. Concretely, for all $n_1 \geq ... \geq n_t$ there exists a polynomial $P_{n_1,...,n_t}$ with coefficients in $\rho^*(A^*(S^t))$ such that for all $\Gamma \in A^{\ast}(S^t)$: $$\label{eqn:connection} {{\mathfrak{q}}}_{n_1}...{{\mathfrak{q}}}_{n_t}(\Gamma) = \pi_* \Big[ P_{n_1,...,n_t}(...,{{\mathrm{ch}}}_{j}({{\mathcal O}}_{{{\mathcal Z}}^{(i)}}),...)^{1 \leq i \leq t}_{j \in {{\mathbb{N}}}} \cdot \rho^*(\Gamma) \Big]$$ where $\pi, \rho : \Hilb \times S^t \rightarrow \Hilb, S^t$ are the standard projections. Moreover, *loc. cit.* gives an algorithm for computing the polynomial $P_{n_1,...,n_t}$. {#section-3} We will now present another connection between universal classes and the operators ${{\mathfrak{q}}}_n$. Let us consider any class of the form: $$\label{eqn:formula taut 1} \univ(\Gamma) = \pi_{*} \left[\prod_{i=1}^t {{\mathrm{ch}}}_{d_{i}}({{\mathcal O}}_{{{\mathcal Z}}_n^{(i)}}) \cdot \rho^*(\Gamma) \right] \in A^*(\Hilb_n)$$ where $\pi, \rho : \Hilb_n \times S^t \rightarrow \Hilb_n, S^t$ are the standard projections, while the natural numbers $d_1,...,d_t$ and the class $\Gamma \in A^*(S^t)$ are arbitrary. We will write: $$\univ_{d_1,...,d_t}(\Gamma) = \univ(\Gamma)$$ if we wish to emphasize the particular numbers $d_1,...,d_t$ which appear in , although they will often be inconsequential. Note that the codimension of is: $$\label{eqn:codim} \deg \univ_{d_1,...,d_t}(\Gamma) = \deg \Gamma + \sum_{i=1}^t (d_i - 2).$$ \[claim:1\] The operator of multiplication by $\euniv(\Gamma)$ is given by: $$\begin{gathered} \label{eqn:formula taut 2} \emult_{\euniv(\Gamma)} = \sum_{\varepsilon_1,...,\varepsilon_t \in \{0,2\}} \quad \sum^{\lambda_{b_{s-1}+1} \geq ... \geq \lambda_{b_s} \in {{\mathbb{Z}}}, \forall s \in \{1,...,t\}}_{\lambda_{b_{s-1}+1} +... 
+ \lambda_{b_s} = 0, \forall s \in \{1,...,t\}} \\ \emph{ct} \cdot {{\mathfrak{q}}}_{\lambda_1}...{{\mathfrak{q}}}_{\lambda_{b_t}} \left(\Delta_{b_0+1...b_1} \Delta_{b_1+1...b_2} ... \Delta_{b_{t-1}+1...b_t} \Gamma_{b_1...b_t} \phi_{b_1}...\phi_{b_t} \right)\end{gathered}$$ where in each summand we write, for all $s \in \{0,...,t\}$: $$\label{eqn:indices} b_s = \sum_{i=1}^s (d_s - \varepsilon_s)$$ and $\phi_{b_s}$ is either $1$ or $c_{b_s}$, depending on whether $\varepsilon_s$ is $0$ or $2$. The constants “*ct*" in depend on the particular numbers $d_i$, $\varepsilon_s$ and $\lambda_k$ but not on $\Gamma$. In the course of this proof, let $\pi, \rho : \Hilb \times S \rightarrow \Hilb, S$ denote the standard projections. Let us recall the operators of multiplication by universal classes: $${{\mathfrak{G}}}_d : A^*(\Hilb) \stackrel{\pi^*}\longrightarrow A^*(\Hilb \times S) \xrightarrow{\mult_{{{\mathrm{ch}}}_d({{\mathcal O}}_{{{\mathcal Z}}})}} A^*(\Hilb \times S).$$ The following formulas were proved in cohomology by [@LQW] and in Chow by [@MN]: $$\label{eqn:lqw} {{\mathfrak{G}}}_d = \sum_{\lambda_1+...+\lambda_d = 0}^{\lambda_1 \geq ... \geq \lambda_d} \text{ct} \cdot {{\mathfrak{q}}}_{\lambda_1} ... {{\mathfrak{q}}}_{\lambda_d} \Big|_{\Delta_{1...d}} + \sum_{\lambda_1+...+\lambda_{d-2} = 0}^{\lambda_1 \geq ... \geq \lambda_{d-2}} \text{ct} \cdot {{\mathfrak{q}}}_{\lambda_1} ... {{\mathfrak{q}}}_{\lambda_{d-2}} \Big|_{\Delta_{1...d}} \cdot \rho^*(c)\,.$$ The constants “ct" that appear in the formulas above are certain rational numbers that will not be important to us. The meaning of the notation $|_{\Delta_{1...d}}$ is that we restrict the target of the operator ${{\mathfrak{q}}}_{\lambda_1}...{{\mathfrak{q}}}_{\lambda_d}$ from $\Hilb \times S^d$ to $\Hilb \times S$ via the small diagonal. The meaning of the notation “$\cdot \rho^*(c)$" is that after this restriction, we also multiply by the pull-back of the class $c$ from the second factor of $\Hilb \times S$. Formula simply entails composing $t$ of the operators , multiplying with the pull-back of $\Gamma \in A^*(S^t)$, and pushing forward to $\Hilb$. {#section-4} In this section, $\Delta$ will refer to the smallest diagonal of any $S^t$. Two interesting collections of elements of $A^*(\Hilb_n)$ can be written as universal classes: divisors and Chern classes of the tangent bundle. It is well-known that: $$A^1(S) \oplus {{\mathbb{Q}}}\cdot \delta \cong A^1(\Hilb_n)$$ (with the convention that $\delta = 0$ if $n=1$) where: $$\begin{aligned} \label{eqn:div 1} l \in A^1(S) & \ \leadsto \ \univ_2(\Delta_*(l)) \\ \label{eqn:div 2} \delta & \ \leadsto \ \univ_3(\Delta_*(1)).\end{aligned}$$ Similarly, the Chern character of the tangent bundle to $\Hilb_n$ is given by the following well-known formula (let $\pi, \rho : \Hilb_n \times S \rightarrow \Hilb_n, S$ denote the projections): $${{\mathrm{ch}}}(\Tan_{\Hilb_n}) = \pi_* \left[ \left({{\mathrm{ch}}}\left({{\mathcal O}}_{{{\mathcal Z}}_n} \right) + {{\mathrm{ch}}}\left({{\mathcal O}}_{{{\mathcal Z}}_n} \right)' - {{\mathrm{ch}}}\left({{\mathcal O}}_{{{\mathcal Z}}_n} \right) {{\mathrm{ch}}}\left({{\mathcal O}}_{{{\mathcal Z}}_n} \right)' \right) \rho^*(1+2c)\right]$$ where $( \text{ })'$ is the operator which multiplies a codimension $d$ class by $(-1)^d$. 
Therefore, the Chern character of the tangent bundle is a linear combination of the following particular universal classes: $$\label{eqn:tan} \univ_d(\Delta_*(\phi)) \quad \text{and} \quad \univ_{d,d'}(\Delta_*(\phi))$$ where $\phi \in \{1,c\}$, and the natural numbers $d$ and $d'$ are arbitrary. Motivic decompositions ====================== {#sec:LLV_algebra} Let us recall the Lie algebra action ${{\mathfrak{g}}}_{\mathrm{NS}} \curvearrowright A^*(\Hilb_n)$ from [@Ob], which lifts the classical construction of [@LL; @V] in cohomology. To this end, note that the Beauville–Bogomolov form is the pairing on $$V = A^1(\Hilb_n) \cong A^1(S) \oplus {{\mathbb{Q}}}\cdot \delta$$ which extends the intersection form on $A^1(S)$ and satisfies $$(\delta, \delta) = 2 - 2n, \quad (\delta, A^1(S)) = 0.$$ Let $U=(\begin{smallmatrix} 0& 1 \\1 &0\end{smallmatrix})$ be the hyperbolic lattice with fixed symplectic basis $e,f$. We have $${{\mathfrak{g}}}_{\mathrm{NS}} = \wedge^2 ( V \overset{\perp}{\oplus} U_{{{\mathbb{Q}}}} )$$ where the Lie bracket is defined for all $a,b,c,d \in V \oplus U_{{\mathbb{Q}}}$ by $$[a \wedge b, c \wedge d] = (a,d) b \wedge c - (a,c) b \wedge d - (b,d) a \wedge c + (b,c) a \wedge d.$$ Consider for all $\alpha \in A^1(S)$ the following operators: $$\begin{gathered} e_{\alpha} = -\sum_{n > 0} {{\mathfrak{q}}}_{n} {{\mathfrak{q}}}_{-n} ( \Delta_{\ast} \alpha) \nonumber \\ \label{LLV_operators} \begin{gathered} e_{\delta} = -\frac{1}{6} \sum_{i+j+k=0} : {{\mathfrak{q}}}_i {{\mathfrak{q}}}_j {{\mathfrak{q}}}_k ( \Delta_{123} ): \\ \widetilde{f}_{\alpha} = -\sum_{n > 0} \frac{1}{n^2} {{\mathfrak{q}}}_{n} {{\mathfrak{q}}}_{-n}( \alpha_1 + \alpha_2 ) \end{gathered} \\ \nonumber \widetilde{f}_{\delta} = -\frac{1}{6} \sum_{i+j+k=0} :{{\mathfrak{q}}}_i {{\mathfrak{q}}}_j {{\mathfrak{q}}}_k \left( \frac{1}{k^2} \Delta_{12} + \frac{1}{j^2} \Delta_{13} + \frac{1}{i^2} \Delta_{23} + \frac{2}{j k} c_1 + \frac{2}{i k} c_2 + \frac{2}{i j} c_3 \right): \,.\end{gathered}$$ Here $: - :$ is the normal ordered product defined by $$: {{\mathfrak{q}}}_{i_1} ... {{\mathfrak{q}}}_{i_k}\!: \, = \, {{\mathfrak{q}}}_{i_{\sigma(1)}} ... {{\mathfrak{q}}}_{i_{\sigma(k)}}$$ where $\sigma$ is any permutation such that $i_{\sigma(1)} \geq ... \geq i_{\sigma(k)}$. We define operators $e_{\alpha}$ and $\widetilde{f}_\alpha$ for general $\alpha \in A^1(\Hilb_n)$ by linearity in $\alpha$. By [@MN] we have that $e_\alpha$ is the operator of cup product with $\alpha$. If $(\alpha,\alpha) \neq 0$, the multiple $\widetilde{f}_\alpha / (\alpha,\alpha)$ acts on cohomology as the Lefschetz dual of $e_\alpha$. In [@Ob], it was shown that the assignment $$\label{eqn:action} \begin{gathered} \act : {{\mathfrak{g}}}_{\mathrm{NS}} \rightarrow A^*(\Hilb_n \times \Hilb_n) \\ \act(e \wedge \alpha) = e_\alpha, \quad \act(\alpha \wedge f) = \widetilde{f}_\alpha \end{gathered}$$ for all $\alpha \in V$, induces a Lie algebra homomorphism. In particular, the element $e \wedge f$ acts by $$\label{eqn:def h} h = \sum_{k > 0} \frac{1}{k} {{\mathfrak{q}}}_{k} {{\mathfrak{q}}}_{-k}( c_2 - c_1 ).$$ The operator $h$ specializes in cohomology to the Lefschetz grading operator, which by our normalization acts on $H^{2i}(\Hilb_n)$ by multiplication by $i-n$. 
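For orientation, and purely as an illustrative aside which is not needed in what follows, consider the case $n=1$, where $\Hilb_1 = S$ and a class $\gamma \in A^*(S)$ corresponds to ${{\mathfrak{q}}}_1(\gamma)[\Hilb_0]$ under the usual identification. Only the $k=1$ summand of $h$ can act here, and using ${{\mathfrak{q}}}_{-1}(\beta)[\Hilb_0] = 0$ together with the Nakajima commutator $[{{\mathfrak{q}}}_{-1}(\beta), {{\mathfrak{q}}}_1(\gamma)] = -\int_S \beta\gamma$ (the normalization which also gives $[{{\mathfrak{q}}}_{-1}(c),{{\mathfrak{q}}}_1(1)]=-1$ in the proof of Lemma \[lem:tiny\] below), we find $$h(\gamma) = \left({{\mathfrak{q}}}_1(1){{\mathfrak{q}}}_{-1}(c) - {{\mathfrak{q}}}_1(c){{\mathfrak{q}}}_{-1}(1)\right) {{\mathfrak{q}}}_1(\gamma)[\Hilb_0] = -\left(\int_S c\,\gamma\right) 1 + \left(\int_S \gamma\right) c.$$ Hence $h$ acts by $-1$ on $A^0(S)$, by $0$ on $A^1(S)$ and on the degree zero part of $A^2(S)$, and by $+1$ on ${{\mathbb{Q}}}\cdot c$, in agreement with the grading $i-1$ on $H^{2i}(S)$ and with the projectors $\pi_{-1}, \pi_0, \pi_1$ appearing in the proof of Theorem \[thm:Decomposition\] below.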
From a straightforward calculation (see [@Ob Lemma 3.4]), one obtains the commutation relations $$\left[ h, {{\mathfrak{q}}}_{\lambda_1}...{{\mathfrak{q}}}_{\lambda_k}(\Phi) \right] = {{\mathfrak{q}}}_{\lambda_1}...{{\mathfrak{q}}}_{\lambda_k}(\overline{\Phi}) \label{eqn:comm formula 1}$$ for all $\Phi \in A^{\ast}(S^k)$, where we write $$\begin{aligned} \label{eqn:bar 1} \overline{\Phi} &= \sum_{i=1}^{k} ({\operatorname{Id}}_{S^{i-1}} \times h \times {\operatorname{Id}}_{S^{k-i}})( \Phi ) \\ &= \sum_{i=1}^k \int_\bullet \underbrace{\Phi_{1...i-1,\bullet,i+1...k}(c_i - c_\bullet)}_{\text{this class lies in }A^*(S^k \times S)}\end{aligned}$$ with the last factor in $S^k \times S$ represented by the index $\bullet$. Proof of Theorem \[thm:Decomposition\] -------------------------------------- We start with the decomposition of the diagonal into Nakajima operators: $$\label{diagonal} \Delta_{\Hilb_n} = \sum_{\lambda \vdash n} \frac{(-1)^{l(\lambda)}}{{{\mathfrak{z}}}(\lambda)} {{\mathfrak{q}}}_{\lambda} {{\mathfrak{q}}}_{-\lambda}(\Delta)$$ where $\lambda$ runs over all partitions of $n$, $${{\mathfrak{z}}}(\lambda) = |\mathrm{Aut}(\lambda)| \prod_{i} \lambda_i$$ is a combinatorial factor, and for any $\pi \in A^{\ast}(S \times S)$ we write $$\begin{aligned} {{\mathfrak{q}}}_{\lambda} {{\mathfrak{q}}}_{-\lambda}(\pi) & = {{\mathfrak{q}}}_{\lambda_1} ... {{\mathfrak{q}}}_{\lambda_{l(\lambda)}} {{\mathfrak{q}}}_{-\lambda_1} ... {{\mathfrak{q}}}_{-\lambda_{l(\lambda)}} \left( \pi_{1,l(\lambda)+1} \pi_{2,l(\lambda)+2} ... \pi_{l(\lambda),2l(\lambda)} \right) \\ & =\, : \! {{\mathfrak{q}}}_{\lambda_1} {{\mathfrak{q}}}_{-\lambda_1}(\pi) ... {{\mathfrak{q}}}_{\lambda_{l(\lambda)}} {{\mathfrak{q}}}_{-\lambda_{l(\lambda)}}(\pi)\! : \,.\end{aligned}$$ The formula follows directly from , , and the fact that ${{\mathfrak{q}}}_m^t = (-1)^m {{\mathfrak{q}}}_{-m}$ (which is incorporated in the definition ). Consider the decomposition of the diagonal of $S$ as $$\Delta = \pi_{-1} + \pi_0 + \pi_{1} \label{4234}$$ where $$\pi_{-1} = c_1, \quad \pi_0 = \Delta - c_1 - c_2, \quad \pi_{1} = c_2.$$ It is easy to note that $\pi_{-1}$, $\pi_0$, $\pi_1$ are the projectors onto the $-1, 0, +1$ eigenspaces of the action of $h$ on $A^{\ast}(\Hilb_1) = A^{\ast}(S)$. To define projectors corresponding to the action of $h$ on $A^*(\Hilb_n)$, we insert the decomposition into , and then expand and collect the terms of degree $i$. Concretely, for every integer $i$, we let $$P_{i} = \sum^{\lambda, \mu, \nu}_{\substack{|\lambda| + |\mu| + |\nu| = n \\ -l(\lambda) + l(\nu) = i}} \frac{(-1)^{l(\lambda) + l(\mu) + l(\nu)}}{{{\mathfrak{z}}}(\lambda) {{\mathfrak{z}}}(\mu) {{\mathfrak{z}}}(\nu)} : \! {{\mathfrak{q}}}_{\lambda} {{\mathfrak{q}}}_{-\lambda}(\pi_{-1}^{t}) {{\mathfrak{q}}}_{\mu} {{\mathfrak{q}}}_{-\mu}(\pi_0^{t}) {{\mathfrak{q}}}_{\nu} {{\mathfrak{q}}}_{-\nu}(\pi_{1}^{t}) \! : \,.$$ Let us check that $P_i$ are indeed projectors onto the eigenspaces of $h$. For all $i,j \in {{\mathbb{Z}}}$ we have the following equalities in $A^{\ast}(\eHilb_n \times \eHilb_n)$: 1. $P_{i} \circ P_{j} = P_{i} \delta^i_{j}$ 2. $h \circ P_i = i P_i$. \(a) We determine $P_i \circ P_j$ by commuting all Nakajima operators with negative indices to the right, and then using that we act on $\Hilb_n$ so all products of Nakajima operators with purely negative indices of degree $>n$ vanish. 
Since every summand in $P_j$ contains such a product of degree $n$, we find that for a term to contribute all operators with negative indices coming from $P_i$ have to interact with operators (with positive indices) from the second term. The interactions are described as follows. For a single term (let $a,b > 0$ and $r,s \in \{ -1, 0, 1 \}$) we have $${{\mathfrak{q}}}_{a} {{\mathfrak{q}}}_{-a}(\pi_{r}^t) {{\mathfrak{q}}}_{b} {{\mathfrak{q}}}_{-b}(\pi_{s}^t) = {{\mathfrak{q}}}_{a} [ {{\mathfrak{q}}}_{-a} , {{\mathfrak{q}}}_{b} ] {{\mathfrak{q}}}_{-b}\left( (\pi_{r}^t)_{12} (\pi_{s}^t)_{34} \right) + {{\mathfrak{q}}}_{a} {{\mathfrak{q}}}_{b} {{\mathfrak{q}}}_{-a} {{\mathfrak{q}}}_{-b} ( (\pi_{r}^t)_{13} (\pi_{s}^t)_{24} )$$ where by the commutation relations the first term on the right is $$\begin{aligned} {{\mathfrak{q}}}_{a} [ {{\mathfrak{q}}}_{-a} , {{\mathfrak{q}}}_{b} ] {{\mathfrak{q}}}_{-b}\left( (\pi_{r}^t)_{12} (\pi_{s}^t)_{34} \right) & = (-a) \delta_{ab} {{\mathfrak{q}}}_a {{\mathfrak{q}}}_{-b} \left( \pi_{14 \ast}( (\pi_{r}^t)_{12} (\pi_{s}^t)_{34} \Delta_{23} ) \right)\\ & = (-a) \delta_{ab} {{\mathfrak{q}}}_a {{\mathfrak{q}}}_{-a} \left( \pi_{s}^t \circ \pi_r^t \right) \\ & = (-a) \delta_{ab} {{\mathfrak{q}}}_a {{\mathfrak{q}}}_{-a} \left( (\pi_{r} \circ \pi_s)^t \right) \\ & = (-a) \delta_{ab} \delta_{rs} {{\mathfrak{q}}}_a {{\mathfrak{q}}}_{-a} (\pi_{r}^t).\end{aligned}$$ Hence for a composition $$: \! {{\mathfrak{q}}}_{\lambda} {{\mathfrak{q}}}_{-\lambda}(\pi_{-1}^{t}) {{\mathfrak{q}}}_{\mu} {{\mathfrak{q}}}_{-\mu}(\pi_0^{t}) {{\mathfrak{q}}}_{\nu} {{\mathfrak{q}}}_{-\nu}(\pi_{1}^{t}) \! : \circ : \! {{\mathfrak{q}}}_{\lambda'} {{\mathfrak{q}}}_{-\lambda'}(\pi_{-1}^{t}) {{\mathfrak{q}}}_{\mu'} {{\mathfrak{q}}}_{-\mu'}(\pi_0^{t}) {{\mathfrak{q}}}_{\nu'} {{\mathfrak{q}}}_{-\nu'}(\pi_{1}^{t}) \! :\\$$ (with $\lambda, \mu, \nu$ as in the definition of $P_i$, and the same for the primed partitions) to act non-trivially on $\Hilb_n$ we have to have $\lambda = \lambda'$, $\mu = \mu'$ and $\nu = \nu'$. Moreover, if we write $\lambda$ multiplicatively as $(1^{l_1} 2^{l_2} ... )$ where $l_i$ is the number of parts of size $i$, then there are precisely $|{\operatorname{Aut}}(\lambda)| = \prod_i l_i!$ different ways to pair the negative factors in ${{\mathfrak{q}}}_{\lambda} {{\mathfrak{q}}}_{-\lambda}(\pi_{-1}^{t})$ with the positive factors ${{\mathfrak{q}}}_{\lambda'} {{\mathfrak{q}}}_{-\lambda'}(\pi_{-1}^{t})$, and similarly for $\mu, \nu$. Hence $$\begin{gathered} : \! {{\mathfrak{q}}}_{\lambda} {{\mathfrak{q}}}_{-\lambda}(\pi_{-1}^{t}) {{\mathfrak{q}}}_{\mu} {{\mathfrak{q}}}_{-\mu}(\pi_0^{t}) {{\mathfrak{q}}}_{\nu} {{\mathfrak{q}}}_{-\nu}(\pi_{1}^{t}) \! : \circ : \! {{\mathfrak{q}}}_{\lambda'} {{\mathfrak{q}}}_{-\lambda'}(\pi_{-1}^{t}) {{\mathfrak{q}}}_{\mu'} {{\mathfrak{q}}}_{-\mu'}(\pi_0^{t}) {{\mathfrak{q}}}_{\nu'} {{\mathfrak{q}}}_{-\nu'}(\pi_{1}^{t}) \! :\\ = \delta_{\lambda \lambda'} \delta_{\mu \mu'} \delta_{\nu \nu'} (-1)^{l(\lambda) + l(\mu) + l(\nu)}{{\mathfrak{z}}}(\lambda) {{\mathfrak{z}}}(\mu) {{\mathfrak{z}}}(\nu) : \! {{\mathfrak{q}}}_{\lambda} {{\mathfrak{q}}}_{-\lambda}(\pi_{-1}^{t}) {{\mathfrak{q}}}_{\mu} {{\mathfrak{q}}}_{-\mu}(\pi_0^{t}) {{\mathfrak{q}}}_{\nu} {{\mathfrak{q}}}_{-\nu}(\pi_{1}^{t}) \! : \end{gathered}$$ which implies the claim. \(b) To determine $h \circ P_i$ we commute $h$ into the middle, *i.e.*, to the right of all Nakajima operators with positive indices, and to the left of all with negative ones. 
In the middle position $h$ acts on the Chow ring of $\Hilb_0$ where it vanishes. Hence again we only need to compute the commutators. For this we use and that $\pi_i$ are the projectors onto the eigenspaces of $h$ so that $$(h \times {\operatorname{Id}})( \pi_r^t ) = (({\operatorname{Id}}\times h)(\pi_r))^t = (h \circ \pi_r)^t = r \pi_r^t.$$ As desired we find $$h \circ P_i = (-1 \cdot l(\lambda) + 0 \cdot l(\mu) + 1 \cdot l(\nu)) P_i = i P_i. \qedhere$$ Using the claim it follows that the motivic decomposition $${{\mathfrak{h}}}(\Hilb_n) = \bigoplus_{i=0}^{2n} {{\mathfrak{h}}}^{2i}(\Hilb_n)$$ with ${{\mathfrak{h}}}^{2i}(\Hilb_n) = (\Hilb_n, P_{i-n})$ has the stated properties. The uniqueness of the decomposition follows from the uniqueness of the decomposition of $\Delta_{\Hilb_n}$ under the action of $h$ on $A^{\ast}(\Hilb_n \times \Hilb_n)$; see the proof of the refined decomposition in Section \[sec:refdec\] below. By , an alternative way to write the projector $P_i$ is $$P_i = \sum_{\lambda \vdash n} \frac{(-1)^{n-l(\lambda)}}{{{\mathfrak{z}}}(\lambda)} (\Gamma_{\lambda})^t \circ \widetilde{P}_i \circ \Gamma_{\lambda}$$ where $\widetilde{P}_i \in A^{\ast}(S^{\lambda} \times S^\lambda)$ is the projector $$\widetilde{P}_i = \sum_{i_1 + ... + i_{l(\lambda)} = i} \pi_{i_1} \times ... \times \pi_{i_{l(\lambda)}}.$$ Hence the decomposition of Theorem \[thm:Decomposition\] is precisely the Chow–Künneth decomposition constructed by Vial in [@Vial Section 2]. Refined decomposition {#sec:refdec} --------------------- Let $U({{\mathfrak{g}}}_{\mathrm{NS}})$ be the universal enveloping algebra of ${{\mathfrak{g}}}_{\mathrm{NS}}$. The Lie algebra homomorphism extends to an algebra homomorphism $$\act : U({{\mathfrak{g}}}_{\text{NS}}) \rightarrow A^*(\Hilb_n \times \Hilb_n).$$ \[lemma\_finite\_dim\] The image $W \subset A^*(\eHilb_n \times \eHilb_n)$ of $\act$ is finite-dimensional. For every fixed $k \geq 1$ the subring of $R^{\ast}(S^k) \subset A^{\ast}(S^k)$ generated by - $\alpha_i$ for all $i$ and $\alpha \in A^1(S)$ - $c_i$ for all $i$ - $\Delta_{ij}$ for all $i,j$ is finite-dimensional, and preserved by the projections to the factors. Hence the space of operators $\widetilde{W} \subset A^{\ast}(\Hilb_n \times \Hilb_n)$ spanned by $${{\mathfrak{q}}}_{\lambda_1} ... {{\mathfrak{q}}}_{\lambda_{l(\lambda)}} {{\mathfrak{q}}}_{-\mu_1} ... {{\mathfrak{q}}}_{-\mu_{l(\mu)}}(\Gamma)$$ for all partitions $\lambda, \mu$ of $n$ and all $\Gamma \in R^{\ast}(S^{l(\lambda) + l(\mu)})$ is finite-dimensional. The commutation relations show that $\widetilde{W}$ is closed under compositions of correspondences. Moreover, by inspecting the expressions for the generators of ${{\mathfrak{g}}}_{\mathrm{NS}}$ in  (and using to bring them into the desired form), we see that all generators of $\gNS$ lie in $\widetilde{W}$. Hence $g \in \widetilde{W}$ for all $g \in U(\gNS)$, *i.e.*, $W \subset \widetilde{W}$. We find that $W$ is a finite-dimensional vector space which is preserved by the action of $U(\gNS)$, and hence defines a finite-dimensional representation of $\gNS$. Since $\gNS$ is semisimple, this representation decomposes into isotypic summands $$\label{sum is w} W = \bigoplus_{\psi \in \mathrm{Irrep}({{\mathfrak{g}}}_{\mathrm{NS}})} W_{\psi}.$$ Let us look at the image of $\Delta_{\Hilb_n} \in W$ under this decomposition $$\label{sum is 1} \Delta_{\Hilb_n} = \sum_{\psi \in \mathrm{Irrep}({{\mathfrak{g}}}_{\mathrm{NS}})} P_{\psi}$$ where $P_\psi \in W_\psi$. 
\[claim:XX\] The elements $P_{\psi} \in A^*(\eHilb_n \times \eHilb_n)$ are orthogonal projectors. Let us first show that left-multiplication by $P_\psi$ maps $W$ to $W_\psi$, *i.e.*, $$\label{eqn:proj} P_\psi \circ W \subset W_\psi.$$ Indeed, for all $a \in W$, right multiplication by $a$ is a ${{\mathfrak{g}}}_{\text{NS}}$-intertwiner and thus sends $W_\psi$ to $W_\psi$. In other words, we have $W_\psi \circ a \subset W_\psi$, hence $W_\psi \circ W \subset W_\psi$, which implies . If we multiply any $a \in W$ by the above decomposition of the diagonal, we obtain $$a = \sum_{\psi \in \mathrm{Irrep}({{\mathfrak{g}}}_{\mathrm{NS}})} P_{\psi} \circ a.$$ By , the summands in the right-hand side each lie in $W_\psi$. If $a \in W_{\psi'}$, then by comparing summands the equality above implies $$P_{\psi'} \circ a = a \quad \text{and} \quad P_{\psi} \circ a = 0$$ for all $\psi \neq \psi'$. In particular, taking $a = P_{\psi'}$ implies the relations $P_{\psi} \circ P_{\psi'} = \delta_{\psi'}^{\psi} P_{\psi}$. Moreover, this implies that the inclusion is actually an identity, hence left multiplication by $P_\psi$ projects $W$ onto $W_\psi$. From Claim \[claim:XX\] we obtain the decomposition $${{\mathfrak{h}}}(\Hilb_n) = \bigoplus_{\psi \in \mathrm{Irrep}({{\mathfrak{g}}}_{\mathrm{NS}})} {{\mathfrak{h}}}_\psi(\Hilb_n) \label{sdas}$$ where ${{\mathfrak{h}}}_\psi(\Hilb_n) = (\Hilb_n, P_{\psi})$. We can now prove the main result of this section. It remains to show that the summands ${{\mathfrak{h}}}_\psi(\Hilb_n)$ are $\psi$-isotypic and that the decomposition is unique. Let $M$ be a Chow motive. The action of $\gNS$ on ${\operatorname{Hom}}(M, {{\mathfrak{h}}}(\Hilb_n))$ is defined by $g \leadsto \text{act}(g) \circ ...\,$. Hence if $f\in {\operatorname{Hom}}(M, M')$ is a morphism of Chow motives, the pullback $$f^{\ast} : {\operatorname{Hom}}(M', {{\mathfrak{h}}}(\Hilb_n)) \to {\operatorname{Hom}}(M, {{\mathfrak{h}}}(\Hilb_n))$$ is equivariant with respect to the ${{\mathfrak{g}}}_{\text{NS}}$-action. Now, for any $v \in {\operatorname{Hom}}(M, {{\mathfrak{h}}}_{\psi}(\Hilb_n))$ we have $v = P_\psi \circ w$ for some $w \in {\operatorname{Hom}}(M, {{\mathfrak{h}}}(\Hilb_n))$ and thus $$U(\gNS) v = U(\gNS) w^{\ast}(P_{\psi}) = w^{\ast}( U(\gNS) \circ P_{\psi} ).$$ Since $U(\gNS) \circ P_{\psi} \subset W_{\psi}$ this implies that $U(\gNS) v$ is finite-dimensional and $\psi$-isotypic. Since $v$ was arbitrary we conclude that ${\operatorname{Hom}}(M, {{\mathfrak{h}}}_{\psi}(\Hilb_n))$ is $\psi$-isotypic. The decomposition is unique because the corresponding decomposition of the diagonal is unique. Indeed, suppose we had any other decomposition $$\label{other} \Delta_{\Hilb_n} = \sum_{\psi \in \mathrm{Irrep}({{\mathfrak{g}}}_{\mathrm{NS}})} P_{\psi}'$$ where $P_{\psi}' \in W_\psi$, for all $\psi$. Then we would need $P_{\psi}' = P_\psi \circ a_\psi$ for some $a_\psi \in W$. But multiplying on the left with $P_\psi$ and using the orthogonality of the projectors would imply $P_\psi = P_\psi \circ P_\psi \circ a_\psi = P_\psi \circ a_\psi$, hence $P_{\psi}' = P_\psi$. As in [@Mon Proof of Theorem 7.2], we could also have used Yoneda’s Lemma to conclude the existence of the decomposition . Our presentation above has the advantage of being constructive. It also shows that the projectors $P_{\psi}$ can be written in terms of the Nakajima operators applied to elements in $R^{\ast}(S^k)$. 
Multiplicativity {#sec:mult} ================ {#section-5} Let us recall the operator $h$ from the previous section (in the present section, we will find it useful to use the language of operators when referring to correspondences, as explained in Section \[sub:conv\]). This operator was observed in [@Ob] to lift the (shifted) grading operator from cohomology to Chow. Let us undo this shift by considering: $$\th = h + n \cdot {\operatorname{Id}}_{\Hilb_n}.$$ The main purpose of the present section is to prove Theorem \[main\]. In the language of operators, the multiplicativity of the Chow–Künneth decomposition boils down to the identity . Alternatively, we could restate this identity as: $$\label{eqn:mult comm} [h,\mult_x] = \mult_{x'}$$ of correspondences $A^*(\Hilb_n) \rightarrow A^*(\Hilb_n)$ indexed by $x \in A^*(\Hilb_n)$. By applying the equality to the fundamental class, we must have $x' = h(x) - x \cdot h([\Hilb_n])$. \[lem:tiny\] We have $h([\eHilb_n]) = -n$. With the lemma above in mind, we conclude that $x' = \th(x)$ in . Therefore,  is actually equivalent to , thus implying part (i) of Theorem \[main\]. It is well-known that: $$[\Hilb_n] = \frac 1{n!} {{\mathfrak{q}}}_1(1)^n [\Hilb_0].$$ Because the only operator ${{\mathfrak{q}}}_k$ which fails to commute with ${{\mathfrak{q}}}_1$ is ${{\mathfrak{q}}}_{-1}$, formula implies that: $$\begin{gathered} h([\Hilb_n]) = h \left(\frac 1{n!} {{\mathfrak{q}}}_1(1)^n [\Hilb_0] \right) = \\ = \left[h, \frac 1{n!} {{\mathfrak{q}}}_1(1)^n \right] [\Hilb_0] = \left[{{\mathfrak{q}}}_1(1){{\mathfrak{q}}}_{-1}(c) - {{\mathfrak{q}}}_1(c) {{\mathfrak{q}}}_{-1}(1), \frac 1{n!} {{\mathfrak{q}}}_1(1)^n \right] [\Hilb_0] \\ = \sum_{i=1}^n \frac 1{n!} {{\mathfrak{q}}}_1(1)^{i-1} \cdot {{\mathfrak{q}}}_1(1) [{{\mathfrak{q}}}_{-1}(c) , {{\mathfrak{q}}}_{1}(1)] \cdot {{\mathfrak{q}}}_{1}(1)^{n-i}\cdot [\Hilb_0] = - n [\Hilb_n]\end{gathered}$$ where the fact that $[{{\mathfrak{q}}}_{-1}(c), {{\mathfrak{q}}}_1(1)] = -1$ is a consequence of . {#section-6} Due to the surjectivity property , formula remains equivalent if we replace $x \in A^*(\Hilb_n)$ by a class of the form : $$\label{eqn:form} \univ(\Gamma) = \univ_{d_1,...,d_t}(\Gamma)$$ (for any $d_1,...,d_t$, which will be fixed in the present section), which is indexed by $\Gamma \in A^*(S^t)$. Then instead of proving , it suffices to prove the following: \[prop:main 1\] We have the identity of correspondences $A^*(\eHilb_n) \rightarrow A^*(\eHilb_n)$ $$\label{eqn:comm 1} [h, \emult_{\euniv(\Gamma)}] = (d_1+...+d_t-t)\emult_{\euniv(\Gamma)} + \emult_{\euniv(\overline{\Gamma})}$$ parametrized by $\Gamma \in A^*(S^t)$ (the bar notation is defined in ). By combining with , we have: $$\begin{gathered} \label{eqn:formula taut 3} [h,\mult_{\univ}(\Gamma)] = \sum_{\varepsilon_1,...,\varepsilon_t \in \{0,2\}} \quad \sum^{\lambda_{b_{s-1}+1} \geq ... \geq \lambda_{b_s} \in {{\mathbb{Z}}}, \forall s \in \{1,...,t\}}_{\lambda_{b_{s-1}+1} +... + \lambda_{b_s} = 0, \forall s \in \{1,...,t\}} \\ \text{ct} \cdot {{\mathfrak{q}}}_{\lambda_1}...{{\mathfrak{q}}}_{\lambda_{b_t}} \left(\overline{\Delta_{b_0+1...b_1} \Delta_{b_1+1...b_2}... 
\Delta_{b_{t-1}+1...b_t} \Gamma_{b_1...b_t} \phi_{b_1}...\phi_{b_t}} \right).\end{gathered}$$ To compute the overlined class on the second row, we will use the following: \[claim:3\] For any natural numbers $k$, $l$ and any $\Phi \in A^*(S^l)$, we have: $$\label{eqn:claim 3} \overline{\Delta_{1...k} \Phi_{k*}} = \Delta_{1...k} \left[ (k-1) \Phi_{k*} + \int_\bullet \Phi_{\bullet*} (c_k - c_\bullet) \right] .$$ The notation $*$ stands for the indices $k+1,...,k+l-1$, and it reflects the fact that the bar notation is applied with respect to the indices $1,...,k$ only. By definition, the LHS of equals: $$\begin{aligned} \text{LHS of \eqref{eqn:claim 3}} & = \sum_{i=1}^{k-1} \int_\bullet \Delta_{1... i-1,\bullet,i+1 ... k} (c_i - c_\bullet) \Phi_{k*} + \int_\bullet \Delta_{1... k-1,\bullet} (c_k - c_\bullet) \Phi_{\bullet*} \\ & = \Phi_{k*} \sum_{i=1}^{k} \int_\bullet \Delta_{1... i-1,\bullet,i+1 ... k} (c_i - c_\bullet) + \Delta_{1... k-1,\bullet} (c_k - c_\bullet) (\Phi_{\bullet*} - \Phi_{k*}) \\ & = \text{RHS of \eqref{eqn:claim 3}}\end{aligned}$$ where the last equality is due to and . By applying the claim $t$ times, we obtain (let $\Delta = \Delta_{b_0+1...b_1} ... \Delta_{b_{t-1}+1...b_t}$): $$\begin{gathered} \label{eqn:pik} \overline{\Delta \Gamma_{b_1...b_t} \phi_{b_1}...\phi_{b_t}} = \Delta \left[ (b_t - t) \Gamma_{b_1...b_t} \phi_{b_1}...\phi_{b_t}\phantom{\displaystyle\sum_{i=1}^t}\right. \\ \left. + \sum_{i=1}^t \int_\bullet \Gamma_{b_1...b_{i-1}\bullet b_{i+1}...b_t} \phi_{b_1}...\phi_{b_{i-1}} \phi_\bullet \phi_{b_{i+1}} ... \phi_{b_t} (c_{b_i} - c_\bullet) \right].\end{gathered}$$ Using we have the following simple identities: $$\begin{aligned} \int_\bullet \Gamma_{b_1...\bullet ...b_t} \phi_{b_1}...\phi_\bullet ... \phi_{b_t} (c_{b_i} - c_\bullet) &= \begin{cases}\displaystyle \int_\bullet \Gamma_{b_1...\bullet ...b_t} (c_{b_i} - c_\bullet) \phi_{b_1}...\phi_{b_t} &\text{if } \phi_{b_i} = 1 \\ \Gamma_{b_1...b_t} \phi_{b_1}...\phi_{b_t} & \text{if } \phi_{b_i} = c_{b_i} \end{cases} \\ \int_\bullet \Gamma_{b_1...\bullet ...b_t} \phi_{b_1}... \phi_{b_i} ... \phi_{b_t} (c_{b_i} - c_\bullet) &= - \Gamma_{b_1...b_t} \phi_{b_1} ... \phi_{b_t} \:\: \quad \quad \quad \quad \quad \quad \quad \text{if } \phi_{b_i} = c_{b_i}\end{aligned}$$ which we plug into : $$\label{eqn:formula taut 4} \overline{\Delta \Gamma_{b_1...b_t} \phi_{b_1}...\phi_{b_t}} = \Delta \left[ (b_t - t + 2 \#\{i \,|\, \phi_{b_i} = c_{b_i} \} ) \Gamma_{b_1...b_t} + \overline{\Gamma_{b_1...b_t}} \right] \phi_{b_1}...\phi_{b_t} \,.$$ Since the coefficient of $\Gamma_{b_1...b_t}$ in the right-hand side is equal to $d_1+...+d_t-t$, according to , we obtain precisely formula . {#section-7} Let us now prove part (ii) of Theorem \[main\]. In order to show that a certain class $x \in A^*(\Hilb_n)$ lies in the appropriate direct summand, we must show that $h(x) = (\deg(x) - n) \cdot x$. We are interested in the situation when $x$ is a divisor class or a Chern class of the tangent bundle, in which case we have: $$x = \univ_{d_1,...,d_t} (\Gamma)$$ for some $d_1,...,d_t \in {{\mathbb{N}}}$ and $\Gamma \in A^*(S^t)$. We have shown in Proposition \[prop:main 1\] that: $$h(x) = (d_1+...+d_t -n -t) \univ_{d_1,...,d_t} (\Gamma) + \univ_{d_1,...,d_t} (\overline{\Gamma})$$ so the class $x$ lies in the appropriate direct summand if: $$\label{eqn:want} \overline{\Gamma} = \Gamma \cdot (\deg \univ_{d_1,...,d_t} (\Gamma) - d_1 - ... 
- d_t + t) = \Gamma \cdot (\deg \Gamma - t).$$ Since divisors and Chern classes of the tangent bundle are linear combinations of the classes , , , it suffices to prove the identity in the particular case $\Gamma = \Delta_*(\gamma)$, where $\Delta$ is the small diagonal and $\gamma \in \{1,l,c\}_{l \in A^1(S)}$. In this case, we have: $$\overline{\Gamma} \stackrel{\eqref{eqn:claim 3}}= \Delta_*\left( (t-1)\gamma + \int_\bullet \gamma_\bullet(c-c_\bullet) \right) = (t-2+\deg \gamma) \cdot \Gamma$$ where the last equality is a simple case-by-case study for all $\gamma \in \{1,l,c\}_{l \in A^1(S)}$. Since $\deg \Gamma = 2t - 2 + \deg \gamma$, this implies formula . {#section-8} Let $\alpha, \beta \in A^1(S) \hookrightarrow A^1(\Hilb_n)$. As a straightforward application of formulas , the element $\alpha \wedge \beta \in {{\mathfrak{g}}}_{\text{NS}}$ acts on $A^*(\Hilb_n)$ by the correspondence: $$h_{\alpha \beta} = \sum_{k=1}^\infty \frac{1}{k} {{\mathfrak{q}}}_k {{\mathfrak{q}}}_{-k}(\alpha_2 \beta_1 - \alpha_1 \beta_2).$$ In the present section, we will prove Theorem \[thm:ref\], that is, to show that $h_{\alpha \beta}$ acts by derivations. As in the setup of Proposition \[prop:main 1\], it boils down to the statement below, for all $\alpha, \beta \in A^1(S)$ and all universal classes of the form . \[prop:main 2\] We have the identity of correspondences $A^*(\eHilb_n) \rightarrow A^*(\eHilb_n)$ $$\label{eqn:comm 2} [h_{\alpha \beta}, \emult_{\euniv(\Gamma)}] = \emult_{\euniv(\overline{\overline{\Gamma}})}$$ parametrized by $\Gamma \in A^*(S^t)$, where for any $k$ and any $\Phi \in A^*(S^k)$, we write: $$\label{eqn:bar 2} \overline{\overline{\Phi}} = \sum_{i=1}^k \int_\bullet \underbrace{\Phi_{1...i-1,\bullet,i+1...k}(\alpha_i \beta_\bullet - \alpha_\bullet \beta_i)}_{\text{this class lies in }A^*(S^k \times S)}$$ where the last factor in $S^k \times S$ is the one represented by the index $\bullet$. The proof follows that of Proposition \[prop:main 1\] very closely, so we will only point out the differences. We start from the following identity: $$\left[h_{\alpha \beta},{{\mathfrak{q}}}_{\lambda_1}...{{\mathfrak{q}}}_{\lambda_k}(\Phi) \right] = {{\mathfrak{q}}}_{\lambda_1}...{{\mathfrak{q}}}_{\lambda_k}(\overline{\overline{\Phi}})$$ which is a straightforward analogue of . Therefore, formulas and for $\mult_{\univ(\Gamma)}$ and $[h_{\alpha \beta}, \mult_{\univ(\Gamma)}]$ continue to hold, but with the definition for the double bar operation instead of the single bar operation in . We leave the following analogue of as an exercise to the interested reader: $$\overline{\overline{\Delta_{1...k} \Phi_{k*}}} = \Delta_{1...k} \int_\bullet \Phi_{\bullet*} (\alpha_k \beta_\bullet - \alpha_\bullet \beta_k).$$ By iterating the formula above $t$ times, we obtain (let $\Delta = \Delta_{b_0+1...b_1} ... \Delta_{b_{t-1}+1...b_t}$): $$\overline{\overline{\Delta \Gamma_{b_1...b_t} \phi_{b_1}...\phi_{b_t}}} = \Delta \sum_{i=1}^t \int_\bullet \Gamma_{b_1...b_{i-1}\bullet b_{i+1}...b_t} \phi_{b_1}...\phi_{b_{i-1}} \phi_\bullet \phi_{b_{i+1}} ... \phi_{b_t} (\alpha_{b_i} \beta_\bullet - \alpha_\bullet\beta_{b_i}) .$$ As a consequence of the following simple formulas: $$\begin{aligned} &\int_\bullet \Gamma_{b_1...\bullet ...b_t} \phi_{b_1}... \phi_\bullet ... 
\phi_{b_t} (\alpha_{b_i} \beta_\bullet - \alpha_\bullet\beta_i) \\ {}=& \int_\bullet \Gamma_{b_1...\bullet ...b_t} \phi_{b_1}...\phi_{b_t} (\alpha_{b_i} \beta_\bullet - \alpha_\bullet\beta_{b_i}) & &\text{if } \phi_{b_i} = 1 \\ &\int_\bullet \Gamma_{b_1...\bullet ...b_t} \phi_{b_1}... (\phi_{b_i} \text{ or } \phi_\bullet) ... \phi_{b_t} (\alpha_{b_i} \beta_\bullet - \alpha_\bullet\beta_{b_i}) = 0 & &\text{if } \phi_{b_i} = c_{b_i}\end{aligned}$$ we have $\overline{\overline{ \Delta \Gamma_{b_1...b_t} \phi_{b_1}...\phi_{b_t}}} = \Delta \overline{\overline{\Gamma}}_{b_1...b_t} \phi_{b_1}...\phi_{b_t}$. This concludes the proof of . [10]{} A. Beauville, [*Sur l’anneau de Chow d’une variété abélienne*]{}, Math. Ann. [**273**]{} (1986), no. 4, 647–651. A. Beauville, [*On the splitting of the Bloch–Beilinson filtration*]{}, Algebraic cycles and motives. Vol. 2, 38–53, London Math. Soc. Lecture Note Ser., [**344**]{}, Cambridge Univ. Press, Cambridge, 2007. A. Beauville, C. Voisin, [*On the Chow ring of a K3 surface*]{}, J. Algebraic Geom. [**13**]{} (2004), no. 3, 417–426. M. A. de Cataldo, L. Migliorini, [*The Chow groups and the motive of the Hilbert scheme of points on a surface*]{}, J. Algebra [**251**]{} (2002), no. 2, 824–848. C. Deninger, J. Murre, [*Motivic decomposition of abelian schemes and the Fourier transform*]{}, J. Reine Angew. Math. [**422**]{} (1991), 201–219. L. Fu, R. Laterveer, C. Vial, [*The generalized Franchetta conjecture for some hyper-Kähler varieties. With an appendix by the authors and Mingmin Shen*]{}, J. Math. Pures Appl. (9) [**130**]{} (2019), 1–35. L. Fu, R. Laterveer, C. Vial, [*Multiplicative Chow–Künneth decompositions and varieties of cohomological K3 type*]{}, [arXiv:1911.06580](http://arxiv.org/abs/1911.06580) L. Fu, Z. Tian, [*Motivic multiplicative McKay correspondence for surfaces*]{}, Manuscripta Math. [**158**]{} (2019), no. 3-4, 295–316. L. Fu, Z. Tian, [*Motivic hyper-Kähler resolution conjecture: II. Hilbert schemes of K3 surfaces*]{}, available at <http://math.univ-lyon1.fr/~fu/articles/MotivicCrepantHilbK3.pdf> L. Fu, Z. Tian, C. Vial, [*Motivic hyper-Kähler resolution conjecture, I: generalized Kummer varieties*]{}, Geom. Topol. [**23**]{} (2019), no. 1, 427–492. L. Fu, C. Vial, [*Distinguished cycles on varieties with motive of abelian type and the Section Property*]{}, J. Algebraic Geom., to appear. L. Fu, C. Vial, [*A motivic global Torelli theorem for isogenous K3 surfaces*]{}, [arXiv:1907.10868](http://arxiv.org/abs/1907.10868) A. Gholampour, R. P. Thomas, [*Degeneracy loci, virtual cycles and nested Hilbert schemes, I*]{}, Tunis. J. Math. [**2**]{} (2020), no. 3, 633–665. M. Green, Y.-J. Kim, R. Laza, C. Robles, [*The LLV decomposition of hyper-Kähler cohomology*]{}, [arXiv:1906.03432](http://arxiv.org/abs/1906.03432) I. Grojnowski, *Instantons and affine algebras. [I]{}. [T]{}he [H]{}ilbert scheme and vertex operators*, Math. Res. Lett. **3** (1996), no. 2, 275–291. K. Künnemann, [*A Lefschetz decomposition for Chow motives of abelian schemes*]{}, Invent. Math. [**113**]{} (1993), no. 1, 85–102. R. Laterveer, C. Vial, [*On the Chow ring of Cynk–Hulek Calabi–Yau varieties and Schreieder varieties*]{}, Canad. J. Math., to appear. M. Lehn, *Chern classes of tautological sheaves on [H]{}ilbert schemes of points on surfaces*, Invent. Math. **136** (1999), no. 1, 157–207. W.-P. Li, Z. Qin, W. Wang, [*Hilbert schemes and $W$-algebras*]{}, Int. Math. Res. Not. 2002, no. 27, 1427–1456. E. Looijenga, V. A. 
Lunts, [*A Lie algebra attached to a projective variety*]{}, Invent. Math. [**129**]{} (1997), no. 2, 361–412. E. Markman, [*Generators of the cohomology ring of moduli spaces of sheaves on symplectic surfaces*]{}, J. Reine Angew. Math. [**544**]{} (2002), 61–82. D. Maulik, A. Negu, *Lehn’s formula in Chow and conjectures of Beauville and Voisin*, [arXiv:1904.05262](http://arxiv.org/abs/1904.05262) B. Moonen, [*On the Chow motive of an abelian scheme with non-trivial endomorphisms*]{}, J. Reine Angew. Math. [**711**]{} (2016), 75–109. J. Murre, J. Nagel, C. Peters, [*Lectures on the theory of pure motives*]{}, Univ. Lecture Ser., [**61**]{}, Amer. Math. Soc., Providence, RI, 2013. x+149 pp. H. Nakajima, *Heisenberg algebra and [H]{}ilbert schemes of points on projective surfaces*, Ann. of Math. (2) **145** (1997), no. 2, 379–388. A. Negu, [*The Chow of $S^{[n]}$ and the universal subscheme*]{}, [arXiv:1912.03287](http://arxiv.org/abs/1912.03287) G. Oberdieck, *A Lie algebra action on the Chow ring of the Hilbert scheme of points of a K3 surface*, [arXiv:1908.08830](http://arxiv.org/abs/1908.08830) Y. Ruan, [*The cohomology ring of crepant resolutions of orbifolds*]{}, Gromov–Witten theory of spin curves and orbifolds, 117–126, Contemp. Math., [**403**]{}, Amer. Math. Soc., Providence, RI, 2006. M. Shen, C. Vial, [*The Fourier transform for certain hyperkähler fourfolds*]{}, Mem. Amer. Math. Soc. [**240**]{} (2016), no. 1139, vii+163 pp. M. Shen, C. Vial, [*The motive of the Hilbert cube $X^{[3]}$*]{}, Forum Math. Sigma [**4**]{} (2016), e30, 55 pp. M. Verbitsky, [*Cohomology of compact hyper-Kähler manifolds and its applications*]{}, Geom. Funct. Anal. [**6**]{} (1996), no. 4, 601–611. C. Vial, [*On the motive of some hyperKähler varieties*]{}, J. Reine Angew. Math. [**725**]{} (2017), 235–247. C. Voisin, [*On the Chow ring of certain algebraic hyper-Kähler manifolds*]{}, Pure Appl. Math. Q. [**4**]{} (2008), no. 3, Special Issue: In honor of Fedor Bogomolov. Part 2, 613–649. C. Voisin, [*Some new results on modified diagonals*]{}, Geom. Topol. [**19**]{} (2015), no. 6, 3307–3343.
--- abstract: 'We show that eigenvalues of the Robin Laplacian with a positive boundary parameter $\alpha$ on rectangles and unions of rectangles satisfy Pólya-type inequalities, albeit with an exponent smaller than that of the corresponding Weyl asymptotics for a fixed domain. We determine the optimal exponents in either case, showing that they are different in the two situations. Our approach to proving these results includes a characterisation of the corresponding extremal domains for the $k^{\rm th}$ eigenvalue in regions of the $(k,\alpha)$-plane.' address: - 'Departamento de Matemática, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, P-1049-001 Lisboa, Portugal [and]{} Grupo de Física Matemática, Faculdade de Ciências, Universidade de Lisboa, Campo Grande, Edifício C6, P-1749-016 Lisboa, Portugal' - 'Grupo de Física Matemática, Faculdade de Ciências, Universidade de Lisboa, Campo Grande, Edifício C6, P-1749-016 Lisboa, Portugal' author: - Pedro Freitas - James Kennedy title: 'Extremal domains and Pólya-type inequalities for the Robin Laplacian on rectangles and unions of rectangles' --- [^1] [^2] [^3] Introduction {#sec:intro} ============ Given a planar domain $\Omega$ with a sufficiently smooth boundary $\partial\Omega$, consider the equation $$\label{eigeq} \Delta u + \tau u = 0 \mbox{ in } \Omega$$ with one of the following boundary conditions $$\begin{array}{lll} u = 0, & x \in \partial\Omega & \mbox{(Dirichlet)}{ \vspace*{2mm}\\ }{\frac{{\displaystyle}\partial u}{{\displaystyle}\partial \nu}} = 0, & x \in \partial\Omega & \mbox{(Neumann)} \end{array},$$ where $\nu$ is the outer unit normal defined on $\partial\Omega$. Denoting by $\gamma_{k}$ and $\mu_{k}$ the Dirichlet and Neumann eigenvalues, respectively, corresponding to the numbers $\tau$ for which nontrivial solutions $u$ of the above equation exist, we have $$0 < \gamma_{1}\leq \gamma_{2} \leq \dots$$ and $$0 = \mu_{1}\leq \mu_{2} \leq \dots,$$ with both sequences being unbounded. In the second volume of his book [*Mathematics and plausible reasoning*]{}, Pólya conjectured that $$\label{originpolyaconj} \mu_{k} < {\frac{{\displaystyle}4k\pi}{{\displaystyle}A}} < \gamma_{k}, \; k=1,2,\dots$$ for planar domains with area $A$ [@poly1 pp. 51–53]. After stating this conjecture, Pólya went on to say that these inequalities are satisfied in the case of rectangles and that it had been this particular case that had suggested the conjecture. A few years later, Pólya himself provided a remarkably simple and elegant argument to prove  in the Dirichlet case for plane-covering (tiling) domains, that is, domains which “cover the whole plane without gaps and without overlapping” [@poly2]. In the same article, Pólya also provided a sharper version of the Neumann part of the conjecture by replacing $\mu_{k}$ with $\mu_{k+1}$, which he then proved for a smaller class of domains, with the general result for tiling domains being obtained not long afterwards by Kellner [@kell]. In [@poly1] Pólya gave some further heuristic arguments as to why conjecture  should be true in general, such as the fact that it holds for the first two eigenvalues of general domains, which follows from the Faber–Krahn and Hong–Krahn–Szego inequalities, and the Szegő–Weinberger inequality in the Dirichlet and Neumann cases, respectively. He also mentioned that, upon division by $k$, all three terms in  have the same limit, as a consequence of the Weyl asymptotics. 
However, he left out one of the most compelling pieces of evidence for  to hold, probably because this was itself a conjecture at the time, namely the two-term Weyl asymptotics $$\tau_{k} = {\frac{{\displaystyle}4k\pi}{{\displaystyle}A}} \pm 2\sqrt{k \pi} {\frac{{\displaystyle}L}{{\displaystyle}A^{3/2}}} + {{\rm o}}\left(k^{1/2}\right)$$ where $L$ denotes the perimeter of $\Omega$ and the $+$ and $-$ signs correspond to Dirichlet and Neumann boundary conditions, respectively [@sava]. In fact, not only does the second term in the above asymptotics support the conjecture, but it also shows that the latter is asymptotically correct for any particular domain. Although the conjecture remains open to this day, progress has been made with the best results so far for general planar domains being $$\label{eq:polya-best} \mu_{k} \leq {\frac{{\displaystyle}8\pi}{{\displaystyle}A}}(k-1) \mbox{ and } {\frac{{\displaystyle}2\pi}{{\displaystyle}A}}k \leq \gamma_{k}, \;\; k=1,2,\dots.$$ Here the Dirichlet bound was proved by Li and Yau in 1983 [@liyau], and as was later realised, could also be recovered from work by Berezin [@bere], while the Neumann bound is due to Kröger in 1992 [@krog]. In this paper we want to study the same problem in the Robin case, that is, we consider equation  together with the boundary condition $$\label{robinboundary} \begin{array}{lll} {\frac{{\displaystyle}\partial u}{{\displaystyle}\partial \nu}} + \alpha u= 0, & x \in \partial\Omega &\mbox{(Robin)}, \end{array}$$ where $\alpha$ is a positive real parameter. It is a natural question to ask what form, if any, should an inequality of the same type as  take for the eigenvalues of the above problem. We first note that the corresponding eigenvalues, which we shall denote by ${{\lambda_{k}}(\Omega,\alpha)}$, also satisfy an inequality of Faber–Krahn type, namely, $${{\lambda_{1}}(B,\alpha)} \leq {{\lambda_{1}}(\Omega,\alpha)}$$ for all positive $\alpha$, where $B$ denotes the ball with the same measure as $\Omega$ – for planar domains this was proved by Bossel [@boss] and generalised by Daners [@dane] to higher dimensions. The corresponding Hong–Krahn–Szego inequality was proved by the second author in [@kenn]. It might thus seem reasonable to expect the sequence of ${{\lambda_{k}}(\Omega,\alpha)}$ to have a behaviour analogous to that of the Dirichlet problem. However, it is known that the two-term Weyl asymptotic is in fact the same as that of the Neumann problem [@frge], namely, $$\label{robinweyl} {{\lambda_{k}}(\Omega,\alpha)} = {\frac{{\displaystyle}4k\pi}{{\displaystyle}A}}- 2\sqrt{k \pi} {\frac{{\displaystyle}L}{{\displaystyle}A^{3/2}}} + {{\rm o}}\left(k^{1/2}\right).$$ A consequence of this is that clearly there cannot exist a lower bound for ${{\lambda_{k}}(\Omega,\alpha)}$ with the same power of $k$ as in the Dirichlet case which is also compatible with the first term of . 
To some extent, this was already pointed out in [@anfrke], where it was seen, by considering the sequence of domains ${\mathcal{B}_{k}}$ consisting of the disjoint union of $k$ equal balls, that the asymptotic behaviour of the infimum of the $k^{\rm th}$ Robin eigenvalue among domains of equal volume will satisfy $$\inf_{|\Omega|=A} {{\lambda_{k}}(\Omega,\alpha)} \leq {{\lambda_{k}}({\mathcal{B}_{k}},\alpha)}\leq 2\alpha \left({\frac{{\displaystyle}k \pi}{{\displaystyle}A}}\right)^{1/2}.$$ Thus, although it might still be possible to consider inequalities of the type ${{\lambda_{k}}(\Omega,\alpha)}\geq c k$, the constant $c$ would have to depend on $\Omega$ in a nontrivial fashion and cannot, in any case, be optimal in an asymptotic sense as $k$ approaches infinity. In order to gain some insight into this issue, it is, of course, tempting to follow Pólya’s approach and see what happens in the simpler case of rectangles or possibly even tiling domains. However, for Robin boundary conditions there is no explicit closed form for the eigenvalues in the former case, while in the latter case two of the key ingredients used in [@poly2], namely the simple rescaling formula and monotonicity by inclusion, which are fundamental in the Dirichlet proof, do not apply in the Robin case. A first purpose of this paper is thus to obtain further understanding of this problem by studying the existence of Pólya-type inequalities of the form $$c k^\beta \leq {{\lambda_{k}}(\Omega,\alpha)},$$ where the constant $c$ depends only on the boundary parameter $\alpha$ and is independent of $\Omega$ within families of domains with a given area. A key point is the determination of the optimal power $\beta$. The two classes of domains which we shall consider here are rectangles and disjoint unions of rectangles. There are two reasons for studying these two families. On the one hand, it is natural to try to understand what happens in the case of rectangles, by analogy with the Dirichlet case. On the other hand, and as will become clear, the two problems yield different values of $\beta$ and thus illustrate the essential differences that may be expected even within the Robin problem. Furthermore, the behaviour for unions of rectangles should, in principle, be closer to what is to be expected to happen in the general problem. In this direction, our main result may be summarised as follows. \[thm:polya\] Given positive numbers $\alpha$ and $A$, there exist positive constants $c_{r}$ and $c_{u}$, depending only on $\alpha$ and $A$, such that the Robin eigenvalues satisfy $$c_{r} k^{2/3} \leq {{\lambda_{k}}(\Omega,\alpha)},$$ for all rectangles with given area $A$, and $$c_{u} k^{1/2} \leq {{\lambda_{k}}(\Omega,\alpha)},$$ for all unions of rectangles with total area $A$. Furthermore, the exponents $2/3$ and $1/2$ are optimal. This theorem follows directly from the more detailed Theorems \[thm:k-squares\] and \[thm:rectangles\] below; in particular, they, together with the fact that the eigenvalues are increasing with $\alpha$, allow us to give explicit lower bounds on the constants $c_{r}$ and $c_{u}$. The issue of determining the optimal constant in Pólya’s conjecture for the Dirichlet and Neumann problems is naturally related to that of considering the extremal values of the eigenvalues $\gamma_{k}$ and $\mu_{k}$. 
In fact, in the case of general domains with a measure restriction, this connection is much stronger than had been previously thought, in that it was shown recently that Pólya’s conjecture is equivalent to the first term in the asymptotic behaviour of the extremal values being the same as that in the Weyl asymptotics for a fixed domain [@colels]. This effect is a direct consequence of the subadditivity and superadditivity of the sequences of (dimensionally normalised) extremal eigenvalues in the Dirichlet and Neumann cases, respectively. Taking this into consideration, the approach we follow in this paper is mixed, in the sense that we will prove Theorem \[thm:polya\] by studying the sequence of extremal sets in both cases. We recall that even for Dirichlet eigenvalues, which in the case of rectangles are known explicitly, it is a nontrivial problem to show that the sequence of extremal rectangles does converge to the square as $k$ goes to infinity [@anfr2]. This result, which is closely related to a lattice point counting problem, has also been extended to Neumann boundary conditions [@vdbbugi], higher dimensions [@gila], and several variants with a more geometric [@ar] or number-theoretic flavour [@guwa; @arla; @lali1; @lali2; @ma; @mast]. This is thus also a motivation to study the evolution of the sequence of extremal rectangles. However, since for the Robin problem the asymptotic extremal domain is no longer thought to be the square, we also consider the situation where we allow for arbitrary unions of rectangles. This is a natural setting to consider, due to the considerations made above and, in particular, the results obtained in [@anfrke], where the following conjecture was made ([@anfrke Section 5]; see also [@bufrke Open Problem 4.38]). \[conj:robin-optimisers\] Fix a dimension $d\geq 2$ and $k\geq 3$. Then there exists some $\alpha_k^\ast>0$ depending only on $k$ and $d$ such that $${{\lambda_{k}}({\mathcal{B}_{k}},\alpha)} \leq {{\lambda_{k}}(\Omega,\alpha)}$$ for all $\alpha \in (0,\alpha_k^\ast]$ and all (sufficiently smooth) domains $\Omega\subset {\mathbb{R}}^d$ with $|\Omega|=1$, where ${\mathcal{B}_{k}}$ is the disjoint union of $k$ equal balls of total volume $1$. Moreover, ${\mathcal{B}_{k}}$ is *not* optimal for $\alpha > \alpha_k^\ast$, and $\alpha_k^\ast \to \infty$ as $k\to \infty$. Here we shall provide strong supporting evidence for this conjecture by essentially proving it in the restricted setting of rectangles and unions of rectangles (with $k$ equal squares taking the role of $k$ equal balls); we also expect some of the tools and insights we develop to be of use when investigating the conjecture on more general domains. For any positive values of the area $A$ and boundary parameter $\alpha$, and any positive integer $k$, we will write ${\lambda_{k}^{\rm +} (A,\alpha)}$ to stand for the extremal quantity $$\inf \{ {{\lambda_{k}}(\Omega,\alpha)}:\Omega \subset {\mathbb{R}}^2 \text{ is a disjoint union of rectangles, } |\Omega|=A \},$$ and we let ${\mathcal{U}_{k}}$ denote the disjoint union of $k$ equal squares of the same total area $A$. Our main result in this context is then \[thm:k-squares\] There exists an absolute positive constant $C_1$ such that, for any finite disjoint union of rectangles $\Omega$ having total area $A$, we have $${{\lambda_{k}}({\mathcal{U}_{k}},\alpha)} < {{\lambda_{k}}(\Omega,\alpha)}$$ whenever $\alpha \leq C_1 k^{1/2}A^{-1/2}$, where ${\mathcal{U}_{k}}$ has total area $A$. 
Furthermore, for such pairs $\alpha, k$, we have $$\label{eq:opt-value-est} \frac{4\pi^2 k\alpha}{A^{1/2}\pi^2 k^{1/2} + 2A\alpha} < {\lambda_{k}^{\rm +} (A,\alpha)} \leq \frac{4k^{1/2}\alpha}{A^{1/2}}.$$ It is possible to improve the upper bound for ${\lambda_{k}^{\rm +} (A,\alpha)}$ given above, by using the more precise (and complicated) bounds given in the Appendix – see Proposition \[prop:boundkequalsquares\]. Our proof gives an explicit estimate on the constant, namely $C_1>{\frac{{\displaystyle}\pi^2}{{\displaystyle}18}}\left(7 - 2\sqrt{10}\right)\approx 0.370$ – see Sections \[sec:uk-in-eu-fat\] and \[sec:disjoint-unions\] for the derivation of the constant for rectangles and unions of rectangles, respectively. Thus, for any fixed positive $\alpha$, for $k$ sufficiently large (or equivalently, for any fixed $k$ for $\alpha$ sufficiently small, for an explicitly given value), the minimiser of ${{\lambda_{k}}(\,\cdot\,,\alpha)}$ among all unions of rectangles of fixed total area is the domain consisting of the disjoint union of $k$ equal squares. This also allows us to obtain estimates on the constants $c_{u}$ and $c_{r}$ appearing in Theorem \[thm:polya\]. We thus have, for instance, $$c_{u} \geq \left\{ \begin{array}{ll} {\frac{{\displaystyle}4\pi^2 \alpha}{{\displaystyle}A^{1/2}\left(\pi^2 + 2 \alpha A^{1/2}\right)}}, & \alpha < C_{1} k^{1/2}A^{-1/2}{ \vspace*{2mm}\\ }{\frac{{\displaystyle}4\pi^2 C_{1}}{{\displaystyle}A\left(\pi^2+2A^{1/2}C_{1}\right)}}, & \alpha \geq C_{1} k^{1/2}A^{-1/2} \end{array} \right.$$ with $C_{1}$ as above. Moreover, the form of the given relationship between $\alpha$ and $k$ is optimal: \[thm:brexit\] There exists a further absolute positive constant $C_2$ such that ${{\lambda_{k}}(\,\cdot\,,\alpha)}$ is not minimised among all finite unions of rectangles of fixed total area $A$ by ${\mathcal{U}_{k}}$ whenever $\alpha \geq C_2 k^{1/2} A^{-1/2}$. In fact, we obtain the exact form of the curve where having three of the squares of ${\mathcal{U}_{k}}$ replaced by one larger square will provide the same eigenvalue; based on results from [@anfrke] for balls and numerics it is to be expected, although it is not yet known, that this is the exact point where ${\mathcal{U}_{k}}$ stops being the optimiser. Our result again includes an explicit estimate on $C_2$; see Theorem \[transition3to1\] for the details. In fact, for fixed $k$, as $\alpha \to \infty$ the optimiser converges to its Dirichlet counterpart, as we show in Theorem \[thm:robin-dirichlet-convergence\]. Moreover, for any *fixed* domain $\Omega$ there exists a constant $C_\Omega > 0$, which numerically generally appears to be close to the numerically optimal $C_2$, such that ${{\lambda_{k}}(\Omega,\alpha)} < {{\lambda_{k}}({\mathcal{U}_{k}},\alpha)}$ whenever $\alpha \geq C_\Omega k^{1/2}A^{-1/2}$, as we show in Section \[sec:higher-dimension\]. All this highlights a sharp difference in qualitative behaviour between regions of the form $\alpha \leq c k^{1/2}A^{-1/2}$ and $\alpha \geq c k^{1/2} A^{-1/2}$. Nevertheless, the fact that, for any fixed $\alpha$, ${\mathcal{U}_{k}}$ becomes the extremal domain for all sufficiently large $k$, allows us to describe the asymptotic behaviour of the optimal values ${\lambda_{k}^{\rm +} (A,\alpha)}$ as $k\to\infty$ for fixed $A$ and $\alpha$. 
\[cor:optimal-asymptotic\] For any given positive values of the area $A$ and boundary parameter $\alpha$, $$\lim_{k\to\infty} \frac{{\lambda_{k}^{\rm +} (A,\alpha)}}{k^{1/2}} = \frac{4\alpha}{A^{1/2}},$$ and indeed, ${\lambda_{k}^{\rm +} (A,\alpha)}$ has the same asymptotic behaviour as $k$ goes to infinity as ${\mathcal{U}_{k}}$, namely, $$\label{eq:optimal-asymptotic} \begin{array}{lll} {\lambda_{k}^{\rm +} (A,\alpha)} & = & {\frac{{\displaystyle}4\alpha}{{\displaystyle}A^{1/2}}} k^{1/2} - {\frac{{\displaystyle}2\alpha^2}{{\displaystyle}3}} + {\frac{{\displaystyle}4A^{1/2}\alpha^3}{{\displaystyle}45}} k^{-1/2}{ \vspace*{2mm}\\ }& & \hspace*{5mm}- {\frac{{\displaystyle}8A\alpha^4}{{\displaystyle}945}}k^{-1} + {\frac{{\displaystyle}4A^{3/2}\alpha^5}{{\displaystyle}1475}} k^{-3/2} + {{\rm O}}(k^{-2}) \end{array}$$ as $k \to \infty$. The limit follows from the bounds in Theorem \[thm:k-squares\], while the asymptotic expansion may be obtained from that of the first Robin eigenvalue given by  by taking $a=(A/k)^{1/2}$ and noting that $a\to 0$ as $k\to\infty$. As a further consequence, we can obtain two-sided estimates on the smallest possible value of the sum of the first $k \geq 1$ eigenvalues, $$\label{eq:sumeigopt} {\sigma_{k}^{\rm +} (A,\alpha)} := \inf_\Omega \sum_{j=1}^k {{\lambda_{j}}(\Omega,\alpha)},$$ where the infimum is taken over all disjoint unions $\Omega$ of rectangles such that $|\Omega|=A$, for fixed $A>0$ and $\alpha > 0$. Li and Yau [@liyau] famously obtained a sharp lower bound on the corresponding sum in the Dirichlet case, namely $2\pi k^2/A$ (in two dimensions); the lower bound on $\gamma_k$ from is obtained as a direct consequence of it. Here, ${\sigma_{k}^{\rm +} (A,\alpha)}$ must behave asymptotically like $k^{3/2}$, not $k^2$. \[cor:sumeigopt\] Fix $A>0$ and $\alpha>0$. Then $$\frac{8}{3}A^{-1/2}\alpha \leq \liminf_{k\to\infty} \frac{{\sigma_{k}^{\rm +} (A,\alpha)}}{k^{3/2}} \leq \limsup_{k\to\infty} \frac{{\sigma_{k}^{\rm +} (A,\alpha)}}{k^{3/2}} \leq 4A^{-1/2}\alpha.$$ The method of proof of this result, which we give in Section \[sec:sums\], can be easily extended to provide explicit bounds for ${\sigma_{k}^{\rm +} (A,\alpha)}$ for any given $k$. As we saw in Theorem \[thm:polya\], even if we restrict our attention just to *rectangles* rather than disjoint unions, then we still have that the growth of the extremal eigenvalues in $k$ lies below the corresponding Weyl asymptotics. Moreover, now the corresponding sequence of optimisers is unbounded with the ratio of side lengths tending to infinity with $k$. The following theorem also gives the rate at which this happens. Here, we set $${\lambda_{k}^{\rm *} (A,\alpha)} := \inf\{{{\lambda_{k}}(\Omega,\alpha)}: \Omega \subset {\mathbb{R}}^2 \text{ is a rectangle, } |\Omega|=A\},$$ and denote by $A^{1/2}a_k^\ast$ the longer side of the rectangle yielding the optimum, for fixed $\alpha$. \[thm:rectangles\] For any fixed $A>0$, $\alpha>0$ and $k\geq 2$, there exists a positive constant $C_3$ such that $${\frac{{\displaystyle}3\pi^2\alpha^{2/3}}{{\displaystyle}A^{2/3}\left(\pi^2+2A^{1/2}\alpha\right)^{2/3}}} (k-2)^{2/3} \leq {\lambda_{k}^{\rm *} (A,\alpha)} \leq {\frac{{\displaystyle}3\pi^{2/3}\alpha^{2/3}}{{\displaystyle}A^{2/3}}} k^{2/3},$$ the lower bound holding whenever $\alpha \leq C_3 k^{1/2}$. In particular, there exists a positive constant $C_4=C_4(\alpha,A)$ such that ${\lambda_{k}^{\rm *} (A,\alpha)}\geq C_4k^{2/3}$ for all $k\geq 3$. 
In addition, $$\lim_{k\to\infty} \frac{{\lambda_{k}^{\rm *} (A,\alpha)}}{k^{2/3}} = 3\left({\frac{{\displaystyle}\pi\alpha}{{\displaystyle}A}}\right)^{2/3}.$$ Moreover, there exist constants $c_1,c_2$ depending on $\alpha$ such that $c_1 k^{2/3} \leq a_k^\ast \leq c_2 k^{2/3}$ for all $k\geq 1$. An explicit estimate for the constant $C_3$ is given in ; asymptotically valid (for large $k$) bounds on $c_1$ and $c_2$ are given in and , respectively. All of the above is in sharp contrast with what happens in the Dirichlet and Neumann cases, where the extremal sets converge to a fixed domain as $k$ becomes large [@anfr2; @anfr1]. In the next section we introduce notation and recall some basic properties of Robin eigenvalues. We then establish the existence of minimisers and prove that the square minimises the first eigenvalue among unions of rectangles with fixed area in Sections \[sec:rectangles\] and \[sec:isoperimetric\], respectively. As far as we are aware, the latter result has not previously appeared in the literature and, due to the lack of explicit solutions, is slightly more complicated to prove than its Dirichlet or Neumann counterparts. Before moving on to the proofs of Theorems \[thm:polya\]–\[thm:rectangles\], we show that, for each fixed $k\geq 1$, the Robin minimisers do in fact converge to Dirichlet minimisers as $\alpha$ goes to infinity, thus behaving in a way similar to the eigenvalues themselves in this respect (Section \[sec:convergence-to-dirichlet\]). The proofs of the main results are then given in Section \[sec:opt-rect\] (rectangles, Theorem \[thm:rectangles\]) and Section \[sec:uk-in-eu\] (unions of rectangles, Theorems \[thm:k-squares\] and \[thm:brexit\]). In Section \[sec:sums\], we give the proof of Corollary \[cor:sumeigopt\] as well as some further remarks on the problem of minimising the sum of the first $k$ Robin eigenvalues. In Section \[sec:higher-dimension\], we briefly discuss higher-dimensional versions of these results: the principles should be the same, and we indicate how the exponents of interest should depend on the dimension. Finally, in the Appendix we collect several sharp estimates for the eigenvalues of the Robin problem on a bounded interval which are used throughout the text and which we believe to be useful in their own right. We draw particular attention to the asymptotic behaviour of the first and second Robin eigenvalues of an interval as its length $a$ tends to zero, which are of orders $a^{-1}$ and $a^{-2}$, respectively; it is this differentiating behaviour that will drive many of our results. Notation and basic properties of the Robin Laplacian {#sec:notation} ==================================================== Let $\Omega \subset {\mathbb{R}}^d$, $d\geq 1$, be a bounded, not necessarily connected, domain with Lipschitz boundary $\partial\Omega$. For given $\alpha>0$, we will be interested in the eigenvalues $$0 < {{\lambda_{1}}(\Omega,\alpha)} \leq {{\lambda_{2}}(\Omega,\alpha)} \leq \ldots \to \infty$$ of the Robin Laplacian, namely the operator on $L^2(\Omega)$ formally associated with the sesquilinear form $q_\alpha: H^1(\Omega) \times H^1(\Omega) \to \mathbb{C}$ given by $$q_\alpha (u,v) = \int_\Omega \nabla u\cdot \overline{\nabla v}\,\textrm{d}x+\alpha\int_{\partial\Omega} u\overline{v}\,\textrm{d}\sigma,$$ see, e.g., [@bufrke Section 4.2] for more details. 
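For later use we record the variational characterisation that comes with this form; the bounds it yields are elementary and certainly well known, but they already display the scalings behind Theorems \[thm:polya\]–\[thm:rectangles\]. By the min–max principle, $${{\lambda_{1}}(\Omega,\alpha)} = \min_{0 \neq u \in H^1(\Omega)} \frac{\int_\Omega |\nabla u|^2\,\textrm{d}x + \alpha\int_{\partial\Omega} |u|^2\,\textrm{d}\sigma}{\int_\Omega |u|^2\,\textrm{d}x},$$ so that the constant test function $u \equiv 1$ gives $${{\lambda_{1}}(\Omega,\alpha)} \leq \alpha\,\frac{|\partial\Omega|}{|\Omega|}.$$ For an interval of length $a$ this reads ${{\lambda_{1}}(\Omega,\alpha)} \leq 2\alpha/a$, the order $a^{-1}$ mentioned at the end of the Introduction, while comparison with the Neumann problem (see Lemma \[lem:parameter-continuity\] below) shows that the second eigenvalue of the interval is at least $\pi^2/a^2$, the order $a^{-2}$. Applied to a disjoint union of $k$ equal squares of total area $A$ (the domain ${\mathcal{U}_{k}}$ introduced below), whose $k^{\rm th}$ eigenvalue is simply the first eigenvalue of one of the squares, the same test function gives the value $4\alpha k^{1/2}A^{-1/2}$ appearing as the upper bound in Theorem \[thm:k-squares\].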
The Neumann Laplacian corresponds to $\alpha=0$, while the Dirichlet Laplacian is formally obtained for $\alpha = \infty$; we will thus also write $${{\lambda_{k}}(\Omega,0)} = \mu_k (\Omega) \quad \text{and} \quad {{\lambda_{k}}(\Omega,\infty)} = \gamma_k (\Omega)$$ for the Neumann and Dirichlet eigenvalues, respectively, where ${{\lambda_{1}}(\Omega,0)}=0$. We also recall the following standard continuity result with respect to $\alpha$ (again, see, e.g., [@bufrke Section 4.2]). \[lem:parameter-continuity\] Let $\Omega \subset {\mathbb{R}}^d$, $d\geq 1$, be Lipschitz. Then for each $k\geq 1$, the mapping $$\alpha \mapsto {{\lambda_{k}}(\Omega,\alpha)}$$ is a continuous and monotonically increasing function of $\alpha \in [0,\infty]$. In particular, ${{\lambda_{k}}(\Omega,\alpha)} \to {{\lambda_{k}}(\Omega,0)} = \mu_k (\Omega)$ from above as $\alpha \to 0$, and ${{\lambda_{k}}(\Omega,\alpha)} \to {{\lambda_{k}}(\Omega,\infty)} = \gamma_k (\Omega)$ from below as $\alpha \to \infty$. When $k=1$ and $\Omega$ is connected, this function is analytic with strictly negative second derivative everywhere. The concrete choices of $\Omega$ which will be most relevant for us in the sequel will be denoted as follows:

- ${\mathcal{I}_{a}}$: any interval of length $a$;
- ${\mathcal{R}_{A}(a)}$: any rectangle of area $A$ and side lengths $A^{1/2}a$ and $A^{1/2}/a$;
- ${\mathcal{S}_{a}}={\mathcal{S}_{\sqrt{A}}}$: a square of side length $a$ and area $a^2=A$;
- ${\mathcal{U}_{k}}$: the disjoint union of $k$ equal squares of pre-specified area $A$.

We refer to Appendix \[sec:interval\] for a number of estimates on the eigenvalues ${{\lambda_{k}}({\mathcal{I}_{a}},\alpha)}$ and ${{\lambda_{k}}({\mathcal{R}_{A}(a)},\alpha)}$, in particular for small $k$, as well as a description of their asymptotic behaviour in certain parameter ranges. If $\Omega \subset {\mathbb{R}}^d$ is any domain, we denote by $$t\Omega = \{tx: x \in \Omega\}$$ its homothetic scaling by a factor of $t>0$; then we have the relation $|t\Omega|=t^d|\Omega|$. The eigenvalues of both the Dirichlet and Neumann Laplacians scale well with respect to homothetic scalings of the domain, and this property plays a prominent role in Pólya’s proofs: for any $t>0$ and any $k\geq 1$, we have $${{\lambda_{k}}(\Omega,\infty)}=t^{2/d}{{\lambda_{k}}(t\Omega,\infty)} \quad \text{and} \quad {{\lambda_{k}}(\Omega,0)}=t^{2/d}{{\lambda_{k}}(t\Omega,0)}.$$ In the Robin case, we have instead, for any given $\alpha>0$ and $k\geq 1$, $$\label{eq:homothetic-scaling} {{\lambda_{k}}(\Omega,\alpha)} = t^{2/d}{{\lambda_{k}}(t\Omega,\alpha/t)},$$ cf. [@bufrke Section 4.2.1], also for a discussion of some of the consequences of . Before proceeding, for future reference we note how homothetic scalings affect domain minimisation properties. \[lem:optimal-scaling\] Fix $A,B>0$. Suppose ${\mathcal{A}}$ is a family of domains in ${\mathbb{R}}^2$ such that $|\Omega|=A$ for all $\Omega \in {\mathcal{A}}$. Consider the family of scaled domains $${\mathcal{B}}:= \left\{ (B/A)^{1/2}\Omega:\Omega \in {\mathcal{A}}\right\}$$ (so that $|\Omega|=B$ for all $\Omega \in {\mathcal{B}}$). 1. 
If for some $k\geq 1$ and $\alpha_k>0$ there exists $\Omega^A_k \in {\mathcal{A}}$ such that $$\label{eq:scaling-opt1} {{\lambda_{k}}(\Omega^A_k,\alpha_k)} = \inf \{ {{\lambda_{k}}(\Omega,\alpha_k)}: \Omega \in {\mathcal{A}}\},$$ then the scaled domain $\Omega^B_k := B^{-1/2}\Omega^A_k \in {\mathcal{B}}$ satisfies $$\label{eq:scaling-opt2} {{\lambda_{k}}(\Omega^B_k,A^{1/2}B^{-1/2}\alpha_k)} = \inf \{ {{\lambda_{k}}(\Omega,A^{1/2}B^{-1/2}\alpha_k)}: \Omega \in {\mathcal{B}}\}$$ 2. If in (1) property holds for all $\alpha \in (0,\alpha_k]$, then $${{\lambda_{k}}(\Omega^B_k,\alpha)} = \inf \{ {{\lambda_{k}}(\Omega,\alpha)}: \Omega \in {\mathcal{B}}\}$$ for all $\alpha \in (0,A^{1/2}B^{-1/2}\alpha_k]$. \(1) This follows directly from the scaling relation $${{\lambda_{k}}(t\Omega,\alpha)} = t^{-2}{{\lambda_{k}}(\Omega,t\alpha)} \geq t^{-2}{{\lambda_{k}}(\Omega^A_k,\alpha_k)}$$ for all $\Omega \in {\mathcal{A}}$, provided $t>0$ and $\alpha>0$ are chosen such that $t\alpha = \alpha_k$. Now choose $t=A^{1/2}B^{-1/2}$ to guarantee the area condition. \(2) follows immediately from (1). Likewise, there is still a principle of Wolf–Keller type (cf. [@woke Section 8]) which characterises disjoint minimisers among a given family, see [@anfrke Theorem 2.4], but in the Robin case things are once again complicated by the scaling relation . We will now recall the result from [@anfrke] in the form in which we will need it – actually, while [@anfrke] only considered general unions of Lipschitz domains, the result is still true within smaller classes of domains: \[def:admissible-family\] Let ${\mathcal{A}}$ be a collection of planar domains and fix $A>0$. We call ${\mathcal{A}}$ an *admissible family* (for the value $A$, for short simply *admissible*) if every domain in ${\mathcal{A}}$ has area $A$ and, if $\Omega_1,\ldots,\Omega_n \in {\mathcal{A}}$ is any finite collection of connected domains in ${\mathcal{A}}$, then the disjoint union $$\Omega:= t_1\Omega_1 \cup t_2\Omega_2 \cup \ldots \cup t_n\Omega_n$$ is in ${\mathcal{A}}$ whenever the scaling factors $t_1,\ldots,t_n \in [0,1]$ are chosen such that $|\Omega|=A$, i.e., whenever $t_1^2+\ldots+t_n^2=1$. Thus an admissible family contains all possible finite disjoint unions of the connected domains in it. For example, the set of all bounded, Lipschitz domains in ${\mathbb{R}}^2$ of given area $A$ forms an admissible family, as does the set of all finite disjoint unions of rectangles of area $A$, and the set of all finite disjoint unions of disks of area $A$. We will now state a simplified version of [@anfrke Theorem 2.4] adapted to our needs, noting that the proof of [@anfrke Theorem 2.4], ostensibly for all Lipschitz domains, may be repeated verbatim for any admissible family. \[lem:wolf-keller\] Suppose ${\mathcal{A}}$ is an admissible family of planar domains for some $A>0$ in the sense of Definition \[def:admissible-family\] and suppose the disjoint set $\Omega^\ast = \Omega_1 \cup \Omega_2 \in {\mathcal{A}}$ achieves $\inf \{ {{\lambda_{k}}(\Omega,\alpha)}: \Omega \in {\mathcal{A}}\}$ for some $k\geq 2$. Then there exists some $i=1,\ldots,k-1$, as well as scaling factors $t_1$ and $t_2$ with $t_1^2 + t_2^2 = 1$ and numbers $\alpha_1$, $\alpha_2$ such that $$\Omega_1 = t_1 \Omega_i^\ast, \qquad \Omega_2 = t_2 \Omega_{k-i}^\ast,$$ with $\Omega_i$ and $\Omega_{k-i}$ realising $\inf \{ {{\lambda_{i}}(\Omega,\alpha_1)} : t_1^{-1}\Omega \in {\mathcal{A}}\}$ and $\inf \{ {{\lambda_{k-i}}(\Omega,\alpha_2)} : t_2^{-1}\Omega \in {\mathcal{A}}\}$, respectively. 
Moreover, $${{\lambda_{k}}(\Omega^\ast,\alpha)} = {{\lambda_{i}}(t_1\Omega_i^\ast,\alpha)} = {{\lambda_{k-i}}(t_2\Omega_{k-i}^\ast,\alpha)}.$$ Obviously, in the above lemma we do not rule out the possibility that $\Omega_1$ and $\Omega_2$ are themselves disconnected, meaning this principle extends inductively to all the connected components of $\Omega^\ast$. Despite the complicated way in which the Robin problem scales, blowing up a domain via a homothetic scaling always decreases the eigenvalues. \[lem:homothetic\] Suppose $\Omega \subset {\mathbb{R}}^d$ is a bounded, Lipschitz domain, $d\geq 1$, and $\alpha>0$. Then for each $k\geq 1$, the function $t \mapsto {{\lambda_{k}}(t\Omega,\alpha)}$ is continuous and strictly decreasing in $t \in (0,\infty)$. See [@anfrke Lemma 2.13]. One property that the Robin Laplacian does share with its Dirichlet and Neumann counterparts is the fact that on rectangles a complete system of eigenfunctions can be found by separation of variables, as can be shown by the usual means. A particularly important consequence is that the $k^{\rm th}$ Robin eigenvalue of a rectangle is given by a suitable sum of Robin eigenvalues of intervals corresponding to the side lengths of the rectangle. More precisely, in the notation introduced just above, given $A>0$, $\alpha>0$ and $a>0$, for any $k\geq 1$ there exists a pair $(i,j) \in {\mathbb{N}}\times {\mathbb{N}}$ such that $$\label{eq:separation-of-variables} {{\lambda_{k}}({\mathcal{R}_{A}(a)},\alpha)} = {{\lambda_{i}}({\mathcal{I}_{\sqrt{A}a}},\alpha)} + {{\lambda_{j}}({\mathcal{I}_{\sqrt{A}/a}},\alpha)};$$ moreover, every such pair corresponds to an eigenvalue of ${\mathcal{R}_{A}(a)}$. Of course, as in the Dirichlet and Neumann cases there is in general no clear relationship between $k$ on the one hand and the pair $(i,j)$ on the other. Partly for this reason, the following definition will be important. \[def:i-j-mode\] Fix the area $A>0$, the boundary parameter $\alpha>0$ and the side length $a>0$. For any positive integers $i,j$, the eigenvalue of ${\mathcal{R}_{A}(a)}$ given by $${{\lambda_{i}}({\mathcal{I}_{\sqrt{A}a}},\alpha)} + {{\lambda_{j}}({\mathcal{I}_{\sqrt{A}/a}},\alpha)}$$ will be denoted by ${\lambda_{(i,j)}({\mathcal{R}_{A}(a)},\alpha)}$ and called *the eigenvalue (of ${\mathcal{R}_{A}(a)}$) associated with the $(i,j)$ mode*. \[rem:i-j-mode\] By standard Sturm–Liouville theory, the eigenvalue ${{\lambda_{k}}({\mathcal{I}_{a}},\alpha)}$ is always simple, and its eigenfunction has exactly $k-1$ zeros in the interior of ${\mathcal{I}_{a}}$, that is, it has $k$ nodal domains. Thus there is always exactly one eigenfunction (up to scalar multiples) associated with the eigenmode ${\lambda_{(i,j)}({\mathcal{R}_{A}(a)},\alpha)}$ (even if the corresponding eigenvalue itself has higher multiplicity), and the $i\times j$ nodal domains of the eigenfunction are rectangles arranged in a grid pattern, just as in the Dirichlet and Neumann cases, with ${\lambda_{(i,j)}({\mathcal{R}_{A}(a)},\alpha)}$ being the first eigenvalue of the nodal domain with Dirichlet conditions on the edges interior to $\Omega$, and the Robin condition on those edges it has in common with $\partial\Omega$ (“exterior edges”). This means that not all nodal domains will be isometric copies of each other: the area of a nodal domain is a strictly decreasing function of its number of exterior edges; furthermore, any two nodal domains with the same number of exterior edges must be isometric to each other.
This follows from the fact that having Robin boundary conditions on a side lowers the eigenvalue, together with the monotonicity with respect to homothetic scalings. We finish this section by noting the following continuity result for the eigenvalues with respect to edge lengths. \[lem:length-continuity\] Fix $A>0$ and $\alpha>0$. Then for each $k\geq 1$, the maps $$a \mapsto {{\lambda_{k}}({\mathcal{I}_{a}},\alpha)},\qquad a \mapsto {{\lambda_{k}}({\mathcal{R}_{A}(a)},\alpha)}$$ are continuous in $a\geq 1$, the former even being analytic. The continuity of $a \mapsto {{\lambda_{k}}({\mathcal{I}_{a}},\alpha)}$ is just the continuity of the mapping $t \mapsto t\Omega$ in the special case $\Omega = {\mathcal{I}_{}}$ (Lemma \[lem:homothetic\]); the analyticity follows from the fact that the eigenvalues are given as solutions of transcendental equations in $\tan$ (or $\cot$) which are analytic functions of their parameters, and each eigenvalue is simple. For the continuity of $a \mapsto {{\lambda_{k}}({\mathcal{R}_{A}(a)},\alpha)}$, use the continuity of $a \mapsto {{\lambda_{k}}({\mathcal{I}_{a}},\alpha)}$ together with the representation , noting that for any fixed $k$, the set of values of $a$ for which the relationship $k \sim (i,j)$ changes obviously consists of isolated points, and each eigenvalue is continuous across each isolated point. Existence of minimising rectangles and unions of rectangles {#sec:rectangles} =========================================================== Let us start by giving a basic result stating that the problems we are considering are well posed: for any fixed eigenvalue and boundary parameter, there is a rectangle minimising that eigenvalue among all rectangles of given area; the same is true if we replace “rectangle” by “union of rectangles”. \[thm:existence\] Fix $k\geq 1$, $\alpha >0$ and $A>0$. Then there exists a rectangle $\mathcal{R}^\ast = \mathcal{R}^\ast (k,A,\alpha)$ of area $A$ such that $${{\lambda_{k}}(\mathcal{R}^\ast,\alpha)} = {\lambda_{k}^{\rm *} (A,\alpha)} = \inf \{ {{\lambda_{k}}(\mathcal{R},\alpha)}: \mathcal{R} \textrm{ is a rectangle of area }A \}.$$ Moreover, there exists a disjoint union of rectangles $\Omega^\ast = \Omega^\ast (k,A,\alpha)$ of total area $A$ such that $${{\lambda_{k}}(\Omega^\ast,\alpha)} = {\lambda_{k}^{\rm +} (A,\alpha)} = \inf\{ {{\lambda_{k}}(\Omega,\alpha)}: \Omega \textrm{ disjoint union of rectangles, } |\Omega|=A\}.$$ \[rem:dirichlet-existence\] If we consider instead the Dirichlet Laplacian on rectangles and unions of rectangles, i.e., if we consider ${\lambda_{k}^{\rm *} (A,\infty)}$ and ${\lambda_{k}^{\rm +} (A,\infty)}$, then for each given $k\geq 1$ and $A>0$ we can also obtain a minimising domain in each case. We omit the proof, which is an especially easy simplified version of the proof of Theorem \[thm:existence\]. 1\. First we consider the case of rectangles. Denote by ${\mathcal{R}_{A}(a_n)}$ with $a_n \geq 1$ a minimising sequence for ${\lambda_{k}^{\rm *} (A,\alpha)}$. Since $${{\lambda_{k}}({\mathcal{R}_{A}(a)},\alpha)} \geq {{\lambda_{1}}({\mathcal{R}_{A}(a)},\alpha)} = {\lambda_{(1,1)}({\mathcal{R}_{A}(a)},\alpha)} \geq {{\lambda_{1}}({\mathcal{I}_{A^{1/2}/a}},\alpha)} \to \infty$$ as $a \to \infty$ (cf. Proposition \[prop:firsteiginterv\]), there exists some $\tilde a\geq 1$ such that $a_n \leq \tilde a$ for all $n\in {\mathbb{N}}$. Thus there exists an $a^\ast \geq 1$ such that $a_n \to a^\ast$ up to a subsequence. 
The corresponding rectangle ${\mathcal{R}_{A}(a^\ast)}$ has area $A$ and, by Lemma \[lem:length-continuity\], $${{\lambda_{k}}({\mathcal{R}_{A}(a_n)},\alpha)} \to {{\lambda_{k}}({\mathcal{R}_{A}(a^\ast)},\alpha)}.$$ 2\. Now we consider unions of rectangles. Suppose $\Omega_n$ is a minimising sequence for ${\lambda_{k}^{\rm +} (A,\alpha)}$. Since the eigenvalues are monotonic with respect to homothetic scalings of the domain (see Lemma \[lem:homothetic\]), we may assume without loss of generality that each domain $\Omega_n$ has no more than $k$ connected components, and that each connected component $U$ of $\Omega_n$ is “needed” (in the sense that ${{\lambda_{1}}(U,\alpha)} \leq {{\lambda_{k}}(\Omega_n,\alpha)}$ and ${{\lambda_{k}}(\Omega_n \setminus U,\alpha)} > {{\lambda_{k}}(\Omega_n,\alpha)}$ for each $U$). By the pigeonhole principle, there exists some $\ell \leq k$ and subsequence, whose members we shall still denote by $\Omega_n$, such that each $\Omega_n$ has exactly $\ell$ connected components $U_n^1,\ldots,U_n^\ell$, and such that for each $i=1,\ldots,\ell$ there exists a fixed $j=j(i)$ such that ${{\lambda_{j}}(U_n^i,\alpha)} \leq {{\lambda_{k}}(\Omega_n,\alpha)} < {{\lambda_{j+1}}(U_n^i,\alpha)}$ (in words, each component $U_n^i$ always “contributes” the same number $j$ of eigenvalues to the first $k$ of $\Omega_n$, independently of $n$). Applying the argument for rectangles, part 1, to each connected component, we obtain a limit domain $\Omega^\ast$ of area $A$ such that $${{\lambda_{k}}(\Omega_n,\alpha)} \to {{\lambda_{k}}(\Omega^\ast,\alpha)}$$ as $n \to \infty$. Isoperimetric inequalities for the low eigenvalues {#sec:isoperimetric} ================================================== We will next prove that the square minimises the first eigenvalue among all rectangles (or indeed their unions) of given area; this may be considered an inequality of “isoperimetric” type, since the square has the least perimeter among all such domains. \[thm:lambda1\] Let $A>0$ be given. Then for any $\alpha>0$ and any finite union of disjoint rectangles $\Omega \subset {\mathbb{R}}^2$ of total area $A$, we have $${{\lambda_{1}}(\Omega,\alpha)} \geq {{\lambda_{1}}({\mathcal{S}_{\sqrt{A}}},\alpha)},$$ with equality if and only if $\Omega$ is itself a square of side length $\sqrt{A}$. Since $\alpha > 0$ is arbitrary, we may assume without loss of generality that $A=1$ (cf. Lemma \[lem:optimal-scaling\]). 1\. We start by proving the statement for rectangles. So fix $a \geq 1$ and consider ${\mathcal{R}_{1}(a)}$. By separation of variables, cf. , $${{\lambda_{1}}({\mathcal{R}_{1}(a)},\alpha)} = {{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} + {{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)},$$ where we recall ${{\lambda_{1}}({\mathcal{I}_{b}},\alpha)}$ is the smallest positive solution $\lambda$ of the equation . 
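Before giving the formal argument, it may be helpful to see the claim numerically. The following sketch (ours, purely illustrative and not part of the proof; it assumes Python 3 with SciPy, and uses the fact that ${{\lambda_{1}}({\mathcal{I}_{b}},\alpha)}$ equals $x^2$ for the smallest positive root $x$ of $\alpha = x\tan(xb/2)$, as in the transition system of Section \[sec:transition\]) evaluates $a \mapsto {{\lambda_{1}}({\mathcal{R}_{1}(a)},\alpha)} = {{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} + {{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)}$ and illustrates that it increases away from the square $a=1$.

\begin{verbatim}
# Illustration only (not used in the proof); assumes Python 3 with SciPy.
# lambda_1 of an interval of length l is x^2, where x is the smallest positive
# root of alpha = x*tan(x*l/2); by separation of variables,
# lambda_1(R_1(a)) = lambda_1(I_a) + lambda_1(I_{1/a}).
import math
from scipy.optimize import brentq

def lambda1_interval(l, alpha, eps=1e-9):
    f = lambda x: x * math.tan(x * l / 2) - alpha
    return brentq(f, eps, math.pi / l - eps) ** 2

def lambda1_rectangle(a, alpha):
    return lambda1_interval(a, alpha) + lambda1_interval(1.0 / a, alpha)

if __name__ == "__main__":
    alpha = 3.0
    for a in (1.0, 1.25, 1.5, 2.0, 4.0):
        # the values increase with a: the square a = 1 gives the minimum
        print(a, lambda1_rectangle(a, alpha))
\end{verbatim}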
To prove that ${{\lambda_{1}}({\mathcal{R}_{1}(a)},\alpha)}$ achieves a unique global minimum at $a=1$, it thus suffices to show that for any $a > 1$, $$\frac{\partial}{\partial a}{{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} \leq 0 \leq \frac{\partial}{\partial a}{{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)} \ \text{and} \ \left|\frac{\partial}{\partial a}{{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}\right| < \left|\frac{\partial}{\partial a}{{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)}\right|.$$ Differentiating $\lambda$ implicitly with respect to $a$ in , a slightly tedious but elementary calculation leads us to $$ \frac{\partial}{\partial a}{{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} = -\frac{2{{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}\alpha} {2\sin^2\left(\frac{a}{2}{\sqrt{{{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}}}\right)+a\alpha} < 0. $$ A similar calculation yields $$\frac{\partial}{\partial a}{{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)} = \frac{2{{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)}\alpha} {2a^2\sin^2\left(\frac{1}{2a}{\sqrt{{{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)}}}\right)+a\alpha} > 0.$$ Now the scaling relations and the inequality ${{\lambda_{1}}({\mathcal{I}_{1}},a\alpha)} \leq a{{\lambda_{1}}({\mathcal{I}_{1}},\alpha)}$ for $a\geq 1$ (which follows from the last assertion in Lemma \[lem:parameter-continuity\]) give $$\label{eq:1eig-interval-a-scaling-est} {{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} = a^{-2}{{\lambda_{1}}({\mathcal{I}_{1}},a\alpha)} \leq a^{-1}{{\lambda_{1}}({\mathcal{I}_{1}},\alpha)},$$ while the reverse inequality for $a^{-1}<1$, that is, ${{\lambda_{1}}({\mathcal{I}_{1}},a^{-1}\alpha)} \geq a^{-1}{{\lambda_{1}}({\mathcal{I}_{1}},\alpha)}$, implies $$\label{eq:1eig-interval-a-1-scaling-est} {{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)} = a^2{{\lambda_{1}}({\mathcal{I}_{1}},a^{-1}\alpha)} \geq a{{\lambda_{1}}({\mathcal{I}_{1}},\alpha)}.$$ Applying and to the expressions for the derivatives found above, we have $$\begin{aligned} \left|\frac{\frac{\partial}{\partial a}{{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}}{\frac{\partial}{\partial a}{{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)}}\right| &=\frac{{{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}}{{{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)}} \frac{2a^2\sin^2\left(\frac{1}{2a}{\sqrt{{{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)}}}\right)+a\alpha} {2\sin^2\left(\frac{a}{2}{\sqrt{{{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}}}\right)+a\alpha}\\ &\leq \frac{2a\sin^2\left(\frac{1}{2a}{\sqrt{{{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)}}}\right)+\alpha} {2a\sin^2\left(\frac{a}{2}{\sqrt{{{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}}}\right)+a^2\alpha}. \end{aligned}$$ To complete the proof for rectangles it suffices to show that this expression is smaller than $1$ whenever $a>1$; in this case, it is in turn sufficient to show that $$\label{eq:sine-comparison} \sin^2\left(\frac{1}{2a}{\sqrt{{{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)}}}\right) \leq \sin^2\left(\frac{a}{2}{\sqrt{{{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}}}\right).$$ But since ${{\lambda_{1}}({\mathcal{I}_{b}},\alpha)}$ is always smaller than the corresponding Dirichlet eigenvalue $\pi^2/b^2$ for any $b>0$, the arguments of the sines in are always less than $\pi/2$.
In particular, since in this range $x\mapsto \sin^2(x)$ is monotonically increasing in $x$, to establish it is sufficient to show that $$\frac{1}{2a}{\sqrt{{{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)}}} \leq \frac{a}{2}{\sqrt{{{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}}},$$ which, upon rearrangement, is equivalent to $${{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)} \leq a^4 {{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}.$$ But this now follows from the scaling relations ${{\lambda_{1}}({\mathcal{I}_{a^{-1}}},\alpha)} \leq a^2 {{\lambda_{1}}({\mathcal{I}_{1}},\alpha)}$ and ${{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} \geq a^{-2} {{\lambda_{1}}({\mathcal{I}_{1}},\alpha)}$ (cf. ). This establishes and hence the statement of the theorem for rectangles. 2\. Now suppose that $\Omega$ is a union of two or more rectangles. Then there exists some rectangle ${\mathcal{R}_{A_1}(a_1)}$ with $A_1<1=A$, such that $${{\lambda_{1}}(\Omega,\alpha)} = {{\lambda_{1}}({\mathcal{R}_{A_1}(a_1)},\alpha)}.$$ Using what we have just shown for rectangles, the inequality $A_1<1$, and the fact that ${{\lambda_{1}}(t\Omega,\alpha)}$ is a strictly monotonically decreasing function of $t>0$ for any bounded, Lipschitz domain $\Omega$ (see Lemma \[lem:homothetic\]), $${{\lambda_{1}}({\mathcal{R}_{A_1}(a_1)},\alpha)} \geq {{\lambda_{1}}({\mathcal{S}_{\sqrt{A_1}}},\alpha)} > {{\lambda_{1}}({\mathcal{S}_{1}},\alpha)},$$ which proves the theorem for $\Omega$. Theorem \[thm:lambda1\] yields as a corollary a corresponding statement concerning the second eigenvalue: that it is always minimised by the union ${\mathcal{U}_{2}}$ of two equal squares. This statement of Hong–Krahn–Szego type can be proved by the usual means. \[cor:lambda2\] Fix $A>0$. Then for any $\alpha>0$ and any finite union of disjoint rectangles $\Omega \subset {\mathbb{R}}^2$ of total area $A$, $${{\lambda_{2}}(\Omega,\alpha)} \geq {{\lambda_{2}}({\mathcal{U}_{2}},\alpha)},$$ where ${\mathcal{U}_{2}}$ is the disjoint union of two equal squares, each of side length $\sqrt{A/2}$. Equality holds if and only if $\Omega = {\mathcal{U}_{2}}$ up to rigid transformations. Again, it suffices to prove the statement for $A=1$. 1\. Suppose first $\Omega$ is a rectangle, say ${\mathcal{R}_{1}(a)}$ for some $a \geq 1$, which we assume to be centred at the origin. Then the zero (nodal) set of an eigenfunction corresponding to ${{\lambda_{2}}({\mathcal{R}_{1}(a)},\alpha)}$ is given by the set $\{(0,y): y \in (-1/(2a), 1/(2a)) \}$, and ${{\lambda_{2}}({\mathcal{R}_{1}(a)},\alpha)}$ is equal to the first eigenvalue of the Laplacian on the rectangle $(0,a/2) \times (-1/(2a), 1/(2a)) \simeq {\mathcal{R}_{1/2}(a/\sqrt{2})}$ with Dirichlet conditions on the side $\{(0,y): y \in (-1/(2a), 1/(2a)) \}$ and Robin conditions with boundary coefficient $\alpha$ on the other three sides. Using the restriction of the eigenfunction for ${{\lambda_{2}}({\mathcal{R}_{1}(a)},\alpha)}$ to either of its nodal domains ${\mathcal{R}_{1/2}(a/\sqrt{2})}$ as a test function for the problem on ${\mathcal{R}_{1/2}(a/\sqrt{2})}$ with Robin boundary conditions on all four sides yields $${{\lambda_{2}}({\mathcal{R}_{1}(a)},\alpha)} > {{\lambda_{1}}({\mathcal{R}_{1/2}(a/\sqrt{2})},\alpha)}.$$ By our isoperimetric inequality, Theorem \[thm:lambda1\], $${{\lambda_{1}}({\mathcal{R}_{1/2}(a/\sqrt{2})},\alpha)} \geq {{\lambda_{1}}({\mathcal{S}_{\sqrt{1/2}}},\alpha)} = {{\lambda_{2}}({\mathcal{U}_{2}},\alpha)},$$ and thus the corollary is true if $\Omega$ is a rectangle. 2\.
Suppose now that $\Omega$ is a union of at least two rectangles. There are two cases to consider: (1) there exists a rectangle ${\mathcal{R}_{A_1}(a_1)}$ with $A_1<1$ such that ${{\lambda_{2}}(\Omega,\alpha)} = {{\lambda_{2}}({\mathcal{R}_{A_1}(a_1)},\alpha)}$; or (2) there exist two rectangles ${\mathcal{R}_{A_2}(a_2)}$ and ${\mathcal{R}_{A_3}(a_3)}$ belonging to $\Omega$, such that $${{\lambda_{2}}(\Omega,\alpha)} = \max \{ {{\lambda_{1}}({\mathcal{R}_{A_2}(a_2)},\alpha)}, {{\lambda_{1}}({\mathcal{R}_{A_3}(a_3)},\alpha)} \}.$$ For case (1), apply our result for rectangles proved just above directly to ${\mathcal{R}_{A_1}(a_1)}$ and use that the eigenvalues are (strictly) monotonically decreasing with respect to homothetic scalings. For case (2), applying Theorem \[thm:lambda1\] to each of ${\mathcal{R}_{A_2}(a_2)}$ and ${\mathcal{R}_{A_3}(a_3)}$ separately, $${{\lambda_{2}}(\Omega,\alpha)} \geq \max \{ {{\lambda_{1}}({\mathcal{S}_{\sqrt{A_2}}},\alpha)}, {{\lambda_{1}}({\mathcal{S}_{\sqrt{A_3}}},\alpha)} \}.$$ This maximum is at least as large as ${{\lambda_{1}}({\mathcal{S}_{\sqrt{1/2}}},\alpha)} = {{\lambda_{2}}({\mathcal{U}_{2}},\alpha)}$ since at least one of $A_2$ and $A_3$ is no larger than $1/2$. For strictness of the inequality in this case, assuming $\Omega$ not to be equal to ${\mathcal{U}_{2}}$, if it has at least three connected components we may discard the superfluous one(s) and inflate ${\mathcal{R}_{A_2}(a_2)}$ and ${\mathcal{R}_{A_3}(a_3)}$ to decrease ${{\lambda_{2}}(\Omega,\alpha)}$ strictly. So assume $\Omega = {\mathcal{R}_{A_2}(a_2)} \cup {\mathcal{R}_{A_3}(a_3)}$. Then either one of these rectangles is not a square, in which case Theorem \[thm:lambda1\] yields strict inequality, or one of them has area strictly less than $1/2$, in which case it follows from the assertion on strictness in Lemma \[lem:homothetic\]. Note that for the above argument it was important that the nodal domains associated with the second eigenfunction of a rectangle are themselves rectangles, so that Theorem \[thm:lambda1\] is applicable. Convergence to the Dirichlet minimisers as $\alpha \to \infty$ {#sec:convergence-to-dirichlet} ============================================================== It is well known and easy to show that if $\Omega \subset {\mathbb{R}}^2$ is any fixed domain, then, for any fixed $k\geq 1$, we have ${{\lambda_{k}}(\Omega,\alpha)} \to {{\lambda_{k}}(\Omega,\infty)}$ as $\alpha\to\infty$, where we recall that ${{\lambda_{k}}(\Omega,\infty)}$ is the $k^{\rm th}$ Dirichlet eigenvalue (cf., e.g., [@bufrke Proposition 4.5]). Before we turn to the behaviour of the optimal values for small $\alpha>0$, or fixed $\alpha>0$ and large $k$, we will show that for any fixed $k\geq 1$ and $A>0$, we also have convergence of the optimal values to their Dirichlet counterparts as $\alpha \to \infty$. We note that this does not follow immediately from the convergence for each *fixed* domain since, in general, the optimisers depend on $\alpha>0$. \[thm:robin-dirichlet-convergence\] Fix $k\geq 1$ and $A>0$. Then, as $\alpha \to \infty$, $$\label{eq:robin-dirichlet-convergence} {\lambda_{k}^{\rm *} (A,\alpha)} \to {\lambda_{k}^{\rm *} (A,\infty)} \qquad \text{and} \qquad {\lambda_{k}^{\rm +} (A,\alpha)} \to {\lambda_{k}^{\rm +} (A,\infty)}.$$ Moreover, if $\alpha_n \to \infty$, then 1.
if ${\mathcal{R}_{A}(a_n^\ast)}$ is any sequence of rectangles realising ${\lambda_{k}^{\rm *} (A,\alpha)}$, then up to a subsequence $a_n^\ast \to a^\ast$, where ${\mathcal{R}_{A}(a^\ast)}$ realises ${\lambda_{k}^{\rm *} (A,\infty)}$; 2. if $\alpha_n \to \infty$ and $\Omega_n^\ast$ realises ${\lambda_{k}^{\rm +} (A,\alpha)}$, then there exists some $\Omega^\ast$ realising ${\lambda_{k}^{\rm +} (A,\alpha)}$ such that, up to a subsequence, $\Omega_n^\ast$ and $\Omega^\ast$ all have the same number of connected components, and, if these are numbered appropriately, then statement (1) holds for each of them separately. The necessity for considering subsequences in the above theorem comes from the fact that the minimisers may not be unique for each fixed $k\geq 1$ and $\alpha \in (0,\infty]$ (indeed, in general this seems to be unknown). One of the key tools in the proof is the following lemma, which will also play an important role in subsequent sections. It gives us control over long, thin rectangles by giving us an explicit estimate on the long side length of the rectangle necessary to ensure that the $k^{\rm th}$ eigenvalue corresponds to the $(k,1)$ mode (see Definition \[def:i-j-mode\]); this, in turn, can be estimated fairly explicitly using the bounds in Appendix \[sec:interval\]. \[lem:k-1-mode-est\] Let $k \geq 1$, $A>0$, $\alpha>0$ and $a\geq 1$. Then ${{\lambda_{k}}({\mathcal{R}_{a}(A)},\alpha)} = {\lambda_{(k,1)}({\mathcal{R}_{a}(A)},\alpha)}$ whenever $$\label{eq:a-est} a \geq k^{1/2}.$$ It suffices to show that implies $${\lambda_{(k,1)}({\mathcal{R}_{a}(A)},\alpha)} \leq {\lambda_{(1,2)}({\mathcal{R}_{a}(A)},\alpha)}.$$ Now by definition of the $(1,2)$ mode, $${\lambda_{(1,2)}({\mathcal{R}_{a}(A)},\alpha)} = {{\lambda_{1}}({\mathcal{I}_{A^{1/2}a}},\alpha)} + {{\lambda_{2}}({\mathcal{I}_{A^{1/2}a^{-1}}},\alpha)} \geq {{\lambda_{2}}({\mathcal{I}_{A^{1/2}a^{-1}}},\alpha)},$$ while $${\lambda_{(k,1)}({\mathcal{R}_{a}(A)},\alpha)} = {{\lambda_{k}}({\mathcal{I}_{A^{1/2}a}},\alpha)} + {{\lambda_{1}}({\mathcal{I}_{A^{1/2}a^{-1}}},\alpha)} \leq {{\lambda_{k}}({\mathcal{I}_{A^{1/2}a}},\infty)} + {{\lambda_{1}}({\mathcal{I}_{A^{1/2}a^{-1}}},\alpha)}.$$ We now invoke the estimate on the Fundamental Gap $${{\lambda_{2}}({\mathcal{I}_{D}},\alpha)} - {{\lambda_{1}}({\mathcal{I}_{D}},\alpha)} \geq \frac{\pi^2}{D^2}$$ valid for the Robin Laplacian in one dimension (or more generally on any domain whose first Robin eigenfunction is log-concave, see [@anclha Theorem 2.1]), with the choice $D=A^{1/2}a^{-1}$. We thus have $$ {\lambda_{(k,1)}({\mathcal{R}_{a}(A)},\alpha)} - {\lambda_{(1,2)}({\mathcal{R}_{a}(A)},\alpha)} \leq {{\lambda_{k}}({\mathcal{I}_{A^{1/2}a}},\infty)} - \frac{\pi^2 a^2}{A} = \frac{\pi^2 k^2}{A a^2} - \frac{\pi^2 a^2}{A}. $$ This is non-positive as long as $a\geq k^{1/2}$. Fix $k\geq 1$ and an arbitrary sequence $\alpha_n \to \infty$; obviously, it suffices to prove the theorem for this sequence. 1\. We start with rectangles. We first claim the existence of an $\hat a\geq 1$ such that for all $n \geq 1$ the optimal rectangle ${\mathcal{R}_{A}(a_n^\ast)}$ satisfies $a_n^\ast \leq \hat a$, i.e., the sequence of optimal rectangles is uniformly bounded in $n$. 
In fact, by Lemma \[lem:k-1-mode-est\] and the lower bound in , if $a \geq k^{1/2}$, then we have $$\begin{split} {{\lambda_{k}}({\mathcal{R}_{A}(a)},\alpha)} = {\lambda_{(k,1)}({\mathcal{R}_{A}(a)},\alpha)} &\geq {{\lambda_{1}}({\mathcal{I}_{A^{1/2}a^{-1}}},\alpha)}\\ &\geq \frac{2\pi^2\alpha}{A^{1/2}a^{-1}(\pi^2 + 2A^{1/2}\alpha a^{-1})} \to \infty \end{split}$$ uniformly in $\alpha \geq \alpha_1 > 0$ as $a \to \infty$. This proves the claim. 2\. Since $(a_n^\ast)_{n\geq 1}$ is bounded, up to a subsequence we have $a_n^\ast \to \tilde{a}$ for some $\tilde{a} \in [1,\hat a]$. We claim that the corresponding rectangle ${\mathcal{R}_{A}(\tilde{a})}$ realises ${\lambda_{k}^{\rm *} (A,\infty)}$. Now by the pigeonhole principle, up to another subsequence, for each $\ell = 1,\ldots,k$ there exist $1\leq i_\ell,j_\ell\leq \ell$ such that $${{\lambda_{\ell}}({\mathcal{R}_{A}(a_n^\ast)},\alpha_n)} = {\lambda_{(i_\ell,j_\ell)}({\mathcal{R}_{A}(a_n^\ast)},\alpha_n)} = {{\lambda_{i_\ell}}({\mathcal{I}_{A^{1/2}a_n^\ast}},\alpha_n)} + {{\lambda_{j_\ell}}({\mathcal{I}_{A^{1/2}(a_n^\ast)^{-1}}},\alpha_n)}$$ for all $n$. 3\. We claim that for any $k\geq 1$ and any sequences of numbers $a_n\geq 1$ such that $a_n \to a$ and $\alpha_n \to \infty$, we have $$\label{eq:1-d-diag-conv} {{\lambda_{k}}({\mathcal{I}_{a_n}},\alpha_n)} \to {{\lambda_{k}}({\mathcal{I}_{a}},\infty)}.$$ To prove , we rescale, cf. : writing ${\mathcal{I}_{a}} = \frac{a}{a_n}{\mathcal{I}_{a_n}}$, $${{\lambda_{k}}({\mathcal{I}_{a_n}},\alpha_n)} = \left(\frac{a}{a_n}\right)^2 {{\lambda_{k}}({\mathcal{I}_{a}},\alpha_n a_n/a)}.$$ Since $a/a_n \to 1$ and $\alpha_n a_n/a \to \infty$ as $n\to \infty$, it follows from Lemma \[lem:length-continuity\] that $${{\lambda_{k}}({\mathcal{I}_{a}},\alpha_n a_n/a)} \to {{\lambda_{k}}({\mathcal{I}_{a}},\infty)},$$ which proves the claim. 4\. Combining Steps 2 and 3, we conclude that $${{\lambda_{\ell}}({\mathcal{R}_{A}(a_n^\ast)},\alpha_n)} = {\lambda_{(i_\ell,j_\ell)}({\mathcal{R}_{A}(a_n^\ast)},\alpha_n)}\to {\lambda_{(i_\ell,j_\ell)}({\mathcal{R}_{A}(\tilde{a})},\infty)}$$ for all $\ell=1,\ldots,k$. By induction on $\ell$, we also obtain ${\lambda_{(i_\ell,j_\ell)}({\mathcal{R}_{A}(\tilde{a})},\infty)} = {{\lambda_{\ell}}({\mathcal{R}_{A}(\tilde{a})},\infty)}$ and in particular ${{\lambda_{k}}({\mathcal{R}_{A}(a_n^\ast)},\alpha_n)} \to {{\lambda_{k}}({\mathcal{R}_{A}(\tilde{a})},\infty)} $. Since $${{\lambda_{k}}({\mathcal{R}_{A}(a_n^\ast)},\alpha_n)} = {\lambda_{k}^{\rm *} (A,\alpha_n)} \leq {\lambda_{k}^{\rm *} (A,\infty)} \leq {{\lambda_{k}}({\mathcal{R}_{A}(\tilde{a})},\infty)},$$ the first inequality following since the same is true of any fixed domain, we thus have ${\lambda_{k}^{\rm *} (A,\infty)} = {{\lambda_{k}}({\mathcal{R}_{A}(\tilde{a})},\infty)}$, and ${\mathcal{R}_{A}(\tilde{a})}$ is a minimiser. Moreover, since we have shown that every sequence $\alpha_n \to \infty$ has a subsequence along which ${\lambda_{k}^{\rm *} (A,\alpha_n)} \to {\lambda_{k}^{\rm *} (A,\infty)}$, the hair-splitting lemma implies the convergence of the whole sequence. 5\. Finally, we deal with unions of rectangles. We assume that $\Omega_n^\ast$, not necessarily connected, realises ${\lambda_{k}^{\rm +} (A,\alpha_n)}$.
Up to a subsequence each $\Omega_n^\ast$ has some fixed number $m\geq 1$ of connected components (i.e., rectangles) $U_{1,n},\ldots, U_{m,n}$ and (up to a further subsequence and a possible relabelling of the $U_{j,n}$) there exist numbers $i_1,\ldots,i_m$ such that $i_1+\ldots +i_m = k$ and $${{\lambda_{k}}(\Omega_n^\ast,\alpha_n)} = {{\lambda_{i_1}}(U_{1,n},\alpha_n)}=\ldots ={{\lambda_{i_m}}(U_{m,n},\alpha_n)}$$ for all $n$ (see Lemma \[lem:wolf-keller\]). Now the argument of Steps 2 and 3, applied to each of the connected components, implies the existence of a domain $$\Omega^\ast = U_1 \cup \ldots \cup U_m$$ for rectangles $U_1,\ldots,U_m$, such that $|\Omega^\ast|=A$ and, up to a further subsequence, $${{\lambda_{\ell}}(\Omega_n^\ast,\alpha_n)} \to {{\lambda_{\ell}}(\Omega^\ast,\infty)}$$ as $n\to \infty$ for each $\ell=1,\ldots,k$, since for each connected component we can find a further subsequence for which this is true for that connected component. In particular, $${{\lambda_{k}}(A,\infty)} \geq {\lambda_{k}^{\rm +} (A,\alpha_n)} = {{\lambda_{k}}(\Omega_n^\ast,\alpha_n)} \to {{\lambda_{k}}(\Omega^\ast,\infty)} \geq {{\lambda_{k}}(A,\infty)},$$ implying the optimality of $\Omega^\ast$. Moreover, the same argument as before using the hair-splitting lemma implies ${\lambda_{k}^{\rm +} (A,\alpha_n)} \to {{\lambda_{k}}(A,\infty)}$ for the whole sequence. Optimal rectangles: Proof of Theorem \[thm:rectangles\] {#sec:opt-rect} ======================================================= In this section we prove that ${\lambda_{k}^{\rm *} (A,\alpha)}$ grows like $k^{2/3}$ for fixed $\alpha$, at the same time giving asymptotically sharp two-sided bounds. The argument consists of two parts: firstly, we obtain the desired estimate for $(k,1)$ modes (cf. Definition \[def:i-j-mode\]); then we show that for $k$ large enough (depending on $\alpha$ and $A$) the $k^{\rm th}$ eigenvalue of the optimal rectangle is given by its $(k,1)$ mode. For this, we will need to introduce and give a rough estimate on the eigenvalue counting function of the Robin Laplacian on a fixed domain. We present each part in a separate subsection. Two-sided bounds on the $(k,1)$ mode ------------------------------------ We start with the $(k,1)$ mode. Note that the bounds in the following estimate correspond exactly to those in Theorem \[thm:rectangles\], although we have them for a different range of $A$, $\alpha$, $k$. Notationally, we set $$\label{eq:k-1-mode-opt} {\lambda_{(k,1)}^{\rm rect} (A,\alpha)} := \inf \left\{ {\lambda_{(k,1)}({\mathcal{R}_{A}(a)},\alpha)} : a\geq 1\right\}$$ to be the smallest value attainable by a $(k,1)$ mode. \[lem:k-1-mode-bounds\] For any $A>0$, $\alpha>0$ and $k\geq 2$, we have the bounds $$\label{eq:k-1-mode-bounds} {\frac{{\displaystyle}3\pi^2\alpha^{2/3}}{{\displaystyle}\left(\pi^2+2A^{1/2}\alpha\right)^{2/3}A^{2/3}}}(k-2)^{2/3} \leq {\lambda_{(k,1)}^{\rm rect} (A,\alpha)} \leq 3\left({\frac{{\displaystyle}\pi\alpha}{{\displaystyle}A}}\right)^{2/3} k^{2/3},$$ the upper bound holding provided $\alpha \leq \pi^2 A^{-1/2}k^2$. Moreover, the infimum in is attained by a rectangle whose side length is proportional to $k^{2/3}$ for large $k$ and fixed $A,\alpha>0$. 
Finally, $$\label{eq:k-1-opt-asymptotics} \lim_{k\to\infty} \frac{{\lambda_{(k,1)}^{\rm rect} (A,\alpha)}}{k^{2/3}} = 3\left({\frac{{\displaystyle}\pi\alpha}{{\displaystyle}A}}\right)^{2/3}.$$ By definition of the $(k,1)$ mode, we have $$\label{eq:k-1-breakdown} \begin{split} {\lambda_{(k,1)}({\mathcal{R}_{A}(a)},\alpha)} &= {{\lambda_{k}}({\mathcal{I}_{A^{1/2}a}},\alpha)} + {{\lambda_{1}}({\mathcal{I}_{A^{1/2}a^{-1}}},\alpha)}\\ &= {{\lambda_{1}}({\mathcal{I}_{A^{1/2}\tilde a}},\infty)} + {{\lambda_{1}}({\mathcal{I}_{A^{1/2}a^{-1}}},\alpha)} \end{split}$$ for some $\tilde a \in \left[\frac{a}{k},\frac{a}{k-2}\right]$, where the second equality comes about from restricting to any one of the $k-2$ identical nodal domains of the corresponding eigenfunction which do not touch the shorter sides of the rectangle (cf. Remark \[rem:i-j-mode\]). For the upper bound, we use the monotonicity of the Dirichlet eigenvalue with respect to shrinking the interval and the bound , applied to ${{\lambda_{1}}({\mathcal{I}_{A^{1/2}\tilde a}},\infty)}$ and ${{\lambda_{1}}({\mathcal{I}_{A^{1/2}a^{-1}}},\alpha)}$, respectively, to obtain $${\lambda_{(k,1)}({\mathcal{R}_{A}(a)},\alpha)} \leq \frac{\pi^2 k^2}{Aa^2} + \frac{2\alpha a}{A^{1/2}}.$$ We now make the *Ansatz* $a=c_1 k^{2/3}$ (for some $c_1>0$ which may *a priori* depend on $k$, i.e., formally, we take $c_1:=ak^{-2/3}$); then, switching to considering the infimum over all $a\geq 1$, $${\lambda_{(k,1)}^{\rm rect} (A,\alpha)} \leq \inf_{c_1\geq k^{-2/3}} \pi^2k^{2/3}\left[ \frac{1}{Ac_1^2}+\frac{2\alpha}{\pi^2 A^{1/2}}c_1\right].$$ The infimum over $c_1>0$ is obtained independently of $k\geq 1$ at $$\label{eq:c1} c_1=\pi^{2/3}A^{-1/6}\alpha^{-1/3},$$ resulting in a right-hand side of value $$3\pi^{2/3}A^{-2/3}\alpha^{2/3}k^{2/3};$$ this is valid provided this minimum occurs when $a=c_1 k^{2/3} \geq 1$, that is, $$\pi^{2/3}A^{-1/6}\alpha^{-1/3}k^{2/3} \geq 1,$$ which after simplification reduces to $\alpha \leq \pi^2 A^{-1/2}k^2$. Observe also that the choice of $c_1$ corresponds to $$\label{eq:a-upper-bound} a = \pi^{2/3}A^{-1/6}\alpha^{-1/3}k^{2/3} \sim k^{2/3}$$ as $k\to \infty$, if the other parameters are fixed. For the lower bound, we again start from but this time stretch the Dirichlet interval and use the lower bound in to obtain $$\label{eq:k-1-lower-bound} \begin{split} {\lambda_{(k,1)}({\mathcal{R}_{A}(a)},\alpha)} &\geq\frac{\pi^2 (k-2)^2}{Aa^2}+\frac{2\pi^2\alpha a}{A^{1/2}\left(\pi^2+\frac{2A^{1/2}\alpha}{a}\right)}\\ &\geq \frac{\pi^2(k-2)^2}{Aa^2} + \frac{2\pi^2\alpha a}{\pi^2 A^{1/2} + 2A\alpha}, \end{split}$$ the last inequality following since $a\geq 1$.
We now make the *Ansatz* $a=c_2 (k-2)^{2/3}$ to obtain $${\lambda_{(k,1)}^{\rm rect} (A,\alpha)} \geq \inf_{c_2\geq (k-2)^{-2/3}} \pi^2(k-2)^{2/3} \left[\frac{1}{A c_2^2} + \frac{2\alpha}{\pi^2 A^{1/2} +2A\alpha}c_2\right].$$ Obviously, the infimum can only become smaller if we look at all $c_2>0$; in this case, we again obtain a global minimiser at a value of $c_2$ independent of $k$, namely $$\label{eq:a-lower-bound} c_2 = \left(\frac{A^{1/2}\alpha}{\pi^2+2A^{1/2}\alpha}\right)^{-1/3},$$ corresponding to $a=c_2(k-2)^{2/3} \sim k^{2/3}$ and resulting in the lower bound $${\lambda_{(k,1)}^{\rm rect} (A,\alpha)} \geq 3\pi^2\left(\pi^2 + 2A^{1/2}\alpha\right)^{-2/3}A^{-2/3}\alpha^{2/3}(k-2)^{2/3}.$$ In particular, this together with a standard compactness argument establishes the existence of a minimiser for ${\lambda_{(k,1)}^{\rm rect} (A,\alpha)}$ for every admissible combination of parameters. Moreover, combined with the upper bound it also shows that the long side length of the optimiser must be proportional to $k^{2/3}$. Finally, to establish , we refine the second inequality in . Namely, since we now know that the optimal side length behaves like $k^{2/3}$ and in particular tends to $\infty$ with $k$, for every $\varepsilon>0$ there exists some $k_\varepsilon \geq 1$ such that $2A\alpha/a_k^\ast < \varepsilon$ for all $k\geq k_\varepsilon$, where $a_k^\ast$ is the optimal side length value corresponding to ${\lambda_{(k,1)}^{\rm rect} (A,\alpha)}$. This leads to the improved lower bound $${\lambda_{(k,1)}^{\rm rect} (A,\alpha)} \geq \inf_{c_2>0} \pi^2(k-2)^{2/3} \left[\frac{1}{A c_2^2} + \frac{2\alpha}{\pi^2 A^{1/2} +\varepsilon}c_2\right],$$ provided $k\geq k_\varepsilon$ is large enough. This, in turn, leads to $$\frac{{\lambda_{(k,1)}^{\rm rect} (A,\alpha)}}{k^{2/3}} \geq 3\pi^2\left(\pi^2+\varepsilon\right)^{-2/3}A^{-2/3} \alpha^{2/3}\left(\frac{k-2}{k}\right)^{2/3}$$ for all $k\geq k_\varepsilon$. Letting $k\to \infty$ and then passing to the limit as $\varepsilon\to 0$ yields , when combined with the upper bound from . \[rem:k-1-balance\] The power $k^{2/3}$ comes from balancing the effect of the first Dirichlet eigenvalue of an interval of length $\sim a/k$ with the first Robin eigenvalue of an interval of length $\sim a^{-1}$. An estimate on the eigenvalue counting function of the Robin Laplacian on rectangles {#sec:eigcount} ------------------------------------------------------------------------------------ For given numbers $A,\alpha>0$ and $a\geq 1$, we introduce the counting function $$\label{eq:eigcount-def} {N_{{\mathcal{R}_{A}(a)},\alpha}(\lambda)} := \# \{k: {{\lambda_{k}}({\mathcal{R}_{A}(a)},\alpha)} \leq \lambda \},$$ for positive values of the parameter $\lambda$. We will give a simple but effective upper estimate on this function. \[lem:eigcount\] Fix $\alpha>0$ and $A>0$. Then, for any $a \geq 1$, $$\label{eq:eigcount-bound} {N_{{\mathcal{R}_{A}(a)},\alpha}(\lambda)} \leq \frac{\lambda A}{\pi^2} + \frac{(\lambda A)^{1/2}}{\pi}\left(a+\frac{1}{a}\right) + 1$$ for all $\lambda>0$. Observe that the bound on the right-hand side of is independent of $\alpha > 0$, and indeed, for the proof, we will actually show that is an upper bound on the eigenvalue counting function of the *Neumann* Laplacian. Since one is interested in maximising, not minimising, the eigenvalues of the Neumann Laplacian, previous works have correspondingly only given *lower* bounds on the Neumann counting function (see, e.g., [@vdbbugi]). 
Although our bound is actually quite rough even in the Neumann case, not to mention loss in going from the Robin to the Neumann condition), it will still be sufficient to give the correct power relationship between $\alpha$ and $k$ in Theorem \[thm:k-squares\]. Monotonicity of the eigenvalues with respect to $\alpha \geq 0$ means that $\alpha \mapsto {N_{{\mathcal{R}_{A}(a)},\alpha}(\lambda)}$ is a *decreasing* function (for fixed $A$, $a$ and $\lambda$); hence, as just noted, it suffices to prove when $\alpha = 0$. Now the eigenvalues of the Neumann Laplacian are solutions $\lambda$ of $$\label{eq:lattice-count} \lambda = \frac{\pi^2}{Aa^2} x^2 + \frac{\pi^2}{A}a^2 y^2,$$ where $x,y$ are nonnegative integers. In particular, ${N_{{\mathcal{R}_{A}(a)},0}(\lambda)}$ gives, for fixed $\lambda$, the number of integer-valued lattice points in the first quadrant of ${\mathbb{R}}^2$ (including the $x$- and $y$-axes) lying below the curve described by . This number is no larger than the number of lattice points within the rectangle having the same intercepts as the curve in , namely $(\lambda A)^{1/2}a/\pi$ and $(\lambda A)^{1/2}/(\pi a)$, respectively. But the number of lattice points within this rectangle is certainly not more than $$\left(\frac{(\lambda A)^{1/2}a}{\pi}+1\right)\left(\frac{(\lambda A)^{1/2}}{\pi a}+1\right),$$ which is exactly the bound in . Completion of the proof of Theorem \[thm:rectangles\] ----------------------------------------------------- Here we combine the previous results to prove Theorem \[thm:rectangles\]. Indeed, by Lemma \[lem:k-1-mode-bounds\], the two-sided bounds in Theorem \[thm:rectangles\] are true whenever the $k^{\rm th}$ eigenvalue of an optimising rectangle for ${\lambda_{k}^{\rm *} (A,\alpha)}$ is given by its $(k,1)$ mode, provided that also $\alpha \leq \pi^2 A^{-1/2}k^2$ as required by Lemma \[lem:k-1-mode-bounds\]; the statement about the asymptotic behaviour of ${\lambda_{k}^{\rm *} (A,\alpha)}$ follows directly once we have our two-sided bounds. Now we know by Lemma \[lem:k-1-mode-est\] that the $k^{\rm th}$ eigenvalue is always given by the $(k,1)$ mode whenever $a\geq k^{1/2}$; we thus have to consider $a\leq k^{1/2}$. To obtain the optimal power relationship between $\alpha$ and $k$, that is, that the theorem is true for a region of the form $\{ \alpha \leq Ck^{1/2}\}$, we need to divide this into two subcases: (1) $a \geq C(A,\alpha)k^{1/3}$, and (2) $a\leq C(A,\alpha)k^{1/3}$, where $$C(A,\alpha):=3^{1/2}\pi^{-2/3}A^{1/6}\alpha^{1/3}.$$ In case (1), we simply show that the $(1,2)$ mode is always larger than the upper bound on ${\lambda_{(k,1)}^{\rm rect} (A,\alpha)}$ from Lemma \[lem:k-1-mode-bounds\]. Indeed, we have $${\lambda_{(1,2)}({\mathcal{R}_{A}(a)},\alpha)} \geq {{\lambda_{2}}({\mathcal{I}_{A^{1/2}a^{-1}}},\alpha)} > {{\lambda_{2}}({\mathcal{I}_{A^{1/2}a^{-1}}},0)} = \frac{\pi^2 a^2}{A}.$$ Then $$\frac{\pi^2 a^2}{A} \geq 3\pi^{2/3}A^{-2/3}\alpha^{2/3}k^{2/3}$$ provided $a\geq C(A,\alpha)k^{1/3}$. (Note that for this argument to work we do not require $a\leq k^{1/2}$; that is, it holds even if $C(A,\alpha)k^{1/3} \geq k^{1/2}$.) In case (2), it suffices to show using the counting function that any rectangle for which $a\leq C(A,\alpha)k^{1/3}$ has a higher $k^{\rm th}$ eigenvalue than the upper estimate on ${\lambda_{k}^{\rm *} (A,\alpha)}$ from Lemma \[lem:k-1-mode-bounds\]. 
More precisely, we wish to show that, for any $a\leq C(A,\alpha) k^{1/3}$, $${N_{{\mathcal{R}_{A}(a)},\alpha}(3\pi^{2/3}A^{-2/3}\alpha^{2/3}k^{2/3})}={N_{{\mathcal{R}_{A}(a)},\alpha}(\pi^2 A^{-1}C(A,\alpha)^2 k^{2/3})}\leq k.$$ By Lemma \[lem:eigcount\] (more precisely, ) and the fact that the function $a+a^{-1} \leq 2a$ reaches its maximum for $a \in [1,C(A,\alpha)k^{1/3}]$ at $a=C(A,\alpha)k^{1/3}$ (assuming without loss of generality that $C(A,\alpha)k^{1/3}\geq 1$, since otherwise case (1) always holds), it suffices to have $$3C(A,\alpha)^2 k^{2/3} + 1 \leq k.$$ Using the crude bound $k-1\geq k/2$ for $k\geq 2$, this is satisfied provided $$\label{eq:rectangles-rectangles-comparison} \alpha^2 \leq \frac{\pi^4}{18^3 A}k.$$ Concluding, for Theorem \[thm:rectangles\] to hold, in addition to it suffices that $\alpha \leq \pi^2 A^{-1/2}k^2$; but since $k\geq 1$, this latter condition is always implied by . Hence we see that is by itself sufficient for Theorem \[thm:rectangles\]. Minimality of $k$ equal squares: Proof of Theorems \[thm:k-squares\] and \[thm:brexit\] {#sec:uk-in-eu} ======================================================================================= Recall that ${\mathcal{U}_{k}}$ denotes the disjoint union of $k$ equal squares, of total area $A>0$. To prove Theorem \[thm:k-squares\], it suffices to prove the optimality of ${\mathcal{U}_{k}}$ in the claimed region. The two-sided estimate on ${\lambda_{k}^{\rm +} (A,\alpha)}$ is then simply the two-sided estimate on ${{\lambda_{k}}({\mathcal{U}_{k}},\alpha)}$ which comes from combining the lower estimate from Proposition \[prop:boundkequalsquares\] and the upper estimate from , cf. also . Our proof of the optimality of ${\mathcal{U}_{k}}$ uses the following strategy, firstly dealing with rectangles: 1. we show that the $(k,1)$ mode of any rectangle always has a larger $k^{\rm th}$ eigenvalue than ${\mathcal{U}_{k}}$, meaning that whenever $a\geq k^{1/2}$ we always have ${{\lambda_{k}}({\mathcal{R}_{A}(a)},\alpha)} > {{\lambda_{k}}({\mathcal{U}_{k}},\alpha)}$ (cf. Lemma \[lem:k-1-mode-est\]); 2. for $a \leq k^{1/2}$ we use the eigenvalue counting function ${N_{{\mathcal{R}_{A}(a)},\alpha}(\,\cdot\,)}$ from Section \[sec:eigcount\] together with a simple estimate on ${{\lambda_{k}}({\mathcal{U}_{k}},\alpha)} = {{\lambda_{1}}({\mathcal{S}_{(A/k)^{1/2}}},\alpha)}$ to show that for sufficiently large $k$ (depending on $\alpha$ in the fashion claimed in the statement of the theorem), we have ${{\lambda_{k}}({\mathcal{R}_{A}(a)},\alpha)} > {{\lambda_{1}}({\mathcal{S}_{(A/k)^{1/2}}},\alpha)}$; 3. to consider unions of rectangles, we proceed by induction on $k$. We will give each step of the proof in a separate subsection; the proof of Theorem \[thm:brexit\] is then given in a further subsection at the end. Proof of Theorem \[thm:k-squares\] for long, thin rectangles via the $(k,1)$ mode {#sec:uk-in-eu-thin} --------------------------------------------------------------------------------- We start with the following important observation. \[lem:k-1-mode-vs-unionsquare\] For any $\alpha>0$, $A>0$, $k\geq 2$ and $a\geq 1$, we have $${\lambda_{(k,1)}({\mathcal{R}_{A}(a)},\alpha)} \geq {{\lambda_{1}}({\mathcal{S}_{(A/k)^{1/2}}},\alpha)} = {{\lambda_{k}}({\mathcal{U}_{k}},\alpha)}.$$ The argument is essentially the same as in Corollary \[cor:lambda2\].
The $(k,1)$ mode eigenvalue is equal to the first eigenvalue of either of the “end” nodal domains, i.e., those which touch either of the shorter sides of ${\mathcal{R}_{A}(a)}$ (see Remark \[rem:i-j-mode\]). This, in turn, is equal to the sum of the first Robin eigenvalue of an interval of length $A^{1/2}/a$ and the first eigenvalue of a mixed Dirichlet-Robin problem on an interval of some length $A^{1/2}\hat{a}$, where $\hat{a} \leq a/k$. Replacing the Dirichlet condition by a Robin one and using Lemma \[lem:homothetic\] applied to the same interval, this means ${\lambda_{(k,1)}({\mathcal{R}_{A}(a)},\alpha)}$ is larger than the first Robin eigenvalue of a rectangle of side lengths $A^{1/2}/a$ and $A^{1/2}a/k$. Theorem \[thm:lambda1\] applied to this rectangle completes the proof. This immediately has the following consequence, which we summarise as a lemma for future reference. \[lem:uk-in-eu-thin\] Suppose $a \geq k^{1/2}$. Then, for any $\alpha>0$, $A>0$ and $k\geq 2$, we have $${{\lambda_{k}}({\mathcal{R}_{A}(a)},\alpha)} \geq {{\lambda_{k}}({\mathcal{U}_{k}},\alpha)}.$$ Combine Lemma \[lem:k-1-mode-vs-unionsquare\] and Lemma \[lem:k-1-mode-est\]. Proof of Theorem \[thm:k-squares\] for relatively fat rectangles via the eigenvalue counting function {#sec:uk-in-eu-fat} ----------------------------------------------------------------------------------------------------- Here, we wish to consider ${\mathcal{R}_{A}(a)}$ for $a \leq k^{1/2}$. We start by recalling the following upper bound on ${{\lambda_{k}}({\mathcal{U}_{k}},\alpha)} = {{\lambda_{1}}({\mathcal{S}_{(A/k)^{1/2}}},\alpha)}$ from : $$\label{eq:recall-square-bound} {{\lambda_{1}}({\mathcal{S}_{(A/k)^{1/2}}},\alpha)} = 2{{\lambda_{1}}({\mathcal{I}_{(A/k)^{1/2}}},\alpha)} < \frac{4k^{1/2}\alpha}{A^{1/2}}.$$ \[lem:square-fat\] Suppose, given $A>0$ and $\alpha>0$, that $k \geq 3$ is such that $$\label{eq:ugly-bound} \frac{4A^{1/2}k^{1/2}\alpha}{\pi^2}+\frac{2A^{1/4}k^{1/4}\alpha^{1/2}}{\pi} \left(k^{1/2} + k^{-{1/2}}\right) + 1 \leq k.$$ Then, for any $a \leq k^{1/2}$, we have ${{\lambda_{k}}({\mathcal{R}_{A}(a)},\alpha)} \geq {{\lambda_{k}}({\mathcal{U}_{k}},\alpha)}$. The proof uses the eigenvalue counting function ${N_{\Omega,\alpha}(\lambda)}$ defined in Section \[sec:eigcount\]. By , we know that ${N_{{\mathcal{U}_{k}},\alpha}(4k^{1/2}\alpha/A^{1/2})} \geq k$; hence, to prove the lemma, it suffices to show that, assuming and $a \leq k^{1/2}$, $${N_{{\mathcal{R}_{A}(a)},\alpha}(4k^{1/2}\alpha/A^{1/2})} \leq k.$$ To show this, we first observe that the bound from Lemma \[lem:eigcount\] is a monotonically increasing function of $a \geq 1$, so it suffices to consider the extremal case $a=k^{1/2}$ in , that is, $${N_{{\mathcal{R}_{A}(a)},\alpha}(4k^{1/2}\alpha/A^{1/2})} \leq\frac{4A^{1/2}k^{1/2}\alpha}{\pi^2}+\frac{2A^{1/4}k^{1/4}\alpha^{1/2}}{\pi} \left(k^{1/2} + k^{-{1/2}}\right) + 1.$$ Thus guarantees that ${N_{{\mathcal{R}_{A}(a)},\alpha}(4k^{1/2}\alpha/A^{1/2})} \leq k$. Before proceeding, let us give a somewhat weaker but considerably simpler alternative to , which still gives the correct power dependence between $\alpha$ and $k$. If we posit a relationship of the form $\alpha = Ck^{1/2}$ in , we obtain $$\label{eq:not-so-ugly-bound} \frac{4A^{1/2}k}{\pi^2}C + \frac{(2k+2)A^{1/4}}{\pi}C^{1/2} + 1 \leq k.$$ The corresponding equality is a quadratic equation in $A^{1/4}C^{1/2}/\pi$ with a unique positive solution $$\frac{A^{1/4}C^{1/2}}{\pi} = \frac{-k-1+\sqrt{5k^2-2k+1}}{4k}$$ below which and thus hold.
An elementary but tedious calculation shows that this solution is monotonically increasing in $k\geq 3$; thus, to obtain a universally valid bound, it suffices to take $k=3$ (the case $k=2$ being covered by Corollary \[cor:lambda2\]). In this case reduces to $$\frac{12}{\pi^2}A^{1/2}C + \frac{8}{\pi}A^{1/4}C^{1/2} - 2 \leq 0.$$ The largest possible value of $C$ for which this holds is $$C = \left(\frac{\sqrt{10}-2}{6}\right)^2 \pi^2 A^{-1/2} = \frac{\pi^2}{18}(7-2\sqrt{10})A^{-1/2}.$$ Thus, using the fact that if holds for some $\alpha_0>0$, then it holds for all $\alpha \in (0,\alpha_0)$, we see that to satisfy and thus obtain Theorem \[thm:k-squares\] for any rectangle ${\mathcal{R}_{A}(a)}$ with $a \leq k^{1/2}$ it is certainly sufficient that $$\label{eq:less-ugly-bound} \alpha \leq Ck^\frac{1}{2} = \frac{\pi^2}{18}(7-2\sqrt{10}) A^{-1/2}k^{1/2} \approx 0.370 A^{-1/2}k^{1/2}.$$ We finish this subsection by summarising how the above steps complete the proof of Theorem \[thm:k-squares\] for rectangles. Fix $\alpha>0$. Choose $k^\ast$ to be the smallest $k\geq 3$ such that (or ) holds for this $k^\ast = k^\ast (A,\alpha)$ (and hence also for all $k\geq k^\ast$). Now fix $a \geq 1$ and $k \geq k^\ast$. If $a \leq k^\frac{1}{2}$, then by Lemma \[lem:square-fat\], we have ${{\lambda_{k}}({\mathcal{R}_{1}(a)},\alpha)} > {{\lambda_{k}}({\mathcal{U}_{k}},\alpha)}$. If $a \geq k^\frac{1}{2}$, then we may apply Lemma \[lem:uk-in-eu-thin\]. Finally, note that gives an explicit estimate on $k^\ast$. Indeed, reformulating (which still exhibits the asymptotically correct power relationship between $k^\ast$, $A$ and $\alpha$), $$k^\ast \geq \left(\frac{18}{7-2\sqrt{10}}\right)^2\frac{A}{\pi^4} \alpha^2 \approx 7.291 A \alpha^2$$ is sufficient. Disjoint unions of rectangles {#sec:disjoint-unions} ----------------------------- We start by formulating an abstract result on minimisers of disjoint unions of domains. We change perspective slightly: instead of fixing $\alpha$ and showing that a certain type of domain minimises the $k^{\rm th}$ eigenvalue for $k$ large enough, it will be more useful to fix $k$ and consider the corresponding range of $\alpha$ small enough. Here we do not restrict ourselves to rectangles: we will work with a general admissible family in the sense of Definition \[def:admissible-family\]. \[lem:disjoint-growth\] Fix $A>0$ and suppose ${\mathcal{A}}$ is an admissible family of planar domains for $A$, such that for all $k\geq 1$ and $\alpha >0$ there is a domain in ${\mathcal{A}}$ realising $$\inf \{ {{\lambda_{k}}(\Omega,\alpha)}: \Omega \in {\mathcal{A}}\}.$$ Suppose also that, when $k=1$, the minimiser $\Omega^\ast \in {\mathcal{A}}$ is independent of $\alpha>0$. Assume in addition that there exists a sequence of numbers $$0 < \alpha_{2} \leq \alpha_{3} \leq \alpha_{4} \leq \ldots$$ such that, for any $k \geq 2$, $$\inf \{ {{\lambda_{k}}(\Omega,\alpha)} : \Omega \in {\mathcal{A}}\text{ is connected}\}$$ is realised by the (non-connected) domain $\Omega_k^\ast$ consisting of $k$ equal copies of $\frac{1}{\sqrt{k}}\Omega^\ast$ for all $\alpha \in (0,\alpha_k]$. Define $$\label{eq:disjoint-transfer} \alpha_k^\ast := \min \left\{ \alpha_k, \sqrt{\frac{k}{k-1}}\alpha_{k-1},\ldots,\sqrt{\frac{k}{2}}\alpha_{2} \right\}$$ for $k\geq 2$. Then $\Omega_k^\ast$ also realises $$\inf \{ {{\lambda_{k}}(\Omega,\alpha)} : \Omega \in {\mathcal{A}}\}$$ for all $\alpha \in (0,\alpha_k^\ast]$. If $\alpha_k \to \infty$, then also $\alpha_k^\ast \to \infty$ as $k\to\infty$. 
Thus shows how we can go from minimisers among connected domains to minimisers among disjoint unions of domains. 1\. We first recall the scaling relation for the optimal values from Lemma \[lem:optimal-scaling\](2): suppose that for some $k\geq 2$ the value of $\alpha_k$, resp. $\alpha_k^\ast$, is given and corresponds to some domain $\widetilde\Omega$. If $B<A$, then the corresponding values among the scaled-down family $\{\frac{1}{\sqrt{B}}\Omega: \Omega \in {\mathcal{A}}\}$ are $\alpha_k\sqrt{A/B} > \alpha_k$ and $\alpha_k^\ast\sqrt{A/B} > \alpha_k^\ast$, respectively. 2\. We now proceed by induction on $k$. For $k=2$, the statement recalls closely part 2 of the proof of Corollary \[cor:lambda2\]: if $\Omega \in {\mathcal{A}}$ is connected, then ${{\lambda_{2}}(\Omega,\alpha)} \geq {{\lambda_{2}}(\Omega_k^\ast,\alpha)}$ for all $\alpha\leq \alpha_2$. If $\Omega \in {\mathcal{A}}$ is not connected, then either it has one connected component whose second eigenvalue is ${{\lambda_{2}}(\Omega,\alpha)}$. Discarding the rest of $\Omega$ and inflating this component decreases the second eigenvalue, which is in particular larger than ${{\lambda_{2}}(\Omega_k^\ast,\alpha)}$ for all $\alpha\leq \alpha_2$. If ${{\lambda_{2}}(\Omega,\alpha)} = {{\lambda_{1}}(\Omega'',\alpha)}$ for some connected component $\Omega''$ of $\Omega$, where another connected component $\Omega'$ gives ${{\lambda_{1}}(\Omega,\alpha)}$, then we may replace $\Omega'$ and $\Omega''$ by scaled copies of $\Omega^\ast$. The second eigenvalue of their union is either (depending on the ratios of their areas) always larger than ${{\lambda_{2}}(\Omega_2^\ast,\alpha)}$ (if the two areas are roughly equal) or, if one is much larger than the other and hence the second eigenvalue of the union equals the second eigenvalue of one copy, then, using Step 1, it is at least always larger than ${{\lambda_{2}}(\Omega_2^\ast,\alpha)}$ for $\alpha \leq \alpha_2$. Hence we obtain the conclusion for $\alpha_2^\ast =\alpha_2$. 3\. We now give the induction step. Suppose the lemma is true for $\alpha_2^\ast,\ldots,\alpha_{k-1}^\ast$ and consider $\alpha_{k}^\ast$. Obviously, for $\alpha \leq \alpha_k$ the minimiser, assumed to exist, cannot be connected. Fix such an $\alpha \leq \alpha_k$. By the Wolf–Keller principle (see Lemma \[lem:wolf-keller\]), if $\Omega_\alpha$ is the minimiser, then $$\Omega_\alpha = t_i \Omega_i \cup t_{k-i} \Omega_{k-i},$$ where $\Omega_i$ and $\Omega_{k-i}$ are the minimisers of ${{\lambda_{i}}(\,\cdot\,,\alpha_i)}$ and ${{\lambda_{k-i}}(\,\cdot,\,,\alpha_{k-i})}$ for some $\alpha_i, \alpha_{k-i}$ related to $\alpha$, respectively, and where $t_i^2 + t_{k-i}^2 = 1$. Moreover, the scaling factors $t_i$ and $t_{k-i}$ are chosen such that $${{\lambda_{k}}(\Omega_\alpha,\alpha)} = {{\lambda_{i}}(t_i\Omega_i,\alpha)} = {{\lambda_{k-i}}(t_{k-i}\Omega_{k-i},\alpha)}.$$ In particular, either $t_i^2 \leq i/k$ or $t_{k-i}^2 \leq (k-i)/k$. In the first case, by the induction hypothesis and Step 1, $${{\lambda_{i}}(t_i\Omega_i,\alpha)} \geq {{\lambda_{i}}(t_i\Omega_i^\ast,\alpha)}\qquad \text{if } \alpha \leq \frac{\alpha_i^\ast}{t_i} \leq \sqrt{\frac{k}{i}}\alpha_i^\ast,$$ using that $t_i^2 \leq i/k$. 
Moreover, in this case, $t_i\Omega_i^\ast$ consists of $i$ equal copies of $\Omega^\ast$ each of area $$t_i^2 \leq \frac{i}{k}\cdot\frac{1}{i} = \frac{1}{k}.$$ Thus, for $\alpha \leq \alpha_i^\ast\sqrt{k/i}\alpha_i^\ast$, we have $${{\lambda_{k}}(\Omega_\alpha,\alpha)} \geq {{\lambda_{i}}(t_i\Omega_i^\ast,\alpha)} \geq {{\lambda_{k}}(\Omega_k^\ast,\alpha)},$$ and thus $\Omega_k^\ast$ is a minimiser in this case. Similarly, if $t_{k-i}^2 \leq (k-i)/k$, then we obtain the optimality of $\Omega_k^\ast$ whenever $\alpha \leq \alpha_{k-i}^\ast \sqrt{k/(k-i)}$. Concluding, $${{\lambda_{k}}(\Omega_\alpha,\alpha)} \geq {{\lambda_{k}}(\Omega_k^\ast,\alpha)} \qquad \text{if } \alpha \leq \min \left\{ \sqrt{\frac{k}{i}}\alpha_i^\ast, \sqrt{\frac{k}{k-i}}\alpha_{k-i}^\ast \right\}.$$ Repeating this argument over all possible pairs $(i,k-i)$, $i=1,\ldots$, we obtain that $\Omega_k^\ast$ is minimal for all $$\alpha \leq \hat\alpha_{k}:= \min \left\{ \alpha_{k}, \sqrt{\frac{k}{k-1}}\alpha_{k-1}^\ast,\ldots,\sqrt{\frac{k}{2}}\alpha_{2}^\ast \right\}.$$ 4\. Finally, another simple induction argument shows that $\hat \alpha_k$ is equal to $\alpha_k^\ast$ given by . Indeed, for $\alpha_3^\ast$, since $\alpha_2=\alpha_2^\ast$, we have $$\hat \alpha_3 = \min \left\{ \alpha_3, \sqrt{\frac{3}{2}}\alpha_2^\ast \right\} = \min \left\{ \alpha_3, \sqrt{\frac{3}{2}}\alpha_2 \right\} =\alpha_3^\ast.$$ Similarly, if $\alpha_{i}^\ast = \hat \alpha_i$ for all $i=1,\ldots,k-1$, then $$\begin{aligned} \hat \alpha_k &= \min \left\{ \alpha_k, \sqrt{\frac{k}{k-1}}\min\left\{ \alpha_{k-1},\sqrt{\frac{k-1}{k-2}}\alpha_{k-2},\ldots,\sqrt{\frac{k-1}{2}} \alpha_2 \right\},\ldots, \sqrt{\frac{k}{2}}\alpha_2 \right\}\\ &= \min \left\{ \alpha_k, \sqrt{\frac{k}{k-1}}\alpha_{k-1},\ldots, \sqrt{\frac{k}{2}}\alpha_2 \right\} = \alpha_k^\ast. \end{aligned}$$ We conclude that ${{\lambda_{k}}(\,\cdot\,,\alpha)}$ is minimsed by $\Omega_k^\ast$ whenever $\alpha \leq \hat\alpha_k = \alpha_k^\ast$. 5\. The statement that $\alpha_k \to \infty$ implies $\alpha_k^\ast \to \infty$ is elementary and follows, for example, from a simple contradiction argument. The formula given by becomes particularly simple if the optimal value $\alpha_k$ for connected domains from Lemma \[lem:disjoint-growth\] behaves like $\sqrt{k}$ (as is the case for our rectangles and as generally appears to be the case for the Robin problem in two dimensions). \[lem:precise-disjoint-growth\] Keep the notation and assumptions from Lemma \[lem:disjoint-growth\]. If, in addition, there exists a constant $C=C({\mathcal{A}})>0$ such that $$\label{eq:disjoint-choice} \alpha_k = C \sqrt{k}$$ for all $k \geq 2$, then $\alpha_k^\ast = \alpha_k = C\sqrt{k}$ for all $k\geq 2$. Inserting into , since $$\sqrt{\frac{k}{k-j}}\alpha_{k-j} = C\sqrt{\frac{k}{k-j}}\sqrt{k-j}=C\sqrt{k}=\alpha_k$$ for all $j=1,\dots,k-1$, we immediately obtain $\alpha_k^\ast = \alpha_k$. With this preparation, we can now treat the case of disjoint unions of rectangles, that is, complete the proof of Theorem \[thm:k-squares\]. We already proved at the end of Section \[sec:uk-in-eu-fat\] that if $k\geq 2$ and $$\alpha_k := \underbrace{\frac{\pi^2}{18}(7-2\sqrt{10})}_{=:C} A^{-1/2} k^{1/2}$$ then for any rectangle $\Omega$ of area $A$, ${{\lambda_{k}}(\Omega,\alpha)}$ is no smaller than ${{\lambda_{k}}({\mathcal{U}_{k}},\alpha)}$ whenever $\alpha \in (0,\alpha_k]$. 
Thus, by Lemma \[lem:precise-disjoint-growth\], the same is true for all disjoint unions of rectangles whenever $\alpha \leq \alpha_k^\ast = \alpha_k = C\sqrt{k}$. Non-optimality of ${\mathcal{U}_{k}}$ for large $\alpha$: Transition between unions of squares and the proof of Theorem \[thm:brexit\] {#sec:transition} -------------------------------------------------------------------------------------------------------------------------------------- The domain ${\mathcal{U}_{k}}$ stops being optimal at the latest at the point where $k$ equal squares of area $A/k$ have the same $k^{\rm th}$ eigenvalue as the domain consisting of $k-3$ equal squares and one larger square with area three times that of the other smaller squares. The curve where this happens is defined by the following identity $${{\lambda_{1}}({\mathcal{S}_{\sqrt{A/k}}},\alpha)} = {{\lambda_{2}}({\mathcal{S}_{\sqrt{3A/k}}},\alpha)} = {{\lambda_{3}}({\mathcal{S}_{\sqrt{3A/k}}},\alpha)}.$$ Writing $x_{1} = \sqrt{{{\lambda_{1}}({\mathcal{I}_{\sqrt{A/k}}},\alpha)}}$, $x_{2} = \sqrt{{{\lambda_{1}}({\mathcal{I}_{\sqrt{3A/k}}},\alpha)}}$ and $x_{3} = \sqrt{{{\lambda_{2}}({\mathcal{I}_{\sqrt{3A/k}}},\alpha)}}$ this is equivalent to the following system of equations $$\label{transition} \left\{ \begin{array}{lll} \alpha & = & x_{1}\tan\left( {\frac{{\displaystyle}\sqrt{A} x_{1}}{{\displaystyle}2\sqrt{k}}}\right){ \vspace*{2mm}\\ }\alpha & = & x_{2}\tan\left( {\frac{{\displaystyle}\sqrt{3A}x_{2}}{{\displaystyle}2\sqrt{k}}}\right){ \vspace*{2mm}\\ }\alpha & = & -x_{3}\cot\left( {\frac{{\displaystyle}\sqrt{3A}x_{3}}{{\displaystyle}2\sqrt{k}}}\right){ \vspace*{2mm}\\ }2x_{1}^2 & = & x_{2}^{2}+x_{3}^2 \end{array}. \right.$$ Based on this, we shall now prove a result regarding the existence of such a transition curve. \[transition3to1\] There exists a solution of system  of the form $$\alpha = {\frac{{\displaystyle}C}{{\displaystyle}\sqrt{A}}} \sqrt{k},$$ where the constant $C$ satisfies $4/5< C < 5\pi^2/2$. \[rem:transition3to1\] Numerically, we obtain that the solution of system  which yields the lowest positive value of the constant $C$ is $$(x_{1},x_{2},x_{3})\approx(2.50386,1.57707,3.1704)\sqrt{{\frac{{\displaystyle}k}{{\displaystyle}A}}},$$ corresponding to $$\alpha \approx 7.58442 \sqrt{{\frac{{\displaystyle}k}{{\displaystyle}A}}}.$$ We look for solutions of system  of the form $\alpha = C\sqrt{k}$ and $x_{i} = 2c_{i}\sqrt{k/A}$, $(i=1,2,3)$. Replacing this in  yields the system $$\left\{ \begin{array}{lll} C & = & {\frac{{\displaystyle}2c_{1}}{{\displaystyle}\sqrt{A}}}\tan\left( c_{1} \right){ \vspace*{2mm}\\ }C & = & {\frac{{\displaystyle}2c_{2}}{{\displaystyle}\sqrt{A}}}\tan\left( \sqrt{3}c_{2}\right){ \vspace*{2mm}\\ }C & = & -{\frac{{\displaystyle}2c_{3}}{{\displaystyle}\sqrt{A}}}\cot\left( \sqrt{3}c_{3}\right){ \vspace*{2mm}\\ }2c_{1}^2 & = & c_{2}^{2}+c_{3}^2 \end{array}. \right.$$ We now eliminate $C$ from the equations by equating the left-hand sides of the first equation to those of the second and third equations. This yields the new system in $c_{1}, c_{2}$ and $c_{3}$ $$\label{transition2} \left\{ \begin{array}{lll} c_{1} \tan\left( c_{1} \right) = c_{2}\tan\left( \sqrt{3}c_{2}\right){ \vspace*{2mm}\\ }c_{1} \tan\left( c_{1} \right) = -c_{3}\cot\left( \sqrt{3}c_{3}\right){ \vspace*{2mm}\\ }2c_{1}^{2} = c_{2}^2+ c_{3}^{2} \end{array}. \right.$$ We shall now prove the existence of a solution of the above system with smallest possible $c_{1}$, that is, for $c_{1}$ on the interval $(0,\pi/2)$. 
Note that since $C= \frac{2c_{1}}{\sqrt{A}}\tan\left( c_{1}\right)$ is increasing in $c_{1}$, this yields the smallest possible value for $C$ for a given value of the area. The solutions for the other constants $c_{2}$ and $c_{3}$ belong to the intervals $\left(0,\pi/(2\sqrt{3})\right)$ and $\left(\pi/(2\sqrt{3}),\pi/\sqrt{3}\right)$, respectively. We first note that since the function $x \mapsto x\tan(a x)$ is increasing in $x$ for $ax\in(0,\pi/2)$, the first equation in  defines $c_{2}$ as a continuous increasing function of $c_{1}$ defined on $[0,\pi/2)$ and with values in $\left[0,\pi/(2\sqrt{3})\right)$. This function (which, abusing notation, we denote by $c_{2}(c_{1})$) satisfies $$c_{2}(0) = 0 \mbox{ and } \lim_{c_{1}\to(\pi/2)^{-}} c_{2}(c_{1}) = {\frac{{\displaystyle}\pi}{{\displaystyle}2\sqrt{3}}}.$$ Similarly, the second equation in  defines $c_{3}$ as a continuous increasing function of $c_{1}$ on the interval $[0,\pi/2)$ and with values on $\left[\pi/(2\sqrt{3}),\pi/\sqrt{3}\right)$. This second function satisfies $$c_{3}(0) = {\frac{{\displaystyle}\pi}{{\displaystyle}2\sqrt{3}}} \mbox{ and } \lim_{c_{1}\to(\pi/2)^{-}} c_{3}(c_{1}) = {\frac{{\displaystyle}\pi}{{\displaystyle}\sqrt{3}}}.$$ Defining now the function $F$ on the interval $[0,\pi/2)$ by $$F(c_{1}) = 2c_{1}^2-c_{2}^{2}\left(c_{1}\right)-c_{3}^{2}\left(c_{1}\right),$$ we see that this is continuous and satisfies $$F(0) = - {\frac{{\displaystyle}\pi^2}{{\displaystyle}12}} \mbox{ and } \lim_{c_{1}\to(\pi/2)^{-}} F(c_{1}) = 2\times{\frac{{\displaystyle}\pi^2}{{\displaystyle}4}} - {\frac{{\displaystyle}\pi^2}{{\displaystyle}12}} -{\frac{{\displaystyle}\pi^2}{{\displaystyle}3}} = {\frac{{\displaystyle}\pi^2}{{\displaystyle}12}}.$$ Hence $F$ must vanish somewhere on the interval $(0,\pi/2)$, implying the existence of at least one solution of system  (and hence ). From the third equation in  and the fact that $c_{2}$ and $c_{3}$ lie on the intervals $\left[0,\pi/(2\sqrt{3})\right)$ and $\left[\pi/(2\sqrt{3}),\pi/\sqrt{3}\right)$, respectively, we have $${\frac{{\displaystyle}\pi}{{\displaystyle}2\sqrt{6}}} < c_{1} < {\frac{{\displaystyle}\sqrt{5}\pi}{{\displaystyle}2\sqrt{6}}}.$$ Since $C = {\frac{{\displaystyle}2c_{1}}{{\displaystyle}\sqrt{A}}}\tan\left(c_{1}\right)$, and using the (monotone) bounds for the tangent given by  and the above bounds for $c_{1}$ we obtain $${\frac{{\displaystyle}4}{{\displaystyle}5}} < 2c_{1}\tan(c_{1})< {\frac{{\displaystyle}5\pi^2}{{\displaystyle}2}}$$ yielding the desired estimates for the constant $C$. Sums of eigenvalues: Proof of Corollary \[cor:sumeigopt\] {#sec:sums} ========================================================= Given fixed positive numbers $A$ and $\alpha$, we shall now consider the smallest value attainable by the sum of the first $k$ eigenvalues ${\sigma_{k}^{\rm +} (A,\alpha)}$ defined in . Before we proceed with the proof of Corollary \[cor:sumeigopt\], we give a couple of remarks. \(a) An easy argument similar to that of Theorem \[thm:existence\] shows that the infimum in is always attained. We leave it as an open problem actually to determine the domains which realise ${\sigma_{k}^{\rm +} (A,\alpha)}$, although Corollary \[cor:sumeigopt\], together with Theorem \[thm:rectangles\], strongly suggests that the number of connected components of the optimiser should grow with $k$, and indeed it seems natural to expect that ${\mathcal{U}_{k}}$ should be the minimising domain for ${\sigma_{k}^{\rm +} (A,\alpha)}$ for $k$ sufficiently large. 
\(b) Corollary \[cor:sumeigopt\] also suggests that the optimal sum of eigenvalues taken over *all* planar domains of area $A$, not just unions of rectangles, should also grow like $C A^{-1/2}\alpha k^{3/2}$ for some $0<C\leq 2\pi^{1/2}$ (the corresponding value for $k$ balls). Let us formulate these claims explicitly as a conjecture. Fix positive numbers $A$ and $\alpha$. Then, for $k$ sufficiently large, $$\label{eq:general-sum} \inf \left\{ \sum_{j=1}^k {{\lambda_{j}}(\Omega,\alpha)}: \Omega \subset {\mathbb{R}}^2 \text{ Lipschitz, } |\Omega|=A \right\}$$ is achieved by the disjoint union of $k$ equal disks, of total area $A$. In particular, the infimum behaves asymptotically like $$\frac{2\pi^{1/2}\alpha}{A^{1/2}} k^{3/2}$$ as $k\to\infty$. For the upper bound, for each $k\geq 1$ we use the disjoint union ${\mathcal{U}_{k}}$ of $k$ equal squares as a test domain: $${\sigma_{k}^{\rm +} (A,\alpha)} \leq \sum_{j=1}^k {{\lambda_{j}}({\mathcal{U}_{k}},\alpha)} = k{{\lambda_{k}}({\mathcal{U}_{k}},\alpha)} \leq k \cdot \frac{4k^{1/2}\alpha}{A^{1/2}},$$ the latter inequality following as usual from . The lower bound follows from Theorem \[thm:existence\] in the form of Corollary \[cor:optimal-asymptotic\]. We start by observing that $${\sigma_{k}^{\rm +} (A,\alpha)} \geq \sum_{j=1}^k {\lambda_{j}^{\rm +} (A,\alpha)}.$$ We will use the asymptotics in Corollary \[cor:optimal-asymptotic\] to control the latter sum. Indeed, by this corollary, for fixed $A>0$ and $\alpha>0$, there exists a constant $m_1>0$ such that $${\lambda_{k}^{\rm +} (A,\alpha)} \geq \frac{4\alpha}{A^{1/2}}k^{1/2} - m_1$$ for all $k\geq 1$ (use the fact that ${\lambda_{k}^{\rm +} (A,\alpha)} = \frac{4\alpha}{A^{1/2}}k^{1/2} + {{\rm O}}(1)$ as $k\to\infty$, by ). Hence $${\sigma_{k}^{\rm +} (A,\alpha)} \geq \frac{4\alpha}{A^{1/2}} \sum_{j=1}^k j^{1/2} - m_1 k.$$ Now $$\sum_{j=1}^k j^{1/2} = \frac{2}{3}k^{3/2} + \frac{1}{2}k^{1/2} + {{\rm O}}(1) \geq \frac{2}{3}k^{3/2} + \frac{1}{2}k^{1/2} - m_2$$ for some constant $m_2>0$ independent of $k\geq 1$. Hence $${\sigma_{k}^{\rm +} (A,\alpha)} \geq \frac{4\alpha}{A^{1/2}}\left(\frac{2}{3}k^{3/2} + \frac{1}{2}k^{1/2} - m_2\right)- m_1 k$$ for all $k\geq 1$. Dividing by $k^{3/2}$ and passing to the limit yields the lower bound. The higher-dimensional case {#sec:higher-dimension} =========================== To keep both the notation and the arguments as simple as possible, we have restricted ourselves to the planar case; nevertheless, we expect analogous statements to hold in $d\geq 3$ dimensions, where in place of rectangles one considers *hyperrectangles* (sometimes also called *cuboids* or *rectangular parallelepipeds*) and their disjoint unions. Moreover, in most cases the proofs should be directly adaptable. We give a brief summary. \(1) The existence of a domain minimising ${{\lambda_{k}}(\Omega,\alpha)}$ among all $d$-dimensional hyperrectangles (and among all disjoint unions of hyperrectangles, respectively) of given total volume follows from the same blow-up and continuity argument as in Theorem \[thm:existence\]. \(2) The minimiser of ${{\lambda_{1}}(\Omega,\alpha)}$ should be the regular hypercube. However, the computation given in Theorem \[thm:lambda1\] will not work as easily. 
Once one has the hypercube for the first eigenvalue, the same proof as the one of Corollary \[cor:lambda2\] (noting that the nodal domains of any second eigenvalue on a hyperrectangle are again hyperrectangles) implies that the second eigenvalue is, as usual, minimised by the disjoint union of two equal regular hypercubes. \(3) The statements of Theorems \[thm:k-squares\] and \[thm:brexit\] should still hold (when dimensionally adjusted). Moreover, the proof schemes should still work, although Steps 1 and 2 of Section \[sec:uk-in-eu\] are more complicated due to the greater number of possible ways and directions in which a $d$-dimensional hyperrectangle can become unbounded. If ${\lambda_{k}^{\rm +} (V,\alpha)}$ now denotes the minimal $k^{\rm th}$ eigenvalue among all unions of hyperrectangles of volume $V$ in $d$ dimensions, then the correct power growth will be $k^{1/d}$ and we should have $$\label{eq:optimal-asymptotic-d} \lim_{k\to\infty} \frac{{\lambda_{k}^{\rm +} (V,\alpha)}}{k^{1/d}} = \frac{2d\alpha}{V^{1/d}}$$ corresponding to the $k^{\rm th}$ eigenvalue of the disjoint union ${\mathcal{Q}_{k}}$ of $k$ equal hypercubes, each of volume $V/k$. This in turn equals the first eigenvalue of a $d$-dimensional regular hypercube of volume $V/k$ and thus side length $(V/k)^{1/d}$. \(4) Moreover, ${\mathcal{Q}_{k}}$ should be optimal in a region of the form $\alpha \leq C k^{1/d}$, up to the point where the first eigenvalue of a cube of side length $k^{-1/d}$ is equal to the $(d+1)$-st eigenvalue of a cube of side length $((d+1)/k)^{1/d}$; the analogous argument for balls was already given in [@anfrke Lemma 4.1]. Additionally, if $\Omega$ is any *fixed* hyperrectangle (or union thereof), then we claim that ${\mathcal{Q}_{k}}$ is also better than $\Omega$ in a region of the form $\alpha \leq C_\Omega k^{1/d}$, which suggests that the region of transition from $k$ equal cubes to a connected optimiser is generally quite “thin”. To lend weight to this assertion, we make use of the counting function of $\Omega$ (cf. ). At energy $\lambda>0$ we have $$\label{eq:omega-count} {N_{\Omega,\alpha}(\lambda)} \sim \lambda^{d/2}.$$ Since ${{\lambda_{k}}({\mathcal{Q}_{k}},\alpha)} \leq 2d\alpha k^{1/d}$ (as follows, e.g., from or the bounds on ${{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}$ in Appendix \[sec:interval\], choosing $a=k^{-1/d}$), arguing as in the proof of Lemma \[eq:eigcount-bound\], for ${{\lambda_{k}}({\mathcal{Q}_{k}},\alpha)}$ to be smaller we want $${N_{\Omega,\alpha}(2d\alpha k^{1/d})} \leq k.$$ Using , this is equivalent to $\alpha \leq C k^{1/d}$. In fact, this argument can easily be made into a rigorous proof; equally, with a lower bound on the counting function a similar argument could be used to show that $\alpha \geq \widetilde{C}(\Omega) k^{1/d}$ implies the fixed domain $\Omega$ is better than ${\mathcal{Q}_{k}}$. We will consider this question in more detail below. \(5) We *expect* the optimal hyperrectangle for ${\lambda_{k}^{\rm *} (V,\alpha)}$ to be long in one direction and short in the remaining $d-1$ (in fact, it should be the Cartesian product of a small $(d-1)$-dimensional regular hypercube with a long interval). An argument similar to the one of Lemma \[lem:k-1-mode-bounds\] (cf. 
also Remark \[rem:k-1-balance\]), with the *Ansatz* $a=ck^\gamma$ (and short sides thus each proportional to $k^{-\gamma/(d-1)}$) leads to the power condition $2-2\gamma=\gamma/(d-1)$, i.e., $\gamma=(2d-2)/(2d-1)$ and thus to the conjecture $${\lambda_{k}^{\rm *} (V,\alpha)} \sim k^\frac{2}{2d-1}$$ as $k\to\infty$, corresponding to a long side of length proportional to $k^{(2d-2)/(2d-1)}$ and $d-1$ short sides of length like $k^{-2/(2d-1)}$. In particular, in any dimension we expect deviation (even among convex domains) from the power coming from the Weyl asymptotics for any given domain, namely ${{\lambda_{k}}(\Omega,\alpha)} \sim k^{2/d}$. \(6) Based on , the smallest possible value ${\sigma_{k}^{\rm +} (V,\alpha)}$ of the sum of the first $k$ eigenvalues of a union of hyperrectangles with total volume $V$ should grow like $k^{1+1/d}$ as $k\to\infty$. We will now give some more detailed considerations about the regions where we may expect the disjoint union of $k$ equal hypercubes to be the extremal domain, and where this will no longer be the case. Let $\Omega$ be a given finite disjoint union of hyperrectangles with volume $V$ and let ${\mathcal{Q}_{k}}$ denote the disjoint union of $k$ equal hypercubes, also of total volume $V$. We then have $$\label{lambdakD} {{\lambda_{k}}(\Omega,\alpha)} < {{\lambda_{k}}(\Omega,\infty)} = {\frac{{\displaystyle}4\pi^2}{{\displaystyle}\left(V\omega_{d}\right)^{2/d}}} k^{2/d} + r_{1}(k),$$ where the remainder term satisfies $r_{1}(k) = {{\rm o}}\left(k^{2/d}\right), \mbox{ as } k\to +\infty$, and is independent of $\alpha$. On the other hand, we also have $$\begin{array}{lll} {{\lambda_{k}}({\mathcal{Q}_{k}},\alpha)} & = & {{\lambda_{1}}(\left(V k^{-1}\right)^{1/d}C,\alpha)} { \vspace*{2mm}\\ }& = & d{{\lambda_{1}}({\mathcal{I}_{(V/k)^{1/d}}},\alpha)}{ \vspace*{2mm}\\ }& \geq & {\frac{{\displaystyle}2\alpha d \pi^2 k^{2/d}}{{\displaystyle}V^{1/d}\left(\pi^2k^{1/d}+2\alpha V^{1/d}\right)}}, \end{array}$$ where $C$ is the unit $d$-dimensional hypercube and we used the lower bound given in Proposition \[prop:firsteiginterv\] in the last step. This will be larger than the right-hand side of  provided that $${\frac{{\displaystyle}4\pi^2}{{\displaystyle}\left(V\omega_{d}\right)^{2/d}}} + {\frac{{\displaystyle}r_{1}(k)}{{\displaystyle}k^{2/d}}} < {\frac{{\displaystyle}2\alpha d \pi^2 }{{\displaystyle}V^{1/d}\left(\pi^2k^{1/d}+2\alpha V^{1/d}\right)}}.$$ We may now solve this with respect to $\alpha$ and obtain that if $$\alpha > \frac{2 \pi ^2 }{d \omega ^{2/d}-4}\left({\frac{{\displaystyle}k}{{\displaystyle}V}}\right)^{1/d} + r_{2}(k),$$ where $r_{2}(k) = {{\rm o}}\left(k^{1/d}\right)$ as $k$ goes to infinity, then the $k$ equal hypercubes are no longer optimal. In the planar case and for area $A$ the above reads as $$\alpha > \frac{ \pi ^2 }{\pi -2}\left({\frac{{\displaystyle}k}{{\displaystyle}A}}\right)^{1/2} + r_{2}(k) \approx 8.64547 \left({\frac{{\displaystyle}k}{{\displaystyle}A}}\right)^{1/2} + {{\rm o}}(k^{1/2}),$$ which is comparable to the result in Section \[sec:transition\] for the transition between $k$ equal squares and $k-3$ equal squares and one larger square. In a similar fashion, it is possible to derive the asymptotic behaviour for the boundary of the region where $k$ equal hypercubes yield a lower value than a fixed disjoint union of hyperrectangles $\Omega$. 
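Before doing so, we record for orientation the numerical value of the threshold just obtained in three dimensions: this is a straightforward evaluation of the general formula above with $\omega_{3}=4\pi/3$, and reads $$\alpha > \frac{2\pi^2}{3(4\pi/3)^{2/3}-4}\left(\frac{k}{V}\right)^{1/3} + r_{2}(k) \approx 5.20 \left(\frac{k}{V}\right)^{1/3} + {{\rm o}}(k^{1/3}).$$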
We now start from $$\label{lambdakN} {{\lambda_{k}}(\Omega,\alpha)} > {{\lambda_{k}}(\Omega,0)}= {\frac{{\displaystyle}4\pi^2}{{\displaystyle}\left(V\omega_{d}\right)^{2/d}}} k^{2/d} + r_{3}(k),$$ where again the remainder term satisfies $r_{3}(k) = {{\rm o}}\left(k^{2/d}\right)$ as $k\to +\infty$ and is independent of $\alpha$. Proceeding as above, but now using the upper bound given in Proposition \[prop:firsteiginterv\] we obtain $$\begin{array}{lll} {{\lambda_{k}}({\mathcal{Q}_{k}},\alpha)} & \leq & {\frac{{\displaystyle}d\pi^2 k^{2/d}}{{\displaystyle}2(\pi^2-8)V^{2/d}}} \Biggr[\pi^2+2\alpha (k^{-1}V)^{1/d} { \vspace*{2mm}\\ }& & \hspace*{5mm} - \sqrt{ 64\alpha (k^{-1}V)^{1/d} + \left(\pi^2-2\alpha (k^{-1}V)^{1/d}\right)^{2}} \Biggr]. \end{array}$$ Comparing this with the right-hand side in  and proceeding in the same way as before yields, after some lengthy calculations, that $k$ equal hypercubes are better than $\Omega$ for $$\alpha < {\frac{{\displaystyle}2}{{\displaystyle}d \omega_{d}^{2/d}}}\left( \pi^2 +{\frac{{\displaystyle}32}{{\displaystyle}d \omega_{d}^{2/d}-4}}\right)\left({\frac{{\displaystyle}k}{{\displaystyle}V}}\right)^{1/d} + r_{4}(k),$$ where $r_{4}={{\rm o}}\left(k^{1/d}\right)$ as $k$ goes to infinity. In the planar case we obtain $$\alpha < \left(\pi +{\frac{{\displaystyle}16}{{\displaystyle}\pi(\pi-2)}}\right)\left({\frac{{\displaystyle}k}{{\displaystyle}A}}\right)^{1/2} + r_{4}(k) \approx 7.60287 \left({\frac{{\displaystyle}k}{{\displaystyle}A}}\right)^{1/2} + {{\rm o}}(k^{1/2}).$$ The eigenvalues of the Robin Laplacian on intervals and rectangles\[sec:interval\] ================================================================================== Here we give sharp bounds for the first and second eigenvalues of the Robin Laplacian on an interval of length $a$, as these are used to build the eigenvalues of rectangles and disjoint unions of rectangles. As these are of independent interest, and to the best of our knowledge many are new, we give sharper estimates than we actually need in many cases. Depending on the particular instance, we may however need our bounds to behave in an appropriate fashion in the different limits of interest, namely, as $a$ and $\alpha$ approach either $0$ or infinity; in such cases, we will present complementary bounds and the corresponding asymptotic expansions. The first eigenvalue on an interval ----------------------------------- The first of these eigenvalues, ${{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}$, belongs to the interval $(0,\pi^2/a^2)$ and is thus given by the smallest positive root of $$\label{eq:1eig-interval} \alpha = \sqrt{\lambda} \tan \left(\frac{a\sqrt{\lambda}}{2}\right).$$ Expanding the tangent around zero allows us to obtain a formal expression for the expansion of this eigenvalue as $a$ approaches zero as follows $$\label{eq:eig1expa} {{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} = {\frac{{\displaystyle}2\alpha}{{\displaystyle}a}}-{\frac{{\displaystyle}\alpha^{2}}{{\displaystyle}3}}+{\frac{{\displaystyle}2\alpha^{3}}{{\displaystyle}45}} a - {\frac{{\displaystyle}4\alpha^{4}}{{\displaystyle}945}}a^2 + {\frac{{\displaystyle}2\alpha^{5}}{{\displaystyle}14175}}a^3+{{\rm O}}(a^4).$$ Inserting the above expression in equation , we see that the argument of the tangent does go to zero as $a$ approaches zero, validating the expansion. 
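As a quick numerical illustration (not needed in what follows), truncating the expansion above after the $a^2$ term and taking $a\alpha = 0.1$ gives $a^2{{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} \approx 0.19671$, which agrees with the numerical root of $\alpha = \sqrt{\lambda}\tan(a\sqrt{\lambda}/2)$ to five decimal places.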
On the other hand, expanding the tangent around $\pi/2$ yields the corresponding expansion $${{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} = \frac{\pi ^2}{a^2} -\frac{4 \pi ^2}{a^3 \alpha }+\frac{12 \pi ^2}{a^4 \alpha ^2} -\frac{4 \pi ^2 \left(24-\pi^2\right)}{3 a^5 \alpha ^3} + {{\rm O}}(\alpha^{-4})$$ for large $\alpha$. A first simple remark comes from the fact that, on $(0,\pi/2)$, the tangent is bounded from below by its argument. We thus immediately derive from  that $$\label{eq:main-interval-bound} {{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} \leq {\frac{{\displaystyle}2\alpha}{{\displaystyle}a}},$$ that is, the first Robin eigenvalue on an interval of length $a$ is smaller than the first term in its expansion as $a$ approaches zero. Since the corresponding expansion  seems to alternate with decreasing terms in absolute value, it is in fact expected that the successive terms will provide upper and lower bounds for this quantity. Using further inequalities from the tangent expansion at zero it is also possible to obtain slightly better albeit more complicated bounds; the next of these, using $\tan(x)\geq x + x^3/3$, yields $${{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} \leq \frac{2 \sqrt{6 a \alpha +9}-6}{a^2} = {\frac{{\displaystyle}2\alpha}{{\displaystyle}a}}-{\frac{{\displaystyle}\alpha^{2}}{{\displaystyle}3}} + {{\rm O}}(a), \mbox{ as } a\to0,$$ with the correct asymptotic behaviour up to the second term. However, for most of our purposes it will be convenient to obtain bounds with a different form which behave at least in a qualitatively correct way in more than one asymptotic limit. To do this, we shall use a different family of inequalities for the tangent, namely [@best], $$\label{tanbounds} {\frac{{\displaystyle}8x}{{\displaystyle}\pi^2-4x^2}}\leq \tan x \leq {\frac{{\displaystyle}\pi^2x}{{\displaystyle}\pi^2 -4x^2}}, \; x\in(0,{\frac{{\displaystyle}\pi}{{\displaystyle}2}}).$$ Replacing these in equation  we obtain, after some simplifications, $${\frac{{\displaystyle}4a\lambda}{{\displaystyle}\pi^2-a^2\lambda}}\leq \alpha \leq {\frac{{\displaystyle}\pi^2 a\lambda}{{\displaystyle}2(\pi^2-a^2\lambda)}}.$$ Using the fact that we are looking for solutions on the interval $(0,\pi/2)$, we arrive at the following two-sided bounds $$\label{eq:eig1ineq1} {\frac{{\displaystyle}2\alpha\pi^2}{{\displaystyle}a(\pi^2+2\alpha a)}}\leq {{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} \leq {\frac{{\displaystyle}\alpha\pi^2}{{\displaystyle}a(4+\alpha a)}}.$$ Both bounds have the first correct term in the corresponding asymptotics when $\alpha$ goes to infinity, and the lower bound also displays the correct behaviour as $a$ approaches zero. However, this is not the case for the upper bound. In order to obtain a bound that does so, we will use a test function of the form $$u(x) = 1- c \cos\left({\frac{{\displaystyle}\pi x}{{\displaystyle}a}}\right),$$ where $c$ is a constant (possibly depending on $a$ and $\alpha$) to be determined later. 
Replacing this in the Rayleigh quotient for ${{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}$ yields $$\begin{array}{lll} {{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} & \leq & {\frac{{\displaystyle}{{\displaystyle}\int}_{-a/2}^{a/2}\left({\frac{{\displaystyle}\pi c}{{\displaystyle}a}}\right)^2\sin^{2}\left({\frac{{\displaystyle}\pi x}{{\displaystyle}a}}\right)\;{\rm d}x +2\alpha \left[ 1- c \cos\left({\frac{{\displaystyle}\pi}{{\displaystyle}2}}\right)\right]^2}{{\displaystyle}{{\displaystyle}\int}_{-a/2}^{a/2}\left[ 1- c \cos\left({\frac{{\displaystyle}\pi x}{{\displaystyle}2}}\right)\right]^2\;{\rm d}x}}{ \vspace*{2mm}\\ }& = & {\frac{{\displaystyle}\pi \left( \pi^2 c^2+4a\alpha \right)}{{\displaystyle}a^2\left(2\pi - 8 c + c^2\pi\right)}}. \end{array}$$ We now pick the constant $c$ minimising the quotient on the right. This is achieved for $$c= {\frac{{\displaystyle}\pi^2 -2a \alpha-\sqrt{64a \alpha + (\pi^2-2a \alpha)^2}}{{\displaystyle}4\pi}},$$ which, when replaced back into the above bound, yields $${{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} \leq {\frac{{\displaystyle}\pi^2}{{\displaystyle}a^2}}\times {\frac{{\displaystyle}\pi^2 + 2a \alpha - \sqrt{64a\alpha +(\pi^2-2a\alpha)^2}}{{\displaystyle}2(\pi^2-8)}}.$$ It is simple to check that the above bound does satisfy the asymptotic behaviour for both large $\alpha$ and small $a$. We have thus proved the following \[prop:firsteiginterv\] The first eigenvalue of the Robin Laplacian on an interval satisfies $${\frac{{\displaystyle}2\alpha\pi^2}{{\displaystyle}a(\pi^2+2\alpha a)}}\leq {{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} \leq {\frac{{\displaystyle}\pi^2}{{\displaystyle}a^2}}\times {\frac{{\displaystyle}\pi^2 + 2a \alpha - \sqrt{64a\alpha +(\pi^2-2a\alpha)^2}}{{\displaystyle}2(\pi^2-8)}}.$$ The lower bound is accurate up to the first term in the asymptotics for both the small $a$ and large $\alpha$ cases, while the upper bound is also accurate up to first order in the small $a$ case and to second order in the large $\alpha$ case. The second eigenvalue on an interval ------------------------------------ In a similar way as above, the second eigenvalue ${{\lambda_{2}}({\mathcal{I}_{a}},\alpha)}$ is obtained as the smallest solution of the equation $$\label{} -\sqrt{\lambda} = \alpha \tan\left({\frac{{\displaystyle}a \sqrt{\lambda}}{{\displaystyle}2}}\right),$$ which is now on the interval $(\pi^2/a^2,4\pi^2/a^2)$. For convenience, we rewrite this equation as $$\label{eq:2eig-interval} \alpha = -\sqrt{\lambda} \cot\left({\frac{{\displaystyle}a \sqrt{\lambda}}{{\displaystyle}2}}\right),$$ and now expand the cotangent around $\pi/2$ to obtain $${{\lambda_{2}}({\mathcal{I}_{a}},\alpha)} = {\frac{{\displaystyle}\pi^2}{{\displaystyle}a^2}}+{\frac{{\displaystyle}4\alpha}{{\displaystyle}a}}-{\frac{{\displaystyle}4\alpha^{2}}{{\displaystyle}\pi^2}}+{\frac{{\displaystyle}4(12-\pi^2)\alpha^{3}}{{\displaystyle}3\pi^4}} a - {\frac{{\displaystyle}8(10-\pi^2)\alpha^{4}}{{\displaystyle}\pi^6}}a^2+{{\rm O}}(a^3),$$ for small $a$. Again note that if the resulting expression is plugged back into , the argument of the cotangent approaches $\pi/2$ as $a$ goes to zero. 
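As a numerical illustration of the quality of this expansion, for $a\alpha = 0.1$ the truncation displayed above gives $a^2{{\lambda_{2}}({\mathcal{I}_{a}},\alpha)} \approx 10.2656$, in agreement with the numerical root of the transcendental equation above to four decimal places.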
A first obvious remark is that ${{\lambda_{1}}({\mathcal{I}_{a}},\alpha)}$ and ${{\lambda_{2}}({\mathcal{I}_{a}},\alpha)}$ display a different asymptotic behaviour as $a$ goes to zero; indeed, ${{\lambda_{2}}({\mathcal{I}_{a}},\alpha)}$ has the same first term as the second Neumann eigenvalue (or first Dirichlet), while the fact that ${{\lambda_{1}}({\mathcal{I}_{a}},\alpha)} \sim 2\alpha/a$ is what will drive our estimate on ${{\lambda_{k}}({\mathcal{U}_{k}},\alpha)}$ in Proposition \[prop:boundkequalsquares\] below. For large $\alpha$ we have $${{\lambda_{2}}({\mathcal{I}_{a}},\alpha)} =\frac{4 \pi ^2}{a^2}-\frac{16 \pi ^2}{\alpha a^3}+\frac{48 \pi ^2}{\alpha ^2 a^4}-\frac{128 \pi ^2}{\alpha ^3 a^5} +\frac{320 \pi ^2}{\alpha ^4 a^6} + {{\rm O}}(\alpha^{-5}).$$ We will now proceed as in the case of the first eigenvalue to obtain upper and lower bounds which are sharp. We first go back to equation  and use the inequality $$\tan x \geq {\frac{{\displaystyle}2}{{\displaystyle}\pi-2x}}$$ valid for $x$ on $(\pi/2,\pi)$ to obtain $$-\sqrt{\lambda} \geq {\frac{{\displaystyle}-2\alpha}{{\displaystyle}a \sqrt{\lambda}-\pi}}.$$ Since $\sqrt{\lambda}>\pi/a$, we get $a \lambda -\pi \sqrt{\lambda}-2\alpha \leq 0$, yielding the following bounds $${\frac{{\displaystyle}\pi}{{\displaystyle}2a}} -\sqrt{{\frac{{\displaystyle}\pi^2}{{\displaystyle}4a^2}}+{\frac{{\displaystyle}2\alpha}{{\displaystyle}a}}} \leq \sqrt{\lambda} \leq {\frac{{\displaystyle}\pi}{{\displaystyle}2a}} +\sqrt{{\frac{{\displaystyle}\pi^2}{{\displaystyle}4a^2}}+{\frac{{\displaystyle}2\alpha}{{\displaystyle}a}}}.$$ Of these, clearly only the upper bound is of interest, and, in fact, it satisfies $$\label{eig2uppbound1} \begin{array}{lll} {{\lambda_{2}}({\mathcal{I}_{a}},\alpha)} & \leq & \left({\frac{{\displaystyle}\pi}{{\displaystyle}2a}} +\sqrt{{\frac{{\displaystyle}\pi^2}{{\displaystyle}4a^2}}+{\frac{{\displaystyle}2\alpha}{{\displaystyle}a}}}\right)^2{ \vspace*{2mm}\\ }& = & \frac{\pi ^2}{a^2}+\frac{4 \alpha }{a} -\frac{4 \alpha ^2}{\pi^2}+ {{\rm O}}(a), \end{array}$$ as $a$ approaches zero, thus having the same first three terms in the asymptotics as ${{\lambda_{2}}({\mathcal{I}_{a}},\alpha)}$. To obtain a sharp lower bound, and also an upper bound which is better than the above for large values of $a\alpha$, we now use the identity $$\tan x = {\frac{{\displaystyle}1-\cos(2x)}{{\displaystyle}\sin(2x)}}$$ in . 
This yields that the second eigenvalue is given by the smallest positive root of the equation $$-\sqrt{\lambda}\sin(a \sqrt{\lambda}) = \alpha\left[ 1-\cos(a \sqrt{\lambda})\right].$$ Using the inequalities $${\frac{{\displaystyle}4}{{\displaystyle}\pi^2}} (x-\pi)(x-2\pi) \leq \sin x \leq {\frac{{\displaystyle}1}{{\displaystyle}\pi^2}} (x-\pi)(x-2\pi)$$ and $${\frac{{\displaystyle}2}{{\displaystyle}\pi^2}} (x-2\pi)^2 \leq 1-\cos x \leq {\frac{{\displaystyle}2}{{\displaystyle}\pi^4}} (x-2\pi)^2x^2,$$ valid on $(\pi,2\pi)$, we are led to $$-{\frac{{\displaystyle}\sqrt{\lambda}}{{\displaystyle}\pi}} (a\sqrt{\lambda}-\pi)(a\sqrt{\lambda}-2\pi) \leq -\sqrt{\lambda}\sin(a \sqrt{\lambda}) = \alpha\left[ 1-\cos(a \sqrt{\lambda})\right] \leq {\frac{{\displaystyle}2\alpha}{{\displaystyle}\pi^4}}(a \sqrt{\lambda}-2\pi)^2a^2\lambda$$ and $${\frac{{\displaystyle}2\alpha}{{\displaystyle}\pi^2}}(a \sqrt{\lambda}-2\pi)^2\leq \alpha\left[ 1-\cos(a \sqrt{\lambda})\right] = -\sqrt{\lambda}\sin(a \sqrt{\lambda}) \leq -{\frac{{\displaystyle}4\sqrt{\lambda}}{{\displaystyle}\pi^2}} (a\sqrt{\lambda}-\pi)(a\sqrt{\lambda}-2\pi).$$ In the range under consideration, these are, in turn, equivalent to $$2\alpha a^3\lambda + \pi a(\pi^2-4a \alpha)\sqrt{\lambda} - \pi^4 \leq 0$$ and $$2a \lambda +(a\alpha -2\pi)\sqrt{\lambda}-2\alpha\pi\geq 0,$$ respectively. The first of these inequalities yields the upper bound $$\label{eig2uppbound2} \sqrt{{{\lambda_{2}}({\mathcal{I}_{a}},\alpha)}}\leq{\frac{{\displaystyle}\pi}{{\displaystyle}4 a^2\alpha}} \left( 4a \alpha -\pi^2 + \sqrt{\pi^4+16a^2\alpha^2} \right),$$ while from the second we obtain $${\frac{{\displaystyle}2\pi -a\alpha + \sqrt{4\pi^2+12a\alpha \pi + \alpha^2 a^2}}{{\displaystyle}4a}} \leq \sqrt{{{\lambda_{2}}({\mathcal{I}_{a}},\alpha)}}.$$ Comparing the two upper bounds  and  we see that $$\begin{array}{ll} & {\frac{{\displaystyle}\pi}{{\displaystyle}4 a^2\alpha}} \left( 4a \alpha -\pi^2 + \sqrt{\pi^4+16a^2\alpha^2}\right)- \left({\frac{{\displaystyle}\pi}{{\displaystyle}2a}} +\sqrt{{\frac{{\displaystyle}\pi^2}{{\displaystyle}4a^2}}+{\frac{{\displaystyle}2\alpha}{{\displaystyle}a}}}\right){ \vspace*{2mm}\\ }= & {\frac{{\displaystyle}\pi}{{\displaystyle}2a}} -{\frac{{\displaystyle}\pi^3}{{\displaystyle}4a^2\alpha}}+{\frac{{\displaystyle}\pi}{{\displaystyle}4a^2\alpha}}\sqrt{\pi^4+16a^2\alpha^2}-{\frac{{\displaystyle}1}{{\displaystyle}2a}}\sqrt{\pi^2+8a\alpha}{ \vspace*{2mm}\\ }= & {\frac{{\displaystyle}\pi}{{\displaystyle}4a^2\alpha}}\left( 2b-\pi^2+\sqrt{\pi^4+16a^2\alpha^2}-2a \alpha \sqrt{1+{\frac{{\displaystyle}8a\alpha}{{\displaystyle}\pi^2}}}\right){ \vspace*{2mm}\\ }= & {\frac{{\displaystyle}\pi}{{\displaystyle}4ab}}\left[ 2b\left(1-\sqrt{1+{\frac{{\displaystyle}8b}{{\displaystyle}\pi^2}}}\right) - \pi^2\left(1-\sqrt{1+{\frac{{\displaystyle}16b^2}{{\displaystyle}\pi^4}}}\right)\right], \end{array}$$ where we have written $b=a\alpha$. Simplifying the expression inside the square brackets we see that it vanishes when either $b=0$ or $b=\pi^2/2$, and that it is positive for $b$ on $(0,\pi^2/2)$ and negative for $b$ larger than $\pi^2/2$. 
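In other words, the first of the two upper bounds is the sharper one when $a\alpha \leq \pi^2/2$, while the second is sharper when $a\alpha \geq \pi^2/2$; this accounts for the piecewise form of the statement below. As a simple numerical illustration, for $a=\alpha=1$ (so that $b=1<\pi^2/2$) the two upper bounds for $\sqrt{{{\lambda_{2}}({\mathcal{I}_{a}},\alpha)}}$ evaluate to approximately $3.684$ and $3.754$, respectively.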
We thus have the following result for the second eigenvalue. The second eigenvalue of the Robin Laplacian on an interval satisfies $${\frac{{\displaystyle}\left(2\pi -a\alpha + \sqrt{4\pi^2+12a\alpha \pi + \alpha^2 a^2}\right)^2}{{\displaystyle}16a^2}} \leq {{\lambda_{2}}({\mathcal{I}_{a}},\alpha)}$$ and $${{\lambda_{2}}({\mathcal{I}_{a}},\alpha)} \leq \left\{ \begin{array}{ll} \left({\frac{{\displaystyle}\pi}{{\displaystyle}2a}} +\sqrt{{\frac{{\displaystyle}\pi^2}{{\displaystyle}4a^2}}+{\frac{{\displaystyle}2\alpha}{{\displaystyle}a}}}\right)^2, & a\alpha\leq {\frac{{\displaystyle}\pi^2}{{\displaystyle}2}}{ \vspace*{2mm}\\ }{\frac{{\displaystyle}\pi^2}{{\displaystyle}16 a^4\alpha^2}} \left( 4a \alpha -\pi^2 + \sqrt{\pi^4+16a^2\alpha^2}\right)^2, & a\alpha\geq {\frac{{\displaystyle}\pi^2}{{\displaystyle}2}}. \end{array} \right.$$ All these bounds are accurate up to the first term in the asymptotics as either $a$ becomes small or $\alpha$ large, except for the first upper bound, which is accurate up to the third term in the asymptotics as $a$ approaches zero. Bounds for the eigenvalues of rectangles ---------------------------------------- The estimates obtained above may now be used to derive bounds for eigenvalues of rectangles. The first eigenvalue of a rectangle with side lengths $A^{1/2}a$ and $A^{1/2}/a$ is a particular case of  and is given by $${{\lambda_{1}}({\mathcal{R}_{A}(a)},\alpha)} = {{\lambda_{1}}({\mathcal{I}_{A^{1/2}a}},\alpha)} + {{\lambda_{1}}({\mathcal{I}_{A^{1/2}/a}},\alpha)}.$$ It is thus possible to bound this from above and below by means of the bounds from the previous sections, with the same being possible for the second eigenvalue of rectangles. The expressions do get quite involved though, and we will concentrate on one of the cases which is relevant throughout the paper, namely, the $k^{\rm th}$ eigenvalue of the disjoint union of $k$ equal squares ${\mathcal{U}_{k}}$ (assumed here to have total area $A$), which coincides with the first eigenvalue of each of the squares. For a total area $A$, we are thus interested in $${{\lambda_{k}}( {\mathcal{U}_{k}} ,\alpha)} = {{\lambda_{1}}({\mathcal{S}_{(A/k)^{1/2}}},\alpha)} = 2 {{\lambda_{1}}({\mathcal{I}_{(A/k)^{1/2}}},\alpha)}.$$ The corresponding bounds obtained directly from Proposition \[prop:firsteiginterv\] are as follows. \[prop:boundkequalsquares\] The $k^{\rm th}$ eigenvalue of the union of $k$ equal squares with total area $A$ satisfies $$\begin{array}{lll} {\frac{{\displaystyle}4\alpha\pi^2 k}{{\displaystyle}A^{1/2}\left(\pi^2 k^{1/2}+2\alpha A^{1/2}\right)}} & \leq & {{\lambda_{k}}( {\mathcal{U}_{k}} ,\alpha)}{ \vspace*{2mm}\\ }& & \leq {\frac{{\displaystyle}\pi^2k^{1/2}}{{\displaystyle}(\pi^2-8)A}}\times\left[ \pi^2k^{1/2}+2\alpha A^{1/2} \right. { \vspace*{2mm}\\ }& & \hspace*{5mm}\left.-\sqrt{64\alpha k^{1/2}A^{1/2}+(\pi^2k^{1/2}-2\alpha A^{1/2})^2}\right]. \end{array}$$ B. Andrews, J. Clutterbuck and D. Hauer, *Non-concavity of Robin eigenfunctions*, preprint (2017), arXiv:1711.02779. P. R. S. Antunes and P. Freitas, *Optimisation of eigenvalues of the Dirichlet Laplacian with a surface area restriction*, Appl. Math. Optim. **73** (2016), 313–328. P. R. S. Antunes and P. Freitas, *Optimal spectral rectangles and lattice ellipses*, Proc. R. Soc. London, Ser. A [**469**]{} (2013), 20120492. P. R. S. Antunes, P. Freitas and J. B. Kennedy, *Asymptotic behaviour and numerical approximation of optimal eigenvalues of the Robin Laplacian*, ESAIM: Control, Optimisation and Calculus of Variations **19** (2013), 438–459. S. 
Ariturk, *Maximal spectral surfaces of revolution converge to a catenoid*, Proc. R. Soc. Lond. Ser. A Math Phys. Eng. Sci. **472** (2016), 20160239, 12 pp. S. Ariturk and R. S. Laugesen, *Optimal stretching for lattice points under convex curves*, Portugaliae Math. **74** (2017), 91-114. M. Becker and E. L. Stark, *On a hierarchy of quolinomial inequalities for $\tan x$*, Univ. Beograd. Publ. Elektrotehn. Fak. Ser. Mat. Fiz. No. 602-633 (1978), 133–138. F. A. Berezin, *Covariant and contravariant symbols of operators*, Izv. Akad. Nauk SSSR Ser. Mat. [**13**]{} (1972), 1134–1167. M. van den Berg, D. Bucur and K. Gittins, *Maximising Neumann eigenvalues on rectangles*, Bull. London Math. Soc. **48** (2016), 877–894. M. van den Berg and K. Gittins, *Minimising Dirichlet eigenvalues on cuboids of unit measure*, Mathematika **63** (2017), 469–482. M. S. Birman and M.Z. Solomjak, *Quantitative analysis in Sobolev imbedding theorems and applications to spectral theory*, American Mathematical Society Translations, ser. 2, 114. American Mathematical Society, Providence, R.I., 1980. M.-H. Bossel, *Membranes [é]{}lastiquement li[é]{}es: [E]{}xtension du th[é]{}or[é]{}me de [R]{}ayleigh-[F]{}aber-[K]{}rahn et de l’in[é]{}galit[é]{} de [C]{}heeger*, C. R. Acad. Sci. Paris Sér. I Math. **302** (1986), 47–50. D. Bucur and P. Freitas, *Asymptotic behaviour of optimal spectral planar domains with fixed perimeter*, J. Math. Phys. **54** (2013), 053504, 6pp. D. Bucur, P. Freitas and J. Kennedy, *The Robin problem*. Chapter 4 in Antoine Henrot (ed.), Shape optimization and spectral theory, de Gruyter Open, Warsaw-Berlin, 2017. B. Colbois and A. El Soufi, *Extremal eigenvalues of the Laplacian on Euclidean domains and closed surfaces*, Math. Z. [**278**]{} (2014), 529–549. D. Daners, *A [F]{}aber-[K]{}rahn inequality for [R]{}obin problems in any space dimension*, Math. Ann. **335** (2006), 767–785. R. L. Frank and L. Geisinger, *Semi-classical analysis of the Laplace operator with Robin boundary conditions*, Bull. Math. Sci. [**2**]{} (2012), 281–319. K. Gittins and S. Larson, *Asymptotic behaviour of cuboids optimising Laplacian eigenvalues*, Integral Equations Operator Theory **89** (2017), 607–629. J. Guo and W. Wang, *Lattice points in stretched model domains of finite type in ${\mathbb{R}}^d$*, preprint (2017), arXiv:1710.09050. R. Kellner, *On a theorem of Polya*, Amer. Math. Monthly **73** (1966), 856–858. J. Kennedy, *An isoperimetric inequality for the second eigenvalue of the Laplacian with Robin boundary conditions*, Proc. Amer. Math. Soc. **137** (2009), 627–633. P. Kröger, *Upper bounds for the Neumann eigenvalues on a bounded domain in Euclidean space*, J. Funct. Anal. [**106**]{} (1992), 353–357. S. Larson, *Maximizing Riesz means of anisotropic harmonic oscillators*, preprint (2017), arXiv:1712.10247. R. Laugesen and S. Liu, *Shifted lattices and asymptotically optimal ellipses*, J. Anal., online first (2018). R. Laugesen and S. Liu, *Optimal stretching points for lattice points and eigenvalues*, preprint (2016), arXiv:1609.06172. P. Li and S.-T. Yau, *On the Schrödinger equation and the eigenvalue problem*, Comm. Math. Phys. [**88**]{} (1983), 309–318. N. F. Marshall, *Stretching convex domains to capture many lattice points*, preprint (2017), arXiv:1707.00682. N. F. Marshall and S. Steinerberger, *Triangles capturing many lattice points*, Mathematika, preprint (2017), arXiv:1706.04170. G. 
Pólya, *Mathematics and plausible reasoning: patterns of plausible inference*, 2nd Edition, Princeton University Press 1968. G. Pólya, *On the eigenvalues of vibrating membranes*, Proc. London Math. Soc. [**11**]{} (1961), 419–433. Yu. Safarov and D. Vassiliev, *The asymptotic distribution of eigenvalues of partial differential operators*, American Mathematical Society, series Translations of Mathematical Monographs, vol. 155 (1997). S. A. Wolf and J. B. Keller, *Range of the first two eigenvalues of the laplacian*, Proc. Roy. Soc. London Ser. A [**447**]{} (1994), 397–412. [^1]: *Mathematics Subject Classification* (2010). 35P15 (35J05 35J25 49R05) [^2]: *Key words and phrases*. Laplacian, Robin boundary conditions, eigenvalues, Pólya’s conjecture [^3]: The work of the authors was supported by the Funda[ç]{}[ã]{}o para a Ci[ê]{}ncia e a Tecnologia, Portugal, via the program “Investigador FCT”, reference IF/01461/2015 (JK), and project PTDC/MAT-CAL/4334/2014 (PF and JK)
**Patrick Concha**$^{\ast}$, **Lucrezia Ravera**$^{\ddag}$, **Evelyn Rodríguez**$^{\dag}$,\ $^{\ast}$*Departamento de Matemática y Física Aplicadas,*\ *Universidad Católica de la Santísima Concepción,*\ *Alonso de Ribera 2850, Concepción, Chile.*\ $^{\ddag}$*INFN, Sezione di Milano,*\ *Via Celoria 16, I-20133 Milano-Italy.*\ $^{\dag}$*Departamento de Ciencias, Facultad de Artes Liberales,*\ *Universidad Adolfo Ibáñez, Viña del Mar-Chile.*\ `patrick.concha@ucsc.cl`, `lucrezia.ravera@mi.infn.it`, `evelyn.rodriguez@edu.uai.cl`, [**Abstract**]{} We present a novel three-dimensional non-relativistic Chern-Simons supergravity theory invariant under a Maxwellian extended Bargmann superalgebra. We first study the non-relativistic limits of the minimal and the $\mathcal{N}=2$ Maxwell superalgebras. We show that a well-defined Maxwellian extended Bargmann supergravity requires constructing by hand a supersymmetric extension of the Maxwellian extended Bargmann algebra by introducing additional fermionic and bosonic generators. The new non-relativistic supergravity action presented here contains the extended Bargmann supergravity as a sub-case. Introduction ============ Three-dimensional non-relativistic (NR) and ultra-relativistic (UR) versions of supergravity theory have only been explored recently in [@ABRS; @BRZ; @BR; @OOTZ; @OOZ; @Rav; @FR]. Although several generalizations and applications of supergravity have been developed by diverse authors over the last four decades, its NR construction remains challenging and has only been approached in three spacetime dimensions. In particular, the formulation of a well-defined NR supergravity action has required the introduction of additional fermionic generators. As in NR bosonic cases, the addition of new generators allows one to construct a non-degenerate invariant bilinear form, which ensures the proper construction of a Chern-Simons (CS) action. NR theories have received renewed interest since they play an important role in approaching condensed matter systems [@Son; @BM; @KLM; @BG; @BGMM; @CHOR; @CHOR2; @Taylor] and NR effective field theories [@Son2; @HS; @GPR; @GJA]. It then seems natural to extend NR gravity theories [@DK; @DBKP; @DGH; @Duval; @DLP; @DLP2; @Horava; @DH; @PS; @ABPR; @ABGR; @HLO; @BCRR; @CS] to the presence of supersymmetry. In particular, NR supergravity models can be seen as a starting point to approach supersymmetric field theories on curved backgrounds by means of localization [@FS; @Pestun]. On the other hand, the Maxwell algebra has received growing interest in recent decades. Such a symmetry was first introduced to describe Minkowski space in the presence of a constant electromagnetic field background [@Schrader; @BCR; @GK]. In the gravity context, the Maxwell algebra and its generalizations have been useful to recover standard General Relativity from CS and Born-Infeld gravity theories [@EHTZ; @GRCS; @CPRS1; @CPRS2; @CPRS3]. More recently, a Maxwell CS formulation in three spacetime dimensions has been explored in [@SSV]. Its solution [@HR; @CMMRSV], generalization to higher spin [@CCFRS], and asymptotic symmetry [@CMMRSV; @CCRS] have subsequently been studied by diverse authors. Further applications of the Maxwell algebra can be found in [@AKL; @DKGS; @AKL2; @CK; @BS; @KSC; @SalgadoReb]. At the supersymmetric level, the minimal Maxwell superalgebra appears to describe a constant Abelian supersymmetric gauge field background in a four-dimensional superspace [@BGKL]. 
Generalizations of the Maxwell superalgebras have then been explored with diverse applications [@BGKL2; @Lukierski; @AILW; @AI; @CR1; @CR2; @PR; @Ravera; @CRR; @KC]. More recently, a three-dimensional CS supergravity theory invariant under the Maxwell superalgebra and its $\mathcal{N}$-extended versions has been explored in [@CFRS; @CFR; @CPR; @Concha]. The NR version of the Maxwell CS gravity theory has only been presented recently [@AFGHZ] (see also [@GKPSR], where the related algebra has been recovered through Lie algebra expansion). Interestingly, the relativistic theory required the presence of three U(1) gauge fields in order to establish a well-defined NR limit and to avoid degeneracy. In the presence of supersymmetry, the NR version of the Maxwell CS supergravity was unknown until now. In this work, we explore the NR limit of the Maxwell superalgebra for $\mathcal{N}=1$ and $\mathcal{N}=2$. In particular, we show that a well-defined NR Maxwellian CS supergravity action requires introducing by hand additional fermionic and bosonic generators. Our model is not only a novel NR supergravity theory without cosmological constant, but also contains the extended Bargmann supergravity as a sub-case. The paper is organized as follows: In Section 2, we briefly review the Maxwellian extended Bargmann gravity introduced in [@AFGHZ]. Sections 3 and 4 contain our main results. In Section 3, we introduce the NR limits of the minimal and the $\mathcal{N}=2$ Maxwell superalgebras. In Section 4, we present the Maxwellian extended Bargmann superalgebra and the NR CS supergravity action. Section 5 is devoted to discussion and possible future developments. Maxwellian extended Bargmann gravity ==================================== In this section, we briefly review the Maxwellian extended Bargmann algebra introduced in [@AFGHZ] and the associated CS gravity theory developed in the same paper in three (2+1) dimensions. In [@AFGHZ] the authors proved that an alternative way to circumvent the degeneracy of the bilinear form in the \[Maxwell\] $\oplus$ $u(1)$ $\oplus$ $u(1)$ system analyzed in the same paper is to add one more $u(1)$ gauge field. The non-vanishing commutation relations of the Maxwell algebra are given by $$\begin{aligned} \left[ J_{A},J_{B}\right] &=&\epsilon _{ABC}J^{C}\,, \notag \\ \left[ J_{A},P_{B}\right] &=&\epsilon _{ABC}P^{C}\,, \notag \\ \left[ J_{A},Z_{B}\right] &=&\epsilon _{ABC}Z^{C}\,, \label{m1} \\ \left[ P_{A},P_{B}\right] &=&\epsilon _{ABC}Z^{C}\,, \notag\end{aligned}$$ where $J_A$ are the spacetime rotations, $P_A$ the spacetime translations, and $Z_A$ are new generators characterized and introduced in [@Schrader; @BCR] ($A=0,1,2$ and $\eta^{AB}=\text{diag}(-,+,+)$). A gauge-invariant CS gravity action in three (meaning 2+1, here as well as in the sequel) dimensions based on the above Maxwell algebra has been constructed in [@SSV; @HR; @AFGHZ; @CMMRSV]. The CS action is constructed using the connection one-form $A = A^A T_A$ taking values in the Maxwell algebra generated by $\lbrace J_A, P_A, Z_A \rbrace$, that is $$A = E^B P_B +W^B J_B + K^B Z_B \, ,$$ where $E^B$, $W^B$, and $K^B$ are one-form fields. 
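For later comparison with the NR curvatures given below, it is useful to record the associated curvature two-form; the following is a standard computation based on the commutation relations above, written here in a notation analogous to the one we will use for the NR curvatures (wedge products are understood): $$F = dA+\frac{1}{2}\left[ A,A\right] = R^{B}(W)\, J_{B} + R^{B}(E)\, P_{B} + R^{B}(K)\, Z_{B}\,,$$ with $$\begin{aligned} R^{B}(W) &=&dW^{B}+\frac{1}{2}\epsilon^{B}{}_{CD}W^{C}W^{D}\,, \notag \\ R^{B}(E) &=&dE^{B}+\epsilon^{B}{}_{CD}W^{C}E^{D}\,, \\ R^{B}(K) &=&dK^{B}+\epsilon^{B}{}_{CD}W^{C}K^{D}+\frac{1}{2}\epsilon^{B}{}_{CD}E^{C}E^{D}\,. \notag\end{aligned}$$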
[The CS form]{} constructed with the invariant bilinear form defines an action for the relativistic gauge theory for the symmetry under consideration as $$\label{genCS} I_{\text{CS}} = \int \langle A \wedge dA + \frac{2}{3} A \wedge A \wedge A \rangle = \int \langle A \wedge dA + \frac{1}{3} A \wedge \left[ A , A % \right] \rangle \, .$$ In the specific case we are now reviewing, when the Maxwell algebra is supplemented with the three additional $U(1)$ generators ($Y_1$, $Y_2$, and $% Y_3$), the connection [one-form]{} involved in the construction reads $$A = E^B P_B +W^B J_B + K^B Z_B + M Y_1 + S Y_2 + T Y_3 \, ,$$ where $M$, $S$, and $T$ are the additional bosonic [gauge]{} fields. Also the bilinear form acquires further non-zero entries due to the presence of the new generators (see [@AFGHZ] for details). In particular, a non-degenerate bilinear form can be obtained from the aforesaid relativistic bilinear form, allowing for a well-defined and finite NR CS action. Specifically, in [@AFGHZ], the contraction leading to the NR [generators]{} is defined through the identifications $$\begin{aligned} P_{0} &=&\frac{\tilde{H}}{2\xi }+\xi \tilde{M}\,,\text{ \ \ \ }P_{a}=\tilde{P% }_{a}\,,\text{ \ \ \ \ \ \ \ }Y_1=\frac{\tilde{H}}{2\xi }-\xi \tilde{M}\,, \notag \\ J_{0} &=&\frac{\tilde{J}}{2}+ \xi^2 \tilde{S} \,,\text{ \ \ \ \ }J_{a}=\xi \tilde{G}_a \,,\text{ \ \ \ \ \ \ }Y_2=\frac{\tilde{J}}{2}-\xi^2 \tilde{S}\,, \label{contr1} \\ Z_{0} &=&\frac{\tilde{Z}}{2 \xi^2}+ \tilde{T} \,,\text{ \ \ \ \ }Z_{a}=\frac{% \tilde{Z}_{a}}{\xi }\,,\text{ \ \ \ \ \ \ }Y_3=\frac{\tilde{Z}}{2\xi ^{2}}-% \tilde{T}\,, \notag\end{aligned}$$ [and by subsequently taking $\xi \rightarrow \infty$. Let us note that]{} the index $A=0,1,2$ has previously been decomposed as $A \rightarrow \lbrace 0, a \rbrace$, with $a=1,2$. [Furthermore,]{} $Y_1$, $Y_2$, and $Y_3$ are the three $U(1)$ generators introduced at the relativistic level. In terms of the NR generators and fields, the [gauge connection one-form]{} of [@AFGHZ], $\tilde{A} = A^A \tilde{T}_A$, is given by $$\tilde{A} = \tau \tilde{H} + e^a \tilde{P}_a + \omega \tilde{J} + \omega^a \tilde{G}_a + k \tilde{Z} + k^a \tilde{Z}_a + m \tilde{M} + s \tilde{S} + t \tilde{T} \, .$$ The [NR version of the]{} Maxwell algebra presented in [AFGHZ]{} was called by the authors Maxwellian Exotic Bargmann (MEB) algebra, and its non-trivial commutations relations read $$\begin{aligned} \left[ \tilde{G}_{a},\tilde{P}_{b}\right] &=&-\epsilon _{ab}\tilde{M}% \,,\qquad \left[ \tilde{G}_{a},\tilde{Z}_{b}\right] =-\epsilon _{ab}\tilde{T}% \,,\text{ \ } \notag \\ \left[ \tilde{H},\tilde{G}_{a}\right] &=&\epsilon _{ab}\tilde{P}% _{b}\,,\qquad \quad \ \left[ \tilde{J},\tilde{Z}_{a}\right] =\epsilon _{ab}% \tilde{Z}_{b}\,, \notag \\ \left[ \tilde{J},\tilde{P}_{a}\right] &=&\epsilon _{ab}\tilde{P}% _{b}\,,\qquad \quad \left[ \tilde{H},\tilde{P}_{a}\right] =\epsilon _{ab}% \tilde{Z}_{b}\,, \label{MEB1} \\ \left[ \tilde{J},\tilde{G}_{a}\right] &=&\epsilon _{ab}\tilde{G}% _{b}\,,\qquad \ \ \left[ \tilde{P}_{a},\tilde{P}_{b}\right] =-\epsilon _{ab}% \tilde{T}\,, \notag \\ \left[ \tilde{G}_{a},\tilde{G}_{b}\right] &=&-\epsilon _{ab}\tilde{S}% \,,\qquad \quad \ \left[ \tilde{Z},\tilde{G}_{a}\right] =\epsilon _{ab}% \tilde{Z}_{b}\,. 
\notag\end{aligned}$$ Such NR algebra admits the following non-vanishing components of the invariant tensor $$\begin{aligned} \left\langle \tilde{G}_a \tilde{G}_b \right\rangle &=& \tilde{\alpha}_0 \delta_{ab} \,, \notag \\ \left\langle \tilde{G}_a \tilde{P}_b \right\rangle &=& \tilde{\alpha}_1 \delta_{ab} \,, \notag \\ \left\langle \tilde{G}_a \tilde{Z}_b \right\rangle &=& \tilde{\alpha}_2 \delta_{ab} \,, \notag \\ \left\langle \tilde{P}_a \tilde{P}_b \right\rangle &=& \tilde{\alpha}_2 \delta_{ab} \,, \label{invt1} \\ \left\langle \tilde{J} \tilde{S} \right\rangle &=& -\tilde{\alpha}_0 \,, \notag \\ \left\langle \tilde{J} \tilde{M} \right\rangle &=& -\tilde{\alpha}_1 \, = \, \left\langle \tilde{H} \tilde{S} \right\rangle \,, \notag \\ \left\langle \tilde{J} \tilde{T} \right\rangle &=& -\tilde{\alpha}_2 \, = \, \left\langle \tilde{H} \tilde{M} \right\rangle \,. \notag\end{aligned}$$ This bilinear form is non-degenerate if $\tilde{\alpha}_2 \neq 0$. The MEB curvature two-forms are given by $$\begin{aligned} R\left( \omega \right) &=&d\omega \,, \notag \\ R^{a}\left( \omega ^{b}\right) &=&d\omega ^{a}+\epsilon ^{ac}\omega \omega_{c}\,, \notag \\ R\left( \tau \right) &=& d\tau \,, \notag \\ R^{a}\left( e^{b}\right) &=& d e^a + \epsilon ^{ac}\omega e_{c} + \epsilon ^{ac}\tau \omega_{c} \,, \notag \\ R\left( k\right) &=& dk \,, \label{curvMEB} \\ R^{a}\left( k^{b}\right) &=& d k^a + \epsilon ^{ac}\omega k_{c} + \epsilon ^{ac}\tau e_{c} + \epsilon ^{ac} k \omega_{c} \,, \notag \\ R\left( m\right) &=& dm + \epsilon ^{ac} e_a \omega_{c} \,, \notag \\ R\left( s\right) &=&ds+\frac{1}{2}\epsilon ^{ac}\omega _{a}\omega _{c}\,, \notag \\ R\left( t\right) &=& dt + \epsilon ^{ac}\omega_{a} k_{c} + \frac{1}{2} \epsilon ^{ac} e_{a} e_{c} \,. \notag\end{aligned}$$ The NR three-dimensional CS action obtained in [@AFGHZ] reads, up to boundary terms, as follows: $$\begin{aligned} I_{\text{MEB}} &=& \int \Bigg \lbrace \tilde{\alpha}_{0}\bigg[ \omega _{a}R^{a}(\omega^{b})-2sR\left( \omega \right) \bigg] +\tilde{\alpha}_{1}\bigg[ 2e_{a}R^{a}(\omega ^{b})-2mR(\omega )-2\tau R(s) \bigg] \notag \\ && +\tilde{\alpha}_{2} \bigg[ e_{a}R^{a}\left( e^{b}\right) +k_{a}R^{a}\left( \omega ^{b}\right) +\omega _{a}R^{a}\left( k^{b}\right) -2sR\left( k\right) -2mR\left( \tau \right) \notag \\ && - 2tR\left( \omega \right) \bigg] \Bigg \rbrace \,. \label{CS1}\end{aligned}$$ As was noticed in [@AFGHZ], the NR CS action has three independent sectors proportional to three arbitrary constants, $\tilde{\alpha}_0$, $\tilde{\alpha}_1$, and $\tilde{\alpha}_2$. The first term corresponds to the so-called exotic NR gravity. The second term is the CS action for the extended Bargmann algebra [@LL; @Grigore; @Bose; @DHO2; @JN; @HP], while the last term reproduces the CS action for a new NR Maxwell algebra. Let us note that, since the bilinear form does not result to acquire degeneracy in the contraction process, [the equations of motion from the NR action (\[CS1\]) are given by the vanishing of all the curvatures (\[curvMEB\]).]{} On the supersymmetric extension of the Maxwellian [e]{}xtended Bargmann algebra =============================================================================== In this section[,]{} we explore the supersymmetric extension of the NR Maxwell algebra by applying a NR limit to the $\mathcal{N}=1$ and $\mathcal{N}=2$ Maxwell superalgebra. 
Interestingly[,]{} we show that, in order to have a well-defined NR superalgebra, we have to consider the NR limit of a centrally extended $\mathcal{N}=2$ Maxwell superalgebra endowed with a $% \mathfrak{so}(2)$ generator. Indeed[,]{} a true supersymmetric [extension]{} of the MEB algebra in which the anti-commutator of two fermionic charges gives a time and a space translation requires, as in the Bargmann case, at least $% \mathcal{N}=2$ supersymmetry. However, as we shall see in the next section, it is necessary to introduce by hand additional fermionic and bosonic generators in order to obtain a MEB superalgebra which allows the proper construction of a NR CS supergravity. In three spacetime dimensions, the minimal Maxwell superalgebra is spanned by the set of generators $\left\{ J_{A},P_{A},Z_{A},Q_{\alpha },\Sigma _{\alpha }\right\} $ [@CPR]. Such supersymmetric extension of the Maxwell algebra is characterized by the presence of an additional Majorana fermionic generator $\Sigma_\alpha$ whose presence assures the Jacobi identity $\left( P_a,Q_\alpha,Q_\beta\right)$. The introduction of a second spinorial charge is not new and have previously been considered in superstring theory [@Green] and $D=11$ supergravity [@AF; @AAR1; @AAR2]. The (anti-)commutation relations of the minimal Maxwell superalgebra are given by$$\begin{aligned} \left[ J_{A},J_{B}\right] &=&\epsilon _{ABC}J^{C}\,, \notag \\ \left[ J_{A},P_{B}\right] &=&\epsilon _{ABC}P^{C}\,, \notag \\ \left[ J_{A},Z_{B}\right] &=&\epsilon _{ABC}Z^{C}\,, \notag\\ \left[ P_{A},P_{B}\right] &=&\epsilon _{ABC}Z^{C}\,, \notag \\ \left[ J_{A},Q_{\alpha }\right] &=&-\frac{1}{2}\,\left( \gamma _{A}\right) _{\alpha }^{\text{ }\beta }Q_{\beta }\, , \text{ \ \ } \label{sm1} \\ \left[ J_{A},\Sigma _{\alpha }\right] &=&-\frac{1}{2}\,\left( \gamma _{A}\right) _{\alpha }^{\text{ }\beta }\Sigma _{\beta }\,, \notag \\ \left[ P_{A},Q_{\alpha }\right] &=&-\frac{1}{2}\,\left( \gamma _{A}\right) _{\alpha }^{\text{ }\beta }\Sigma _{\beta }\,,\text{ } \notag \\ \left\{ Q_{\alpha },Q_{\beta }\right\} &=&-\left( \gamma ^{A}C\right) _{\alpha \beta }P_{A}\,, \notag \\ \left\{ Q_{\alpha },\Sigma _{\beta }\right\} &=&-\,\left( \gamma ^{A}C\right) _{\alpha \beta }Z_{A}\,, \notag\end{aligned}$$where $\alpha =1,2$ are spinorial indices, $C$ is the charge conjugation matrix, and $\gamma ^{A}$ are the Dirac matrices in three spacetime dimensions. As was discussed in [@AFGHZ], it is necessary to include three additional $U\left( 1\right) $ generators given by $Y_{1}$, $Y_{2}$, and $% Y_{3}$ in order to get the bosonic MEB algebra as a NR limit. 
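As an illustrative check of how the bosonic MEB commutators emerge from the contraction (using the identifications of the previous section, and assuming the conventions $\epsilon_{ab}\equiv\epsilon_{0ab}$ and $\eta^{AB}=\text{diag}(-,+,+)$), consider for instance $$\left[ J_{a},P_{b}\right] =\epsilon _{ab0}P^{0}=-\epsilon _{ab}\left( \frac{\tilde{H}}{2\xi }+\xi \tilde{M}\right) \,,$$ so that $$\left[ \tilde{G}_{a},\tilde{P}_{b}\right] =\frac{1}{\xi }\left[ J_{a},P_{b}\right] =-\epsilon _{ab}\left( \frac{\tilde{H}}{2\xi ^{2}}+\tilde{M}\right) \rightarrow -\epsilon _{ab}\tilde{M}\, \quad \text{as }\xi \rightarrow \infty \,,$$ reproducing the corresponding MEB commutator.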
At the supersymmetric level, a NR contraction can be applied by considering the rescaling of the bosonic generators as in (\[contr1\]) and the following rescaling, with a dimensionless parameter $\xi $, of the fermionic generators: $$Q=\sqrt{\xi }\tilde{Q}^{-}\,,\qquad \Sigma =\frac{1}{\sqrt{\xi }}\tilde{% \Sigma}^{-}\,.$$A particular supersymmetric extension of the MEB algebra is obtained from the NR contraction $\xi \rightarrow \infty $ of (\[sm1\]): $$\begin{aligned} \left[ \tilde{G}_{a},\tilde{P}_{b}\right] &=&-\epsilon _{ab}\tilde{M}% \,,\qquad \ \ \ \ \ \ \ \ \left[ \tilde{G}_{a},\tilde{Z}_{b}\right] =-\epsilon _{ab}\tilde{T}\,,\text{ \ } \notag \\ \left[ \tilde{H},\tilde{G}_{a}\right] &=&\epsilon _{ab}\tilde{P}% _{b}\,,\qquad \quad \ \ \ \ \ \ \ \ \ \left[ \tilde{J},\tilde{Z}_{a}\right] =\epsilon _{ab}\tilde{Z}_{b}\,, \notag \\ \left[ \tilde{J},\tilde{P}_{a}\right] &=&\epsilon _{ab}\tilde{P}% _{b}\,,\qquad \quad \ \ \ \ \ \ \ \ \left[ \tilde{H},\tilde{P}_{a}\right] =\epsilon _{ab}\tilde{Z}_{b}\,, \notag \\ \left[ \tilde{J},\tilde{G}_{a}\right] &=&\epsilon _{ab}\tilde{G}% _{b}\,,\qquad \ \ \ \ \ \ \ \ \ \ \left[ \tilde{P}_{a},\tilde{P}_{b}\right] =-\epsilon _{ab}\tilde{T}\,, \notag \\ \left[ \tilde{G}_{a},\tilde{G}_{b}\right] &=&-\epsilon _{ab}\tilde{S}% \,,\qquad \quad \ \ \ \ \ \ \ \left[ \tilde{Z},\tilde{G}_{a}\right] =\epsilon _{ab}\tilde{Z}_{b}\,, \label{N1} \\ \left[ \tilde{J},\tilde{Q}_{\alpha }^{-}\right] &=&-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{Q}_{\beta }^{-}\,,\text{ \ \ \ \ \ }\left[ \tilde{J},\tilde{\Sigma}_{\alpha }^{-}\right] =-\frac{1}{2}% \left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{\Sigma}_{\beta }^{-}\,, \notag \\ \left[ \tilde{H},\tilde{Q}_{\alpha }^{-}\right] &=&-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{\Sigma}_{\beta }^{-}\,, \notag \\ \left\{ \tilde{Q}_{\alpha }^{-},\tilde{Q}_{\beta }^{-}\right\} &=&-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{M}\,,\quad \left\{ \tilde{Q}% _{\alpha }^{-},\tilde{\Sigma}_{\beta }^{-}\right\} =-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{T}\,. \notag\end{aligned}$$ Although the (anti-)commutation relations (\[N1\]) are well-defined and satisfy the Jacobi identities, we cannot say that the $\mathcal{N}=1$ MEB superalgebra obtained here is a true supersymmetry algebra. Indeed, the anti-commutator of two supercharges leads to a central charge transformation instead of a time and space translation. This is analogous to the $\mathcal{N}=1$ Bargmann superalgebra case [@ABRS]. One way to circumvent such difficulty is to apply the NR contraction to a $% \mathcal{N}=2$ relativistic Maxwell superalgebra. The $\mathcal{N}=2$ supersymmetric extension of the Maxwell algebra has been explored by diverse authors [@AILW; @CR1; @CFR]. Here, we shall focus on the $\mathcal{N}=2$ centrally extended Maxwell superalgebra endowed with a $\mathfrak{so}\left( 2\right) $ internal symmetry generator introduced in [@Concha]. 
Such an $\mathcal{N}=2$ Maxwell superalgebra is spanned by the set of generators $\left\{ J_{A},P_{A},Z_{A},\mathcal{B},\mathcal{Z},Q_{\alpha }^{i},\Sigma _{\alpha }^{i}\right\} $, which satisfy the following non-vanishing (anti-)commutation relations:
$$\begin{aligned}
\left[ J_{A},J_{B}\right] &=&\epsilon _{ABC}J^{C}\,, \notag \\
\left[ J_{A},P_{B}\right] &=&\epsilon _{ABC}P^{C}\,, \notag \\
\left[ J_{A},Z_{B}\right] &=&\epsilon _{ABC}Z^{C}\,, \notag \\
\left[ P_{A},P_{B}\right] &=&\epsilon _{ABC}Z^{C}\,, \notag \\
\left[ J_{A},Q_{\alpha }^{i}\right] &=&-\frac{1}{2}\,\left( \gamma _{A}\right) _{\alpha }^{\text{ }\beta }Q_{\beta }^{i}\,,\quad \ \left[ J_{A},\Sigma _{\alpha }^{i}\right] =-\frac{1}{2}\,\left( \gamma _{A}\right) _{\alpha }^{\text{ }\beta }\Sigma _{\beta }^{i}\,, \label{N2Mnocs} \\
\left[ P_{A},Q_{\alpha }^{i}\right] &=&-\frac{1}{2}\,\left( \gamma _{A}\right) _{\alpha }^{\text{ }\beta }\Sigma _{\beta }^{i}\,, \notag \\
\left[ Q_{\alpha }^{i},\mathcal{B}\right] &=& \frac{1}{2}\epsilon ^{ij}\Sigma _{\alpha }^{j}\,, \notag \\
\left\{ Q_{\alpha }^{i},Q_{\beta }^{j}\right\} &=&-\delta ^{ij}\left( \gamma ^{A}C\right) _{\alpha \beta }P_{A}\,-C_{\alpha \beta }\epsilon ^{ij}\mathcal{B}\,, \notag \\
\left\{ Q_{\alpha }^{i},\Sigma _{\beta }^{j}\right\} &=&-\delta ^{ij}\,\left( \gamma ^{A}C\right) _{\alpha \beta }Z_{A}\,-C_{\alpha \beta }\epsilon ^{ij}\mathcal{Z}\,, \notag\end{aligned}$$
where $i=1,2$ labels the supercharges. Let us note that the presence of an $\mathfrak{so}\left( 2\right) $ internal symmetry generator is crucial in order to admit a non-degenerate invariant inner product [@Concha]. Then, following [@LPSZ], let us consider the following definitions of the fermionic generators
$$\begin{aligned}
Q_{\alpha }^{\pm } &=&\frac{1}{\sqrt{2}}\left( Q_{\alpha }^{1}\pm \epsilon _{\alpha \beta }Q_{\beta }^{2}\right) \,, \notag \\
\Sigma _{\alpha }^{\pm } &=&\frac{1}{\sqrt{2}}\left( \Sigma _{\alpha }^{1}\pm \epsilon _{\alpha \beta }\Sigma _{\beta }^{2}\right) \,.\end{aligned}$$
A dimensionless parameter $\xi $ can be introduced by considering the following rescaling of the generators and central extensions:
$$\begin{aligned}
J_{0} &=&\tilde{J}\,,\qquad \qquad \ \ J_{a}=\xi \tilde{G}_{a}\,, \notag \\
P_{0} &=&\frac{\tilde{H}}{2\xi }+\xi \tilde{M}\,,\text{ \ \ \ }P_{a}=\tilde{P}_{a}\,,\text{ \ \ \ \ \ \ \ }\mathcal{B}=\frac{\tilde{H}}{2\xi }-\xi \tilde{M}\,, \notag \\
Z_{0} &=&\frac{\tilde{Z}}{2\xi ^{2}}+\tilde{T}\,,\text{ \ \ \ \ }Z_{a}=\frac{\tilde{Z}_{a}}{\xi }\,,\text{ \ \ \ \ \ \ }\mathcal{Z}=\frac{\tilde{Z}}{2\xi ^{2}}-\tilde{T}\,, \\
Q_{\alpha }^{-} &=&\sqrt{\xi }\tilde{Q}_{\alpha }^{-}\,,\quad \quad Q_{\alpha }^{+}=\frac{1}{\sqrt{\xi }}\tilde{Q}_{\alpha }^{+}\,, \notag \\
\Sigma _{\alpha }^{-} &=&\frac{1}{\sqrt{\xi }}\tilde{\Sigma}_{\alpha }^{-}\,,\qquad \Sigma _{\alpha }^{+}=\frac{1}{\xi ^{3/2}}\tilde{\Sigma}_{\alpha }^{+}\,.
\notag\end{aligned}$$
Then, after taking the limit $\xi \rightarrow \infty $, a particular $\mathcal{N}=2$ Maxwellian Bargmann superalgebra is obtained; its (anti-)commutation relations are given by the purely bosonic commutators
$$\begin{aligned}
\left[ \tilde{G}_{a},\tilde{P}_{b}\right] &=&-\epsilon _{ab}\tilde{M}\,,\text{ \ \ \ \ \ \ }\left[ \tilde{G}_{a},\tilde{Z}_{b}\right] =-\epsilon _{ab}\tilde{T}\,, \notag \\
\left[ \tilde{H},\tilde{G}_{a}\right] &=&\epsilon _{ab}\tilde{P}_{b}\,,\text{ \ \ \ \ \ \ \ \ \ \ }\left[ \tilde{J},\tilde{Z}_{a}\right] =\epsilon _{ab}\tilde{Z}_{b}\,, \notag \\
\left[ \tilde{J},\tilde{P}_{a}\right] &=&\epsilon _{ab}\tilde{P}_{b}\,,\text{ \ \ \ \ \ \ \ \ \ }\left[ \tilde{H},\tilde{P}_{a}\right] =\epsilon _{ab}\tilde{Z}_{b}\,, \label{MB} \\
\left[ \tilde{J},\tilde{G}_{a}\right] &=&\epsilon _{ab}\tilde{G}_{b}\,,\text{ \ \ \ \ \ \ \ \ }\left[ \tilde{P}_{a},\tilde{P}_{b}\right] =-\epsilon _{ab}\tilde{T}\,, \notag \\
\left[ \tilde{Z},\tilde{G}_{a}\right] &=&\epsilon _{ab}\tilde{Z}_{b}\,, \notag\end{aligned}$$
along with
$$\begin{aligned}
\left[ \tilde{J},\tilde{Q}_{\alpha }^{\pm }\right] &=&-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{Q}_{\beta }^{\pm }\,,\qquad \ \ \left[ \tilde{J},\tilde{\Sigma}_{\alpha }^{\pm }\right] =-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{\Sigma}_{\beta }^{\pm }\,, \notag \\
\left[ \tilde{H},\tilde{Q}_{\alpha }^{-}\right] &=&-\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{\Sigma}_{\beta }^{-}\,,\qquad \quad \left[ \tilde{P}_{a},\tilde{Q}_{\alpha }^{+}\right] =-\frac{1}{2}\left( \gamma _{a}\right) _{\alpha }^{\text{ }\beta }\tilde{\Sigma}_{\beta }^{-}\,, \notag \\
\left[ \tilde{G}_{a},\tilde{Q}_{\alpha }^{+}\right] &=&-\frac{1}{2}\left( \gamma _{a}\right) _{\alpha }^{\text{ }\beta }\tilde{Q}_{\beta }^{-}\,,\qquad \left[ \tilde{G}_{a},\tilde{\Sigma}_{\alpha }^{+}\right] =-\frac{1}{2}\left( \gamma _{a}\right) _{\alpha }^{\text{ \ }\beta }\tilde{\Sigma}_{\beta }^{-}\,, \notag \\
\left\{ \tilde{Q}_{\alpha }^{-},\tilde{Q}_{\beta }^{-}\right\} &=&-2\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{M}\,,\quad \left\{ \tilde{Q}_{\alpha }^{+},\tilde{Q}_{\beta }^{-}\right\} =-\left( \gamma ^{a}C\right) _{\alpha \beta }\tilde{P}_{a}\,, \label{N2} \\
\left\{ \tilde{Q}_{\alpha }^{+},\tilde{Q}_{\beta }^{+}\right\} &=&-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{H}\,,\quad \quad \left\{ \tilde{Q}_{\alpha }^{-},\tilde{\Sigma}_{\beta }^{-}\right\} =-2\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{T}\,, \notag \\
\left\{ \tilde{Q}_{\alpha }^{\pm },\tilde{\Sigma}_{\beta }^{\mp }\right\} &=&-\left( \gamma ^{a}C\right) _{\alpha \beta }\tilde{Z}_{a}\,,\quad \ \ \left\{ \tilde{Q}_{\alpha }^{+},\tilde{\Sigma}_{\beta }^{+}\right\} =-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{Z}\,. \notag\end{aligned}$$
Notice that, unlike the $\mathcal{N}=1$ superalgebra, the $\mathcal{N}=2$ Maxwellian Bargmann superalgebra obtained here can be seen as a true supersymmetry algebra. In particular, let us note the presence of a non-vanishing commutator between the $\tilde{G}_{a}$ generator and the supersymmetry generators. Nevertheless, this superalgebra does not contain the MEB algebra as a subalgebra. Indeed, the bosonic subalgebra (\[MB\]) can be seen as a non-relativistic version of a \[Maxwell\]$\oplus u\left( 1\right) \oplus u\left( 1\right) $ algebra.
Moreover, although the $\mathcal{N}=2$ NR Maxwell superalgebra (\[MB\])-(\[N2\]) has the desired features of a true superalgebra, it is not a good candidate for constructing a three-dimensional CS supergravity action. Indeed, in order to have a NR supergravity action based on a supersymmetric extension of the MEB algebra, we need a well-defined invariant tensor, which requires the introduction by hand of additional fermionic and bosonic generators. The explicit Maxwellian extended Bargmann superalgebra that allows the construction of a NR supergravity action is presented in the next section.

Maxwellian extended Bargmann supergravity
=========================================

Here, we present the explicit form of the Maxwellian extended Bargmann superalgebra which allows the construction of a NR supergravity action. Subsequently, we develop the aforementioned NR supergravity action by exploiting the CS construction in three dimensions.

Maxwellian extended Bargmann superalgebra
-----------------------------------------

As we have discussed in the previous section, the $\mathcal{N}=2$ NR Maxwell superalgebra given by (\[MB\])-(\[N2\]) does not allow for the proper construction of a NR CS supergravity action, although its relativistic analogue is well-defined. In order to have a proper NR CS supergravity action based on a supersymmetric extension of the MEB algebra, one needs to find a NR superalgebra which not only contains the MEB algebra as a subalgebra but also admits a non-degenerate invariant supertrace. Here we construct by hand a supersymmetric extension of the MEB algebra by introducing six Majorana fermionic generators $\tilde{Q}^{+}$, $\tilde{Q}^{-}$, $\tilde{\Sigma}^{+}$, $\tilde{\Sigma}^{-}$, $\tilde{R}$, and $\tilde{W}$. Let us note that the presence of the $\tilde{R}$ and $\tilde{W}$ generators is similar to what happens in the extended Bargmann superalgebra presented in [@BR] and in the extended Newtonian superalgebra of [@OOTZ], in which an $\tilde{R}$ generator is considered. Furthermore, we introduce six extra bosonic generators $Y_1$, $Y_2$, $U_1$, $U_2$, $B_1$, and $B_2$. Both $B_1$ and $B_2$ are central, while the others act non-trivially on the spinor generators, similarly to the extra bosonic generators introduced in the extended Newton-Hooke supergravity [@OOZ].
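As a bookkeeping aid (our own summary, collecting only what is stated in the text and anticipating the gauge connection (\[oneform2\]) below), the generator content just described amounts to
$$\underbrace{12}_{\tilde{J},\tilde{G}_{a},\tilde{S},\tilde{H},\tilde{P}_{a},\tilde{M},\tilde{Z},\tilde{Z}_{a},\tilde{T}}+\underbrace{6}_{\tilde{Y}_{1,2},\tilde{U}_{1,2},\tilde{B}_{1,2}}=18\ \text{bosonic components}\,,\qquad \underbrace{6\times 2}_{\tilde{Q}^{\pm },\tilde{\Sigma}^{\pm },\tilde{R},\tilde{W}}=12\ \text{fermionic components}\,,$$
which matches the $18$ bosonic one-form fields and the six spinorial one-forms appearing in the gauge connection (\[oneform2\]) of the next subsection.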
The proposed supersymmetric extension of the MEB algebra is generated by the set of bosonic and fermionic generators $$\{\tilde{J},\tilde{G}_{a},\tilde{S},\tilde{H},\tilde{P}_{a},\tilde{M},\tilde{Z},\tilde{Z}_{a},\tilde{T},\tilde{Y}_{1},\tilde{Y}_{2},\tilde{U}_{1},\tilde{U}_{2},\tilde{B}_{1},\tilde{B}_{2},\tilde{Q}_{\alpha }^{+},\tilde{Q}_{\alpha }^{-},\tilde{R}_{\alpha },\tilde{\Sigma}_{\alpha }^{+},\tilde{\Sigma}_{\alpha }^{-},\tilde{W}_{\alpha }\}.$$ Such generators satisfy the MEB algebra (\[MEB1\]) along with the following non-vanishing (anti-)commutation relations:
$$\begin{aligned}
\left[ \tilde{J},\tilde{Q}_{\alpha }^{\pm }\right] &=&-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{Q}_{\beta }^{\pm }\,,\qquad \ \ \left[ \tilde{J},\tilde{\Sigma}_{\alpha }^{\pm }\right] =-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{\Sigma}_{\beta }^{\pm }\,, \notag \\
\left[ \tilde{H},\tilde{Q}_{\alpha }^{\pm }\right] &=&-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{\Sigma}_{\beta }^{\pm }\,,\qquad \ \left[ \tilde{P}_{a},\tilde{Q}_{\alpha }^{+}\right] =-\frac{1}{2}\left( \gamma _{a}\right) _{\alpha }^{\text{ }\beta }\tilde{\Sigma}_{\beta }^{-}\,, \notag \\
\left[ \tilde{G}_{a},\tilde{Q}_{\alpha }^{+}\right] &=&-\frac{1}{2}\left( \gamma _{a}\right) _{\alpha }^{\text{ }\beta }\tilde{Q}_{\beta }^{-}\,,\qquad \ \left[ \tilde{G}_{a},\tilde{Q}_{\alpha }^{-}\right] =-\frac{1}{2}\left( \gamma _{a}\right) _{\alpha }^{\text{ }\beta }\tilde{R}_{\beta }\,, \notag \\
\left[ \tilde{G}_{a},\tilde{\Sigma}_{\alpha }^{+}\right] &=&-\frac{1}{2}\left( \gamma _{a}\right) _{\alpha }^{\text{ }\beta }\tilde{\Sigma}_{\beta }^{-}\,,\ \ \quad \ \ \ \left[ \tilde{G}_{a},\tilde{\Sigma}_{\alpha }^{-}\right] =-\frac{1}{2}\left( \gamma _{a}\right) _{\alpha }^{\text{ }\beta }\tilde{W}_{\beta }\,, \notag \\
\left[ \tilde{P}_{a},\tilde{Q}_{\alpha }^{-}\right] &=&-\frac{1}{2}\left( \gamma _{a}\right) _{\alpha }^{\text{ }\beta }\tilde{W}_{\beta }\,,\qquad \quad \ \left[ \tilde{J},\tilde{R}_{\alpha }\right] =-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{R}_{\beta }\,, \label{sMEB2} \\
\left[ \tilde{J},\tilde{W}_{\alpha }\right] &=&-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{W}_{\beta }\,,\qquad \quad \left[ \tilde{H},\tilde{R}_{\alpha }\right] =-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{W}_{\beta }\,, \notag \\
\left[ \tilde{S},\tilde{Q}_{\alpha }^{+}\right] &=&-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{R}_{\beta }\,,\qquad \quad \ \left[ \tilde{S},\tilde{\Sigma}_{\alpha }^{+}\right] =-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{W}_{\beta }\,, \notag \\
\left[ \tilde{M},\tilde{Q}_{\alpha }^{+}\right] &=&-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha }^{\text{ }\beta }\tilde{W}_{\beta }\,, \notag\end{aligned}$$
$$\begin{aligned}
\left[ \tilde{Y}_{1},\tilde{Q}_{\alpha }^{+}\right] &=&\frac{1}{2}\left( \gamma _{0}\right) _{\alpha \beta }\tilde{Q}_{\beta }^{+}\,,\qquad \quad \left[ \tilde{Y}_{2},\tilde{Q}_{\alpha }^{+}\right] =\frac{1}{2}\left( \gamma _{0}\right) _{\alpha \beta }\tilde{R}_{\beta }\,, \notag \\
\left[ \tilde{Y}_{1},\tilde{Q}_{\alpha }^{-}\right] &=&-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha \beta }\tilde{Q}_{\beta }^{-}\,,\qquad \ \left[ \tilde{Y}_{1},\tilde{R}_{\alpha }\right] =\frac{1}{2}\left( \gamma _{0}\right) _{\alpha \beta }\tilde{R}_{\beta }\,, \notag \\
\left[ \tilde{Y}_{1},\tilde{\Sigma}_{\alpha }^{+}\right] &=&\frac{1}{2}\left( \gamma _{0}\right) _{\alpha \beta }\tilde{\Sigma}_{\beta }^{+}\,,\qquad \quad \left[ \tilde{Y}_{2},\tilde{\Sigma}_{\alpha }^{+}\right] =\frac{1}{2}\left( \gamma _{0}\right) _{\alpha \beta }\tilde{W}_{\beta }\,, \notag \\
\left[ \tilde{Y}_{1},\tilde{\Sigma}_{\alpha }^{-}\right] &=&-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha \beta }\tilde{\Sigma}_{\beta }^{-}\,,\qquad \ \left[ \tilde{Y}_{1},\tilde{W}_{\alpha }\right] =\frac{1}{2}\left( \gamma _{0}\right) _{\alpha \beta }\tilde{W}_{\beta }\,, \label{sMEB2a} \\
\left[ \tilde{U}_{1},\tilde{Q}_{\alpha }^{+}\right] &=&\frac{1}{2}\left( \gamma _{0}\right) _{\alpha \beta }\tilde{\Sigma}_{\beta }^{+}\,,\qquad \quad \left[ \tilde{U}_{1},\tilde{Q}_{\alpha }^{-}\right] =-\frac{1}{2}\left( \gamma _{0}\right) _{\alpha \beta }\tilde{\Sigma}_{\beta }^{-}\,, \notag \\
\left[ \tilde{U}_{2},\tilde{Q}_{\alpha }^{+}\right] &=&\frac{1}{2}\left( \gamma _{0}\right) _{\alpha \beta }\tilde{W}_{\beta }\,,\qquad \quad \left[ \tilde{U}_{1},\tilde{R}_{\alpha }\right] =\frac{1}{2}\left( \gamma _{0}\right) _{\alpha \beta }\tilde{W}_{\beta }\,, \notag\end{aligned}$$
$$\begin{aligned}
\left\{ \tilde{Q}_{\alpha }^{-},\tilde{Q}_{\beta }^{-}\right\} &=&-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{M}{+}\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{U}_{2}\,, \notag \\
\left\{ \tilde{Q}_{\alpha }^{+},\tilde{Q}_{\beta }^{-}\right\} &=&-\left( \gamma ^{a}C\right) _{\alpha \beta }\tilde{P}_{a}\,, \notag \\
\left\{ \tilde{Q}_{\alpha }^{+},\tilde{Q}_{\beta }^{+}\right\} &=&-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{H}-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{U}_{1}\,, \notag \\
\left\{ \tilde{Q}_{\alpha }^{-},\tilde{\Sigma}_{\beta }^{-}\right\} &=&-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{T}{+}\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{B}_{2}\,, \notag \\
\left\{ \tilde{Q}_{\alpha }^{\pm },\tilde{\Sigma}_{\beta }^{\mp }\right\} &=&-\left( \gamma ^{a}C\right) _{\alpha \beta }\tilde{Z}_{a}\,, \label{sMEB3} \\
\left\{ \tilde{Q}_{\alpha }^{+},\tilde{\Sigma}_{\beta }^{+}\right\} &=&-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{Z}-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{B}_{1}\,, \notag \\
\left\{ \tilde{Q}_{\alpha }^{+},\tilde{R}_{\beta }\right\} &=&-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{M}-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{U}_{2}\,, \notag \\
\left\{ \tilde{Q}_{\alpha }^{+},\tilde{W}_{\beta }\right\} &=&-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{T}-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{B}_{2}\,, \notag \\
\left\{ \tilde{\Sigma}_{\alpha }^{+},\tilde{R}_{\beta }\right\} &=&-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{T}-\left( \gamma ^{0}C\right) _{\alpha \beta }\tilde{B}_{2}\,. \notag\end{aligned}$$
The superalgebra given by (\[MEB1\]), (\[sMEB2\]), (\[sMEB2a\]), and (\[sMEB3\]) will be referred to as the Maxwellian extended Bargmann superalgebra. One can note that the $\tilde{S}$ generator is no longer a central charge in this supersymmetric extension of the MEB algebra, but acts non-trivially on the spinor generators $\tilde{Q}^{+}$ and $\tilde{\Sigma}^{+}$. It is important to emphasize that the MEB superalgebra presented here has not been obtained through a NR limit of a relativistic superalgebra. Furthermore, the supersymmetric extension of the MEB algebra allowing a well-defined CS supergravity action need not be unique.
It would therefore be interesting to study further supersymmetric extensions of the MEB algebra and the possibility of obtaining them by applying a NR limit to a relativistic theory.

Non-relativistic Chern-Simons supergravity action
-------------------------------------------------

Let us construct a NR CS supergravity action based on the MEB superalgebra previously introduced. The non-vanishing components of the invariant tensor for the MEB superalgebra are given by (\[invt1\]) along with
$$\begin{aligned}
\left\langle \tilde{Z}\tilde{S}\right\rangle &=&-\tilde{\alpha}_{2}\,, \notag \\
\left\langle \tilde{Y}_{1}\tilde{Y}_{2}\right\rangle &=&\tilde{\alpha}_{0}\,, \label{invt2p1} \\
\left\langle \tilde{Y}_{1}\tilde{U}_{2}\right\rangle &=&\tilde{\alpha}_{1}\,=\,\left\langle \tilde{U}_{1}\tilde{Y}_{2}\right\rangle \,, \notag \\
\left\langle \tilde{Y}_{1}\tilde{B}_{2}\right\rangle &=&\tilde{\alpha}_{2}\,=\,\left\langle \tilde{U}_{1}\tilde{U}_{2}\right\rangle \,=\,\left\langle \tilde{B}_{1}\tilde{Y}_{2}\right\rangle \,, \notag\end{aligned}$$
and
$$\begin{aligned}
\left\langle \tilde{Q}_{\alpha }^{-}\tilde{Q}_{\beta }^{-}\right\rangle &=&2\tilde{\alpha}_{1}C_{\alpha \beta }\,=\,\left\langle \tilde{Q}_{\alpha }^{+}\tilde{R}_{\beta }\right\rangle \,, \label{invt2p2} \\
\left\langle \tilde{Q}_{\alpha }^{-}\tilde{\Sigma}_{\beta }^{-}\right\rangle &=&2\tilde{\alpha}_{2}C_{\alpha \beta }\,=\,\left\langle \tilde{\Sigma}_{\alpha }^{+}\tilde{R}_{\beta }\right\rangle \,=\,\left\langle \tilde{Q}_{\alpha }^{+}\tilde{W}_{\beta }\right\rangle \,, \notag\end{aligned}$$
where $\tilde{\alpha}_{0}$, $\tilde{\alpha}_{1}$, and $\tilde{\alpha}_{2}$ are arbitrary constants. The bilinear form associated with the MEB superalgebra is non-degenerate for $\tilde{\alpha}_{2}\neq 0$, analogously to the purely bosonic case [@AFGHZ]. On the other hand, the gauge connection one-form $\tilde{A}$ for the MEB superalgebra reads
$$\begin{aligned}
\tilde{A} &=&\omega \tilde{J}+\omega ^{a}\tilde{G}_{a}+\tau \tilde{H}+e^{a}\tilde{P}_{a}+k\tilde{Z}+k^{a}\tilde{Z}_{a}+m\tilde{M}+s\tilde{S}+t\tilde{T} \notag \\
&&+y_{1}\tilde{Y}_{1}+y_{2}\tilde{Y}_{2}+b_{1}\tilde{B}_{1}+b_{2}\tilde{B}_{2}+u_{1}\tilde{U}_{1}+u_{2}\tilde{U}_{2} \notag \\
&&+{\psi }^{+}\tilde{Q}^{+}+{\psi }^{-}\tilde{Q}^{-}+{\xi }^{+}\tilde{\Sigma}^{+}+{\xi }^{-}\tilde{\Sigma}^{-}+{\rho }\tilde{R}+{\chi }\tilde{W}\,. \label{oneform2}\end{aligned}$$
The corresponding curvature two-form $\tilde{F}=d\tilde{A}+\tilde{A}\wedge \tilde{A}=d\tilde{A}+\frac{1}{2}\left[ \tilde{A},\tilde{A}\right] $ in terms of the generators is given by
$$\begin{aligned}
\tilde{F} &=&R\left( \omega \right) \tilde{J}+R^{a}\left( \omega ^{b}\right) \tilde{G}_{a}+F\left( \tau \right) \tilde{H}+F^{a}\left( e^{b}\right) \tilde{P}_{a}+F\left( k\right) \tilde{Z}+F^{a}\left( k^{b}\right) \tilde{Z}_{a}+F\left( m\right) \tilde{M} \notag \\
&&+R\left( s\right) \tilde{S}+F\left( t\right) \tilde{T}+F\left( y_{1}\right) \tilde{Y}_{1}+F\left( y_{2}\right) \tilde{Y}_{2}+F\left( b_{1}\right) \tilde{B}_{1}+F\left( b_{2}\right) \tilde{B}_{2}+F\left( u_{1}\right) \tilde{U}_{1} \notag \\
&&+F\left( u_{2}\right) \tilde{U}_{2}+\nabla {\psi }^{+}\tilde{Q}^{+}+\nabla {\psi }^{-}\tilde{Q}^{-}+\nabla {\xi }^{+}\tilde{\Sigma}^{+}+\nabla {\xi }^{-}\tilde{\Sigma}^{-}+\nabla {\rho }\tilde{R}+\nabla {\chi }\tilde{W}\,.
\label{F2c}\end{aligned}$$
In particular, the bosonic curvature two-forms are given by
$$\begin{aligned}
F\left( \omega \right) &=&R\left( \omega \right) \,, \notag \\
F^{a}\left( \omega ^{b}\right) &=&R^{a}\left( \omega ^{b}\right) \,, \notag \\
F\left( \tau \right) &=&R\left( \tau \right) +\frac{1}{2}\bar{\psi}^{+}\gamma ^{0}\psi ^{+}\,, \notag \\
F^{a}\left( e^{b}\right) &=&R^{a}\left( e^{b}\right) +\bar{\psi}^{+}\gamma ^{a}\psi ^{-}\,, \notag \\
F\left( k\right) &=&R\left( k\right) +\bar{\psi}^{+}\gamma ^{0}\xi ^{+}\,, \label{boscurvSuperMEBp1} \\
F^{a}\left( k^{b}\right) &=&R^{a}\left( k^{b}\right) +\bar{\psi}^{+}\gamma ^{a}\xi ^{-}+\bar{\psi}^{-}\gamma ^{a}\xi ^{+}\,, \notag \\
F\left( m\right) &=&R\left( m\right) +\frac{1}{2}\bar{\psi}^{-}\gamma ^{0}\psi ^{-}+\bar{\psi}^{+}\gamma ^{0}\rho \,, \notag \\
F\left( s\right) &=&R\left( s\right) \,, \notag \\
F\left( t\right) &=&R\left( t\right) +\bar{\psi}^{-}\gamma ^{0}\xi ^{-}+\bar{\psi}^{+}\gamma ^{0}\chi +\bar{\xi}^{+}\gamma ^{0}\rho \,, \notag\end{aligned}$$
where $R\left( \omega \right) $, $R^{a}\left( \omega ^{b}\right) $, $R\left( \tau \right) $, $R^{a}\left( e^{b}\right) $, $R\left( k\right) $, $R^{a}\left( k^{b}\right) $, $R\left( m\right) $, $R\left( s\right) $, and $R\left( t\right) $ are the MEB curvatures defined in (\[curvMEB\]), together with
$$\begin{aligned}
F\left( y_{1}\right) &=&dy_{1}\,, \notag \\
F\left( y_{2}\right) &=&dy_{2}\,, \notag \\
F\left( b_{1}\right) &=&db_{1}+\bar{\psi}^{+}\gamma ^{0}\xi ^{+}\,, \notag \\
F\left( b_{2}\right) &=&db_{2}-\bar{\psi}^{-}\gamma ^{0}\xi ^{-}+\bar{\psi}^{+}\gamma ^{0}\chi +\bar{\xi}^{+}\gamma ^{0}\rho \,, \label{boscurvSuperMEBp2} \\
F\left( u_{1}\right) &=&du_{1}+\frac{1}{2}\bar{\psi}^{+}\gamma ^{0}\psi ^{+}\,, \notag \\
F\left( u_{2}\right) &=&du_{2}-\frac{1}{2}\bar{\psi}^{-}\gamma ^{0}\psi ^{-}+\bar{\psi}^{+}\gamma ^{0}\rho \,. \notag\end{aligned}$$
On the other hand, the covariant derivatives of the spinor $1$-form fields read
$$\begin{aligned}
\nabla \psi ^{+} &=&d\psi ^{+}+\frac{1}{2}\omega \gamma _{0}\psi ^{+}-\frac{1}{2}y_{1}\gamma _{0}\psi ^{+}\,, \notag \\
\nabla \psi ^{-} &=&d\psi ^{-}+\frac{1}{2}\omega \gamma _{0}\psi ^{-}+\frac{1}{2}\omega ^{a}\gamma _{a}\psi ^{+}+\frac{1}{2}y_{1}\gamma _{0}\psi ^{-}\,, \notag \\
\nabla \xi ^{+} &=&d\xi ^{+}+\frac{1}{2}\omega \gamma _{0}\xi ^{+}+\frac{1}{2}\tau \gamma _{0}\psi ^{+}-\frac{1}{2}y_{1}\gamma _{0}\xi ^{+}-\frac{1}{2}u_{1}\gamma _{0}\psi ^{+}\,, \notag \\
\nabla \xi ^{-} &=&d\xi ^{-}+\frac{1}{2}\omega \gamma _{0}\xi ^{-}+\frac{1}{2}\tau \gamma _{0}\psi ^{-}+\frac{1}{2}e^{a}\gamma _{a}\psi ^{+}+\frac{1}{2}\omega ^{a}\gamma _{a}\xi ^{+} \label{fermcurvSuperMEB} \\
&&+\frac{1}{2}y_{1}\gamma _{0}\xi ^{-}+\frac{1}{2}u_{1}\gamma _{0}\psi ^{-}\,, \notag \\
\nabla \rho &=&d\rho +\frac{1}{2}\omega \gamma _{0}\rho +\frac{1}{2}\omega ^{a}\gamma _{a}\psi ^{-}+\frac{1}{2}s\gamma _{0}\psi ^{+}-\frac{1}{2}y_{2}\gamma _{0}\psi ^{+}-\frac{1}{2}y_{1}\gamma _{0}\rho \,, \notag \\
\nabla \chi &=&d\chi +\frac{1}{2}\omega \gamma _{0}\chi +\frac{1}{2}\omega ^{a}\gamma _{a}\xi ^{-}+\frac{1}{2}e^{a}\gamma _{a}\psi ^{-}+\frac{1}{2}\tau \gamma _{0}\rho +\frac{1}{2}s\gamma _{0}\xi ^{+}+\frac{1}{2}m\gamma _{0}\psi ^{+} \notag \\
&&-\frac{1}{2}y_{2}\gamma _{0}\xi ^{+}-\frac{1}{2}y_{1}\gamma _{0}\chi -\frac{1}{2}u_{2}\gamma _{0}\psi ^{+}-\frac{1}{2}u_{1}\gamma _{0}\rho \,.
\notag\end{aligned}$$
A CS supergravity action based on the MEB superalgebra can be constructed by combining the non-zero invariant tensors (\[invt1\]), (\[invt2p1\]), and (\[invt2p2\]) with the gauge connection one-form $\tilde{A}$ (\[oneform2\]), and it reads, up to boundary terms, as follows:
$$\begin{aligned}
I &=&\int \Bigg \lbrace \tilde{\alpha}_{0} \bigg[ \omega _{a}R^{a}(\omega ^{b})-2sR\left(\omega \right) +2y_{1}dy_{2}\bigg] +\tilde{\alpha}_{1} \bigg[ 2e_{a}R^{a}(\omega ^{b})-2mR(\omega )-2\tau R(s)+2y_{1}du_{2} \notag \\
&& +2u_{1}dy_{2}+2\bar{\psi}^{+}\nabla \rho +2\bar{\rho}\nabla \psi ^{+}+2\bar{\psi}^{-}\nabla \psi ^{-} \bigg] +\tilde{\alpha}_{2} \bigg[ e_{a}R^{a}\left( e^{b}\right) +k_{a}R^{a}\left( \omega ^{b}\right) +\omega _{a}R^{a}\left( k^{b}\right) \notag \\
&& -2sR\left( k\right) -2mR\left( \tau \right) -2tR\left( \omega \right) +2y_{1}db_{2}+2u_{1}du_{2}+2y_{2}db_{1}+2\bar{\psi}^{-}\nabla \xi ^{-}+2\bar{\xi}^{-}\nabla \psi ^{-} \notag \\
&& +2\bar{\psi}^{+}\nabla \chi +2\bar{\chi}\nabla \psi ^{+}+2\bar{\xi}^{+}\nabla \rho +2\bar{\rho}\nabla \xi ^{+} \bigg] \Bigg \rbrace \,. \label{CS2}\end{aligned}$$
The CS action (\[CS2\]) obtained here describes the so-called Maxwellian extended Bargmann supergravity theory. Let us note that the NR CS supergravity action (\[CS2\]) contains three independent sectors proportional to $\tilde{\alpha}_{0}$, $\tilde{\alpha}_{1}$, and $\tilde{\alpha}_{2}$. In particular, the term proportional to $\tilde{\alpha}_{0}$ corresponds to a NR exotic Lagrangian, while the extended Bargmann supergravity introduced in [@BR] appears in the $\tilde{\alpha}_{1}$ sector, endowed with some additional terms related to the presence of the bosonic $1$-form fields $y_{1}$, $y_{2}$, $u_{1}$, and $u_{2}$. The term proportional to $\tilde{\alpha}_{2}$ can be seen as the CS Lagrangian for a new NR Maxwell superalgebra. In addition, one can see that the bosonic part of the CS action (\[CS2\]) corresponds to the MEB gravity action presented in [@AFGHZ], supplemented with the bosonic $1$-form fields $y_{1}$, $y_{2}$, $b_{1}$, $b_{2}$, $u_{1}$, and $u_{2}$. Note that, for $\tilde{\alpha}_{2}\neq 0$, the field equations derived from the NR CS supergravity action (\[CS2\]) reduce to the vanishing of the curvature two-forms (\[boscurvSuperMEBp1\]), (\[boscurvSuperMEBp2\]), and (\[fermcurvSuperMEB\]) associated with the MEB superalgebra.
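To make the role of $\tilde{\alpha}_{2}$ in the non-degeneracy statement above slightly more concrete, the following SymPy sketch (a partial check only) computes the Gram matrix of the pairing (\[invt2p1\]) between the extra bosonic generators $(\tilde{Y}_{1},\tilde{U}_{1},\tilde{B}_{1})$ and $(\tilde{Y}_{2},\tilde{U}_{2},\tilde{B}_{2})$, taking the unlisted pairings in this sector to vanish. Its determinant is $-\tilde{\alpha}_{2}^{3}$, so $\tilde{\alpha}_{2}\neq 0$ is in particular necessary for the invariant bilinear form to be non-degenerate, consistent with the statement in the text; the remaining components (\[invt1\]) and (\[invt2p2\]) are not examined here.

```python
import sympy as sp

a0, a1, a2 = sp.symbols('alpha0 alpha1 alpha2')

# Pairings read off from (invt2p1), rows (Y1, U1, B1) vs columns (Y2, U2, B2);
# entries not listed there are assumed to vanish in this sector.
G = sp.Matrix([
    [a0, a1, a2],   # <Y1 Y2>, <Y1 U2>, <Y1 B2>
    [a1, a2,  0],   # <U1 Y2>, <U1 U2>, <U1 B2>
    [a2,  0,  0],   # <B1 Y2>, <B1 U2>, <B1 B2>
])

print(sp.factor(G.det()))   # prints -alpha2**3
```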
The curvature two-forms transform covariantly with respect to the following supersymmetry transformation laws:
$$\begin{aligned}
\delta \omega &=&0\,, \notag \\
\delta \omega ^{a} &=&0\,, \notag \\
\delta \tau &=&-\bar{\epsilon}^{+}\gamma ^{0}\psi ^{+}\,, \notag \\
\delta e^{a} &=&-\bar{\epsilon}^{+}\gamma ^{a}\psi ^{-}-\bar{\epsilon}^{-}\gamma ^{a}\psi ^{+}\,, \notag \\
\delta k &=&-\bar{\epsilon}^{+}\gamma ^{0}\xi ^{+}-\bar{\varphi}^{+}\gamma ^{0}\psi ^{+}\,, \label{GT1} \\
\delta k^{a} &=&-\bar{\epsilon}^{\pm }\gamma ^{a}\xi ^{\mp }-\bar{\varphi}^{\pm }\gamma ^{a}\psi ^{\mp }\,, \notag \\
\delta m &=&-\bar{\epsilon}^{-}\gamma ^{0}\psi ^{-}-\bar{\epsilon}^{+}\gamma ^{0}\rho -\bar{\eta}\gamma ^{0}\psi ^{+}\,, \notag \\
\delta s &=&0\,, \notag \\
\delta t &=&-\bar{\epsilon}^{-}\gamma ^{0}\xi ^{-}-\bar{\varphi}^{-}\gamma ^{0}\psi ^{-}-\bar{\epsilon}^{+}\gamma ^{0}\chi -\bar{\zeta}\gamma ^{0}\psi ^{+}-\bar{\varphi}^{+}\gamma ^{0}\rho -\bar{\eta}\gamma ^{0}\xi ^{+}\,, \notag\end{aligned}$$
$$\begin{aligned}
\delta y_{1} &=&0\,, \notag \\
\delta y_{2} &=&0\,, \notag \\
\delta b_{1} &=&-\bar{\epsilon}^{+}\gamma ^{0}\xi ^{+}-\bar{\varphi}^{+}\gamma ^{0}\psi ^{+}\,, \notag \\
\delta b_{2} &=&\bar{\epsilon}^{-}\gamma ^{0}\xi ^{-}+\bar{\varphi}^{-}\gamma ^{0}\psi ^{-}-\bar{\epsilon}^{+}\gamma ^{0}\chi -\bar{\zeta}\gamma ^{0}\psi ^{+}-\bar{\varphi}^{+}\gamma ^{0}\rho -\bar{\eta}\gamma ^{0}\xi ^{+}\,, \notag \\
\delta u_{1} &=&-\bar{\epsilon}^{+}\gamma ^{0}\psi ^{+}\,, \notag \\
\delta u_{2} &=&\bar{\epsilon}^{-}\gamma ^{0}\psi ^{-}-\bar{\epsilon}^{+}\gamma ^{0}\rho -\bar{\eta}\gamma ^{0}\psi ^{+}\,, \notag \\
\delta \psi ^{+} &=&d\epsilon ^{+}+\frac{1}{2}\omega \gamma _{0}\epsilon ^{+}-\frac{1}{2}y_{1}\gamma _{0}\epsilon ^{+}\,, \label{GT2} \\
\delta \psi ^{-} &=&d\epsilon ^{-}+\frac{1}{2}\omega \gamma _{0}\epsilon ^{-}+\frac{1}{2}\omega ^{a}\gamma _{a}\epsilon ^{+}+\frac{1}{2}y_{1}\gamma _{0}\epsilon ^{-}\,, \notag \\
\delta \xi ^{+} &=&d\varphi ^{+}+\frac{1}{2}\omega \gamma _{0}\varphi ^{+}+\frac{1}{2}\tau \gamma _{0}\epsilon ^{+}-\frac{1}{2}y_{1}\gamma _{0}\varphi ^{+}-\frac{1}{2}u_{1}\gamma _{0}\epsilon ^{+}\,, \notag \\
\delta \xi ^{-} &=&d\varphi ^{-}+\frac{1}{2}\omega \gamma _{0}\varphi ^{-}+\frac{1}{2}\tau \gamma _{0}\epsilon ^{-}+\frac{1}{2}e^{a}\gamma _{a}\epsilon ^{+}+\frac{1}{2}\omega ^{a}\gamma _{a}\varphi ^{+}+\frac{1}{2}y_{1}\gamma _{0}\varphi ^{-}+\frac{1}{2}u_{1}\gamma _{0}\epsilon ^{-}\,, \notag \\
\delta \rho &=&d\eta +\frac{1}{2}\omega \gamma _{0}\eta +\frac{1}{2}\omega ^{a}\gamma _{a}\epsilon ^{-}+\frac{1}{2}s\gamma _{0}\epsilon ^{+}-\frac{1}{2}y_{2}\gamma _{0}\epsilon ^{+}-\frac{1}{2}y_{1}\gamma _{0}\eta \,, \notag \\
\delta \chi &=&d\zeta +\frac{1}{2}\omega \gamma _{0}\zeta +\frac{1}{2}\omega ^{a}\gamma _{a}\varphi ^{-}+\frac{1}{2}e^{a}\gamma _{a}\epsilon ^{-}+\frac{1}{2}\tau \gamma _{0}\eta +\frac{1}{2}s\gamma _{0}\varphi ^{+}+\frac{1}{2}m\gamma _{0}\epsilon ^{+} \notag \\
&&-\frac{1}{2}y_{2}\gamma _{0}\varphi ^{+}-\frac{1}{2}y_{1}\gamma _{0}\zeta -\frac{1}{2}u_{2}\gamma _{0}\epsilon ^{+}-\frac{1}{2}u_{1}\gamma _{0}\eta \,, \notag\end{aligned}$$
where $\epsilon ^{\pm }$, $\varphi ^{\pm }$, $\eta $, and $\zeta $ are the fermionic gauge parameters related to $\tilde{Q}^{\pm }$, $\tilde{\Sigma}^{\pm }$, $\tilde{R}$, and $\tilde{W}$, respectively.
The three-dimensional Maxwellian extended Bargmann supergravity theory obtained here corresponds to an alternative NR supergravity theory which contains the extended Bargmann supergravity [@BR] (supplemented with some additional bosonic $1$-form fields) as a sub-case and which is distinct from the Newton-Cartan supergravity introduced in [@ABRS; @BRZ].

Discussion
==========

In this work, we have studied the NR limit of the relativistic Maxwell superalgebra. A well-defined NR superalgebra with the desired features has been obtained by contracting the $\mathcal{N}=2$ Maxwell superalgebra introduced in [@Concha]. Nevertheless, the construction of a proper NR supergravity action based on a NR version of the Maxwell superalgebra has required the introduction by hand of new fermionic and bosonic generators. The new structure has been called the Maxwellian extended Bargmann superalgebra and corresponds to a supersymmetric extension of the MEB algebra presented in [@AFGHZ]. In particular, the MEB superalgebra admits a non-degenerate invariant bilinear form allowing the construction of a proper NR CS supergravity action. Interestingly, the MEB CS supergravity theory presented here contains the extended Bargmann supergravity as a sub-case.

The NR supergravity action constructed in this work, see (\[CS2\]), could be useful in the construction of a three-dimensional Horava-Lifshitz supergravity. Indeed, as was noticed in [@BR; @HLO], the extended Bargmann gravity can be seen as a particular kinetic term of the Horava-Lifshitz gravity. In particular, it would be intriguing to explore the effects arising from the presence of the additional gauge field appearing in the Maxwell version of the extended Bargmann (super)algebra.

A future development could also consist in exploring the possibility of obtaining the MEB superalgebra introduced here through a NR limit or contraction process from a relativistic superalgebra. An alternative limit which could be used to recover the MEB algebra is the vanishing cosmological constant limit. In particular, one could conjecture that a supersymmetric extension of the recent enlarged extended Bargmann gravity, introduced in [@CR4], reproduces the present MEB supergravity in a flat limit \[work in progress\]. On the other hand, it would be interesting to apply the Lie algebra expansion method [@HSa; @AIPV; @AIPV2; @Sexp] to obtain the MEB superalgebra. One could follow the procedure used in [@BIOR; @AGI; @Romano] and consider the expansion of a relativistic Maxwell superalgebra. Alternatively, one might also extend the results obtained in [@CR4; @MPS], in which NR algebras appear as semigroup expansions of the so-called Nappi-Witten algebra.

Another aspect that deserves further investigation regards the development of a Maxwellian version of the extended Newtonian gravity [@HHO] and its supersymmetric extension [@OOTZ]. A cosmological constant has recently been accommodated in the extended Newtonian gravity action by including new generators in the Newton-Hooke algebra [@CRR2]. One could expect to obtain a Maxwellian Newtonian algebra by generalizing the MEB one in a similar way to [@OOTZ; @CRR2]. It would then be compelling to explore possible matter couplings.

Acknowledgment {#acknowledgment .unnumbered}
==============

This work was supported by the CONICYT - PAI grant N$^{\circ }$77190078 (P.C.) and FONDECYT Projects N$^{\circ }$3170438 (E.R.).

[99]{} R. Andringa, E.A. Bergshoeff, J. Rosseel, E. Sezgin, *Newton-Cartan Supergravity*, Class. Quant. Grav. **30** (2013) 205005.
arXiv:1305.6737 \[hep-th\]. E.A. Bergshoeff, J. Rosseel, T. Zojer, *Newton-Cartan supergravity with torsion and Schrödinger supergravity*, JHEP **11** (2015) 180. arXiv:1509.04527 \[hep-th\]. E.A. Bergshoeff, J. Rosseel, *Three-Dimensional Extended Bargmann Supergravity*, Phys. Rev. Lett. **116** (2016) 251601. arXiv:1604.08042 \[hep-th\]. N. Ozdemir, M. Ozkan, O. Tunca, U. Zorba, *Three-dimensional extended Newtonian (super)gravity*, JHEP **05** (2019) 130. arXiv:1903.09377 \[hep-th\]. N. Ozdemir, M. Ozkan, U. Zorba, *Three-Dimensional Extended Lifshitz, Schrödinger and Newton-Hooke Supergravity*, JHEP **11** (2019) 052. arXiv:1909.10745 \[hep-th\]. L. Ravera, *AdS Carroll Chern-Simons supergravity in 2 + 1 dimensions and its flat limit*, Phys. Lett. B **795** (2019) 331. arXiv:1905.00766 \[hep-th\]. Farhad Ali, Lucrezia Ravera, *$\mathcal{N}$-extended Chern-Simons Carrollian supergravities in $2+1$ spacetime dimensions*, arXiv:1912.04172 \[hep-th\]. D. Son, *Toward an AdS/cold atoms correspondence: A Geometric realization of the Schrodinger symmetry*, Phys. Rev. D**78** (2008) 046003. arXiv:0804.3972 \[hep-th\]. K. Balasubramanian, J. McGreevy, *Gravity duals for non-relativistic CFTs*, Phys. Rev. Lett **101** (2008) 061601. arXiv:0804.4053 \[hep-th\]. S. Kachru, X. Liu, M. Mulligan, *Gravity Duals of Lifshitz-like Fixed Points*, Phys. Rev. D**78** (2008) 106005. arXiv:0808.1725 \[hep-th\]. A. Bagchi, R. Gopakumar, *Galilean Conformal Algebras and AdS/CFT*, JHEP **07** (2009) 037. arXiv:0902.1385 \[hep-th\]. A. Bagchi, R. Gopakumar, I. Mandal, A. Miwa, *GCA in 2d*, JHEP **08** (2010) 004. arXiv: 0912.1090 \[hep-th\]. M.H. Christensen, J. Hartong, N.A. Obers, B. Rollier, *Torsional Newton-Cartan Geometry and Lifshitz Holography*, Phys. Rev. D**89** (2014) 061901. arXiv: 1311.4794 \[hep-th\]. M.H. Christensen, J. Hartong, N.A. Obers, B. Rollier,\ *Boundary Stress-Energy Tensor and Newton-Cartan Geometry in Lifshitz Holography*, JHEP **01** (2014) 057. arXiv:1311.6471 \[hep-th\]. M. Taylor, *Lifshitz holography*, Class. Quant. Grav. **33** (2016) 033001. arXiv:1512.03554 \[hep-th\]. D.T. Son, *Newton-Cartan Geomtry and the Quantum Hall Effect*, arXiv:1306.0638 \[cond-mat.mess-hall\]. C. Hoyos, D.T. Son, *Hall Viscosity and Electromagnetic Response*, Phys. Rev. Lett. **108** (2012) 066805. arXiv:1109.2651 \[cond-mat.mess-hall\]. M. Geracie, K. Prabhu, M.M. Roberts, *Curved non-relativistic spacetimes, Newtonian gravitation and massive matter*, J. Math. Phys. **56** (2015) 103505. arXiv:1503.02682 \[hep-th\]. A. Gromov, K. Jensen, A.G. Abanov, *Boundary effective action for quantum Hall states*, Phys. Rev. Lett. **116** (2016) 126802. arXiv:1506.07171 \[cond-mat.str-el\]. C. Duval, H.P. Kunzle, *Minimal Gravitational Coupling in the Newtonian Theory and the Covariant Schrödinger Equation*, Gen. Rel. Grav. **16** (1984) 333. C. Duval, G. Burdet, H.P. Kunzle, M. Perrin, *Bargmann Structures and Newton-Cartan Theory*, Phys. Rev. D **31** (1985) 1841. C. Duval, G.W. Gibbons, P. Horvathy, *Celestial mechanics, conformal structures and gravitational waves*, Phys. Rev. D**43** (1991) 3907. \[hep-th/0512188\]. C. Duval, *On Galilean isometries*, Class. Quant. Grav. **10** (1993) 2217. arXiv:0903.1641 \[math-ph\]. R. De Pietri, L. Lusanna, M. Pauri, *Standard and generalized Newtonian gravities as 'gauge' theories of the extended Galilei group. I. The standard theory*, Class. Quant. Grav. **12** (1995) 219. \[gr-qc/9405046\]. R. De Pietri, L. Lusanna, M. 
Pauri, *Standard and generalized Newtonian gravities as 'gauge' theories of the extended Galilei group. II. Dynamical three space theories*, Class. Quant. Grav. **12** (1995) 255. \[gr-qc/9405047\]. P. Hořava, *Quantum Gravity at a Lifshitz Point*, Phys. Rev. D **79** (2009) 084008. arXiv:0901.3775 \[hep-th\]. C. Duval, P.A. Horvathy,* Non-relativistic conformal symmetries and Newton-Cartan structures*, J. Phys. A **42** (2009) 465206. arXiv:0904.0531 \[math-ph\]. G. Papageorgiou, B.J. Schroers, *Galilean quantum gravity with cosmological constant and the extended q-Heisenberg algebra*, JHEP **11** (2010) 020. arXiv:1008.0279 \[hep-th\]. R. Andringa, E. Bergshoeff, S. Panda, M. de Roo, *Newtonian Gravity and the Bargmann Algebra*, Class. Quant. Grav. **28** (2011) 105011. arXiv:1011.1145 \[hep-th\]. R. Andringa, E. Bergshoeff, J. Gomis, M. de Roo, *'Stringly' Newton-Cartan Gravity*, Class. Quant. Grav. **29** (2012) 235020. arXiv:1206.5176 \[hep-th\]. J. Hartong, Y. Lei, N.A. Obers, *Nonrelativistic Chern-Simons theories and three-dimensional Horava-Lifshitz gravity*, Phys. Rev. D **94** (2016) 065027. arXiv:1604.08054 \[hep-th\]. E. Bergshoeff, A. Chatzistavrakidis, L. Romano, J. Rosseel, *Newton-Cartan Gravity and Torsion*, JHEP **1710** (2017) 194. arXiv:1708.05414 \[hep.th\]. D. Chernyavsky, D. Sorokin, *Three-dimensional (higher-spin) gravities with extended Schrödinger and l-conformal Galilean symmetries*, arXiv:1905.13154 \[hep-th\]. G. Festuccia, N. Seiberg, *Rigid Supersymmetric Theories in Curved Superspace*, JHEP **1106** (2011) 114. arXiv:1105.0689 \[hep-th\] V. Pestun, *Localization of gauge theory on a four-sphere and supersymmetric Wilson loops*, Commun. Math. Phys. **313** (2012) 71. arXiv:0712.2824 \[hep-th\]. R. Schrader, *The Maxwell group and the quantum theory of particles in classical homogeneous electromagnetic fields*, Fortsch. Phys. **20** (1972) 701. H. Bacry, P. Combe, J.L. Richard, *Group-theoretical analysis of elementary particles in an external electromagnetic field. 1. The relativistic particle in a constant and uniform field*, Nuovo Cim. A **67** (1970) 267. J. Gomis, A. Kleinschmidt, *On free Lie algebras and particles in electro-magnetic fields*, JHEP **07** (2017) 085. arXiv:1705.05854 \[hep-th\]. J.D. Edelstein, M. Hassaine, R. Troncoso, J. Zanelli, *Lie-algebra expansions, Chern-Simons theories and the Einstein-Hilbert Lagrangian*, Phys. Lett. B **640** (2006) 278. \[hep-th/0605174\]. F. Izaurieta, E. Rodríguez, P. Minning, P. Salgado, A. Perez, *Standard General Relativity from Chern-Simons Gravity*, Phys. Lett. B **678** (2009) 213. arXiv:0905.2187 \[hep-th\]. P.K. Concha, D.M. Peñafiel, E.K. Rodríguez, P. Salgado, *Even-dimensional General Relativity from Born-Infeld gravity*, Phys. Lett. B **725** (2013) 419. arXiv:1309.0062 \[hep-th\]. P.K. Concha, D.M. Peñafiel, E.K. Rodríguez, P. Salgado, *Chern-Simons and Born-Infeld gravity theories and Maxwell algebras type*, Eur. Phys. J. C **74** (2014) 2741. arXiv:1402.0023 \[hep-th\]. P.K. Concha, D.M. Peñafiel, E.K. Rodríguez, P. Salgado, *Generalized Poincaré algebras and Lovelock-Cartan gravity theory*, Phys. Lett. B **742** (2015) 310. arXiv:1405.7078 \[hep.th\]. P. Salgado, R.J. Szabo, O. Valdivia, *Topological gravity and transgression holography*, Phys. Rev. D**89** (2014) 084077. arXiv:1401.3653 \[hep-th\]. S. Hoseinzadeh, A. Rezaei-Aghdam, *(2+1)-dimensional gravity from Maxwell and semisimple extension of the Poincaré gauge symmetric models*, Phys. Rev. D**90** (2014) 084008. 
arXiv:1402.0320 \[hep-th\]. P. Concha, N. Merino, O. Miskovic, E. Rodríguez, P. Salgado-Rebolledo, O. Valdivia, *Asymptotic symmetries of three-dimensional Chern-Simons gravity for the Maxwell algebra*, JHEP **1810** (2018) 079. arXiv:1805.08834 \[hep-th\]. R. Caroca, P. Concha, O. Fierro, E. Rodríguez, P. Salgado-Rebolledo, *Generalized Chern-Simons higher-spin gravity theories in three dimensions*, arXiv:1712.09975 \[hep-th\]. R. Caroca, P. Concha, E. Rodríguez, P. Salgado-Rebolledo, *Generalizing the* $\mathfrak{bms}_{3}$ *and 2D-conformal algebras by expanding the Virasoro algebra*, Eur. Phys. J. C **78** (2018) 262. arXiv:1707.07209 \[hep-th\]. J.A. de Azcarraga, K. Kamimura, J. Lukierski, *Generalized cosmological term from Maxwell symmetries*, Phys. Rev. D**83** (2011) 124036. arXiv:1012.4402 \[hep-th\]. R. Durka, J. Kowalski-Glikman, M. Szczachor, *Gauges AdS-Maxwell algebra and gravity*, Mod. Phys. Lett. A **26** (2011) 2689. arXiv:1107.4728 \[hep-th\]. J.A. de Azcarraga, K. Kamimura, J. Lukierski, *Maxwell symmetries and some applications*, Int. J. Mod. Phys. Conf. Ser. **23** (2013) 01160. arXiv:1201.2850 \[hep-th\]. O. Cebecioğlu, S. Kibaroğlu,* Maxwell-affine gauge theory of gravity*, Phys. Lett. B **751** (2015) 131. arXiv:1503.09003 \[hep-th\]. S. Bansal, D. Sorokin, *Can Chern-Simons or Rarita-Schwinger ber a Volkov-Akulov Goldstone?*, JHEP **07** (2018) 106. arXiv:1806.05945 \[hep-th\]. S. Kibaroğlu, M. Şenay, O. Cebecioğlu, $D=4\mathit{% \ }$*topological gravity from gauging the Maxwell-special-affine group*, Mod. Phys. Lett. A**34** (2019) 1950016. arXiv:1810.01635 \[hep-th\]. P. Salgado-Rebolledo, *The Maxwell group in 2+1 dimensions and its infinite-dimensional enhancements*, JHEP **10** (2019) 039. arXiv:1905.09421 \[hep-th\]. S. Bonanos, J. Gomis, K. Kamimura, J. Lukierski, *Maxwell superalgebra and superparticle in constant Gauge background*. Phys. Rev. Lett. **104** (2010) 090401. arXiv:0911.5072 \[hep-th\]. S. Bonanos, J. Gomis, K. Kamimura, J. Lukierski, *Deformations of Maxwell Superalgebras and Their Applications*, J. Math. Phys. **51** (2010) 102301. arXiv:1005.3714 \[hep-th\]. J. Lukierski, *Generalized Wigner-Inönü Contractions and Maxwell (Super)Algebras*, Proc. Steklov Inst. Math. **272** (2011) no.1 183. arXiv:1007.3405 \[hep-th\]. J.A. de Azcarrag, J.M. Izquierdo, J. Lukierski, M. Woronowicz, *Generalizations of Maxwell (super)algebras by the expansion method*, Nucl. Phys. B **869** (2013) 303. arXiv:1210.1117 \[hep-th\]. J.A. de Azcarraga, J.M. Izquierdo, *Minimal D=4 supergravity from superMaxwell algebra*, Nucl. Phys. B **885** (2014) 34. arXiv:1403.4128 \[hep-th\]. P.K. Concha, E.K. Rodríguez, *Maxwell superalgebras and Abelian semigroup expansion*, Nucl. Phys. B **886** (2014) 1128. arXiv:1405.1334 \[hep-th\]. P.K. Concha, E.K. Rodríguez, *N=1 Supergravity and Maxwell superalgebras*, JHEP **1409** (2014) 090. arXiv:1407.4635 \[hep-th\]. D.M. Peñafiel, L. Ravera, *On the Hidden Maxwell Superalgebra underlying D=4 Supergravity*, Fortsch. Phys. **65** (2017) 1700005. arXiv:1701.04234 \[hep-th\] L. Ravera, *Hidden role of Maxwell superalgebras in the free differential algebras of* $D=4$* and* $D=11$*supergravity*, Eur. Phys. J. C **78** (2018) 211. arXiv:1801.08860 \[hep-th\]. P. Concha, L. Ravera, E. Rodríguez, *On the supersymmetry invariance of flat supergravity with boundary*, JHEP **01** (2019) 192. arXiv:1809.07871 \[hep-th\]. S. Kibaroğlu, O. Cebecioğlu, $D=4$*supergravity from the Maxwell-Weyl superalgebra*, arXiv:1812.09861 \[hep-th\]. P.K. 
Concha, O. Fierro, E.K. Rodríguez, P. Salgado, *Chern-Simons supergravity in D=3 and Maxwell superalgebra*, Phys. Lett. B **750** (2015) 117. arXiv:1507.02335 \[hep-th\]. P.K. Concha, O. Fierro, E.K. Rodríguez, *Inönü-Wigner contraction and D=2+1 supergravity*, Eur. Phys. J. C **77** (2017) 48. arXiv:1611.05018 \[hep-th\]. P. Concha, D.M. Peñafiel, E. Rodríguez, *On the Maxwell supergravity and flat limit in 2+1 dimensions*, Phys. Lett. B **785** (2018) 247. arXiv:1807.00194 \[hep-th\]. P. Concha, $\mathcal{N}$*-extended Maxwell supergravities as Chern-Simons theories in three spacetime dimensions*, Phys. Lett. B **792** (2019) 290. arXiv:1903.03081 \[hep-th\]. L. Avilés, E. Frodden, J. Gomis, D. Hidalgo, J. Zanelli, *Non-Relativistic Maxwell Chern-Simons Gravity*, JHEP **1805** (2018) 047. arXiv:1802.08453 \[hep-th\]. J. Gomis, A. Kleinschmidt, J. Palmkvist, P. Salgado-Rebolledo,\ *Newton-Hooke/Carrollian expansions of (A)dS and Chern-Simons gravity*.\ arXiv:1912.07564 \[hep-th\]. J.M. Lévy-Leblond, *Group Theory and its Applications Vol. II*, 221 (1971). D.R. Grigore, *The Projective unitary irreducible representations of the Galilei group in* $\left( 1+2\right) $*-dimensions*, J. Math. Phys. **37** (1996) 460. \[hep-th/9312048\]. S.K. Bose, *The Galilean group in* $\left( 2+1\right) $* space-times and its central extension*, Commun. Math. Phys. **169** (1995) 385. C. Duval, P.A. Horvathy, *The ’Peierls subsitution’ and the exotic Galilei group*, Phys. Lett. B **479** (2000) 284. \[hep-th/0002233\]. R. Jackiw, V.P. Nair, *Anyon spin and the exotic central extension of the planar Galilei group*, Phys. Lett. B **480** (2000) 237. \[hep-th/0003130\]. P.A. Horvathy, M.S. Plyushchay, *Non-relativistic anyons, exotic Galilean symmetry and noncommutative plane*, JHEP **0206** (2002) 033. \[hep-th/0201228\]. M.B. Green, *Supertranslations, superstrings and Chern-Simons forms*, Phys. Lett. B **223** (1989) 157. R. D’Auria, P. Fré, *Geometric supergravity in d=11 and its hidden supergroup*, Nucl. Phys. B **201** (1982) 101. L. Andrianopoli, R. D’Auria and L. Ravera, *Hidden Gauge Structure of Supersymmetric Free Differential Algebras*, JHEP **1608** (2016) 095. arXiv:1606.07328 \[hep-th\]. L. Andrianopoli, R. D’Auria and L. Ravera, *More on the Hidden Symmetries of 11D Supergravity*, Phys. Lett. B **772** (2017) 578. arXiv:1705.06251 \[hep-th\]. J. Lukierski, I. Prochnicka, P.C. Stichel, W.J. Zakrzewski, *Galilean exotic planar supersymmetries and nonrelativistic supersymmetric wave equations*, Phys. Lett. B **639** (2006) 389. \[hep-th/0602198\]. P. Concha, E. Rodríguez, *Non-relativistic gravity theory based on an enlargement of the extended Bargmann algebra*, JHEP **07** (2019) 085. arXiv:1906.00086 \[hep-th\]. M. Hatsuda, M. Sakaguchi, *Wess-Zumino term for the AdS superstring and generalized Inonu-Wigner contraction*, Prog. Teor. Phys. **109** (2003) 853. \[hep-th/0106114\]. J.A. de Azcárraga, J.M. Izquierdo, M. Picón, O. Varela, *Generating Lie and gauge free differential (super)algebras by expanding Maurer-Cartan forms and Chern-Simons supergravity*, Nucl. Phys. B **662** (2003) 185. \[hep-th/0212347\]. J.A. de Azcárraga, J.M. Izquierdo, M. Picón, O. Varela, *Expansions of algebras and superalgebras and some applications*, Int. J. Theor. Phys. **46** (2007) 2738. \[hep-th/0703017\]. F. Izaurieta, E. Rodríguez, P. Salgado, *Expanding Lie (super)algebras through Abelian semigroups*, J. Math. Phys. **47** (2006) 123512. \[hep-th/0606215\]. E. Bergshoeff, J. Izquierdo, T. Ortín, L. 
Romano, *Lie Algebra Expansions and Actions for Non-Relativistic Gravity*, JHEP **08** (2019) 048. arXiv:1904.08304 \[hep-th\]. J.A. de Azcárraga, D. Gútiez, J.M. Izquierdo, *Extended* $D=3$* Bargmann supergravity from a Lie algebra expansion*, Nucl. Phys. B **946** (2019) 114706. arXiv:1904.12786 \[hep-th\]. L. Romano, *Non-Relativistic Four Dimensional p-Brane Supersymmetric Theories and Lie Algebra Expansion*, arXiv:1906.08220 \[hep-th\]. D.M. Peñafiel, P. Salgado-Rebolledo, *Non-relativistic symmetries in three space-time dimensions and the Nappi-Witten algebra*, Phys. Lett. B **798** (2019) 135005. arXiv:1906.02161 \[hep-th\]. D. Hansen, J. Hartong, N.A. Obers, *Action Principle for Newtonian Gravity*, Phys. Rev. Lett. **122** (2019) 061106. arXiv:1807.04765 \[hep-th\]. P. Concha, L. Ravera, E. Rodríguez, *Three-dimensional exotic Newtonian gravity with cosmological constant*, arXiv:1912.02836 \[hep-th\].
{ "pile_set_name": "ArXiv" }
---
author:
- 'Carlos D’Andrea and Jaydeep Chipalkatti'
title: On the Jacobian ideal of the binary discriminant
---

(with an appendix by Abdelmalek Abdesselam)

Introduction
============

Let $${\mathbb F}= a_0 \, x_1^d + \dots + \binom{d}{i} \, a_i \, x_1^{d-i} \, x_2^i + \dots + a_d \, x_2^d,$$ denote the generic binary form of order $d$ in the variables $x_1,x_2$. Its discriminant $\Delta = \Delta(a_0,\dots,a_d)$ is a homogeneous polynomial with the following property: given $\alpha_0,\dots,\alpha_d \in {\mathbf C}$, the form $F_\alpha = \sum\limits_{i=0}^d \, \binom{d}{i} \, \alpha_i \, x_1^{d-i} x_2^i$ is divisible by the square of a linear form iff $\Delta(\alpha_0,\dots,\alpha_d)=0$. Let $R$ denote the polynomial ring ${\mathbf C}[a_0,\dots,a_d]$, and let $$J = (\frac{\partial \Delta}{\partial a_0}, \dots, \frac{\partial \Delta}{\partial a_d}) \subseteq R,$$ denote the Jacobian ideal of $\Delta$. Our main result (in §\[section.J\_Delta\]) is that $J$ is a *perfect* ideal of height $2$ for $d \ge 3$, with graded minimal resolution $$0 {\leftarrow}R/J {\leftarrow}R {\leftarrow}R(3 - 2d)^{d+1} {\leftarrow}R(2-2d)^{3} \oplus R(1-2d)^{d-3} {\leftarrow}0. \label{res1}$$

{#section.defnXlambda}

To put this statement into a geometric context, identify the form $F_\alpha$ (distinguished up to a scalar) with the point $[\alpha_0, \dots, \alpha_d]$ in the projective space ${\mathbb P}^d$. We recall the notion of a Coincident Root locus introduced in [@ego3]. Let $$\lambda = (\lambda_1, \lambda_2,\dots, \lambda_n)$$ be a partition of $d$ into $n$ parts. Now the CR locus associated to $\lambda$ is defined to be $$X_\lambda = \{ F \in {\mathbb P}^d: F = \prod\limits_{i=1}^n \, l_i^{\lambda_i} \; \; \text{for some linear forms $l_i$} \},$$ which is an irreducible projective subvariety of dimension $n$. Given two partitions $\lambda$ and $\mu$, we have $X_\lambda \subseteq X_\mu$ iff $\mu$ is a refinement of $\lambda$. Now $X_{(2,1^{d-2})}$ is the hypersurface $\{\Delta =0\}$, and the closed subscheme $Z = {\text{Proj} \, }(R/J)$ is supported on its singular locus. By [@ego3 Theorem 5.4], the latter is equal to the union $X_\tau \cup X_\delta$, where $$\tau = (3,\underbrace{1,\dots,1}_{d-3}) \quad \text{and} \quad \delta = (2,2,\underbrace{1,\dots,1}_{d-4}).$$ The result above implies that $Z$ is an arithmetically Cohen-Macaulay scheme. In Proposition \[prop.multiplicity\] we show that $Z$ has multiplicities $2$ and $1$ along $X_\tau$ and $X_{\delta}$ respectively.

{#section-1}

The ideas in §\[section.J\_Delta\] are based on the ‘Cayley method’ as explained in [@GKZ Ch. 2]. In §\[section.J\_Res\] we give a précis of this method in the context of binary resultants, and then deduce the following theorem: let ${\mathfrak R}$ denote the resultant of generic binary forms ${\mathbb F},{\mathbb G}$ of orders $d,e$. If $d \ge e-1$, then the ${\mathbb F}$-Jacobian ideal of ${\mathfrak R}$ (i.e., the ideal of partial derivatives of ${\mathfrak R}$ with respect to the coefficients of ${\mathbb F}$) is perfect. The Cayley method involves constructing a morphism of vector bundles whose determinant is the resultant. The most interesting ingredient in this morphism is the so-called Morley form ${\mathcal M}$, which encodes the $d_2$-differential of a spectral sequence. Although *a priori* the differential is only well-defined modulo coboundaries, it admits a unique equivariant lifting to a morphism from binary forms of order $e-2$ to those of order $d$.
This is explained in §\[proof.prop.morley\] – \[section.FGrGFr\], modulo a calculation which is provided in the appendix by A. Abdesselam. The reader may also consult [@Jouanolou §3.11] for a very general treatment of multivariate Morley forms.

{#section-2}

In a slightly different direction, define $\Phi_n = \bigcup\limits_\lambda \, X_\lambda$, where the union is taken over all partitions $\lambda$ having $n$ parts. E.g., for $d=6$ and $n=3$, $$\Phi_3 = X_{(4,1,1)} \cup X_{(3,2,1)} \cup X_{(2,2,2)}.$$ Let $I_n \subseteq R$ denote the ideal of $\Phi_n$. In §\[section.acm\] we show that $I_n$ is a determinantal ideal which admits an Eagon-Northcott resolution; in particular, it is perfect.

{#section-3}

Note that the group $SL_2 \, {\mathbf C}$ acts on ${\mathbb P}^d$, namely the element $$g = \left(\begin{array}{rr} p & r \\ q & s \end{array} \right) \in SL_2 \, {\mathbf C},$$ sends $\sum\limits_i \, \binom{d}{i} \, \alpha_i \, x_1^{d-i} \, x_2^i$ to $\sum\limits_i \, \binom{d}{i} \, \alpha_i \, (p \, x_1+q \, x_2)^{d-i} \, (r \, x_1+s \, x_2)^i$. All the varieties defined above inherit this action; in particular, the ideals $I_n,J$ and the Betti modules in their free resolutions are $SL_2$-representations. This equivariance is respected in all of our subsequent constructions. The first syzygy modules occurring in the resolution of $J$ encode the invariant differential equations satisfied by $\Delta$ (and similarly for ${\mathfrak R}$). We write down these equations explicitly using transvectants. The reader is referred to [@FH Lecture 11] and [@Sturmfels §4.2] for basic representation theory of $SL_2$. We will use [@Glenn] and [@GrYo] as standard references for classical invariant theory and symbolic calculus; more recent accounts of this subject may be found in [@CD_inv; @Dolgachev1; @KR; @Olver].

We thank Bernd Sturmfels for initiating the collaboration which led to this paper. We arrived at many of the results in this paper by extensive calculations in Macaulay-2, and it is a pleasure to thank its authors Dan Grayson and Mike Stillman. The second author was supported by NSERC while this work was in progress.

Preliminaries
=============

Let $V$ be a two-dimensional vector space over ${\mathbf C}$ with basis ${\mathbf x}= \{x_1,x_2\}$. Then $\text{Sym}^m \, V = S_m \, V$ is the $(m+1)$-dimensional space of binary forms of order $m$ in ${\mathbf x}$. The $\{S_m \, V : m \ge 0\}$ are a complete set of irreducible $SL(V)$-representations. We will omit the $V$ if no confusion is likely; thus $S_m(S_n)$ stands for the plethysm representation $\text{Sym}^m \, (\text{Sym}^n \, V)$ etc.

Transvectants {#section.trans}
-------------

Given integers $m,n \ge 0$, we have a decomposition of $SL_2$-representations $$S_m \otimes S_n \simeq \bigoplus\limits_{r=0}^{\min\{m,n\}} \, S_{m+n-2r}. \label{Clebsch-Gordan}$$ Let $A,B$ denote binary forms of respective orders $m,n$. The $r$-th transvectant of $A$ with $B$, written $(A,B)_r$, is defined to be the image of $A \otimes B$ via the projection map $$S_m \otimes S_n {\longrightarrow}S_{m+n-2r} \, .$$ It is given by the formula $$(A,B)_r = \frac{(m-r)! \, (n-r)!}{m! \, n!} \, \sum\limits_{i=0}^r \, (-1)^i \binom{r}{i} \, \frac{\partial^r A}{\partial x_1^{r-i} \, \partial x_2^i} \, \frac{\partial^r B}{\partial x_1^i \, \partial x_2^{r-i}} \label{trans.formula}$$ By convention $(A,B)_r = 0$ if $r > \min \, \{m,n\}$. (Some authors choose the scaling factor differently, cf. [@Olver Ch. 5].)
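Formula (\[trans.formula\]) is used throughout the paper, so a direct computational transcription may be useful. The following SymPy sketch is ours and purely illustrative (it is not the authors' Maple code mentioned below); it implements the formula verbatim and checks two standard facts for $F = x_1^4 + x_2^4$: with this normalization the Hessian $(F,F)_2$ equals $2\,x_1^2 x_2^2$, and the odd self-transvectant $(F,F)_1$ vanishes.

```python
from math import comb, factorial
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def transvectant(A, B, r, m, n):
    """r-th transvectant of a binary m-ic A with a binary n-ic B,
    normalized as in (trans.formula)."""
    if r > min(m, n):
        return sp.Integer(0)
    pref = sp.Rational(factorial(m - r) * factorial(n - r),
                       factorial(m) * factorial(n))
    total = sum((-1)**i * comb(r, i)
                * sp.diff(A, x1, r - i, x2, i)
                * sp.diff(B, x1, i, x2, r - i)
                for i in range(r + 1))
    return sp.expand(pref * total)

F = x1**4 + x2**4
assert transvectant(F, F, 2, 4, 4) == 2 * x1**2 * x2**2   # the Hessian (F,F)_2
assert transvectant(F, F, 1, 4, 4) == 0                   # odd self-transvectant vanishes
```

The second check is an instance of the general fact that $(A,A)_r = 0$ for odd $r$, since $(A,B)_r = (-1)^r (B,A)_r$.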
Each $S_m$ is isomorphic to its dual representation $S_m^* = \text{Hom}(S_m,S_0)$ by the map which sends $A \in S_m$ to the functional $B {\longrightarrow}(A,B)_m$. Two forms $A,B \in S_m$ are said to be [*apolar*]{} to each other if $(A,B)_m=0$. In some of the examples below quite a few complicated transvectants had to be calculated; to this end we programmed formula (\[trans.formula\]) in [Maple]{}. If two forms are symbolically expressed, a useful general procedure for calculating their transvectants is given in [@Glenn §3.2.5] (also see [@GrYo §49]). {#section-4} We identify the generic binary $d$-ic ${\mathbb F}= \sum\limits_{i=0}^d \, \binom{d}{i} \, a_i \, x_1^{d-i} \, x_2^i$ with the natural trace form in $S_d \, \otimes \, S_d^*$. Using the self-duality above, this amounts to the identification of $a_i \in S_d^*$ with $\frac{1}{d!} \, x_2^{d-i} \, (-x_1)^i$. Let $R$ be the symmetric algebra $$\bigoplus\limits_{m \ge 0} \, S_m(S_d^*) = \bigoplus\limits_{m \ge 0} \, R_m = {\mathbf C}\, [a_0,\dots,a_d],$$ and ${\mathbb P}^d = {\mathbb P}\, S_d = {\text{Proj} \, }\, R$. Generally $F,G,\dots$ will denote specific binary forms, as opposed to generic forms ${\mathbb F},{\mathbb G}, \dots$. {#section.cov} A [*covariant*]{} of degree-order $(m,q)$ of binary $d$-ics is by definition a trivial summand in the representation $S_q \otimes R_m$ (cf. [@GrYo §11 et seq.]). An invariant is a covariant of order $0$. The most frequently appearing covariants are the Hessian ${\mathbb H}= ({\mathbb F},{\mathbb F})_2$, and the cubicovariant ${\mathbb T}= ({\mathbb F},{\mathbb H})_1$, of degree-orders $(2,2d-4)$ and $(3,3d-6)$ respectively. The discriminant $\Delta$ is an invariant of order $2(d-1)$. If $I(a_0,\dots,a_d)$ is an invariant of degree $m$, then its [*evectant*]{} is defined to be $${\mathcal E}_I = \frac{(-1)^d}{m} \sum\limits_{i=0}^d \, \frac{\partial I}{\partial a_i} \, x_2^{d-i} \, (-x_1)^i.$$ It is a covariant of degree-order $(m-1,d)$. The scaling factor is so chosen that we have an identity $({\mathcal E}_I,{\mathbb F})_d= I$. {#deg.Xlambda} The degree of the CR locus $X_\lambda$ is given by a formula due to Hilbert [@Hilbert2]. Let $e_r$ denote the number of parts in $\lambda$ equal to $r$, thus $\sum\limits_{r \ge 1} e_r = n$ and $\sum r \, e_r =d$. Then $\deg X_\lambda = \frac{n!}{\prod\limits_r \, (e_r!)} \, \prod\limits_{i=1}^n \lambda_i$. For instance, $\deg X_{(3^2,2,1^3)} = \frac{6!}{2! \, 1! \, 3!} \; {3^2 \times 2 \times 1^3} = 1080$. The binary discriminant {#section.J_Delta} ======================= Throughout this paper, we will regard $\Delta$ and ${\mathfrak R}$ as well-defined only up to a multiplicative constant. For a binary $d$-ic $F$, we define its Bezoutiant ${\mathbb B}_F$ as follows: introduce new variables ${\mathbf y}= (y_1, y_2)$, and write $G$ for the form obtained by substituting $y_1,y_2$ for $x_1,x_2$ in $F$. Then $${\mathbb B}_F = (\frac{\partial F}{\partial x_1} \frac{\partial G}{\partial y_2} - \frac{\partial G}{\partial y_1} \frac{\partial F}{\partial x_2})/(x_1 \, y_2 - x_2 \, y_1),$$ which is a form of order $(d-2,d-2)$ in ${\mathbf x},{\mathbf y}$. Henceforth we will assume $d \ge 4$ (but see §\[d\_2\_or\_3\]). In the sequel, $\Bbbk$ will stand for a nonzero rational constant which need not be precisely specified. Define a map $$\beta_F: S_{d-4} {\longrightarrow}S_d,$$ by sending $A \in S_{d-4}$ to $[(A,{\mathbb B}_F)_{d-4}]_{{\mathbf y}={\mathbf x}}$. 
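Before turning to the interpretation of $\beta_F$ in the next paragraph, here is a short SymPy sketch (again ours, for illustration only) that makes two of the objects just defined concrete for the generic quartic $d=4$: it computes the Bezoutiant ${\mathbb B}_F$, checking that it is symmetric in ${\mathbf x}\leftrightarrow{\mathbf y}$ and of order $d-2=2$ in ${\mathbf x}$, and it verifies the normalization $({\mathcal E}_\Delta,{\mathbb F})_d = \Delta$, taking for $\Delta$ the discriminant of the dehomogenized quartic (the identity is insensitive to the choice of scalar multiple of $\Delta$).

```python
from math import comb, factorial
import sympy as sp

x1, x2, y1, y2, t = sp.symbols('x1 x2 y1 y2 t')
a = sp.symbols('a0:5')
d = 4

# generic binary quartic with binomial coefficients
F = sum(comb(d, i) * a[i] * x1**(d - i) * x2**i for i in range(d + 1))
G = F.subs({x1: y1, x2: y2})

# Bezoutiant: the displayed quotient is an exact polynomial division
B = sp.expand(sp.cancel((sp.diff(F, x1) * sp.diff(G, y2)
                         - sp.diff(G, y1) * sp.diff(F, x2)) / (x1*y2 - x2*y1)))
# symmetric under x <-> y, and homogeneous of degree d-2 = 2 in x
assert sp.expand(B - B.subs({x1: y1, y1: x1, x2: y2, y2: x2}, simultaneous=True)) == 0
assert sp.expand(B.subs({x1: t*x1, x2: t*x2}, simultaneous=True) - t**(d - 2) * B) == 0

def transvectant(A, Bf, r, m, n):   # same helper as in the previous sketch
    pref = sp.Rational(factorial(m - r) * factorial(n - r), factorial(m) * factorial(n))
    return sp.expand(pref * sum((-1)**i * comb(r, i)
                                * sp.diff(A, x1, r - i, x2, i)
                                * sp.diff(Bf, x1, i, x2, r - i)
                                for i in range(r + 1)))

# evectant of the discriminant and the identity (E_Delta, F)_d = Delta
Delta = sp.discriminant(F.subs(x2, 1), x1)
m = sp.Poly(Delta, *a).total_degree()          # = 2(d-1) = 6
E = sp.Rational((-1)**d, m) * sum(sp.diff(Delta, a[i]) * x2**(d - i) * (-x1)**i
                                  for i in range(d + 1))
assert sp.expand(transvectant(E, F, d, d, d) - Delta) == 0
```

The last assertion is Euler's relation $\sum_i a_i \, \partial\Delta/\partial a_i = 2(d-1)\Delta$ in disguise, which is why it holds for any scalar normalization of $\Delta$.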
This is interpreted as follows: take the $(d-4)$-th transvectant of $A$ with ${\mathbb B}_F$ with respect to the ${\mathbf x}$ variables, which gives an ${\mathbf x}\, {\mathbf y}$-form of order $(2,d-2)$. By substituting ${\mathbf x}$ for ${\mathbf y}$ we get an ${\mathbf x}$-form of order $d$. Define another morphism $$\gamma_F: S_2 {\longrightarrow}S_d, \quad A {\longrightarrow}(A,F)_1,$$ and finally let $${\mathbf 1}_F: S_0 {\longrightarrow}S_d, \quad 1 {\longrightarrow}F.$$ Note that $\beta_F$ is quadratic in the coefficients of $F$, whereas $\gamma_F,{\mathbf 1}_F$ are linear. Now consider the morphism $$\underbrace{\, \beta_F \oplus \gamma_F \oplus {\mathbf 1}_F}_{h_F}: S_{d-4} \oplus S_2 \oplus S_0 {\longrightarrow}S_d.$$ *We have an equality $$\det \, h_{\mathbb F}= \Delta_{\mathbb F}$$ up to a nonzero scalar.* [[Proof.]{}]{}Let $D_{\mathbb F}= \det h_{\mathbb F}$. It is an invariant of degree $2(d-3) + 3 + 1 = 2(d-1)$, which is the same as $\deg \Delta_{\mathbb F}$. We will show that (1) $D_{\mathbb F}$ vanishes whenever $F$ has a repeated linear factor, and (2) $D_{\mathbb F}$ is not identically zero. This will imply that $D_{\mathbb F}= \Delta_{\mathbb F}$ (up to a scalar). As to (1), after a change of variables we may assume that $x_1^2$ divides $F$. Then $x_1 \, y_1$ divides ${\mathbb B}_F$, and hence $x_1$ divides each form in ${\text{im}}(\beta_F)$. Similarly, $x_1$ divides each form in ${\text{im}}(\gamma_F)$ and ${\text{im}}({\mathbf 1}_F)$, hence $h_F$ is not surjective and $D_F = 0$. Now assume $F = x_1^d + x_2^d$, then $${\mathbb B}_F = d^2 \, \sum\limits_{i=0}^{d-2} \, (x_1 \, y_2)^{d-2-i} \, (x_2 \, y_1)^i.$$ By a direct calculation, $\beta_F(x_1^{d-k-4}x_2^k) = {\Bbbk \;}x_1^{d-k-2} \, x_2^{k+2}$, hence ${\text{im}}(\beta_F) = \text{Span} \, \{ x_1^{d-i} x_2^i: 2 \le i \le d-2\}$. Since $$\gamma_F(x_1^2) = {\Bbbk \;}x_1 \, x_2^{d-1}, \quad \gamma_F(x_1 \, x_2) = {\Bbbk \;}(x_1^d - x_2^d), \quad \gamma_F(x_2^2) = {\Bbbk \;}x_1^{d-1} \, x_2,$$ we deduce that $h_F$ is surjective. This shows (2) and completes the proof. A similar calculation shows that if $F = x_1^2 \, (x_1^{d-2} + x_2^{d-2})$, then ${\text{im}}(h_{\mathbb F}) = \text{Span} \, \{x_1^{d-i} \, x_2^i: 0 \le i \le d-1\}$. Hence $h_F$ has rank $d$ for a general $F \in X_{(2,1^{d-2})}$. Let ${\mathcal E}_\Delta$ be the evectant of $\Delta$ (see §\[section.cov\]), and define the map $$e_F: S_d {\longrightarrow}S_0, \quad A {\longrightarrow}(A,{\mathcal E}_\Delta)_d.$$ *The composites $$e_F \circ \beta_F: S_{d-4} {\longrightarrow}S_0, \quad e_F \circ \gamma_F: S_2 {\longrightarrow}S_0$$ are zero.* [[Proof.]{}]{}Since $e_F \circ \beta_F$ is of degree $(2d-1)$ in the coefficients of $F$, it corresponds to an $SL_2$-equivariant map $S_{d-4} {\longrightarrow}R_{2d-1}$. Said differently, there exists a covariant $C$ of $d$-ics of degree-order $(2d-1,d-4)$ such that $e_F \circ \beta_F(A) = (A,C)_{d-4}$. Similarly, there is a $C'$ of degree-order $(2d-2,2)$ such that $e_F \circ \gamma_F(A) = (A,C')_2$. We will show that if $F \in X_{(2,1^{d-2})}$, then $e_F \circ \beta_F = e_F \circ \gamma_F = 0$. This will imply that each coefficient of $C$ or $C'$ vanishes on $X_{(2,1^{d-2})}$, and hence must be divisible by $\Delta_{\mathbb F}$. The quotients $C/\Delta_{\mathbb F}, C'/\Delta_{\mathbb F}$ are of degree-orders $(1,d-4)$ and $(0,2)$ respectively. Since there are no such nonzero covariants, $C$ and $C'$ must be zero. Let $x_1^2$ be a factor of $F$. By [@GKZ Ch. 12, formula (1.28)] (also see [@Salmon1 Art. 
96]), we have ${\mathcal E}_\Delta = {\Bbbk \;}x_1^d$. Any form $B$ in the image of $\beta_F$ or $\gamma_F$ is divisible by $x_1$, hence $(B,{\mathcal E}_\Delta)_d = (B, x_1^d)_d = 0$. This completes the proof. {#section-5} Now consider the map $$\beta_{\mathbb F}\oplus \gamma_{\mathbb F}: S_{d-4} \oplus S_2 {\longrightarrow}S_d,$$ or what is the same, the corresponding map of graded $R$-modules $$R(-2) \otimes S_{d-4} \oplus R(-1) \otimes S_2 {\longrightarrow}R \otimes S_d. \label{map.Rmodules}$$ Let $M$ denote its $d \times (d+1)$ matrix with respect to the natural monomial bases. *The ideal of maximal minors of $M$ equals $J$ (the Jacobian ideal of $\Delta$).* [[Proof.]{}]{} Let $W$ denote the image of $1$ via the map $$\wedge^d \, (\beta_{\mathbb F}\oplus \gamma_{\mathbb F}): {\mathbf C}{\longrightarrow}\wedge^d S_d \simeq S_d.$$ By construction $W$ is a covariant of degree-order $(2d-3,d)$ whose coefficients are exactly the maximal minors. Let $\{ A_1,\dots,A_d\}$ span ${\text{im}}(\beta_{\mathbb F}\oplus \gamma_{\mathbb F})$. On the one hand, $W$ is the Wronskian of the $A_i$, hence it is (up to scalar) the unique $d$-ic which is apolar to all the $A_i$ (see [@GrYo Appendix II]). On the other hand, $(A_i,{\mathcal E}_\Delta)_d =0$ by the lemma above. Hence $W = {\Bbbk \;}{\mathcal E}_\Delta$. The subvariety of ${\mathbb P}^d$ defined by $J$ is of codimension $2$, hence the Eagon-Northcott complex (or what is the same in this case, the Hilbert-Burch complex) of the map (\[map.Rmodules\]) resolves $J$ (see [@BrunsVetter Ch. 16 F]). We have proved the following: \[theorem.res.J\] *The ideal $J$ is perfect of height $2$ with $SL_2$-equivariant minimal resolution $$\begin{aligned} 0 {\leftarrow}R/J {\leftarrow}R & {\leftarrow}R(3-2d) \otimes S_d \\ & {\leftarrow}R(2-2d) \otimes S_2 \oplus R(1-2d) \otimes S_{d-4} {\leftarrow}0. \qquad \qed \end{aligned}$$* {#section-6} The first syzygy modules $S_2, S_{d-4}$ correspond to systems of $SL_2$-equivariant differential equations for $\Delta$; we proceed to make these equations explicit. For all $A \in S_2$, we have $((A,{\mathbb F})_1,{\mathcal E}_\Delta)_d =0$. Using classical symbolic calculus (see [@GrYo Ch. I]), let $$A = \alpha_{\mathbf x}^2, \quad {\mathbb F}= f_{\mathbf x}^d, \quad {\mathcal E}_\Delta = e_{\mathbf x}^d.$$ Then $(A,{\mathbb F})_1 = (\alpha \, f) \, \alpha_{\mathbf x}\, f_{\mathbf x}^{d-1}$, and $$\begin{aligned} {} & ((A,{\mathbb F})_1,{\mathcal E}_\Delta)_d = (\alpha \, f) (\alpha \, e) (f \, e)^{d-1} = (\alpha_{\mathbf x}^2, (f \, e)^{d-1} \, f_{\mathbf x}\, e_{\mathbf x})_2 = \\ & (A,({\mathbb F},{\mathcal E}_\Delta)_{d-1})_2 =0. \end{aligned}$$ Since $({\mathbb F},{\mathcal E}_\Delta)_{d-1}$ is apolar to every order $2$ form, it must be identically zero. {#section-7} In fact we have an identity $({\mathbb F},{\mathcal E}_I)_{d-1}=0$ for any invariant $I$. This can be informally explained as follows: $I$ is left unchanged by all $g \in SL_2$, hence it is annihilated by the Lie algebra $\mathfrak{sl}_2$. Now observe that $\mathfrak{sl}_2$ (as the adjoint $SL_2$-representation) is isomorphic to $S_2$. The standard generators $\left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right), \left( \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right), \left( \begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array} \right)$ respectively give the equations (cf. 
[@Sturmfels Theorem 4.5.2]) $$\sum\limits_{i=0}^d \, (d-i) \, a_{i+1} \, \frac{\partial I}{\partial a_i} = \sum\limits_{i=0}^d i \, a_{i-1} \, \frac{\partial I }{\partial a_i} = \sum\limits_{i=0}^d \, (d-2i) \, a_i \, \frac{\partial I }{\partial a_i} = 0.$$ {#section.diffeq2.J} Similarly we have a $(d-3)$-dimensional family of differential equations for $\Delta$ coming from the module $S_{d-4}$. We will express it in a form involving only the quadratic covariants of ${\mathbb F}$. As before, $$([(A,{\mathbb B}_{\mathbb F})_{d-4}]_{{\mathbf y}= {\mathbf x}},{\mathcal E}_\Delta)_d = 0 \quad \text{for all $A \in S_{d-4}$.}$$ Let $A = {\alpha_{\mathbf x}}^{d-4}, \, {\mathbb B}_F = b_{\mathbf x}^{d-2} \, {b'_{\mathbf y}}^{d-2}$ where $b,b'$ are equivalent letters. Then $$\begin{aligned} {} & ([(A,{\mathbb B}_{\mathbb F})_{d-4}]_{{\mathbf y}= {\mathbf x}},{\mathcal E}_\Delta)_d = ((\alpha \, b)^{d-4} \, {b_{\mathbf x}}^2 \, {b_{\mathbf x}'}^{d-2}, e_{\mathbf x}^d \, )_d = \\ & (\alpha \, b)^{d-4} \, (b \, e)^2 (b' \, e)^{d-2} = (A,b_{\mathbf x}^{d-4} \, (b \, e)^2 (b' \, e)^{d-2})_{d-4} =0, \end{aligned}$$ hence $$b_{\mathbf x}^{d-4} \, (b \, e)^2 \, (b' \, e)^{d-2} =0. \label{bb'}$$ Let ${\mathbf x}\, \partial_{\mathbf y}= x_1 \, \frac{\partial}{\partial y_1} + x_2 \, \frac{\partial}{\partial y_2}$, usually called the polarization operator. Then $({\mathbf x}\, \partial_{\mathbf y})^2 \circ {\mathbb B}_{\mathbb F}= (d-2) \, (d-1) \, b_{\mathbf x}^{d-2} \, {b'_{\mathbf x}}^2 \, {b'_{\mathbf y}}^{d-4}$, hence identity (\[bb’\]) is the same as $$({\mathcal E}_\Delta,({\mathbf x}\, \partial_{\mathbf y})^2 \circ {\mathbb B}_{\mathbb F})_d =0. \label{quad-syzygy.2}$$ Let us write $({\mathbb F},{\mathbb F})_{2r} = {\tau^{(2r)}_{{\mathbf x}}}^{2d-4r}$ for the even quadratic covariants. We have a Gordan series (see [@GrYo p. 55]) $${\mathbb F}({\mathbf x}) \, {\mathbb F}({\mathbf y}) = \sum\limits_{r=0}^{[\frac{d}{2}]} \, c_r \, ({\mathbf x}\, {\mathbf y})^{2r} \, {\tau^{(2r)}_{\mathbf x}}^{d-2r} \, {\tau^{(2r)}_{\mathbf y}}^{d-2r},$$ where $c_r = \frac{\binom{d}{2r}^2}{\binom{2d-2r+1}{2r}}$. Apply the operator $$\Omega = \frac{\partial^2}{\partial x_1 \, \partial y_2} - \frac{\partial^2}{\partial x_2 \, \partial y_1},$$ and divide by $({\mathbf x}\, {\mathbf y})$, then we get an expansion $${\mathbb B}_{\mathbb F}= \frac{\Omega \circ {\mathbb F}({\mathbf x}) \, {\mathbb F}({\mathbf y})}{({\mathbf x}\, {\mathbf y})} = \sum\limits_{r=1}^{[\frac{d}{2}]} \, c_r \, (2r) \, (2d-2r+1) \, ({\mathbf x}\, {\mathbf y})^{2r-2} \, {\tau^{(2r)}_{\mathbf x}}^{d-2r} \, {\tau^{(2r)}_{\mathbf y}}^{d-2r}.$$ Apply $({\mathbf x}\, \partial_{\mathbf y})^2$ to each term, which amounts to replacing the expression $({\mathbf x}\, {\mathbf y})^{2r-2} \, {\tau^{(2r)}_{\mathbf x}}^{d-2r} \, {\tau^{(2r)}_{\mathbf y}}^{d-2r}$ with $$(d-2r)(d-2r-1) \, ({\mathbf x}\, {\mathbf y})^{2r-2} \, {\tau^{(2r)}_{\mathbf x}}^{d-2r+2} \, {\tau^{(2r)}_{\mathbf y}}^{d-2r-2}.$$ Now apply $({\mathcal E}_\Delta,-)_d$ to each term, then $$\begin{aligned} {} & (\epsilon_{\mathbf x}^d, ({\mathbf x}\, {\mathbf y})^{2r-2} \, {\tau^{(2r)}_{\mathbf x}}^{d-2r+2} \, {\tau^{(2r)}_{\mathbf y}}^{d-2r-2})_d = \epsilon_{\mathbf y}^{2r-2} \, (\epsilon \, \tau)^{d-2r+2} \, {\tau^{(2r)}_{\mathbf y}}^{d-2r-2} \\ & [(\epsilon_{\mathbf x}^d, {\tau^{(2r)}_{\mathbf x}}^{2d-4r})_{d-2r+2}]_{{\mathbf x}= {\mathbf y}}. 
\end{aligned}$$ Hence finally we deduce the identity $$\sum\limits_{r=1}^{[\frac{d-2}{2}]} \, \xi_r \, ({\mathcal E}_\Delta,({\mathbb F},{\mathbb F})_{2r})_{d-2r+2} = 0, \label{diffeq.deg2}$$ where $$\xi_r = \frac{(2d-4r+1)!}{(2r-1)!(d-2r-2)!(d-2r)!(2d-2r)!}.$$ The degree of the Jacobian scheme --------------------------------- Let $Z={\text{Proj} \, }(R/J)$. It is the scheme-theoretic degeneracy locus where the morphism $$S_d \otimes {\mathcal O}_{{\mathbb P}^d} {\longrightarrow}S_2 \otimes {\mathcal O}_{{\mathbb P}^d}(1) \oplus S_{d-4} \otimes {\mathcal O}_{{\mathbb P}^d}(2)$$ has rank $\le d-1$. Hence, by the Porteous formula (see [@ACGH Ch. II, §4]) the degree of $Z$ is given by the coefficient of $h^2$ in $(1+h)^{-3} (1+2h)^{3-d}$, which is $2 \, d \, (d-2)$. By Hilbert’s formula in §\[deg.Xlambda\], $$\deg X_{\tau} = 3 \, (d-2), \quad \deg X_{\delta} = 2 \, (d-2) \, (d-3).$$ *The scheme $Z$ has multiplicities $2$ and $1$ along $X_{\tau}$ and $X_{\delta}$ respectively. \[prop.multiplicity\]* This means, for instance, that if $\eta_\tau$ is the scheme-theoretic generic point of $X_\tau$, then the ring ${\mathcal O}_{Z,\eta_\tau}$ is of length $2$. [[Proof.]{}]{} If the multiplicities are $a,b$, then $\deg Z = a \, \deg X_\tau + b \, \deg X_\delta$, i.e., $$2 d \, (d-2) = 3 a \, (d-2) + 2 b \, (d-2) \, (d-3).$$ We have obvious constraints $a,b \ge 1$, and then it is straightforward to check that $(a,b)=(2,1)$ is the only possible solution. Examples {#section.examples} ======== In this section we will describe $J$ and its primary decomposition for $d \le 5$. In each case the minimal system of generators for the ring of covariants was calculated in the nineteenth century (see [@GrYo Ch. V, VII]). If $C$ is a covariant of $d$-ics, then ${\mathfrak I}(C) \subseteq R$ will denote the graded ideal generated by the coefficients of $C$. Cubics and quadratics {#d_2_or_3} --------------------- So far we had assumed $d \ge 4$. The case $d=3$ is a little exceptional, but rather easy. In this case $Z$ is a non-reduced scheme of degree $6$ supported on the twisted cubic curve $X_{(3)}$. The minimal system of cubics consists of ${\mathbb F},{\mathbb H},{\mathbb T}$, and $\Delta = ({\mathbb T},{\mathbb F})_3$, i.e., every covariant is a polynomial function in these. It is immediate that ${\mathcal E}_\Delta= {\mathbb T}$, and Theorem \[theorem.res.J\] is true as stated with the convention that $S_{-1} =0$. Thus we have a resolution $$0 {\leftarrow}R/J {\leftarrow}R {\leftarrow}R(-3) \otimes S_3 {\leftarrow}R(-4) \otimes S_2 {\leftarrow}0.$$ The ideal of $X_{(3)}$ is ${\mathfrak I}({\mathbb H})$ (cf. [@FH Exercise 11.32]), hence we have an equality ${\mathfrak I}({\mathbb H}) = \sqrt{{\mathfrak I}({\mathbb T})}$. For $d=2$, we have $\Delta = ({\mathbb F},{\mathbb F})_2$ and ${\mathcal E}_\Delta = {\mathbb F}$, i.e., $J = (a_0,a_1,a_2)$ is the irrelevant maximal ideal. Quartics {#d_4} -------- Define $i = ({\mathbb F},{\mathbb F})_4, \, j = ({\mathbb F},{\mathbb H})_4$, which are invariants of degrees $2,3$. The minimal system for $d=4$ consists of ${\mathbb F},{\mathbb H},{\mathbb T},i$ and $j$. Let ${\mathfrak P}_\tau,{\mathfrak P}_\delta \subseteq R$ denote the ideals of $X_{(3,1)}$ and $X_{(2,2)}$ respectively. ** 1. We have identities $$\Delta_{\mathbb F}= i^3 - 6 \, j^2, \quad {\mathcal E}_\Delta = i^2 \, F - 6 \, j \, {\mathbb H}.$$ 2. ${\mathfrak P}_\tau$ is the complete intersection ideal $(i,j)$, and ${\mathfrak P}_\delta ={\mathfrak I}({\mathbb T})$. 3. 
We have a primary decomposition $J = (i^2,j) \cap {\mathfrak P}_\delta$. [[Proof.]{}]{} Since $\Delta$ is of degree $6$, it must be a linear combination of $i^3$ and $j^2$, say $c_1 \, i^3 + c_2 \, j^2$. Specialise to $F = x_1^2 \, x_2 \, (x_1 + x_2)$, when $\Delta_F$ must vanish. Computing directly, we get the equation $\frac{c_1}{216} + \frac{c_2}{1296} =0$, hence $c_1:c_2 = 1:-6$, i.e., we may take $\Delta = i^3 - 6 \, j^2$. Differentiating this identity, we get $${\mathcal E}_\Delta = \frac{1}{6} \, (3 \, i^2 \times 2 \, {\mathcal E}_i - 12 \, j \times 3 \, {\mathcal E}_j).$$ But ${\mathcal E}_j = {\mathbb H}$ and ${\mathcal E}_i = {\mathbb F}$, hence it equals $i^2 \, {\mathbb F}- 6 \, j \, {\mathbb H}$. This proves (a1). Since $X_{(3,1)}$ is exactly the locus of nullforms, it is characterized by the vanishing of all invariants, i.e., $i=j=0$ at $F \iff F \in X_{(3,1)}$. Since the ideal $(i,j)$ has no embedded primes, it must be ${\mathfrak P}_\tau$-primary. But since it also has degree $6 \, (= \deg {\mathfrak P}_\tau)$, we get $(i,j) = {\mathfrak P}_\tau$. In [@AC1 Theorem 1.4] it is proved that the ideal of every CR-locus of the type $X_{(a,a)}$ is generated in degree $3$. It follows from the set-up described there that the degree $3$ piece $({\mathfrak P}_\delta)_3$ is the kernel of the surjective morphism $$S_3(S_4) {\longrightarrow}S_3(S_2 \otimes S_2) {\longrightarrow}S_3(S_2) \otimes S_3(S_2) {\longrightarrow}S_6 \otimes S_6 {\longrightarrow}S_2(S_6).$$ We have plethysm decompositions $$S_3(S_4) = S_{12} \oplus S_8 \oplus S_6 \oplus S_4 \oplus S_0, \quad S_2(S_6) = S_{12} \oplus S_8 \oplus S_4 \oplus S_0,$$ hence $({\mathfrak P}_\delta)_3 \simeq S_6$. This subrepresentation must correspond to ${\mathbb T}$, since up to scalar it is the only covariant of degree-order $(3,6)$. This implies that ${\mathfrak P}_\delta = {\mathfrak I}({\mathbb T})$. To prove (a3), let $J = {\mathfrak q}_\tau \cap {\mathfrak q}_\delta$ be the (necessarily unique) primary decomposition, such that ${\mathfrak q}_\star$ is ${\mathfrak P}_\star$-primary. (See [@AM Ch. 4] for generalities on primary decomposition.) Since $J$ has multiplicity one along $X_{(2,2)}$, we have ${\mathfrak q}_\delta = {\mathfrak P}_\delta$. Note that $(i^2,j)$ is ${\mathfrak P}_\tau$-primary (since it is perfect and its radical is ${\mathfrak P}_\tau$), moreover the expression for ${\mathcal E}_\Delta$ in (a1) shows that $J \subseteq (i^2,j)$. This implies that ${\mathfrak q}_\tau \subseteq (i^2,j)$, and it only remains to show the opposite inclusion. Let $z$ be any of the coefficients of ${\mathbb T}$, then $$(J:z) = ({\mathfrak q}_\tau:z) \cap ({\mathfrak P}_\delta:z).$$ Now $z \notin {\mathfrak P}_\tau$, hence $({\mathfrak q}_\tau:z) = {\mathfrak q}_\tau$. Since $({\mathfrak P}_\delta:z)=R$, we have $(J:z) = {\mathfrak q}_\tau$. From (a1), $$({\mathcal E}_\Delta,{\mathbb H})_1 = (i^2 \, {\mathbb F}- 6 \, j \, {\mathbb H},{\mathbb H})_1 = i^2 \, ({\mathbb F}, {\mathbb H})_1 - 6 \, j \, ({\mathbb H},{\mathbb H})_1 = i^2 \, {\mathbb T},$$ and similarly $({\mathcal E}_\Delta, {\mathbb F})_1 = 6 \, j \, {\mathbb T}$. It follows that $i^2 \,z, j \, z \in J$, implying $i^2,j \in {\mathfrak q}_\tau$. This completes the proof of the proposition. 
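The identity in (a1) can also be confirmed by a direct machine computation. The sketch below (illustrative only; it re-implements the transvectant formula of §\[section.trans\] in sympy, with ad hoc names) checks that $i^3 - 6\,j^2$ is proportional to the discriminant of the generic quartic.

```python
# Check that Delta ~ i^3 - 6 j^2 for the binary quartic, using the
# transvectant normalization of formula (trans.formula).  Illustrative sketch.
import sympy as sp
from sympy import binomial, factorial

x1, x2 = sp.symbols('x1 x2')
a = sp.symbols('a0:5')

def transvectant(A, B, m, n, r):
    pref = factorial(m - r) * factorial(n - r) / (factorial(m) * factorial(n))
    return sp.expand(pref * sum((-1)**k * binomial(r, k)
                                * sp.diff(A, x1, r - k, x2, k)
                                * sp.diff(B, x1, k, x2, r - k)
                                for k in range(r + 1)))

F = sum(binomial(4, k) * a[k] * x1**(4 - k) * x2**k for k in range(5))
H = transvectant(F, F, 4, 4, 2)       # Hessian, degree-order (2,4)
i_ = transvectant(F, F, 4, 4, 4)      # invariant of degree 2
j_ = transvectant(F, H, 4, 4, 4)      # invariant of degree 3

D = sp.expand(i_**3 - 6 * j_**2)
disc = sp.discriminant(F.subs(x2, 1), x1)   # ordinary discriminant of F(x,1)

ratio = sp.cancel(D / disc)
print(ratio)                # a nonzero rational constant
assert ratio.is_number and ratio != 0
```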
The identity (\[diffeq.deg2\]) of §\[section.diffeq2.J\] reduces to $({\mathcal E}_\Delta,{\mathbb H})_4=0$, which gives the differential equation $$\begin{aligned} {} & (2 \, a_0 \, a_2 - 2 \, a_1^2) \, \frac{\partial \Delta}{\partial a_0} + (a_0 \, a_3 - a_1 \, a_2) \, \frac{\partial \Delta}{\partial a_1} + (\frac{2}{3} \, a_1\, a_3 - a_2^2 + \frac{1}{3} \, a_0 \, a_4) \, \frac{\partial \Delta}{\partial a_2} + \\ & (a_1 \, a_4 - a_2 \, a_3) \, \frac{\partial \Delta}{\partial a_3} + (2 \, a_2 \, a_4 - 2 \, a_3^2) \, \frac{\partial \Delta}{\partial a_4} =0. \end{aligned}$$ Quintics -------- The invariant theory of the binary $d$-ic rapidly becomes more complicated with increasing $d$, in particular it is progressively harder to calculate $J$ precisely. In this section we will complete the calculation for $d=5$, making heavy use of machine computations in [Maple]{} and Macaulay-2. The minimal system is given on [@GrYo p. 131]. (Since it has $23$ members, it will not be reproduced here.) For quintics, the number of linearly independent covariants of degree-order $(m,q)$ is the number of copies of $S_q$ in the plethysm $S_m(S_5)$. We wrote our own set of [Maple]{} procedures based on the Cayley-Sylvester formula (see  [@Sturmfels Corollary 4.2.8]) to decompose it into irreducible summands. In addition to ${\mathbb H}$ and ${\mathbb T}$, we have covariants $i = ({\mathbb F},{\mathbb F})_4, A = (i,i)_2$ of degree-orders $(2,2),(4,0)$ respectively. Define $$\begin{array}{lc} & \text{degree-order} \\ C_1 = 15 \, (i,{\mathbb H})_2 + 2 \, i^2 & (4,4)\\ C_2 = 770 \, (i,{\mathbb F}\, {\mathbb H})_2 - 675 \, (i,({\mathbb F},{\mathbb H})_1)_1 + 198 \, i^2 \, {\mathbb F}& (5,9) \\ D_1 = - 21 \, (C_1,{\mathbb F}^2)_4 + 55 \, (C_1,{\mathbb H})_2 + 14 \, C_1 \, i & (6,6) \\ D_2 = 5 \, (C_1,{\mathbb H})_4 + 4 \, (C_1,i)_2 & (6,2) \end{array}$$ ** 1. We have identities $$\begin{aligned} \Delta = & \, 59 \, A^2 + 320\, (i^3,{\mathbb H})_6, \\ {\mathcal E}_\Delta = & \, \frac{25}{3} \, A \, (i,{\mathbb F})_1 + \frac{3400}{21} \, i \, (i^2,{\mathbb F})_3 \, - 240 \, (i^2,({\mathbb F},{\mathbb H})_1)_4. \end{aligned}$$ 2. If ${\mathfrak P}_\tau,{\mathfrak P}_\delta$ denote the ideals of $X_\tau, X_\delta$ respectively, then $${\mathfrak P}_\tau = {\mathfrak I}(C_1,A), \quad {\mathfrak P}_\delta = {\mathfrak I}(C_2).$$ 3. We have a primary decomposition $$J = {\mathfrak q}_\tau \cap {\mathfrak P}_\delta,$$ where ${\mathfrak q}_\tau = {\mathfrak I}(D_1,D_2)$ is ${\mathfrak P}_\tau$-primary. [[Proof.]{}]{} The minimal system shows that there are only two independent invariants in degree $8$, namely $A^2$ and $(i^3,{\mathbb H})_6$. Hence $\Delta = c_1 \, A^2 + c_2 \, (i^3,{\mathbb H})_6$ for some $c_i$. Specialise to $F = x_1^2 \, x_2 \, (x_1 + x_2) \, (x_1 - x_2)$ (when $\Delta$ must vanish), then we get $320 \, c_1 - 59 \, c_2 =0$. Similarly $A \, (i, {\mathbb F})_1, i \, (i^2, {\mathbb F})_3, (i^2,({\mathbb F},{\mathbb H})_1)_4$ form a basis of covariants of degree-order $(7,5)$, hence ${\mathcal E}_\Delta$ must be their linear combination. We can find the coefficients by specialisation as before, and this establishes the formulae in (b1). First we determine the generators of ${\mathfrak P}_\tau$ using the recipe of [@ego3 §3.1]. Write $$\sum\limits_{i=0}^5 \, \binom{5}{i} \, a_i \, x_1^{5-i} \, x_2^i = (b_1 \, x_1 + b_2 \, x_2)^3 \, (c_0 \, x_1^2 + 2 \, c_1 \, x_1 \, x_2 + c_2 \, x_2^2)$$ (where $a,b,c$ are indeterminates), and equate the coefficients. 
This defines a ring morphism $${\mathbf C}[a_0,\dots,a_5] {\longrightarrow}{\mathbf C}[b_1,b_2,c_0,c_1,c_2],$$ whose kernel is ${\mathfrak P}_\tau$. A computation (done in Macaulay-2) shows that all the ideal generators are in degree $4$, and $\dim \, ({\mathfrak P}_\tau)_4 = 6$. Now $A$ (being an invariant) must vanish on $X_\tau$, hence $({\mathfrak P}_\tau)_4$ has $S_0$ as a summand. The module $S_4(S_5)$ contains no copies of $S_i$ for $0 < i < 4$, and $2$ copies of $S_4$. Hence $({\mathfrak P}_\tau)_4$ must be isomorphic to $S_0 \oplus S_4$ as an $SL_2$-representation. The order $4$ piece (to be called $C_1$) must be a linear combination of $(i,{\mathbb H})_2$ and $i^2$, because the latter form a basis in degree-order $(4,4)$. Then we determine the actual coefficients as before by specialising $F$ to $x_1^3 \, x_2 \, (x_1+x_2)$. A similar computation shows that ${\mathfrak P}_\delta$ is generated by a $10$-dimensional vector subspace of $R_5$. Notice that $X_\delta \supseteq X_{(4,1)}$, and by [@Ei2], the ideal of $X_{(4,1)}$ equals ${\mathfrak I}(i)$. Thus we have an inclusion ${\mathfrak P}_\delta \subseteq {\mathfrak I}(i)$; this implies that each degree $5$ covariant vanishing on $X_\delta$ must be a linear combination of terms of the form $(i, \Phi)_k$ for some degree $3$ covariant $\Phi$. (This follows because the vector space $({\mathfrak I}(i))_5$ is spanned by such terms.) Clearly $0 \le k \le 2$. Now $S_3(S_5) \simeq S_{15} \oplus S_{11} \oplus S_9 \oplus S_7 \oplus S_5 \oplus S_3$, corresponding to the cases $$\Phi = {\mathbb F}^3, \, {\mathbb F}\, {\mathbb H}, \, ({\mathbb F},{\mathbb H})_1, \, i \, {\mathbb F}, \, (i,{\mathbb F})_1, \, (i,{\mathbb F})_2.$$ This allows us to write down all the possibilities for $(i,\Phi)_k$. An exhaustive search shows that $C_2$ is the only linear combination which vanishes on $F = x_1^2 \, x_2^2 \, (x_1 + x_2)$. This proves (b2). The ${\mathfrak P}_\delta$-primary component of $J$ is ${\mathfrak P}_\delta$ itself. Let $w$ denote the coefficient of $x_1^9$ in $C_2$, then ${\mathfrak q}_\tau$ (the ${\mathfrak P}_\tau$-primary component) equals the colon ideal $(J:w)$. We calculated the latter in Macaulay-2, and found it to have $10$ generators in degree $6$, and $12$ first syzygies in degree $7$. Hence we have a resolution $$0 {\leftarrow}R/{\mathfrak q}_\tau {\leftarrow}R {\leftarrow}R(-6) \otimes M_{10} {\leftarrow}R(-7) \otimes M_{12} {\leftarrow}\dots$$ where $M_r$ denotes an $r$-dimensional $SL_2$-representation. Now $$S_6 \, (S_5) = S_2^{\, \oplus 2} \oplus S_4 \oplus S_6^{\, \oplus 4} \oplus S_8^{\, \oplus 2} \oplus \text{summands $S_i$ with $i \ge 10$},$$ hence the dimension count forces $M_{10} \simeq S_6 \oplus S_2$. Let $D_1,D_2$ denote the corresponding covariants of orders $6$ and $2$. Since ${\mathfrak q}_\tau \subseteq {\mathfrak P}_\tau$, each $D_i$ can be written as a sum of terms of the form $(C_1,\Psi)_k, A \, \Psi'$, where $\Psi, \Psi'$ are of degree $2$. Thus we may write $$\begin{aligned} D_1 & = \alpha_1 \, (C_1,{\mathbb F}^2)_4 + \alpha_2 \, (C_1,{\mathbb H})_2 +\alpha_3 \, C_1 \, i, \\ D_2 & = \beta_1 \, (C_1,{\mathbb H})_4 + \beta_2 \, (C_1,i)_2, \end{aligned}$$ for some $\alpha_i, \beta_j \in {\mathbf Q}$. (The terms $A \, {\mathbb H}$ and $A \, i$ are not needed, because a calculation shows that they are respectively equal to $$\frac{3}{25} \, (C_1, {\mathbb F}^2)_4 - \frac{1}{25} \, (C_1,{\mathbb H})_2 + \frac{162}{875} \, C_1 \, i, \quad \frac{18}{25} \, (C_1,{\mathbb H})_4 -\frac{48}{125} \, (C_1,i)_2. 
)$$ Since $J \subseteq {\mathfrak q}_\tau$, we must have $${\mathcal E}_\Delta = \gamma_1 \, (D_1,F)_3 + \gamma_2 \, (D_2,F)_1$$ for some $\gamma_i \in {\mathbf Q}$. When rewritten in terms of the basis elements $A \, (i,{\mathbb F})_1, i \, (i^2,{\mathbb F})_3, (i^2,({\mathbb F},{\mathbb H})_1)_4$ for covariants of degree-order $(7,5)$, this becomes an inhomogeneous system of three linear equations. It turns out that there is a two-dimensional family of solutions, and the general solution can be written as $$\begin{aligned} {} & (\gamma_1 \, \alpha_1,\gamma_1 \, \alpha_2,\gamma_1 \, \alpha_3, \gamma_2 \, \beta_1,\gamma_2 \, \beta_2) = \\ & (-\frac{3}{5} - s + \frac{5}{4} \, t, \, 5 - \frac{5}{3} \, s - \frac{25}{6} \, t, - \frac{2}{7} - \frac{8}{7} \, s + \frac{75}{28} \, t, \, s,t). \end{aligned}$$ In order to determine $s,t$, we need to look at the first syzygies of ${\mathfrak q}_\tau$. Since they are all linear, $M_{12}$ must be a submodule of $$M_{10} \otimes S_5 \simeq (S_6 \oplus S_2) \otimes S_5 \simeq S_{11} \oplus S_9 \oplus S_7^{\oplus 2} \oplus S_5^{\oplus 2} \oplus S_3^{\oplus 2} \oplus S_1.$$ By a dimension count, there are only four possible choices for $M_{12}$, it can only be $S_{11}, S_5^{\oplus 2}, S_5 \oplus S_3 \oplus S_1$ or $S_7 \oplus S_3$. It cannot be $S_{11}$ since the corresponding covariant is divisible by ${\mathbb F}$, and cancelling the latter would imply the absurdity that there is a first syzygy in degree $6$. If $S_5 \subseteq M_{12}$ (i.e., if there were a syzygy in order $5$), then there would be a nontrivial identity of the form $\eta_1 \, (D_1,F)_3 + \eta_2 \, (D_2,F)_1 = 0$. A calculation shows that there is none, this rules out all but the last choice. Thus $S_7 \subseteq M_{12}$, i.e., we have an identity of the form $$\eta_1 \, (D_1,{\mathbb F})_2 + \eta_2 \, D_2 \, {\mathbb F}= 0.$$ Indeed, it turns out that $(s,t) = (\frac{24}{35}, \frac{96}{175}), \eta_1/\eta_2 = 4$ is the unique nontrivial solution. Finally we choose $\gamma_1 = \frac{1}{35}, \gamma_2 = \frac{24}{175}$, so that $D_1,D_2$ acquire integer coefficients. The proposition is proved. It would be of interest to have a general result describing the primary decomposition of $J$ for all $d$, but this appears inaccessible. {#section-8} Not every invariant of binary forms has a perfect Jacobian ideal. E.g., let $d=4$ (with notation as in §\[d\_4\]). Let us show that ${\mathfrak b}= {\mathfrak I}({\mathcal E}_j)$ (the Jacobian ideal of $j$) is not perfect. Since ${\mathcal E}_j$ is a covariant of degree-order $(2,4)$, it must coincide with ${\mathbb H}$ up to a scalar. The zero locus of ${\mathfrak b}= {\mathfrak I}({\mathbb H})$ is the rational normal quartic curve, hence $\dim \, (R/{\mathfrak b}) = 2$. However we have an identical relation $({\mathbb H},{\mathbb F})_2 = \frac{1}{6} \, i \, {\mathbb F}$ (see [@GrYo p. 92]), which implies that $i \, (a_0,\dots,a_4) \subseteq {\mathfrak b}$. Consequently ${\mathfrak b}$ is not a saturated ideal, and $\text{depth}\, (R/{\mathfrak b}) =0$. The binary resultant {#section.J_Res} ==================== We begin with a recapitulation of the Cayley method of calculating the binary resultant (see [@GKZ Ch. 2]). The reader may also consult [@Dickenstein-Andrea] for variations on this theme. Let $${\mathbb F}= \sum\limits_{i=0}^d \, \binom{d}{i} \, a_i \, x_1^{d-i} \, x_2^i, \quad {\mathbb G}= \sum\limits_{j=0}^e \, \binom{e}{j} \, b_j \, x_1^{e-j} \, x_2^j,$$ denote generic binary forms of orders $d,e$. 
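Before the bundle-theoretic construction is developed, it may help to recall the most classical incarnation of the Cayley method: for two binary forms of the same order $d$, the determinant of the $d \times d$ Bezoutian matrix of the dehomogenized forms equals their resultant up to sign. The following sympy sketch is illustrative only (equal orders are assumed for simplicity; the Morley-form construction below handles the general case $d \ge e$).

```python
# Classical Cayley/Bezout computation of the resultant of two binary forms of
# equal order d, dehomogenized at x2 = 1.  Illustrative sketch only.
import sympy as sp

x, y = sp.symbols('x y')

def bezout_matrix(f, g, d):
    """d x d matrix B with (f(x)g(y) - f(y)g(x))/(x - y) = sum B[i,j] x^i y^j."""
    num = sp.expand(f * g.subs(x, y) - f.subs(x, y) * g)
    bez = sp.expand(sp.cancel(num / (x - y)))
    return sp.Matrix(d, d, lambda i, j: bez.coeff(x, i).coeff(y, j))

f = x**4 + 3*x**2 - x + 2
g = 2*x**4 - x**3 + 5
B = bezout_matrix(f, g, 4)
det_B = sp.expand(B.det())
res = sp.resultant(f, g, x)
print(det_B, res)
assert det_B == res or det_B == -res   # agreement up to sign
```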
Define the product space $Y = {\mathbb P}S_d \times {\mathbb P}S_e \times {\mathbb P}S_1$ with projection maps $\mu_1,\mu_2,\pi$ onto the respective factors. Consider the subvariety $${{\widetilde \Gamma}}= \{(F,G,l) \in Y: \text{$l$ divides $F,G$}\} \subseteq Y.$$ Let $f = \mu_1 \times \mu_2$, then $\Gamma = f({{\widetilde \Gamma}}) \subseteq {\mathbb P}^d \times {\mathbb P}^e$ is the resultant hypersurface. For any integers $m,n, p$, let ${\mathcal O}_Y(m,n,p)$ denote the line bundle $$\mu_1^* \, {\mathcal O}_{{\mathbb P}^d}(m) \otimes \mu_2^* \, {\mathcal O}_{{\mathbb P}^e}(n) \otimes \pi^* \, {\mathcal O}_{{\mathbb P}^1}(p),$$ with similar notation on ${\mathbb P}^d \times {\mathbb P}^e$. There is a tautological global section in $H^0(Y,{\mathcal O}_Y(1,0,d)) = S_d \otimes S_d$ corresponding to the trace element ${\mathbb F}$, and similarly for ${\mathbb G}$. Both of these sections simultaneously vanish at $(F,G,l)$ iff $(F,l^d)_d = (G,l^e)_e =0$, i.e., iff $l$ divides $F,G$. In fact we have a Koszul resolution $$\begin{aligned} 0 {\rightarrow}{\mathcal O}_Y(-1,-1,-(d+e)) & {\rightarrow}{\mathcal O}_Y(-1,0,-d) \oplus {\mathcal O}_Y(0,-1,-e) \\ & {\rightarrow}{\mathcal O}_Y {\rightarrow}{\mathcal O}_{{\widetilde \Gamma}}{\rightarrow}0. \end{aligned}$$ Now tensor with ${\mathcal O}_Y(0,0,d)$, and write this complex as $$0 {\rightarrow}{\mathcal C}^{-2} {\rightarrow}{\mathcal C}^{-1} {\rightarrow}{\mathcal C}^0 {\rightarrow}{\mathcal O}_{{{\widetilde \Gamma}}}(0,0,d) {\rightarrow}0. \label{complex.C}$$ We have a second quadrant spectral sequence $$\begin{aligned} {} & E_1^{p,q} = R^q f_* \, {\mathcal C}^p, \qquad d_r^{\, p,q}: E_r^{\, p,q} {\rightarrow}E_r^{\, p+r,q-r+1}, \\ & E_\infty^{p+q} \Rightarrow R^{p+q} f_* \, {\mathcal O}_{{{\widetilde \Gamma}}}(0,0,d) \end{aligned} \label{spectralseq}$$ in the range $p=0,-1,-2$ and $q=0,1$. {#section-9} Now assume $d \ge e -1$, and $e \ge 2$. The only nonzero $E_1$ terms are $$\begin{array}{ll} E_1^{-2,1} = {\mathcal O}(-1,-1) \otimes S_{e-2}, & E_1^{0,0} = {\mathcal O}\otimes S_d, \\ E_1^{-1,0} = {\mathcal O}(-1,0) \oplus {\mathcal O}(0,-1) \otimes S_{d-e}. \end{array}$$ (Throughout ${\mathcal O}$ stands for ${\mathcal O}_{{\mathbb P}^d \times {\mathbb P}^e}$.) It is immediate that $R^i f_* \, {\mathcal O}_{{{\widetilde \Gamma}}}(0,0,d) =0$ for $i > 1$, moreover we have exact sequences $$\begin{aligned} {} & 0 {\rightarrow}E_1^{-1,0} {\rightarrow}E_1^{0,0} {\rightarrow}E_2^{0,0} {\rightarrow}0, \\ & 0 {\rightarrow}E_1^{-2,1} \stackrel{d_2^{-2,1}}{{\rightarrow}} E_2^{0,0} {\rightarrow}f_* \, {\mathcal O}_{{{\widetilde \Gamma}}}(0,0,d) {\rightarrow}0. \end{aligned}$$ *The map $d_2^{-2,1}$ admits a unique $SL_2$-equivariant lifting (say ${\vartheta}$) to a map $E_1^{-2,1} {\rightarrow}E_1^{0,0}$.* [[Proof.]{}]{}Indeed, the obstruction to this lift lies in the group $$\text{Ext}^1(E_1^{-2,1},E_1^{-1,0}) = H^1({\mathcal O}(0,1) \otimes S_{e-2}) \oplus H^1({\mathcal O}(1,0) \otimes S_{e-2} \otimes S_{d-e})$$ which is zero. Thus we have a surjection of $SL_2$-representations $${\text{Hom}}(E_1^{-2,1}, E_1^{0,0}) {\rightarrow}{\text{Hom}}(E_1^{-2,1}, E_2^{0,0}). \label{surj.ss}$$ Since the construction of $d_2^{-2,1}$ is equivariant, it spans a copy of $S_0$ in the target of the map (\[surj.ss\]). By Schur’s lemma it must come from an $S_0$ in the source, i.e., we have an equivariant lifting. 
If there were two such lifts, their difference would lie in $$\begin{aligned} {} {} & {\text{Hom}}(E_1^{-2,1}, E_1^{-1,0}) = H^0({\mathcal O}(0,1)) \otimes S_{e-2} \oplus H^0({\mathcal O}(1,0)) \otimes S_{e-2} \otimes S_{d-e} \\ & = \, [S_e \otimes S_{e-2}] \oplus [S_d \otimes S_{e-2} \otimes S_{d-e}]. \end{aligned}$$ However this is impossible; formula (\[Clebsch-Gordan\]) from §\[section.trans\] shows that the last module does not contain [*any*]{} copy of $S_0$. {#section-10} Thus we get a map $E_1^{-2,1} \oplus E_1^{-1,0} \stackrel{\eta}{{\longrightarrow}} E_1^{0,0}$ of vector bundles of rank $d+1$ each, which can be seen as a map $$S_{e-2} \oplus S_{d-e} \oplus S_0 \stackrel{\eta_{F,G}} {\longrightarrow}S_d \label{etaFG}$$ parametrised by points $(F,G) \in {\mathbb P}^d \times {\mathbb P}^e$. It fails to be bijective exactly over $\Gamma$. Now $$\wedge^{d+1} \eta: \wedge^{e-1} \, E_1^{-2,1} \otimes \wedge^{d-e+2} \, E_1^{-1,0} {\longrightarrow}\wedge^{d+1} \, E_1^{0,0}$$ is the map ${\mathcal O}(-e,-d) {\longrightarrow}{\mathcal O}$, i.e., ${\mathfrak R}= \det \eta_{{\mathbb F},{\mathbb G}}$ is an invariant of degree $(e,d)$ in the coefficients of ${\mathbb F},{\mathbb G}$ respectively. Hence ${\mathfrak R}$ must coincide with the resultant of $F,G$ (up to a scalar). The maps $S_0 {\longrightarrow}S_d, S_{d-e} {\longrightarrow}S_d$ are respectively $1 {\rightarrow}F$, and $A {\rightarrow}A \, G$ for $A \in S_{d-e}$. The map ${\vartheta}: S_{e-2} {\longrightarrow}S_d$ is given by the [*Morley form*]{} which we describe below. Symbolically write $F = f_{\mathbf x}^d, \, G = g_{\mathbf x}^e$. Define a joint covariant of $F,G$ by the expression $${\mathcal M}= \sum\limits_{i=1}^{e-1} \, (f \, g) \, f_{\mathbf x}^{i-1} \, g_{\mathbf x}^{e-i-1} \, f_{\mathbf y}^{d-i} \, g_{\mathbf y}^i.$$ It is of order $e-2$ and $d$ in ${\mathbf x},{\mathbf y}$ respectively. \[prop.morley\] *For $A = \alpha_{\mathbf x}^{e-2} \in S_{e-2}$, the image ${\vartheta}(A)$ is given by $$(-1)^{e-1} \, [({\mathcal M},A)_{e-2}]_{{\mathbf y}= {\mathbf x}} = - \, \sum\limits_{i=1}^{e-1} \, (f \, g) \, (\alpha \, f)^{i-1} \, (\alpha \, g)^{e-i-1} \, f_{\mathbf x}^{d-i} \, g_{\mathbf x}^i \, . \label{theta.A}$$* The transvectant on the left hand side is with respect to ${\mathbf x}$-variables, treating the ${\mathbf y}$ as constants. The proof is postponed to §\[proof.prop.morley\]. {#section.Res.resolution} Now the rest of the argument is very similar to the discriminant case. (At this point we leave the details to the reader.) That is to say, if $l \in S_1$ divides $F,G$, then each form in the image of the map $$S_{e-2} \oplus S_{d-e} {\longrightarrow}S_d$$ is divisible by $l$ (see Lemma \[lemma.sigma.r\] below), and the ${\mathbb F}$-evectant $${\mathcal E}_{\mathfrak R}^{({\mathbb F})} = \sum\limits_{i=0}^d \, \frac{\partial {\mathfrak R}}{\partial a_i} \, x_2^i \, (-x_1)^{d-i},$$ reduces to ${\Bbbk \;}l^d$. In conclusion, we get the following result: *The ideal $$J_{\mathbb F}= (\frac{\partial {\mathfrak R}}{\partial a_0}, \dots, \frac{\partial {\mathfrak R}}{\partial a_d}) \subseteq Q = {\mathbf C}[a_0,\dots,a_d,b_0,\dots,b_e]$$ is perfect of height $2$, with an equivariant bigraded minimal resolution $$\begin{aligned} 0 {\leftarrow}Q/J_{\mathbb F}{\leftarrow}& Q {\leftarrow}Q(1-e,-d) \otimes S_d {\leftarrow}\\ & Q(1-e,-d-1) \otimes S_{d-e} \oplus Q(-e,-d-1) \otimes S_{e-2} {\leftarrow}0. 
\end{aligned}$$ \[EF.perfect\]* {#section-11} The syzygy modules $S_{d-e}$ and $S_{e-2}$ respectively correspond to the identities $$({\mathbb G},{\mathcal E}_{\mathfrak R}^{({\mathbb F})})_e =0, \quad ({\mathcal M},{\mathcal E}_{\mathfrak R}^{({\mathbb F})}|_{{\mathbf y}={\mathbf x}})_d^{\mathbf y}=0.$$ In the latter, we have changed ${\mathcal E}$ into a ${\mathbf y}$-form of order $d$. The transvection is with respect to ${\mathbf y}$-variables, leaving an ${\mathbf x}$-form of order $e-2$. We will rewrite this identity non-symbolically, in a form which only involves the joint covariants $({\mathbb F},{\mathbb G})_r$. First we expand each term of ${\mathcal M}$ into its Gordan series (see [@GrYo p. 55]), i.e., we write $$f_{\mathbf x}^{i-1} \, g_{\mathbf x}^{e-i-1} \, f_{\mathbf y}^{d-i} \, g_{\mathbf y}^i = \sum\limits_{s=0}^{e-2} \, \alpha_s \, ({\mathbf x}\, {\mathbf y})^s \, ({\mathbf y}\partial_{\mathbf x})^{d-s} \circ [(f_{\mathbf x}^{i-1} \, g_{\mathbf x}^{e-i-1}, f_{\mathbf x}^{d-i} \, g_{\mathbf x}^i)_s], \label{Gseries}$$ where $$\alpha_s = \frac{\binom{d}{s} \, \binom{e-2}{s}} {\binom{d+e-s-1}{s}\binom{d+e-2s-2}{d-s}}.$$ Using the general formalism of [@Glenn §3.2.5], $$(f_{\mathbf x}^{i-1} \, g_{\mathbf x}^{e-i-1}, f_{\mathbf x}^{d-i} \, g_{\mathbf x}^i)_s = \beta_{i,s} \, (f \, g)^{s+1} \, f_{\mathbf x}^{d-s-1} \, g_{\mathbf x}^{e-s-1},$$ where $$\beta_{i,s} = \frac{1}{\binom{d}{s} \binom{e-2}{s} s!} \sum\limits_{l=0}^s (-1)^l \, l! (s-l)! \binom{i-1}{s-l} \binom{e-i-1}{l} \binom{d-i}{l} \binom{i}{s-l}.$$ Now $(f \, g)^{s+1} \, f_{\mathbf x}^{d-s-1} \, g_{\mathbf x}^{e-s-1} = ({\mathbb F},{\mathbb G})_{s+1}$, which we write symbolically as $\tau_{\mathbf x}^{d+e-2s-2}$. Then $$({\mathbf y}\partial_{{\mathbf x}})^{d-s} \, \circ \tau_{\mathbf x}^{d+e-2s-2} = \binom{d+e-2s-2}{d-s} \, \tau_{\mathbf x}^{e-s-2} \, \tau_{\mathbf y}^{d-s}.$$ Writing ${\mathcal E}_{\mathfrak R}^{({\mathbb F})}|_{{\mathbf y}= {\mathbf x}} = \epsilon_{\mathbf y}^d$, $$\begin{aligned} {} & (({\mathbf x}\, {\mathbf y})^s \, \tau_{\mathbf x}^{e-s-2} \, \tau_{\mathbf y}^{d-s}, \epsilon_{\mathbf y}^d)_d^{\mathbf y}= (-1)^s \, \epsilon_{\mathbf x}^s \, \tau_{\mathbf x}^{e-s-2} \, (\tau \, \epsilon )^{d-s} \\ = \; & (-1)^d \, ({\mathcal E}_{\mathfrak R}^{({\mathbb F})},({\mathbb F},{\mathbb G})_{s+1})_{d-s}. \end{aligned}$$ Hence, by substituting into (\[Gseries\]) we get the required identity $$\sum\limits_{s=0}^{e-2} \, \omega_s \, ( {\mathcal E}_{\mathfrak R}^{({\mathbb F})},({\mathbb F},{\mathbb G})_{s+1})_{d-s} =0,$$ where $\omega_s = \binom{d+e-2s-2}{d-s} \, \alpha_s \, \sum\limits_{i=1}^{e-1} \beta_{i,s}$. {#section-12} If $e=1$, then Theorem \[EF.perfect\] is true as stated if we take $S_{-1} =0$. If $d -e < -1$, then the spectral sequence (\[spectralseq\]) has a nonzero term at $E_1^{-1,1}$. We still get a determinantal formula $${\mathfrak R}= \det \, (S_{e-2} \oplus S_0 \stackrel{\eta'_{{\mathbb F},{\mathbb G}}}{{\longrightarrow}} S_d \oplus S_{e-d-2}),$$ but $J$ may no longer be perfect. E.g., for $(d,e)=(2,4)$, a Macaulay-2 computation shows that $J$ is of height $2$, but $\text{proj-dim}_Q \, (Q/J_{{\mathbb F}})=3$. {#proof.prop.morley} Now we take up the proof of Proposition \[prop.morley\]. For $i=1,2$, let $U_i = \{ l \in S_1: \frac{\partial \, l}{\partial x_i} \neq 0\} \subseteq {\mathbb P}^1$, and ${\mathcal U}_i = \pi^{-1}(U_i)$. We will calculate the differential $d_2^{-2,1}$ using a [Č]{}ech resolution of the complex (\[complex.C\]) for the cover ${\mathcal U}_i$. 
Let us write ${\mathcal S}_k^j$ as an abbreviation for $f_*({\mathcal C}^j|_{{\mathcal U}_k})$, where $k$ may denote $1,2$, or $12$. (As usual ${\mathcal U}_{12} = {\mathcal U}_1 \cap {\mathcal U}_2$.) On ${\mathbb P}^d \times {\mathbb P}^e$ we have a double complex of locally free sheaves $$\diagram {\mathcal S}^{-2}_{12} \rto^{h_1} & {\mathcal S}^{-1}_{12} \rto & {\mathcal S}^{0}_{12} \\ {\mathcal S}^{-2}_1 \oplus {\mathcal S}^{-2}_2 \rto \uto & {\mathcal S}^{-1}_1 \oplus {\mathcal S}^{-1}_2 \rto^{h_3} \uto^{h_2} & {\mathcal S}^{0}_1 \oplus {\mathcal S}^{0}_2 \uto \enddiagram$$ It will be convenient to see it as a diagram of morphisms of vector spaces parametrised by the pair $(F,G)$. Since expression (\[theta.A\]) is linear in $A$, it is enough to show the proposition for a monomial $A$. Let $A = x_1^r \, x_2^{e-2-r}$. The isomorphism $S_{e-2} \simeq S_{e-2}^*$ of §\[section.trans\] takes the form $A$ to $A' = (-1)^{e-2-r} \, \binom{e-2}{r} \, x_2^r \, x_1^{e-2-r}$, since $(A,A')_{e-2} =1$. This implies that the sequence of isomorphisms $$S_{e-2} \simeq S_{e-2}^* \simeq H^0({\mathbb P}^1,{\mathcal O}(e-2))^* \otimes H^1({\mathbb P}^1,{\mathcal O}(-2)) \simeq H^1({\mathbb P}^1,{\mathcal O}(-e)),$$ takes $A$ to the [Č]{}ech cocycle $${\mathbb A}= \frac{1}{A'} \, \times \frac{1}{x_1 \, x_2} = \frac{(-1)^{e-2-r}}{\binom{e-2}{r} \, x_1^{e-1-r} \, x_2^{r+1}} \in H^0(U_{12},{\mathcal O}(-e)).$$ Recall that by the usual procedure for calculating the differentials in a spectral sequence (see [@BottTu §14]), $$d_2^{-2,1}({\mathbb A}) = h_3 \circ h_2^{-1} \circ h_1({\mathbb A}).$$ (Throughout, the vector space morphisms over $(F,G)$ are also denoted by $h_i$.) {#section.FGrGFr} By the construction of the Koszul complex, $h_1({\mathbb A}) = F \, {\mathbb A}\oplus G \, {\mathbb A}$. To take the pre-image by $h_2$, we need to rewrite each of the summands as a difference $e^{(1)}-e^{(2)}$, where the denominator of $e^{(i)}$ is a power of $x_i$ alone. Write $$F = \frac{(d-e+1)!}{d!} \, [ \, (y_1 \frac{\partial}{\partial x_1} + y_2 \, \frac{\partial}{\partial x_2})^{e-1} \, F \, ]_{{\mathbf y}= {\mathbf x}}.$$ Expand and retain only those terms whose power in $x_1$ is at least $e-1-r$, i.e., let $${\hat F} = \frac{(d-e+1)!}{d!} \sum\limits_{q \ge e-1-r} \, \binom{e-1}{q} \, x_1^q \, x_2^{e-q-1} \, \frac{\partial^{e-1} F}{\partial x_1^q \, \partial x_2^{e-q-1}}.$$ Multiplying by ${\mathbb A}$, we get $$\begin{aligned} {} & e^{(2)} = {\widetilde F}_r \\ & = \frac{(-1)^{e-2-r} \, (d-e+1)!}{x_2^{r+1} \, \binom{e-2}{r} \, d!} \sum\limits_{q=e-r-1}^{e-1} \binom{e-1}{q} \, x_1^{q-e+r+1} \, x_2^{e-q-1} \, \frac{\partial^{e-1} F}{\partial x_1^q \, \partial x_2^{e-q-1}}, \end{aligned} \label{tFr}$$ and then $e^{(1)} = F - {\widetilde F}_r$. Similarly, let $${\widetilde G}_r = \frac{(-1)^{e-2-r}}{x_2^{r+1} \, \binom{e-2}{r} \, e!} \sum\limits_{q=e-r-1}^{e-1} \binom{e-1}{q} \, x_1^{q-e+r+1} \, x_2^{e-q-1} \, \frac{\partial^{e-1} G}{\partial x_1^q \, \partial x_2^{e-q-1}} \label{tGr}$$ Now $u = (F-{\widetilde F}_r,-{\widetilde F}_r) \oplus (G-{\widetilde G}_r,-{\widetilde G}_r)$ is an element such that $h_2(u) = F {\mathbb A}\oplus G {\mathbb A}$. To calculate the image of $u$ by $h_3$, multiply the first summand by $G$, the second by $F$ and subtract. This gives $$d_2^{-2,1}({\mathbb A}) = h_3(u) = F \, {\widetilde G}_r - G \, {\widetilde F}_r. \label{d2A}$$ (Note that we have used a hidden ‘term order’ where $F$ comes before $G$. As long as we remain consistent, this should cause no harm.) 
It is not *a priori* obvious that the result is invariant under a change of variables, since the [Čech]{} cover is clearly not so invariant. On the other hand, expression (\[theta.A\]) is entirely in terms of symbolic brackets, hence visibly invariant. Thus, to complete the proof, we have to establish the identity $$F \, {\widetilde G}_r - G \, {\widetilde F}_r = (-1)^{e-1} \, [({\mathcal M},A)_{e-2}]_{{\mathbf y}= {\mathbf x}}. \label{morley.identity}$$ This calculation is done in the appendix. The following lemma was needed in §\[section.Res.resolution\]. *If $l$ divides $F$ and $G$, then it divides ${\vartheta}(A)$ for any $A$. \[lemma.sigma.r\]* [[Proof.]{}]{}We may assume that $l = x_1$. Since ${\vartheta}$ is linear, it suffices to give a proof for a monomial $A$. But then the claim follows because $x_1$ clearly divides the left hand side of (\[morley.identity\]). The $\Phi_n$ are arithmetically Cohen-Macaulay {#section.acm} ============================================== Let $\Phi_n \subseteq {\mathbb P}S_d$ be as in the introduction, with ideal $I_n \subseteq R$. We will exhibit $\Phi_n$ as the degeneracy locus of a map of vector bundles and then deduce that $I_n$ is a perfect ideal. Along the way we will construct a covariant ${\mathcal A}_n$ of binary $d$-ics such that $F \in \Phi_n \iff {\mathcal A}_n(F)=0$. {#section-13} Every $F \in S_d$ has a factorization $$F = l_1^{ e_1} \dots l_n^{e_n} \label{F.factors}$$ where the $l_i$ are pairwise nonproportional, and $e_1 \ge \dots \ge e_n > 0$. Let $$g_F = \gcd \, (F_{x_1},F_{x_2}).$$ *With notation as above, $g_F = \prod\limits_i \, l_i^{ e_i-1}$. \[lemma.gcd\]* [[Proof.]{}]{}Evidently $g = \prod l_i^{ e_i-1}$ divides both the $F_{x_i}$, write $F_{x_1} = g \, A, F_{x_2} = g \, B$. Divide Euler’s equation $d \, F = x_1 \, F_{x_1} + x_2 \, F_{x_2}$ by $g$, then $d \, \prod l_i = x_1 \, A + x_2 \, B$. If $A,B$ have a common linear factor, it must be one of the $l_i$, say $l_1$. But $$A = \sum\limits_i \, e_i \frac{\partial l_i}{\partial x_1} (\prod\limits_{j \neq i} l_j),$$ so $l_1|A$ implies $\frac{\partial l_1}{\partial x_1}=0$. The same argument on $B$ leads to $\frac{\partial l_1}{\partial x_2}=0$, so $l_1=0$. This is absurd, hence $A,B$ can have no common factor, i.e., $g = g_F$. *Let $F \in S_d$. Then $F \in \Phi_n$ iff ${\text{ord}\,}g_F \ge d-n$.* {#section-14} We have a map $S_d \otimes S_1 {\longrightarrow}S_{d-1}$ by formula (\[Clebsch-Gordan\]); we may see it as a morphism of vector bundles ${\mathcal O}_{{\mathbb P}^d}(-1) \otimes S_1 {\longrightarrow}S_{d-1}$. Now consider the composite $${\mathcal O}_{{\mathbb P}^d}(-1) \otimes S_1 \otimes S_{n-1} {\longrightarrow}S_{d-1} \otimes S_{n-1} \stackrel{\text{mult}}{{\longrightarrow}} S_{d+n-2},$$ which we denote by $\alpha_n$. On the fibres over $[F] \in {\mathbb P}^d$, this can be thought of as a morphism $$\begin{aligned} \alpha_{n,F}: \, S_1 & \otimes S_{n-1} {\longrightarrow}S_{d+n-2}, \\ l & \otimes G {\longrightarrow}(l,F)_1 \, G = {\Bbbk \;}\, (l_{x_1}F_{x_2}-l_{x_2}F_{x_1}) \, G. \end{aligned}$$ Now $F_{x_1},F_{x_2}$ have a common factor of order $\ge d-n$, iff there are order $n-1$ forms $G_1,G_2$, not both zero, such that $G_2 \, F_{x_1} + G_1 \, F_{x_2}=0$. This condition can be rewritten as $\alpha_{n,F}(x_1 \otimes G_1 - x_2 \otimes G_2)=0$. Hence $\alpha_{n,F}$ fails to be injective iff $F \in \Phi_n$. Let $\Psi_n$ denote the determinantal scheme $\{ {\text{rank}\,}(\alpha_n) < 2n \}$ locally defined by the maximal minors of the matrix of $\alpha_{n,F}$. 
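As a quick sanity check of this rank criterion, one can write down the matrix of $\alpha_{n,F}$ in the monomial bases for specific forms and verify that its rank drops below $2n$ exactly when $F$ has at most $n$ distinct linear factors. The sympy sketch below is illustrative only (the scalar ${\Bbbk}$ in $(l,F)_1$ is dropped, since it does not affect the rank); it treats the case $d=6$, $n=3$.

```python
# Rank check of alpha_{n,F} for d = 6, n = 3.  Illustrative sketch only.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def alpha_matrix(F, d, n):
    """Matrix of alpha_{n,F}: S_1 (x) S_{n-1} -> S_{d+n-2} in monomial bases
    (scalar normalizations dropped; they do not change the rank)."""
    cols = []
    for l in (x1, x2):
        jac = sp.diff(l, x1) * sp.diff(F, x2) - sp.diff(l, x2) * sp.diff(F, x1)
        for j in range(n):
            G = x1**(n - 1 - j) * x2**j
            img = sp.expand(jac * G)            # a form of order d + n - 2
            cols.append([img.coeff(x1, d + n - 2 - k).coeff(x2, k)
                         for k in range(d + n - 1)])
    return sp.Matrix(cols).T                     # size (d+n-1) x 2n

d, n = 6, 3
F1 = sp.expand(x1**3 * x2**2 * (x1 + x2))                          # 3 distinct roots
F2 = sp.expand(x1**2 * x2 * (x1 + x2) * (x1 - x2) * (x1 + 2*x2))   # 5 distinct roots

print(alpha_matrix(F1, d, n).rank())   # < 2n = 6, so F1 lies on Phi_3
print(alpha_matrix(F2, d, n).rank())   # = 6, so F2 does not
```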
We have shown that $(\Psi_n)_\text{red} = \Phi_n$. *The scheme $\Psi_n$ is reduced, hence $\Psi_n = \Phi_n$ as schemes.* [[Proof.]{}]{} The standard codimension estimate for determinantal loci (see [@ACGH Ch. 2]) takes the form $${\text{codim}\;}\Psi_n \le d+n-1-(2n-1) = d-n.$$ Since equality holds, $\Psi_n$ is a Cohen-Macaulay scheme, in particular it has no embedded components. By the Thom-Porteous formula, $$\deg \Psi_n = (-1)^{d-n} \times \text{coefficient of $h^{d-n}$ in} \; (1-h)^{d+n-1} = \binom{d+n-1}{d-n}.$$ If we show that this coincides with $\deg \Phi_n$, then it will follow that $\Psi_n$ is reduced. Let $\lambda$ be a partition of $d$ with $n$ parts. A moment’s reflection will show that $\deg X_\lambda$ as given by Hilbert’s formula is the coefficient of the monomial $\prod\limits_{r=1}^d \, {z_r}^{r e_r}$ in the expression $$(z_1 + 2 \, z_2^2 + \dots + r \, z_r^r + \dots )^n.$$ Now substitute the same letter $z$ for each $z_r$, then $\prod {z_r}^{r e_r} = z^d$. Hence the coefficient of $z^d$ in $(z + 2 \, z^2 + \dots + r \, z^r + \dots )^n$ equals $$\sum\limits_{\text{$\lambda$ has $n$ parts}} \deg X_\lambda = \deg \Phi_n.$$ But $$(z + 2 \, z^2 + \dots + r \, z^r + \dots ) = \frac{z}{(1-z)^2},$$ hence this coefficient is the same as $$\text{coefficient of $z^{d-n}$ in $(1-z)^{-2n}$} = (-1)^{d-n} \, \binom{-2n}{d-n} = \binom{d+n-1}{d-n}.$$ This completes the proof of the theorem. It follows that the Eagon-Northcott complex of the map $$R(-1) \otimes S_1 \otimes S_{n-1} {\longrightarrow}R \otimes S_{d+n-2}$$ gives a resolution of $R/I_n$ (see [@BrunsVetter Ch. 2C]). Its terms are: ${\mathcal E}^0 = R$, and $${\mathcal E}^p = \wedge^{2n-p-1} (S_{d+n-2}) \otimes S_{-(p+1)}(S_1 \otimes S_{n-1}) \otimes R(-2n+p+1), \label{ENcomplex}$$ for $-(d-n) \le p \le -1$. The covariants ${\mathcal A}_n$ ------------------------------- Consider the map $$\wedge^{2n} \, \alpha_{n,{\mathbb F}}: {\mathbf C}{\longrightarrow}\wedge^{2n} \, S_{d+n-2}.$$ Let ${\mathcal A}_n$ denote the image of $1$ via this map, which is a covariant of degree-order $(2n,2n(d-n-1))$ of binary $d$-ics. (It is well-defined only up to a multiplicative constant.) By construction, it is the Wronskian of the forms $$\{ x_1^{n-j-1} \, x_2^j \, F_{x_i}: 0 \le j \le n-1, i =1,2 \}, \label{im-span}$$ i.e., it is the determinant of the following $2n \times 2n$ matrix: $$(p,q) {\longrightarrow}\begin{cases} (x_1^{2n-q-1} \, x_2^q, \, x_1^{n-p-1} \, x_2^p \; {\mathbb F}_{x_1})_{2n-1} & \text{if $ 0 \le p \le n-1$,} \\ (x_1^{2n-q-1} \, x_2^q, \, x_1^{2n-p-1} \, x_2^{p-n} \; {\mathbb F}_{x_2})_{2n-1} & \text{if $ n \le p \le 2n-1$,} \end{cases} \label{formula.An}$$ and $0 \le q \le 2n-1$. It vanishes at $F$ iff the collection (\[im-span\]) is linearly dependent, hence *$F \in \Phi_n \iff {\mathcal A}_n(F)=0$.* Since ${\mathcal A}_{d-1}$ is an invariant of degree $2(d-1)$, it must coincide with the discriminant. Similarly ${\mathcal A}_1$ is (up to a scalar) the same as the Hessian. Thus the series $\{{\mathcal A}_n\}$ can be thought of as an ‘interpolation’ between the two. The following lemma will be used in the next section. \[lemma.gF\] *Assume $[F] \in \Phi_n \setminus \Phi_{n-1}$. Then $${\mathcal A}_{n-1}(F) = (g_F)^{2n-2}.$$* [[Proof.]{}]{}This is perhaps best proved using the relation between the Wronskian and ramification indices (see [@ACGH pp. 37–43]). 
By hypothesis, $\alpha_{n-1}$ is of rank $2n-2$ at $[F]$, in fact $${\text{im}}(\alpha_{n-1,F}) = \{ F_{x_1} \, G_1 + F_{x_2} \, G_2: G_i \in S_{n-2} \} = \{ g_F \, G: G \in S_{2n-3} \}.$$ This can be seen as a linear series $\Sigma$ on ${\mathbb P}^1$ of degree $d+n-3$ and dimension $2n-3$. Write $F = \prod l_i^{e_i}$, then $\Sigma$ is only ramified at points $p_i \in {\mathbb P}^1$ corresponding to the $l_i$. Its ramification indices at $p_i$ are $$e_i - 1, e_i, \dots, e_i + 2n-4.$$ Hence the Wronskian of ${\text{im}}(\alpha_{n-1,F})$ is $\prod {l_i}^{(2n-2) (e_i-1)} = (g_F)^{2n-2}$. The codimension two case ------------------------ Assume $n=d-2$. Then in the complex (\[ENcomplex\]) we have $${\mathcal E}^{-1} = \wedge^{2d-4} \, S_{2d-4} \otimes R(-2d+4) = S_{2d-4} \otimes R(-2d+4),$$ i.e., $I_{d-2} = {\mathfrak I}({\mathcal A}_{d-2})$. Now $J$ (the Jacobian ideal of $\Delta$) is contained in the ideal $I_{d-2}$, hence the image of the natural multiplication map $$(I_{d-2})_{2d-4} \otimes R_1 {\longrightarrow}R_{2d-3}$$ must contain the representation $(J)_{2d-3}$. Since the latter is spanned by the coefficients of ${\mathcal E}_\Delta$, we deduce the following: *The covariants $({\mathcal A}_{d-2},{\mathbb F})_{d-2}$ and ${\mathcal E}_\Delta$ are equal up to a nonzero scalar.* We end this section by constructing covariants which distinguish between the components $X_{\tau} = X_{(3,1^{d-3})}$ and $X_{\delta}=X_{(2^2,1^{d-4})}$. A result due to Hilbert [@Hilbert1] says that a binary $d$-ic $F$ lies in $X_{(d)}$ iff ${\mathbb H}(F)=0$, and it lies in $X_{(d/2,d/2)}$ (assuming $d$ even) iff ${\mathbb T}(F)=0$. First assume that $F \in X_\tau \setminus X_\delta$. Then $g_F = l^2$ for some $l \in S_1$, and then ${\mathcal A}_{d-3} = l^{4d-12}$ by Lemma \[lemma.gF\]. If $F \in X_\delta \setminus X_\tau$, then $g_F = l_1 \, l_2$ for some nonproportional linear forms, and ${\mathcal A}_{d-3} = (l_1 \, l_2)^{2d-6}$. Hence we get the following proposition. *Let $F$ be a binary $d$-ic. Then $$\begin{aligned} F \in X_\tau & \iff {\mathcal A}_{d-2}(F) = {\mathbb H}({\mathcal A}_{d-3}(F)) =0, \\ F \in X_\delta & \iff {\mathcal A}_{d-2}(F) = {\mathbb T}({\mathcal A}_{d-3}(F)) =0. \end{aligned}$$* Throughout this paper we have used ${\mathbf C}$ as our base field. Note, however, that all the irreducible representations of $SL_2 \, {\mathbf Q}$ are defined over ${\mathbf Q}$, hence so are all the varieties and schemes defined above. Thus all of our results are valid over an arbitrary field of characteristic zero. Appendix: the Morley form ========================= (by A. Abdesselam) {#section-15} We will now prove identity (\[morley.identity\]) from §\[section.FGrGFr\]. At this point, a brief explanatory remark on the symbolic method should be helpful. We have $f_{\mathbf x}= (f_1 \, x_1 + f_2 \, x_2), g_{\mathbf x}= (g_1 \, x_1 + g_2 \, x_2)$ where $f_i,g_i$ are treated as indeterminates. Introduce the differential operators $${\mathcal D}_F = \frac{1}{d!} \, F(\frac{\partial}{\partial f_1},\frac{\partial}{\partial f_2}), \quad {\mathcal D}_G = \frac{1}{e!} \, G(\frac{\partial}{\partial g_1},\frac{\partial}{\partial g_2}).$$ Then we have identities $F = {\mathcal D}_F \, f_{\mathbf x}^d, G = {\mathcal D}_G\, g_{\mathbf x}^e$. Moreover, each well-formed symbolic expression in $f,g$ can be evaluated by subjecting it to these operators; this is one way of providing a rigorous justification for the method. 
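These evaluation rules are easy to verify mechanically. The sympy sketch below is illustrative only (small orders are chosen for speed); it checks that ${\mathcal D}_F f_{\mathbf x}^d = F$ and that the symbolic expression $(f\,g)\, f_{\mathbf x}^{d-1} g_{\mathbf x}^{e-1}$ evaluates to the first transvectant $(F,G)_1$ in the normalization of formula (\[trans.formula\]).

```python
# Verification of the evaluation rule for the symbolic method, for (d, e) = (3, 2).
# Illustrative sketch only.
import sympy as sp
from sympy import binomial, factorial

x1, x2, f1, f2, g1, g2 = sp.symbols('x1 x2 f1 f2 g1 g2')
d, e = 3, 2
a = sp.symbols('a0:4')    # coefficients of F (order 3)
b = sp.symbols('b0:3')    # coefficients of G (order 2)

F = sum(binomial(d, i) * a[i] * x1**(d - i) * x2**i for i in range(d + 1))
G = sum(binomial(e, j) * b[j] * x1**(e - j) * x2**j for j in range(e + 1))

def apply_D(expr, coeffs, u1, u2, m):
    """Apply (1/m!) * sum_i binom(m,i) c_i d^m/(du1^(m-i) du2^i) to expr."""
    out = sum(binomial(m, i) * coeffs[i] * sp.diff(expr, u1, m - i, u2, i)
              for i in range(m + 1))
    return sp.expand(out / factorial(m))

fx = f1 * x1 + f2 * x2
gx = g1 * x1 + g2 * x2
bracket_fg = f1 * g2 - f2 * g1

# (1)  F = D_F f_x^d
assert sp.expand(apply_D(fx**d, a, f1, f2, d) - F) == 0

# (2)  D_F D_G [ (f g) f_x^(d-1) g_x^(e-1) ]  =  (F,G)_1  of formula (trans.formula)
sym = apply_D(apply_D(bracket_fg * fx**(d - 1) * gx**(e - 1), a, f1, f2, d),
              b, g1, g2, e)
FG1 = (factorial(d - 1) * factorial(e - 1) / (factorial(d) * factorial(e))) * \
      (sp.diff(F, x1) * sp.diff(G, x2) - sp.diff(F, x2) * sp.diff(G, x1))
assert sp.expand(sym - FG1) == 0
print("symbolic evaluation rules check out for (d, e) = (3, 2)")
```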
Thus the Morley form will be written as $${\mathcal M}({\mathbf x},{\mathbf y}) = {\mathcal D}_F \, {\mathcal D}_G \, \sum\limits_{i=1}^{e-1} \, (f \, g) \, f_{\mathbf x}^{i-1} \, g_{\mathbf x}^{e-i-1} \, f_{\mathbf y}^{d-i} \, g_{\mathbf y}^i . \label{eq4}$$ Now let $${\vartheta}_r = (-1)^{e-1} \, [ \, ({\mathcal M}, x_1^r \, x_2^{e-2-r})_{e-2} \, ]_{{\mathbf y}:={\mathbf x}} \, ,$$ where the transvection is with respect to ${\mathbf x}$. By definition, $${\vartheta}_r = \frac{(-1)^{e-1}}{(e-2)!^2} \left. \left\{ (\frac{\partial^2}{\partial z_1 \, \partial x_2} -\frac{\partial^2}{\partial z_2 \, \partial x_1})^{e-2} \, {\mathcal M}({\mathbf z},{\mathbf y}) \, x_1^r \, x_2^{e-2-r} \right\} \right|_{{\mathbf y}:={\mathbf x}.}$$ After a binomial expansion this simplifies to $$\frac{(-1)^{e-1}}{(e-2)!} \, (-\frac{\partial}{\partial z_2})^r \, (\frac{\partial}{\partial z_1})^{e-2-r} \, {\mathcal M}({\mathbf z},{\mathbf x}). \label{expression.theta.r}$$ {#section-16} Let us introduce a pair of variables $b = (b_1,b_2)$, which will serve as placeholders. Define the sum $$\Psi=(-1)^e \, \sum_{r=0}^{e-2} \, \binom{e-2}{r} \, b_1^r \, b_2^{e-2-r} \, (F \, {\widetilde G}_r - G \, {\widetilde F}_r), \label{Psi.defn}$$ so that $$F \, {\widetilde G}_r - G \, {\widetilde F}_r = \frac{(-1)^e}{(e-2)!} \, \frac{\partial^{e-2} \, \Psi} {\partial \, b_1^r \, \partial \, b_2^{e-2-r}} \label{FGr-GFr}$$ Now $$\begin{aligned} {} & G \, \frac{\partial^{e-1}F}{\partial x_1^q \, \partial x_2^{e-1-q}} = ({\mathcal D}_G \; g_{\mathbf x}^e) \, [ (\frac{\partial^{e-1}}{\partial x_1^q \, \partial x_2^{e-1-q}}) \, {\mathcal D}_F \; f_{\mathbf x}^d \, ] \\ & ={\mathcal D}_F \, {\mathcal D}_G \, [ \, g_{\mathbf x}^e \, (\frac{\partial^{e-1}}{\partial x_1^q \, \partial x_2^{e-1-q}}) \, f_{\mathbf x}^d \, ] \\ & = \frac{d!}{(d-e+1)!} \, {\mathcal D}_F \, {\mathcal D}_G \left[ f_1^q \, f_2^{e-1-q} \, f_{\mathbf x}^{d-e+1} g_{\mathbf x}^e \, \right], \end{aligned}$$ and similarly $$F \, \frac{\partial^{e-1}G}{\partial x_1^q \, \partial x_2^{e-1-q}}= e! \, {\mathcal D}_F \, {\mathcal D}_G \, [ \, g_1^q \, g_2^{e-1-q} \, f_{\mathbf x}^d \, g_{\mathbf x}\, ].$$ Now substitute these expressions into equations (\[tFr\]) and (\[tGr\]) from §\[section.FGrGFr\], and then substitute the latter into (\[Psi.defn\]). Then we have $\Psi = {\mathcal D}_F \, {\mathcal D}_G \, {{\widetilde \Psi}}$, where $$\begin{aligned} {} & {{\widetilde \Psi}}= \sum\limits_{r=0}^{e-2} \; \big[ (-b_1)^r \, b_2^{e-2-r} \, \times \\ & \sum\limits_{q=e-r-1}^{e-1} \binom{e-1}{q} \, x_1^{q-e+r+1} \, x_2^{e-q-r-2} \{ g_1^q \, g_2^{e-1-q} \, f_{\mathbf x}^d \, g_{\mathbf x}- f_1^q \, f_2^{e-1-q} \, f_{\mathbf x}^{d-e+1} \, g_{\mathbf x}^{e} \} \big]. \end{aligned}$$ The double sum is over the range $0 \le r \le e-2, \, e-r-1 \le q \le e-1$, which is the same as $1 \le q \le e-1, \, e-q-1 \le r \le e-2$. 
Therefore, after changing the order of summation, $$\begin{aligned} {{\widetilde \Psi}}= & \sum\limits_{q=1}^{e-1} \, \big [\binom{e-1}{q} b_2^{e-2} x_1^{q-e+1} x_2^{e-q-2} \{ g_1^q \, g_2^{e-1-q} \, f_{\mathbf x}^d \, g_{\mathbf x}- f_1^q \, f_2^{e-1-q} \, f_{\mathbf x}^{d-e+1} \, g_{\mathbf x}^e \} \times \\ & \sum\limits_{r=e-1-q}^{e-2} (-\frac{b_1 \, x_1}{b_2 \, x_2})^r \big], \end{aligned}$$ which we abbreviate to $$\sum\limits_{q=1}^{e-1} \, [ \, ({M}_1 - {M}_2) \times \sum\limits_{r=e-1-q}^{e-2} (-\frac{b_1 \, x_1}{b_2 \, x_2})^r \big].$$ The geometric series over $r$ is equal to $$\begin{aligned} {} & \; \frac{(-b_1 \, x_1)^{e-1-q} \, (b_2 \, x_2)^q - (-b_1 \, x_1)^{e-1}}{(b_2 \, x_2)^{e-2} \, b_{\mathbf x}} \\ = & \; (-b_1 \, x_1)^{e-1-q} \, (b_2 \, x_2)^{q-e+2} \, b_{\mathbf x}^{-1} - (-b_1 \, x_1)^{e-1} \, (b_2 \, x_2)^{-e+2} \, b_{\mathbf x}^{-1} \\ = & \; {N}_1 - {N}_2. \end{aligned}$$ Hence, after expansion ${{\widetilde \Psi}}$ is a sum of four terms $$\underbrace{\sum {M}_1 \, {N}_1}_{T_1} + \underbrace{\sum - {M}_1 \, {N}_2}_{T_2} + \underbrace{\sum - {M}_2 \, {N}_1}_{T_3} + \underbrace{\sum {M}_2 \, {N}_2}_{T_4}.$$ Now $$\begin{aligned} T_1 = & \sum\limits_{q=1}^{e-1}\binom{e-1}{q} (-1)^{e-1-q} \, b_1^{e-1-q} \, b_2^q \, b_{\mathbf x}^{-1} \, g_1^q \, g_2^{e-1-q} \, f_{\mathbf x}^d \, g_{\mathbf x}\\ = \, & (-1)^{e-1} \, b_1^{e-1} \, b_{\mathbf x}^{-1} \, g_2^{e-1} \, f_{\mathbf x}^d \, g_{\mathbf x}\left\{ {\left(}1-\frac{b_2 \, g_1}{b_1 \, g_2}{\right)}^{e-1}-1 \right\}\\ = \, & (-1)^{e-1} \, (b \, g)^{e-1} \, b_{\mathbf x}^{-1} \, f_{\mathbf x}^d \, g_{\mathbf x}+ (-1)^e \, b_1^{e-1} \, b_{\mathbf x}^{-1} \, g_2^{e-1} \, f_{\mathbf x}^d \, g_{\mathbf x}, \end{aligned}$$ and after similar calculations, $$\begin{aligned} T_2 = & (-1)^e \, b_1^{e-1} \, b_{\mathbf x}^{-1} \, x_2^{-e+1} \, f_{\mathbf x}^d \, g_{\mathbf x}^e + (-1)^{e-1} \, b_1^{e-1} \, b_{\mathbf x}^{-1} \, g_2^{e-1} \, f_{\mathbf x}^d \, g_{\mathbf x}\, , \\ T_3 = & (-1)^e \, (b \, f)^{e-1} \, b_{\mathbf x}^{-1} \, f_{\mathbf x}^{d-e+1} \, g_{\mathbf x}^e+ (-1)^{e-1} \, b_1^{e-1} \, b_{\mathbf x}^{-1} \, f_2^{e-1} \, f_{\mathbf x}^{d-e+1} \, g_{\mathbf x}^e \, , \\ T_4 = & (-1)^{e-1} \, b_1^{e-1} \, b_{\mathbf x}^{-1} \, x_2^{-e+1} \, f_{\mathbf x}^d \, g_{\mathbf x}^e + (-1)^e \, b_1^{e-1} \, b_{\mathbf x}^{-1} \, f_2^{e-1} \, f_{\mathbf x}^{d-e+1} \, g_{\mathbf x}^e. \\ \end{aligned}$$ Notice that six of the eight terms cancel in pairs, for instance, the first term of $T_2$ cancels with the first term of $T_4$. We are left with $$\begin{aligned} {{\widetilde \Psi}}& = (-1)^{e-1} \, (b \, g)^{e-1} \, b_{\mathbf x}^{-1} \, f_{\mathbf x}^d \, g_{\mathbf x}+ (-1)^e \, (b \, f)^{e-1} \, b_{\mathbf x}^{-1} \, f_{\mathbf x}^{d-e+1} \, g_{\mathbf x}^e, \\ & = \frac{(-1)^{e-1} \, f_{\mathbf x}^{d-e+1} \, g_{\mathbf x}}{b_{\mathbf x}} \, [ \, (b \, g)^{e-1} \, f_{\mathbf x}^{e-1} - (b \, f)^{e-1} \, g_{\mathbf x}^{e-1}]. \end{aligned}$$ Rewrite $b_{\mathbf x}$ using the Pl[ü]{}cker syzygy $b_{\mathbf x}\, (f \, g) = (b \, g) \, f_{\mathbf x}- (b \, f) \, g_{\mathbf x}$, and factor the numerator. This gives $${{\widetilde \Psi}}= (-1)^{e-1} \, (f \, g) \, \sum\limits_{i=1}^{e-1} \, (b \, f)^{i-1} \, (b \, g)^{e-i-1} \, f_{\mathbf x}^{d-i} \, g_{\mathbf x}^i \, .$$ Now make a change of variable $(b_1,b_2) = (z_2,-z_1)$. Then $(b \, f) = b_1 f_2 - b_2 f_1 = f_{\mathbf z}, (b \, g) = g_{\mathbf z}$, and ${\mathcal D}_F \, {\mathcal D}_G \, {{\widetilde \Psi}}= (-1)^{e-1} \, {\mathcal M}({\mathbf z},{\mathbf x})$. 
By formula (\[FGr-GFr\]), $$F \, {\widetilde G}_r - G \, {\widetilde F}_r = \frac{(-1)^e}{(e-2)!} \, (\frac{\partial}{\partial z_2})^r (-\frac{\partial}{\partial z_1})^{e-2-r} \, {\mathcal D}_F \, {\mathcal D}_G \, {{\widetilde \Psi}},$$ which is the same as ${\vartheta}_r$ by formula (\[expression.theta.r\]). This completes the proof of identity (\[morley.identity\]), and hence that of Proposition \[prop.morley\].
{ "pile_set_name": "ArXiv" }
--- abstract: | NA61/SHINE at the CERN SPS is a fixed-target experiment pursuing a rich physics program including measurements for heavy ion, neutrino and cosmic ray physics. The main goal of the ion program is to study the properties of the onset of deconfinement and to search for the signatures of the critical point. A specific property of the critical point, the increase in the correlation length, makes fluctuations its basic signal. Higher order moments of suitable observables are of special interest as they are more sensitive to the correlation length than typically studied second order moments. In this contribution preliminary results on higher order fluctuations of negatively charged hadron multiplicity and net-charge in p+p interactions will be shown. The new data will be compared with model predictions. address: 'Faculty of Physics, Warsaw University of Technology, Koszykowa 75, 00-662 Warsaw, Poland' author: - 'Maja Maćkowiak-Paw[ł]{}owska for the NA61/SHINE Collaboration' title: 'Higher order moments of net-charge and multiplicity distributions in p+p interactions at SPS energies from NA61/SHINE [^1]' --- Introduction ============ One of the most important goals of high-energy heavy-ion collisions is to establish the phase diagram of strongly interacting matter by finding the possible phase boundaries and critical points. A specific property of the critical point, the increase in the correlation length $\xi$, makes fuctuations its basic signal [@Stephanov_overview]. Fluctuations are quantified by moments of measured distributions of suitable observables of order higher than the first. Critical point fluctuations are expected to increase as (approximately) $\xi^{2}$ for the variance (second moment) of event-by-event observables such as multiplicities or mean transverse momenta of particles. Higher, non-Gaussian, moments of fluctuations should depend more sensitively on $\xi$, e.g. the fourth moment is expected to grow as $\xi^{7}$ near the critical point [@Stephanov:2011zz]. This contribution shows results on fluctuations of negatively charged hadron and net-charge multiplicity distributions defined by moments or cumulants up to the fourth moment. Net-charge is defined as the difference between positively and negatively charged hadron multiplicities in p+p interactions collected by the NA61/SHINE experiment [@Antoniou:2006mh] in 2009. The reason to focus on multiplicity of negatively charged particle is the fact that the underlying correlations are almost insensitive to resonance decays as there are very few resonances decaying into pairs of negatively charged particles. Net-charge, under some assumptions allows to compare data to QCD calculations on lattice. Fluctuation measures ==================== In the grand canonical ensemble mean, variance and in general cumulants (denoted with index ${c}$) of a multiplicity distribution are extensive quantities (they are proportionl to volume $\sim V$). A ratio of two extensive quantities is an intensive quantity e.g.: $\omega[N] = \frac{Var[N]}{\langle N \rangle}$, where $Var[N]$ and $\langle N \rangle$ are variance and mean of the multiplicity distribution. The scaled variance is independent of $V$ (for event ensembles with fixed $V$) but it depends on fluctuations of $V$ (even if $\langle V \rangle$ is fixed). For the Poisson distribution (independent particle production) $\omega=1$. For third and fourth order cumulants there are several possibilities for deriving intensive measures. 
The two most popular are: $$\frac{\langle N^{3}\rangle_{c}}{Var[N]},\quad \frac{\langle N^{4}\rangle_{c}}{Var[N]},\nonumber$$ where $\langle N^{3}\rangle_{c}$ and $\langle N^{4}\rangle_{c}$ are the third and fourth order cumulants of the multiplicity distribution [@Asakawa:2015ybt]. The related quantities skewness $S$ and kurtosis $\kappa$ are defined as: $$S=\frac{\langle N^{3}\rangle_{c}}{(Var[N])^{3/2}}=\frac{\langle N^{3}\rangle_{c}}{\sigma^{3}}, \quad \kappa=\frac{\langle N^{4} \rangle_{c}}{(Var[N])^{2}}=\frac{\langle N^{4}\rangle_{c}}{\sigma^{4}},\nonumber$$ where $\sigma^{2}$ is the variance of the multiplicity distribution ($Var[N]=\langle N^{2}\rangle_{c}$). Thus $$S\sigma=\frac{\langle N^{3}\rangle_{c}}{Var[N]}, \quad \kappa\sigma^{2}=\frac{\langle N^{4} \rangle_{c}}{Var[N]}.\nonumber$$

Results ======= Preliminary results were obtained from p+p data collected in 2009 at 20, 31, 40, 80 and 158 GeV/c beam momenta. Table \[Tab:events\] shows the analysis statistics.

  $\sqrt{s_{NN}}$ \[GeV\]   6.3    7.6    8.7    12.3   17.3
  ------------------------- ------ ------ ------ ------ ------
  Events                    0.2M   0.9M   3.0M   1.7M   1.6M

  : Number of p+p events taken in 2009 by the NA61/SHINE experiment.[]{data-label="Tab:events"}

The analysis acceptance is the same as that used for the multiplicity and transverse momentum fluctuation analysis [@Aduszkiewicz:2015jna]. Corrected results refer to inelastic interactions and particles produced in strong and electromagnetic processes within the analysis acceptance. As in Ref. [@Aduszkiewicz:2015jna], the multiplicity distributions were corrected for:

- off-target interactions
- detector effects
- event selection (trigger bias and analysis procedure)
- track selection within the analysis acceptance
- contribution of weak decays
- secondary interactions

Statistical uncertainties were calculated using the sub-sample method[^2]. Systematic uncertainties were estimated by varying event and track selection criteria. ![The energy dependence of $\omega[h^{-}]$, $S\sigma[h^{-}]$ and $\kappa\sigma^{2}[h^{-}]$ in p+p interactions.[]{data-label="Fig:neg"}](FIG/ScaledVarianceCOM_neg "fig:"){width=".45\textwidth"} ![The energy dependence of $\omega[h^{-}]$, $S\sigma[h^{-}]$ and $\kappa\sigma^{2}[h^{-}]$ in p+p interactions.[]{data-label="Fig:neg"}](FIG/SsigmaCOM_neg "fig:"){width=".45\textwidth"} ![The energy dependence of $\omega[h^{-}]$, $S\sigma[h^{-}]$ and $\kappa\sigma^{2}[h^{-}]$ in p+p interactions.[]{data-label="Fig:neg"}](FIG/Ksigma2COM_neg "fig:"){width=".45\textwidth"} ![The energy dependence of $\omega[h^{+}-h^{-}]$, $S\sigma[h^{+}-h^{-}]$ and $\kappa\sigma^{2}[h^{+}-h^{-}]$ in p+p interactions.[]{data-label="Fig:net"}](FIG/ScaledVarianceCOM_net "fig:"){width=".45\textwidth"} ![The energy dependence of $\omega[h^{+}-h^{-}]$, $S\sigma[h^{+}-h^{-}]$ and $\kappa\sigma^{2}[h^{+}-h^{-}]$ in p+p interactions.[]{data-label="Fig:net"}](FIG/SsigmaCOM_net "fig:"){width=".45\textwidth"} ![The energy dependence of $\omega[h^{+}-h^{-}]$, $S\sigma[h^{+}-h^{-}]$ and $\kappa\sigma^{2}[h^{+}-h^{-}]$ in p+p interactions.[]{data-label="Fig:net"}](FIG/Ksigma2COM_net "fig:"){width=".45\textwidth"} Figure \[Fig:neg\] shows results on fluctuations of negatively charged hadron multiplicity. The measures $\omega[h^{-}]$, $S\sigma[h^{-}]$ and $\kappa\sigma^{2}[h^{-}]$ rise with collision energy and cross 1 between 40 and 80 GeV/c. These results are not reproduced by statistical models (GCE or CE) [@BegunCPOD2016] (for details see conference slides). Figure \[Fig:net\] shows results on fluctuations of net-charge.
The scaled variance, $\omega[h^{+}-h^{-}]$, as well as $S\sigma[h^{+}-h^{-}]$ depends very weakly on collision energy whereas $\kappa\sigma^{2}[h^{+}-h^{-}]$ rises with collision energy and crosses 1 at 80 GeV/c. Net-charge fluctuations measured by $\omega[h^{+}-h^{-}]$ are smaller than predictions of the independent particle production model for which the net-charge distribution is described by the Skellam distribution. The energy dependences of $S\sigma$ and $\kappa\sigma^{2}$ are completely different from those predicted by the model. The EPOS 1.99 model describes the observed values of $\omega[h^{-}]$, $S\sigma[h^{-}]$ and net-charge fluctuations but it underestimates the value of $\kappa\sigma^{2}[h^{-}]$. [ This work was partially supported by the National Science Centre, Poland grant 2015/18/M/ST2/00125.]{} [99]{} , , 2004. M. A. Stephanov, J. Phys. G [**38**]{}, 124147, 2011. N. Antoniou [*et al.*]{} \[NA49-future Collaboration\], CERN-SPSC-2006-034, CERN-SPSC-P-330, CERN-SPSC-2012-022, SPSC-P-330-ADD-6. N. Abgrall [*et al.*]{} \[NA61 Collaboration\], JINST [**9**]{}, P06005, 2014. M. Asakawa and M. Kitazawa, Prog. Part. Nucl. Phys.  [**90**]{}, 299, 2016. A. Aduszkiewicz [*et al.*]{} \[NA61/SHINE Collaboration\], arXiv:1510.00163 \[hep-ex\]. V. Begun, Poster presented at the Critical Point and Onset of Deconfinement 2016, Wroclaw, Poland. [^1]: Presented at the Critical Point and Onset of Deconfinement 2016, Wroclaw, Poland [^2]: Statistical uncertainties are smaller than the marker size in the figures
{ "pile_set_name": "ArXiv" }
--- abstract: 'In this chapter we review the recent results on the equilibrium configurations of static and uniformly rotating neutron stars within the Hartle formalism. We start from the Einstein-Maxwell-Thomas-Fermi equations formulated and extended by Belvedere et al. (2012, 2014). We demonstrate how to conduct numerical integration of these equations for different central densities ${\it \rho}_c$ and angular velocities $\Omega$ and compute the static $M^{stat}$ and rotating $M^{rot}$ masses, polar $R_p$ and equatorial $R_{\rm eq}$ radii, eccentricity $\epsilon$, moment of inertia $I$, angular momentum $J$, as well as the quadrupole moment $Q$ of the rotating configurations. In order to fulfill the stability criteria of rotating neutron stars we take into consideration the Keplerian mass-shedding limit and the axisymmetric secular instability. Furthermore, we construct the novel mass-radius relations and calculate the maximum mass and minimum rotation periods (maximum frequencies) of neutron stars. Finally, we compare and contrast our results for the globally and locally neutral neutron star models.' author: - | ***Kuantay Boshkayev[^1]\ Institute of Experimental and Theoretical Physics,\ Faculty of Physics and Technology, Al-Farabi Kazakh National University\ Almaty, Kazakhstan*** title: | [***Chapter 1***]{} **Uniformly rotating neutron stars** --- **PACS** 97.60.Jd, 97.10.Nf, 97.10.Pg, 97.10.Kc, 26.60.Dd, 26.60.Gj, 26.60.Kp, 04.40.Dg.\ **Keywords:** Neutron stars, equations of state, mass-radius relation.

Introduction {#sec:1} ============ Conventionally, in order to construct the equilibrium configurations of static neutron stars the equations of hydrostatic equilibrium derived by Tolman-Oppenheimer-Volkoff (TOV) [@tolman39; @oppenheimer39] are widely used. In connection with this, it has been recently revealed in Refs. [@2012NuPhA.883....1B; @2011PhLB..701..667R; @2011NuPhA.872..286R] that the TOV equations are modified once all fundamental interactions are taken into due account. It has been proposed that the Einstein-Maxwell system of equations coupled with the general relativistic Thomas-Fermi equations of equilibrium has to be used instead. This set of equations is termed the Einstein-Maxwell-Thomas-Fermi (EMTF) system of equations. Although in the TOV method the condition of local charge neutrality (LCN), $n_e(r)=n_p(r)$, is imposed (see e.g. [@haenselbook]), the EMTF method requires the less rigorous condition of global charge neutrality (GCN) as follows $$\int \rho_{\rm ch} d^3 r=\int e [n_p(r)- n_e(r)] d^3r = 0,$$ where $\rho_{\rm ch}$ is the electric charge density, $e$ is the fundamental electric charge, $n_p(r)$ and $n_e(r)$ are the proton and electron number densities, respectively. The integration is performed over the entire volume of the system. The Lagrangian density accounting for the strong, weak, electromagnetic and gravitational interactions consists of the free-field terms such as the gravitational $\mathcal{L}_g$, the electromagnetic $\mathcal{L}_\gamma$, and the three mesonic fields $\mathcal{L}_\sigma$, $\mathcal{L}_\omega$, $\mathcal{L}_\rho$, the three fermion species (electrons, protons and neutrons) term $\mathcal{L}_f$ and the interacting part in the minimal coupling assumption, $\mathcal{L}_{\rm int}$ given as in Refs.
[@2011NuPhA.872..286R; @2012NuPhA.883....1B]: $$\label{eq:Lagrangian} \mathcal{L}=\mathcal{L}_{g}+\mathcal{L}_{f}+\mathcal{L}_{\sigma}+\mathcal{L}_{\omega}+\mathcal{L}_{\rho}+\mathcal{L}_{\gamma}+\mathcal{L}_{\rm int} \;, $$ where[^2] $$\begin{aligned} \mathcal{L}_g &= -\frac{R}{16 \pi},\quad \mathcal{L}_f = \sum_{i=e, N}\bar{\psi}_{i}\left(i \gamma^\mu D_\mu-m_i \right)\psi_i,\\ \mathcal{L}_{\sigma} &= \frac{\nabla_{\mu}\sigma \nabla^{\mu}\sigma}{2}-U(\sigma),\, \mathcal{L}_{\omega} = -\frac{\Omega_{\mu\nu}\Omega^{\mu\nu}}{4}+\frac{m_{\omega}^{2} \omega_{\mu} \omega^{\mu}}{2},\\ \mathcal{L}_{\rho} &= -\frac{\mathcal{R}_{\mu\nu}\mathcal{R}^{\mu\nu}}{4}+\frac{m_{\rho}^{2} \rho_{\mu} \rho^{\mu}}{2},\quad\mathcal{L}_{\gamma} = -\frac{F_{\mu\nu}F^{\mu\nu}}{16\pi},\\ \mathcal{L}_{\rm int} &= -g_{\sigma} \sigma \bar{\psi}_N \psi_N - g_{\omega} \omega_{\mu} J_{\omega}^{\mu}-g_{\rho}\rho_{\mu}J_{\rho}^{\mu} + e A_{\mu} J_{\gamma,e}^{\mu} -e A_{\mu} J_{\gamma,N}^{\mu}. $$ The inclusion of the strong interactions between the nucleons is made through the $\sigma$-$\omega$-$\rho$ nuclear model following Ref. [@boguta77]. Consequently, $\Omega_{\mu\nu}\equiv\partial_{\mu}\omega_{\nu}-\partial_{\nu}\omega_{\mu}$, $\mathcal{R}_{\mu\nu}\equiv\partial_{\mu}\rho_{\nu}-\partial_{\nu}\rho_{\mu}$, $F_{\mu\nu}\equiv\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ are the field strength tensors for the $\omega^{\mu}$, $\rho$ and $A^{\mu}$ fields respectively, $\nabla_\mu$ stands for covariant derivative and $R$ is the Ricci scalar. The Lorentz gauge is adopted for the fields $A_\mu$, $\omega_\mu$, and $\rho_\mu$. The self-interaction scalar field potential is $U(\sigma)$, $\psi_N$ is the nucleon isospin doublet, $\psi_e$ is the electronic singlet, $m_i$ stands for the mass of each particle-species and $D_\mu = \partial_\mu + \Gamma_\mu$, where $\Gamma_\mu$ are the Dirac spin connections. The conserved currents are given as $J^{\mu}_{\omega} = \bar{\psi}_N \gamma^{\mu}\psi_N$, $J^{\mu}_{\rho} = \bar{\psi}_N \tau_3\gamma^{\mu}\psi_N$, $J^{\mu}_{\gamma, e} = \bar{\psi}_e \gamma^{\mu}\psi_e$, and $J^{\mu}_{\gamma, N} = \bar{\psi}_N(1/2)(1+\tau_3)\gamma^{\mu}\psi_N$, where $\tau_3$ is the particle isospin. In this chapter we adopt the NL3 parameter set [@lalazissis97] used in Ref. [@2012NuPhA.883....1B] with $m_\sigma=508.194$ MeV, $m_\omega=782.501$ MeV, $m_\rho=763.000$ MeV, $g_\sigma=10.2170$, $g_\omega=12.8680$, $g_\rho=4.4740$, plus two constants that give the strength of the self-scalar interactions, $g_2=-10.4310$ fm$^{-1}$ and $g_3=-28.8850$. ![The top and middle panels depict the neutron, proton, electron densities and the electric field in units of the critical electric field $E_c$ in the core-crust transition layer, whereas the bottom panel shows a specific example of a density profile inside a neutron star. In this plot we have used for the globally neutral case a density at the edge of the crust equal to the neutron drip density, ${\it\rho}_{\rm drip}\sim 4.3\times 10^{11}$ g cm$^{-3}$.](structure.eps "fig:"){width="0.75\hsize"} \[fig:Model\] ![Mass-radius relation of the static (non-rotating) neutron stars for both globally and locally neutral configurations. In this plot for the globally neutral case a density at the edge of the crust equal to the neutron drip density, ${\it \rho}_{\rm drip}\sim 4.3\times 10^{11}$ g cm$^{-3}$ has been used. 
GCN and LCN stand for global and local charge neutrality cases, respectively[]{data-label="fig:MRstat"}](MRstat.eps){width="0.75\hsize"} Thus, the system of the EMTF equations [@2011NuPhA.872..286R; @2012NuPhA.883....1B; @belvedere2014; @belvedere2014jkps] is derived from the equations of motion of the above Lagrangian. The solution of the EMTF coupled differential equations gives a novel structure of the neutron star, as shown in Fig. \[fig:Model\]: a positively charged core at supranuclear densities, ${\it \rho}>{\it \rho}_{\rm nuc}\sim 2.7\times 10^{14}$ g cm$^{-3}$, surrounded by an electron distribution of thickness $\gtrsim \hbar/(m_e c)$, which is negatively charged, and a neutral ordinary crust at lower densities ${\it \rho}<{\it \rho}_{\rm nuc}$. The condition of the thermodynamic equilibrium is given by the constancy of the particle Klein potentials [@klein49] extended to account for electrostatic and strong fields [@2011PhLB..701..667R; @2011NuPhA.872..286R; @2012NuPhA.883....1B; @belvedere2014jkps] $$\label{eq:klein} \frac{1}{u^t}\,[\mu_i + (q_i A_\alpha + g_\omega \omega_\alpha + g_\rho \tau_{3,i} \rho_\alpha) u^\alpha]={\rm constant},$$ where the subscript $i$ stands for each kind of particle, $\mu_i$ is the particle chemical potential, $q_i$ is the particle electric charge, $u^t=(g_{tt})^{-1/2}$ is the time component of the fluid four-velocity which satisfies $u_\alpha u^\alpha =1$ and $g_{tt}$ is the t–t component of the spherically symmetric metric. For the static case we have only the time components of the vector fields, $A_0$, $\omega_0$, $\rho_0$. $$\label{eq:metric1} ds^2=e^{\nu} dt^2-e^{\lambda} dr^2-r^2 (d\theta^2+\sin^2\theta d\phi^2)\;.$$ The constancy of the Klein potentials (\[eq:klein\]) leads to a discontinuity in the density at the core-crust transition and, correspondingly, this generates an overcritical electric field $\sim (m_\pi/m_e)^2 E_c$, where $E_c=m^2_e c^3/(e \hbar)\sim 1.3\times 10^{16}$ Volt cm$^{-1}$, in the core-crust boundary interface. The Klein condition (\[eq:klein\]) is necessary to satisfy the requirement of thermodynamical equilibrium, together with the Tolman condition (constancy of the gravitationally red-shifted temperature) [@1930PhRv...35..904T; @klein49], if finite temperatures are included [@2011NuPhA.872..286R]. Particularly, the continuity of the electron Klein potential leads to a decrease of the electron chemical potential $\mu_e$ and density at the core-crust boundary interface. They attain values $\mu^{\rm crust}_e < \mu^{\rm core}_e$ and ${\it \rho}_{\rm crust}<{\it \rho}_{\rm core}$ at the base of the crust, where global charge neutrality is achieved. As has been shown in Refs. [@2012NuPhA.883....1B; @belvedere2014], the solution of the EMTF equations along with the constancy of the Klein potentials yields a more compact neutron star with a less massive and thinner crust. Correspondingly, this results in a new mass-radius relation which prominently differs from the one given by the solution of the TOV equations with local charge neutrality; see Fig. \[fig:MRstat\]. In this chapter the extension of the previous results obtained in Refs. [@2012NuPhA.883....1B; @belvedere2014] is considered. To this end the Hartle formalism [@1967ApJ...150.1005H] is utilized to solve the Einstein equations accurately up to second order terms in the angular velocity of the star, $\Omega$ (see section \[sec:2\]). For the rotating case, the Klein thermodynamic equilibrium condition has the same form as Eq.
(\[eq:klein\]), but the fluid inside the star now moves with a four-velocity of a uniformly rotating body, $u^\alpha=(u^t,0,0,u^\phi)$, with (see [@HS1967], for details) $$u^t=(g_{tt}+2\Omega\,g_{t\phi}+\Omega^2\,g_{\phi \phi})^{-1/2},\qquad u^\phi=\Omega u^t,$$ where $\phi$ is the azimuthal angular coordinate and the metric is axially symmetric independent of $\phi$. The components of the metric tensor $g_{\alpha \beta}$ are now given by Eq. (\[eq:rotmetric\]) below. It is then evident that in a frame comoving with the rotating star, $u^t=(g_{tt})^{-1/2}$, and the Klein thermodynamic equilibrium condition remains the same as Eq. (\[eq:klein\]), as expected. This chapter is organized as follows: in section \[sec:2\] we review the Hartle formalism and consider both interior and exterior solutions. In section \[sec:3\] the stability of uniformly rotating neutron stars are explored taking into account the Keplerian mass-shedding limit and the secular axisymmetric instability. In section \[sec:4\] the structure of uniformly rotating neutron stars is investigated. We compute there the mass $M$, polar $R_p$ and equatorial $R_{\rm eq}$ radii, and angular momentum $J$, as a function of the central density and the angular velocity $\Omega$ of stable neutron stars both in the globally and locally neutral cases. Based on the criteria of equilibrium we calculate the maximum stable neutron star mass. In section \[sec:5\] we construct the new neutron star mass-radius relation. In section \[sec:6\] we calculate the moment of inertia as a function of the central density and total mass of the neutron star. The eccentricity $\epsilon$, the rotational to gravitational energy ratio $T/W$, and quadrupole moment $Q$ are shown in section \[sec:7\]. The observational constraints on the mass-radius relation are discussed in section \[sec:8\]. We finally summarize our results in section \[sec:9\]. Hartle slow rotation approximation {#sec:2} ================================== In his original article, Hartle (1967) [@1967ApJ...150.1005H] derived the equilibrium equations of slowly rotating relativistic stars. The solutions of the Einstein equations have been obtained through a perturbation method, expanding the metric functions up to the second order terms in the angular velocity $\Omega$. Under this assumption the structure of compact objects can be approximately described by the total mass $M$, angular momentum $J$ and quadrupole moment $Q$. The slow rotation regime implies that the perturbations owing to the rotation are relatively small with respect to the known non-rotating geometry. The interior solution is derived by solving numerically a system of ordinary differential equations for the perturbation functions. The exterior solution for the vacuum surrounding the star, can be written analytically in terms of $M$, $J$, and $Q$ [@1967ApJ...150.1005H; @1968ApJ...153..807H]. The numerical values for all the physical quantities are derived by matching the interior and the exterior solution on the surface of the star. 
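Before turning to the interior equations, it is useful to recall the scale that controls the validity of the expansion: the slow-rotation treatment assumes $\Omega\ll\sqrt{GM/R^3}$. The short script below is only an order-of-magnitude sketch; the fiducial mass and radius are generic illustrative values, not the models of this chapter.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]

# Fiducial neutron star parameters (illustrative values only)
M = 1.4 * M_SUN      # mass [kg]
R = 12.0e3           # radius [m]

# Characteristic angular frequency setting the mass-shedding scale;
# the slow-rotation expansion assumes Omega << Omega_char.
Omega_char = math.sqrt(G * M / R**3)    # [rad/s]
f_char = Omega_char / (2.0 * math.pi)   # [Hz]

f_obs = 716.0                           # fastest observed pulsar spin frequency [Hz]
ratio = f_obs / f_char

print(f"characteristic frequency          ~ {f_char:.0f} Hz")
print(f"fastest pulsar / characteristic   ~ {ratio:.2f}")
print(f"small parameter (Omega/Omega_c)^2 ~ {ratio**2:.2f}")
```

For the fastest known pulsar (716 Hz) the second-order small parameter $(\Omega/\Omega_{\rm char})^2$ is only about 0.2, so the $O(\Omega^2)$ treatment is adequate for all observed rotation rates; only configurations close to the mass-shedding limit require the a posteriori checks discussed in section \[sec:4\].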
The interior Hartle solution ---------------------------- The spacetime metric for the rotating configuration up to the second order of $\Omega$ is given by [@1967ApJ...150.1005H] $$\begin{aligned} \label{eq:rotmetric} ds^2 &=& e^{\nu}\left(1+2h\right)dt^2-e^{\lambda}\left[1+\frac{2m}{r-2 M_0}\right]dr^2 \nonumber\\ &&- r^2\left(1+2k\right)\left[d\theta^2+\sin^2\theta\left(d\phi-\omega dt\right)^2\right]+O(\Omega^3) \, , $$ where $\nu=\nu(r)$, $\lambda=\lambda(r)$, and $M_0=M^{J=0}(r)$ are the metric functions and mass profiles of the corresponding seed static star with the same central density as the rotating one; see Eq. (\[eq:metric1\]). The functions $h=h(r,\theta)$, $m=m(r,\theta)$, $k=k(r,\theta)$ and the fluid angular velocity in the local inertial frame, $\omega=\omega(r)$, have to be calculated from the Einstein equations. Expanding up to the second order the metric in spherical harmonics we have $$\begin{aligned} \label{eq:HarmonicExp} &&h(r,\theta)=h_0(r)+h_2(r)P_2(\cos\theta) \;,\\ &&m(r,\theta)=m_0(r)+m_2(r)P_2(\cos\theta) \;,\\ &&k(r,\theta)=k_0(r)+k_2(r)P_2(\cos\theta) \;, $$ where $P_2(cos\theta)$ is the Legendre polynomial of second order. Because the metric does not change under transformations of the type $r\rightarrow f(r)$, we can assume $k_0(r)=0$. The functions $h=h(r,\theta)$, $m=m(r,\theta)$, $k=k(r,\theta)$ have analytic form in the exterior (vacuum) spacetime and they are shown in the following section. The mass, angular momentum, and quadrupole moment are computed from the matching condition between the interior and exterior metrics. For rotating configurations the angular momentum is the easiest quantity to compute. To this end we consider only $t,\phi$ component of the Einstein equations. By introducing the angular velocity of the fluid relative to the local inertial frame, $\bar{\omega}(r)=\Omega-\omega(r)$ one can show from the Einstein equations at first order in $\Omega$ that $\bar{\omega}$ satisfies the differential equation $$\label{eq:baromega} \frac{1}{r^4}\frac{d}{dr}\left( r^4 j \frac{d\bar{\omega}}{dr} \right)+\frac{4}{r}\frac{d j}{dr}\bar{\omega}=0\;,$$ where $j(r)=e^{-(\nu+\lambda)/2}$ with $\nu$ and $\lambda$ the metric functions of the seed non-rotating solution (\[eq:metric1\]). From the matching conditions, the angular momentum of the star is given by $$\label{eq:J} J = \frac{1}{6}R^4\left(\frac{d\bar{\omega}}{dr}\right)_{r=R}\;,$$ so the angular velocity $\Omega$ is related to the angular momentum as $$\label{eq:Jomega} \Omega = \bar{\omega}(R)+\frac{2 J}{R^3}\;.$$ The total mass of the rotating star, $M$, is given by $$\label{eq:Mrot} M = M_0+\delta M\;,\qquad \delta M = m_0(R)+J^2/R^3\,,$$ where $\delta M$ is the contribution to the mass owing to rotation. The second order functions $m_0$ (the mass perturbation function) and $p_0^*$ (the pressure perturbation function) are computed from the solution of the differential equation $$\begin{aligned} \frac{d m_0}{dr}&=4\pi r^2 \frac{d{\cal E}}{dP} ({\cal E}+P) p_0^* + \frac{1}{12}j^2 r^4 \left(\frac{d\bar{\omega}}{dr}\right)^2-\frac{1}{3}\frac{dj^2}{dr}r^3 \bar{\omega}^2\;,\\ \frac{d p_0^*}{dr}&=-\frac{m_0 (1+8 \pi r^2 P)}{(r-2 M_0)^2}-\frac{4\pi r^2 ({\cal E}+P)}{(r-2 M_0)}p_0^* + \frac{1}{12}\frac{j^2 r^4}{(r-2 M_0)}\left(\frac{d\bar{\omega}}{dr}\right)^2 \nonumber \\& + \frac{1}{3}\frac{d}{dr} \left(\frac{r^3j^2\bar{\omega}^2}{r-2 M_0}\right)\;,\end{aligned}$$ where ${\cal E}$ and $P$ are the total energy-density and pressure. 
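As a concrete illustration of how $J$ and $\Omega$ follow from Eq. (\[eq:baromega\]) through the matching conditions (\[eq:J\]) and (\[eq:Jomega\]), the sketch below integrates the frame-dragging equation for a crude analytic seed, a uniform-density (interior Schwarzschild) star, rather than the EMTF configurations used in this chapter. Geometric units $G=c=1$ are used and the chosen mass and radius are purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy seed solution: uniform-density (interior Schwarzschild) star,
# geometric units G = c = 1, lengths in km.
M_star = 2.0   # gravitational mass [km] (~1.35 M_sun); illustrative
R_star = 12.0  # radius [km]; illustrative

def metric_j(r):
    """j(r) = exp[-(nu+lambda)/2] for the interior Schwarzschild solution."""
    a = np.sqrt(1.0 - 2.0 * M_star * r**2 / R_star**3)                 # e^{-lambda/2}
    e_nu_half = 1.5 * np.sqrt(1.0 - 2.0 * M_star / R_star) - 0.5 * a   # e^{nu/2}
    return a / e_nu_half

def djdr(r, h=1e-6):
    """Numerical derivative of j(r)."""
    return (metric_j(r + h) - metric_j(r - h)) / (2.0 * h)

def rhs(r, y):
    """First-order form of (1/r^4) d/dr(r^4 j dw/dr) + (4/r)(dj/dr) w = 0,
    with y = [w, r^4 j dw/dr]."""
    w, u = y
    return [u / (r**4 * metric_j(r)), -4.0 * r**3 * djdr(r) * w]

# Regular centre: w(0) = const (arbitrary scale, the equation is linear), dw/dr(0) = 0.
r0 = 1e-6 * R_star
sol = solve_ivp(rhs, (r0, R_star), [1.0, 0.0], rtol=1e-10, atol=1e-12)
w_R, u_R = sol.y[:, -1]

# Matching at the surface, Eqs. (eq:J) and (eq:Jomega); note that j(R) = 1 there.
J = u_R / 6.0                     # J = (1/6) R^4 (dw/dr)|_R
Omega = w_R + 2.0 * J / R_star**3
I = J / Omega                     # moment of inertia, independent of the scale of w

print(f"I = {I:.3f} km^3  (Newtonian uniform sphere: {0.4 * M_star * R_star**2:.3f} km^3)")
```

Since Eq. (\[eq:baromega\]) is linear in $\bar{\omega}$, the arbitrary central value drops out of $I=J/\Omega$; for a prescribed $\Omega$ the solution is simply rescaled linearly.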
Turning to the quadrupole moment of the neutron star, it is given by $$\label{eq:Q} Q=\frac{J^2}{M_0}+\frac{8}{5}{\cal K} M_0^3\;,$$ where ${\cal K}$ is a constant of integration. This constant is fixed from the matching of the second order function $h_2$ obtained in the interior from $$\begin{aligned} \frac{d k_2}{dr}&=-\frac{d h_2}{dr}-h_2\frac{d\nu}{dr}+\left(\frac{1}{r}+\frac{1}{2}\frac{d\nu}{dr}\right)\bigg[-\frac{1}{3}r^3\bar{\omega}^2\frac{dj^2}{dr} + \frac{1}{6}r^4 j^2 \left(\frac{d\bar{\omega}}{dr}\right)^2\bigg]\;,\\ \frac{d h_2}{dr}&=h_2\bigg\{-\frac{d\nu}{dr}+\frac{r}{r-2 M_0}\left(\frac{d\nu}{dr}\right)^{-1}\bigg[8\pi({\cal E}+P)-\frac{4 M_0}{r^3}\bigg] \bigg\}-\frac{4 (k_2+h_2)}{r (r-2 M_0)}\left(\frac{d\nu}{dr}\right)^{-1}\nonumber\\ &+\frac{1}{6}\bigg[\frac{r}{2}\frac{d\nu}{dr}-\frac{1}{r-2 M_0}\left(\frac{d\nu}{dr}\right)^{-1}\bigg]r^3j^2\left(\frac{d\bar{\omega}}{dr}\right)^2\nonumber\\&-\frac{1}{3}\bigg[\frac{r}{2}\frac{d\nu}{dr}+\frac{1}{r-2 M_0}\left(\frac{d\nu}{dr}\right)^{-1}\bigg]r^2 \bar{\omega}^2\frac{dj^2}{dr}\;,\end{aligned}$$ with its exterior counterpart (see [@1967ApJ...150.1005H]). It is worth emphasizing that the influence of the induced magnetic field owing to the rotation of the charged core of the neutron star in the globally neutral case is negligible [@2012IJMPS..12...58B]. In fact, for a rotating neutron star of period $P=10$ ms and radius $R\sim10$ km, the radial component of the magnetic field $B_r$ in the core interior reaches its maximum at the poles with a value $B_r\sim 2.9\times10^{-16}B_c$, where $B_c=m_e^2c^3/(e\hbar)\approx 4.4\times10^{13}$ G is the critical magnetic field for vacuum polarization. The angular component of the magnetic field $B_\theta$, instead, has its maximum value at the equator and, as for the radial component, it is very low in the interior of the neutron star core, i.e. $|B_\theta|\sim 2.9\times 10^{-16}B_c$. In the case of a sharp core-crust transition as the one studied by [@2012NuPhA.883....1B] and shown in Fig. \[fig:Model\], this component will grow in the transition layer to values of the order of $|B_\theta|\sim 10^2 B_c$ [@2012IJMPS..12...58B]. However, since we are here interested in the macroscopic properties of the neutron star, we can ignore at first approximation the presence of electromagnetic fields in the macroscopic regions where they are indeed very small, and safely apply the original Hartle formulation without any generalization. The exterior Hartle solution {#app:1a} ---------------------------- In this subsection we consider the exterior Hartle solution though in the literature it is widely known as the Hartle-Thorne solution. One can write the line element given by eq. (\[eq:rotmetric\]) in an analytic closed-form outside the source as function of the total mass $M$, angular momentum $J$, and quadrupole moment $Q$ of the rotating star. The angular momentum $J$ along with the angular velocity of local inertial frames $\omega(r)$, proportional to $\Omega$, and the functions $h_0$, $h_2$, $m_0$, $m_2$, $k_2$, proportional to $\Omega^2$, are derived from the Einstein equations (for more details see [@1967ApJ...150.1005H; @1968ApJ...153..807H]). Following this prescriptions the Eq. 
\[eq:rotmetric\] becomes: $$\begin{aligned} \label{ht1} ds^2&=\left(1-\frac{2{ M }}{r}\right)\bigg[1+2k_1P_2(\cos\theta) +2\left(1-\frac{2{ M}}{r}\right)^{-1}\frac{J^{2}}{r^{4}}(2\cos^2\theta-1)\bigg]dt^2\nonumber\\ &+\frac{4J}{r}\sin^2\theta dt d\phi-\left(1-\frac{2{ M}}{r}\right)^{-1}\times\bigg[1-2\left(k_1-\frac{6 J^{2}}{r^4}\right)P_2(\cos\theta) \nonumber \\ &-2\left(1-\frac{2{ M}}{r}\right)^{-1}\frac{J^{2}}{r^4}\bigg]dr^2-r^2[1-2k_2P_2(\cos\theta)](d\theta^2+\sin^2\theta d\phi^2),\end{aligned}$$ where $$\begin{aligned} k_1&=\frac{J^2}{M r^3}\left(1+\frac{M}{r}\right)+\frac{5}{8}\frac{Q-J^{2}/{M}}{M^3}Q_2^2(x) \;,\\ k_2&=k_1+\frac{J^{2}}{r^4}+\frac{5}{4}\frac{Q-J^{2}/{ M}}{{ M}^2r \sqrt{1-2M/r}}Q_2^1(x) \;,\end{aligned}$$ and $$\begin{aligned} \label{legfunc} Q_2^1(x)&=(x^2-1)^{1/2}\left[\frac{3x}{2}\ln\left(\frac{x+1}{x-1}\right)-\frac{3x^2-2}{x^2-1}\right] \;, \\ Q_2^2(x)&=(x^2-1)\left[\frac{3}{2}\ln\left(\frac{x+1}{x-1}\right)-\frac{3x^3-5x}{(x^2-1)^2}\right] \;,\end{aligned}$$ are the associated Legendre functions of the second kind, being $P_2(\cos\theta)=(1/2)(3\cos^2\theta-1)$ the Legendre polynomial, and $x=r/M -1$. This form of the metric is known in the literature as the Hartle-Thorne metric. To obtain the exact numerical values of $M$, $J$ and $Q$, the exterior and interior line elements have to be matched at the surface of the star. It is worth noticing that in the terms involving $J^2$ and $Q$, the total mass $M$ can be directly substituted by $M_0=M^{J=0}$ since $\delta M$ is already a second order term in the angular velocity. Stability of uniformly rotating neutron stars {#sec:3} ============================================= Secular axisymmetric instability {#subsec:3.1} -------------------------------- In a sequence of increasing central density in the $M$-${\it \rho}_c$ curve, ${\it \rho}_c\equiv {\it \rho}(0)$, the maximum mass of a static neutron star is determined as the first maximum of such a curve, namely the point where $\partial M$/$\partial {\it \rho}_c=0$. This derivative establishes the axisymmetric secular instability point, and if the perturbation obeys the same equation of state (EOS) as the equilibrium configuration, it coincides also with the dynamical instability point (see e.g. Ref. [@shapirobook]). In the rotating case, the situation becomes more complicated and in order to find the axisymmetric dynamical instability points, the perturbed solutions with zero frequency modes (the so-called neutral frequency line) have to be calculated. Friedman et al. (1988) [@1988ApJ...325..722F] however, following the works of Sorkin (1981, 1982) [@1981ApJ...249..254S; @1982ApJ...257..847S], described a turning-point method to obtain the points at which secular instability is reached by uniformly rotating stars. In a constant angular momentum sequence, the turning point is located in the maximum of the mass-central density relation, namely the onset of secular axisymmetric instability is given by $$\label{eq:TurningPoint} \left[\frac{\partial M\left({\it \rho}_c,J\right)}{\partial{\it \rho}_c}\right]_{J=\rm constant}=0 \;, $$ and once the secular instability sets in, the star evolves quasi-stationarily until it reaches a point of dynamical instability where gravitational collapse sets in (see e.g. [@2003LRR.....6....3S]). The above equation determines an upper limit for the mass at a given angular momentum $J$ for a uniformly rotating star, however this criterion is a sufficient but not necessary condition for the instability. 
This means that all the configurations with the given angular momentum $J$ on the right side of the turning point defined by Eq. (\[eq:TurningPoint\]) are secularly unstable, but it does not imply that the configurations on the left side of it are stable. An example of dynamically unstable configurations on the left side of the turning-point limiting boundary in neutron stars was recently shown in Ref. [@2011MNRAS.416L...1T], for a specific EOS. In order to investigate the secular instability of uniformly rotating stars one should select fixed values for the angular momentum. Then construct mass-central density or mass-radius relations. From here one has to calculate the maximum mass and that will be the turning point for a given angular momentum. For different angular momentum their will be different maximum masses. By joining all the turning points together one obtains axisymmetric secular instability line (boundary). This boundary is essential for the construction of the stability region for uniformly rotating neutron stars (see next sections and figures). Keplerian mass-shedding instability and orbital angular velocity of test particles {#subsec:3.2} ---------------------------------------------------------------------------------- The maximum velocity for a test particle to remain in equilibrium on the equator of a star, kept bound by the balance between gravitational and centrifugal force, is the Keplerian velocity of a free particle computed at the same location. As shown, for instance in [@2003LRR.....6....3S], a star rotating at Keplerian rate becomes unstable due to the loss of mass from its surface. The mass shedding limiting angular velocity of a rotating star is the Keplerian angular velocity evaluated at the equator, $r=R_{\rm eq}$, i.e. $\Omega_K^{J\neq0}=\Omega_K(r=R_{\rm eq})$. Friedman (1986) [@Friedman1986] introduced a method to obtain the maximum possible angular velocity of the star before reaching the mass-shedding limit; however [@2008AcA....58....1T] and [@BBRS2013], showed a simpler way to compute the Keplerian angular velocity of a rotating star. They showed that the mass-shedding angular velocity, $\Omega_K^{J\neq0}$, can be computed as the orbital angular velocity of a test particle in the external field of the star and corotating with it on its equatorial plane at the distance $r=R_{\rm eq}$. It is possible to obtain the analytical expression for the angular velocity $\Omega$ given by Eq. (\[eq:omegaKep\]) with respect to an observer at infinity, taking into account the parameterization of the four-velocity $u$ of a test particle on a circular orbit in equatorial plane of axisymmetric stationary spacetime, regarding as parameter the angular velocity $\Omega$ itself: $$u=\Gamma[\partial_t+\Omega\partial_{\phi}] \;, $$ where $\Gamma$ is a normalization factor such that $u^{\alpha}u_{\alpha}=1$. Normalizing and applying the geodesics conditions we get the following expressions for $\Gamma$ and $\Omega=u^{\phi}/u^{t}$ $$\label{eight} \Gamma=\pm(g_{tt}+2\Omega g_{t\phi}+\Omega^2 g_{\phi\phi})^{-1/2}\;, \quad g_{tt,r}+2\Omega g_{t\phi,r}+\Omega^2 g_{\phi\phi,r}=0 \;.$$ Thus, the solution of the system of Eq. 
(\[eight\]) can be written as $$\Omega^\pm_{\rm orb}(r)=\frac{u^{\phi}}{u^{t}}=\frac{-g_{t\phi,r}\pm\sqrt{(g_{t\phi,r})^2-g_{tt,r}g_{\phi\phi,r}}}{g_{\phi\phi,r}} \;,$$ where $+/-$ stands for co-rotating/counter-rotating orbits, $u^{\phi}$ and $u^{t}$ are the angular and time components of the four-velocity respectively, and a colon stands for partial derivative with respect to the corresponding coordinate. To determine the mass shedding angular velocity (the Keplerian angular velocity) of the neutron stars, we need to consider only the co-rotating orbit, so from here and thereafter we take into account only the plus sign in Eq. (\[eight\]) and we write $\Omega^{+}_{\rm orb}(r)=\Omega_{\rm orb}(r)$. For the Hartle external solution given by Eq. (\[ht1\]) we obtain $$\label{eq:omegaKep} \Omega_{K}^{J\neq0}(r)=\sqrt{\frac{M}{r^3}}\left[1- j F_{1}(r)+j^2F_{2}(r)+q F_{3}(r)\right] \;, $$ where $j=J/M^2$ and $q=Q/M^3$ are the dimensionless angular momentum and quadrupole moment. The analytical expressions of the functions $F_i$ are given by $$\begin{aligned} F_1&=\left(\frac{M}{r}\right)^{3/2} \;,\\ F_2&=\frac{48M^7-80M^6r+4M^5r^2-18M^4r^3}{16M^2r^4(r-2M)}+\frac{40M^3r^4+10M^2r^5+15Mr^6-15r^7}{16M^2r^4(r-2M)}+F\;,\\ F_3&=\frac{6M^4-8M^3r-2M^2r^2-3Mr^3+3r^4}{16M^2r(r-2M)/5}-F \;,\\ F&=\frac{15(r^3-2M^3)}{32M^3}\ln\frac{r}{r-2M} \;.\\ $$ The maximum angular velocity for a rotating star at the mass-shedding limit is the Keplerian angular velocity evaluated at the equator ($r=R_{\rm eq}$), i.e. $$\label{eq:omegaK1} \Omega_K^{J\neq0}=\Omega_{\rm orb}(r=R_{\rm eq})\;. $$ In the static case i.e. when $j=0$ hence $q=0$ and $\delta M=0$ we have the well-known Schwarzschild solution and the orbital angular velocity for a test particle $\Omega_K^{J=0}$ on the surface ($r=R$) of the neutron star is given by $$\label{eq:omegaK2} \Omega_K^{J=0}=\sqrt{\frac{M^{J=0}}{R_{M^{J=0}}^3}}\;. $$ Gravitational binding energy {#subsec:3.3} ---------------------------- Besides the above stability requirements, one should check if the neutron star is gravitationally bound. In the non-rotating case, the binding energy of the star can be computed as $$\label{eq:W0} W_{J=0}=M_0-M^0_{\rm rest}\;,\qquad M^0_{\rm rest}=m_b A_{J=0}\;,$$ where $M^0_{\rm rest}$ is the rest-mass of the star, $m_b$ is the rest-mass per baryon, and $A_{J=0}$ is the total number of baryons inside the star. So the non-rotating star is considered bound if $W_{J=0}<0$. In the slow rotation approximation the total binding energy is given by Ref. [@1968ApJ...153..807H] $$\label{eq:Wrot} W_{J\neq 0}=W_{J=0}+\delta W\;,\qquad \delta W=\frac{J^2}{R^3}-\int_{0}^R 4\pi r^2 B(r)dr\;,$$ where $$\begin{aligned} B(r)&=({\cal E}+P)p^*_0\bigg\{ \frac{d{\cal E}}{dP}\left[\left(1-\frac{2 M}{r}\right)^{-1/2}-1\right]- \frac{d u}{dP}\left(1-\frac{2 M}{r}\right)^{-1/2}\bigg\}\nonumber \\ &+({\cal E}-u)\left(1-\frac{2M}{r}\right)^{-3/2}\bigg[\frac{m_0}{r}+\frac{1}{3}j^2r^2\bar{\omega}^2\bigg]\nonumber \\&-\frac{1}{4\pi r^2}\left[ \frac{1}{12}j^2 r^4 \left(\frac{d\bar{\omega}}{dr}\right)^2-\frac{1}{3}\frac{dj^2}{dr}r^3 \bar{\omega}^2 \right]\;,\end{aligned}$$ where $u={\cal E}-m_b n_b$ is the internal energy of the star, with $n_b$ the baryon number density. Structure of uniformly rotating neutron stars {#sec:4} ============================================= We show now the results of the integration of the Hartle equations for the globally and locally charge neutrality neutron stars; see e.g. Fig. \[fig:Model\]. Following Refs. 
[@2012NuPhA.883....1B; @belvedere2014], we adopt, as an example, globally neutral neutron stars with a density at the edge of the crust equal to the neutron drip density, ${\it \rho}_{\rm crust}={\it \rho}_{\rm drip}\approx 4.3\times 10^{11}$ g cm$^{-3}$. Secular instability boundary {#sec:4.1} ---------------------------- In Fig. \[fig:MtotvsRhoG\] we show the mass-central density curve for globally neutral neutron stars in the region close to the axisymmetric stability boundaries. Particularly, we construct some $J$=constant sequences to show that indeed along each of these curves there exist a maximum mass (turning point). The line joining all the turning points determines the secular instability limit. In Fig. \[fig:MtotvsRhoG\] the axisymmetric stable zone is on the left side of the instability line. ![Total mass is shown as a function of central density for neutron stars with global charge neutrality. The mass is given in solar mass $M_{\odot}=1.98\times10^{33}$g and the density is normalized to nuclear density ${\it \rho}_{\rm nuc}=2.7\times10^{14}$g cm$^{-3}$. The solid curve represents the configuration at the Keplerian mass-shedding sequence, the dashed curve represents the static sequence, the dotted curves represent the $J$=constant sequences. The doted-dashed line joins all the turning points of the $J$=constant sequences, so it determines the axisymmetric secular instability boundary.[]{data-label="fig:MtotvsRhoG"}](Mrhogcn.eps){width="0.75\hsize"} ![Total mass is shown as a function of equatorial radius for neutron stars with global charge neutrality. The solid curve represents the configuration at Keplerian mass-shedding sequence, the dashed curve represents the static sequence, the dotted curves represent the $J$-constant sequences. The doted-dashed line is the secular instability boundary.[]{data-label="fig:MtotvsReqG"}](MReqgcn.eps){width="0.75\hsize"} Clearly we can transform the mass-central density relation in a mass-radius relation. In Fig. \[fig:MtotvsReqG\] we show the mass versus the equatorial radius of the neutron star that correspond to the range of densities of Fig. \[fig:MtotvsRhoG\]. In this plot the stable zone is on the right side of the instability line. We can construct a fitting curve joining the turning points of the $J$=constant sequences line which determines the axisymmetric secular instability boundary. Defining $M_{{\rm max},0}$ as the maximum stable mass of the non-rotating neutron star constructed with the same EOS, we find that for globally neutral configurations the instability line is well fitted by the function $$\begin{aligned} \label{eq:SecularG} \frac{M^{\rm GCN}_{\rm sec}}{M_\odot}&=21.22-6.68\frac{M_{{\rm max},0}^{\rm GCN}}{M_\odot}-\left(77.42-28\frac{M_{{\rm max},0}^{\rm GCN}}{M_\odot}\right)\left(\frac{R_{\rm eq}}{10\,{\rm km}}\right)^{-6.08}\;,\end{aligned}$$ where $12.38\,{\rm km}\lesssim R_{\rm eq}\lesssim 12.66\,{\rm km}$, and $M_{{\rm max},0}^{\rm GCN}\approx 2.67 M_\odot$. The turning points of locally neutral configurations in the mass-central density plane are shown in Fig. \[fig:MtotvsRhoL\]. the corresponding mass-equatorial radius plane is plotted in Fig. \[fig:MtotvsReqL\]. ![Total mass is shown as a function of central density for neutron stars with local charge neutrality. The mass is given in solar mass $M_{\odot}=1.98\times10^{33}$g and the density is normalized to nuclear density ${\it \rho}_{\rm nuc}=2.7\times10^{14}$g cm$^{-3}$. 
The solid curve represents the configuration at Keplerian mass-shedding sequence, the dashed curve represents the static sequence, the dotted curves represent the $J$=constant sequences. The doted-dashed line determines the axisymmetric secular instability boundary.[]{data-label="fig:MtotvsRhoL"}](Mrholcn.eps){width="0.75\hsize"} ![Total mass is shown as a function of equatorial radius for neutron stars with local charge neutrality. The solid curve represents the configuration at Keplerian mass-shedding sequence, the dashed curve represents the static sequence, the dotted curves represent the $J$=constant sequences. The doted-dashed line is the axisymmetric secular instability boundary.[]{data-label="fig:MtotvsReqL"}](MReqlcn.eps){width="0.75\hsize"} For locally neutral neutron stars, the secular instability line is fitted by $$\begin{aligned} \label{eq:SecularL} \frac{M^{\rm LCN}_{\rm sec}}{M_\odot}&=20.51-6.35\frac{M_{{\rm max},0}^{\rm LCN}}{M_\odot}-\left(80.98-29.02\frac{M_{{\rm max},0}^{\rm LCN}}{M_\odot}\right)\left(\frac{R_{\rm eq}}{10\,{\rm km}}\right)^{-5.71}\;,\end{aligned}$$ where $12.71\,{\rm km}\lesssim R_{\rm eq}\lesssim 13.06\,{\rm km}$, and $M_{\rm max,0}^{\rm LCN}\approx 2.70 M_\odot$. Keplerian mass-shedding sequence {#sec:4.2} -------------------------------- We turn now to analyze in detail the behavior of the different properties of the neutron star along the Keplerian mass-shedding sequence. ### Maximum mass and rotation frequency {#sec:4.2.1} The total mass of the rotating star is computed from Eq. (\[eq:Mrot\]). In Fig. \[fig:Mtotvsf\] the total mass of the neutron star is shown as a function of the rotation frequency for the Keplerian sequence. It is clear that for a given mass, the rotational frequency is higher for a globally neutral neutron star with respect to the locally neutral one. ![Total mass is shown as a function of rotational Keplerian frequency both for the global (red) and local (blue) charge neutrality cases.[]{data-label="fig:Mtotvsf"}](Mf.eps){width="0.75\hsize"} The configuration of maximum mass, $M^{J\neq 0}_{\rm max}$, is obtained along the Keplerian sequence, and it is found before the secular instability line crosses the Keplerian curve. Thus, the maximum mass configuration is secularly stable. This implies that the configuration with maximum rotation frequency, $f_{\rm max}$, is located beyond the maximum mass point, specifically at the crossing point between the secular instability and the Keplerian mass-shedding sequence. The results are summarized in Table \[tab:MRfmax\] and shown in Fig. \[fig:Mmaxrho\]. ![Total mass versus central density for the global charge neutrality case. The dashed curve is the static configurations, the solid curve is the Keplerian mass-shedding configurations, the dotted curves are the $J$=constant sequences, the dotted-dashed line is axisymmetric secular instability boundary. The thin line joins $M_{max}^{rot}$ with $M_{max}^{stat}$.[]{data-label="fig:Mmaxrho"}](Mmaxrho.eps){width="0.75\hsize"} In Fig. \[fig:Mmaxrho\] details near the maximum masses are illustrated. Here we focus on the definition of maximum rotating mass $M_{max}^{rot}$, maximum static mass, maximum angular momentum $J_{max}$ and maximum angular velocity (minimum rotation period) $\Omega_{max} (P_{min})$. Note, that $\Omega_{max}$ is determined along the turning points of $J$=constant sequences (axisymmetric secular instability line) what is consistent with the results of Stergioulas and Friedman (1995) [@sterfried1995]. 
At large scales the difference between axisymmetric secular instability line and the line joining $M_{max}^{rot}$ with $M_{max}^{stat}$ can not be seen (for more details see [@haenselbook]). It is important to discuss briefly the validity of the present perturbative solution for the computation of the properties of maximally rotating neutron stars. The expansion of the radial coordinate of a rotating configuration $r(R,\theta)$ in powers of angular velocity is written as [@1967ApJ...150.1005H] $$r \approx R+\xi(R,\theta)+O(\Omega^4) \; ,$$ where $\xi$ is the difference in the radial coordinate, $r$, between a point located at the polar angle $\theta$ on the surface of constant density ${\it \rho}(R)$ in the rotating configuration, and the point located at the same polar angle on the same constant density surface in the static configuration. In the slow rotation regime, the fractional displacement of the surfaces of constant density due to the rotation have to be small, namely $\xi(R,\theta)/R\ll 1$, where $\xi(R,\theta)=\xi_0(R)+\xi_2(R)P_2(\cos\theta)$ and $\xi_0(R)$ and $\xi_2(R)$ are function of $R$, proportional to $\Omega^2$. From Table \[tab:MRfmax\], we can see that the configuration with the maximum possible rotation frequency has a maximum fractional displacement $\delta R_{\rm eq}^{\rm max}=\xi(R,\pi/2)/R$ as low as $\approx 2\%$ and $\approx 3\%$, for the globally and locally neutral neutron stars respectively. In this line, it is worth quoting the results of Ref. [@benhar05], where it has been shown that the inclusion of a third-order expansion $\Omega^3$ in the Hartle’s method improves the value of the maximum rotation frequency by less than 1% for different EOS. The reason for this is that as mentioned above, along the Keplerian sequence the deviations from sphericity decrease with density (see Fig. \[fig:eccvsrho\]), which ensures the accuracy of the perturbative solution. Turning to the increase of the maximum mass, it has been shown in Ref. [@1992ApJ...390..541W] that the mass of maximally rotating neutron stars, computed with the Hartle’s second order approximation, is accurate within an error as low as $\lesssim 4\%$. Global Neutrality Local Neutrality ------------------------------------- ------------------- ------------------ $M^{J=0}_{\rm max}$ $(M_\odot)$ 2.67 2.70 $R^{J=0}_{\rm max}$ (km) 12.38 12.71 $M^{J\neq 0}_{\rm max}$ $(M_\odot)$ 2.76 2.79 $R^{J\neq 0}_{\rm max}$ (km) 12.66 13.06 $\delta M_{\rm max}$ 3.37% 3.33% $\delta R_{\rm eq}^{\rm max}$ 2.26% 2.75% $f_{\rm max}$ (kHz) 1.97 1.89 $P_{\rm min}$ (ms) 0.51 0.53 : Maximum static mass $M^{J=0}_{\rm max}$ and corresponding static radius $R^{J=0}_{\rm max}$ of neutron stars as computed in Ref. [@2012NuPhA.883....1B]; maximum rotating mass $M^{J\neq0}_{\rm max}$ and corresponding equatorial radius $R^{J\neq0}_{\rm max}$ of neutron stars as given in Refs. [@Boshkayev2012thesis; @belvedere2014]; increase in mass $\delta M_{\rm max}$ and radius $\delta R_{\rm eq}^{\rm max}$ of the maximum mass configuration with respect to its non-rotating counterpart; maximum rotation frequency $f_{\rm max}$ and corresponding minimum period $P_{\rm min}$. \[tab:MRfmax\] We compute the gravitational binding energy of the neutron star from Eq. (\[eq:Wrot\]) as a function of the central density and angular velocity. We make this for central densities higher than the nuclear density, thus we impose the neutron star to have a supranuclear hadronic core. 
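Before moving on to the mass-radius relation, it may be helpful to see the mass-shedding formula in explicit numerical form. The function below is a direct transcription of Eq. (\[eq:omegaKep\]); the mass, radius, $j$ and $q$ used to evaluate it are placeholder values of the right order of magnitude, not the tabulated results of this chapter, and for $j=q=0$ the expression reduces to the static limit of Eq. (\[eq:omegaK2\]).

```python
import math

C_KM_S = 2.998e5     # speed of light [km s^-1]; converts 1/km to 1/s
M_SUN_KM = 1.477     # G*M_sun/c^2 [km]

def omega_kepler(r, M, j, q):
    """Mass-shedding angular velocity of Eq. (eq:omegaKep) in geometric units [km^-1].
    r: equatorial radius [km]; M: mass [km]; j = J/M^2 and q = Q/M^3 are dimensionless."""
    F = 15.0 * (r**3 - 2.0 * M**3) / (32.0 * M**3) * math.log(r / (r - 2.0 * M))
    F1 = (M / r) ** 1.5
    F2 = (48*M**7 - 80*M**6*r + 4*M**5*r**2 - 18*M**4*r**3
          + 40*M**3*r**4 + 10*M**2*r**5 + 15*M*r**6 - 15*r**7) \
         / (16.0 * M**2 * r**4 * (r - 2.0 * M)) + F
    F3 = 5.0 * (6*M**4 - 8*M**3*r - 2*M**2*r**2 - 3*M*r**3 + 3*r**4) \
         / (16.0 * M**2 * r * (r - 2.0 * M)) - F
    return math.sqrt(M / r**3) * (1.0 - j * F1 + j**2 * F2 + q * F3)

# Placeholder parameters, roughly of the order of the fast-rotating models of this chapter
M = 2.7 * M_SUN_KM   # [km]
R_eq = 12.7          # [km]

f_static = omega_kepler(R_eq, M, 0.0, 0.0) * C_KM_S / (2.0 * math.pi)
f_spin = omega_kepler(R_eq, M, 0.5, 0.4) * C_KM_S / (2.0 * math.pi)  # illustrative j, q

print(f"static limit (j=q=0):      {f_static/1e3:.2f} kHz")
print(f"illustrative j=0.5, q=0.4: {f_spin/1e3:.2f} kHz")
```

The static-limit value of about 2 kHz is consistent with the order of magnitude of $f_{\rm max}$ reported in Table \[tab:MRfmax\].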
Neutron star mass-radius relation {#sec:5} ================================= ![Total mass versus total equatorial radius for the global (red) and local (blue) charge neutrality cases. The dashed curves represent the static configurations, while the solid lines are the uniformly rotating neutron stars. The pink-red and light-blue color lines define the secular instability boundary for the globally and locally neutral cases, namely the lines given by Eqs. (\[eq:SecularG\]) and (\[eq:SecularL\]), respectively.[]{data-label="fig:MtotvsRtot"}](MRrot.eps){width="0.75\hsize"} We summarize now the above results in the form of a new mass-radius relation of uniformly rotating neutron stars, including the Keplerian and secular instability boundary limits. In Fig. \[fig:MtotvsRtot\] we show a summary plot of the equilibrium configurations of rotating neutron stars. In particular we show the total mass versus the equatorial radius: the dashed lines represent the static (non-rotating, $J=0$) sequences, while the solid lines represent the corresponding Keplerian mass-shedding sequences. The secular instability boundaries are plotted in pink-red and light blue color for the global and local charge neutrality cases, respectively. It can be seen that, due to the deformation, for a given mass the radius of the rotating case is larger than the static one, and similarly the mass of the rotating star is larger than the corresponding static one. It can also be seen that the configurations obeying global charge neutrality are more compact with respect to the ones satisfying local charge neutrality. In general, the region enclosed by the static, Keplerian mass-shedding and secular instability sequences is termed the stability region for uniformly rotating neutron stars. All stable configurations are inside this region. From a practical point of view it is important to construct the stability region, as it allows one to do simple and at the same time interesting science without invoking sophisticated numerical simulations, which require powerful supercomputers and intricate techniques. For instance, for a given value of the angular velocity (rotation period) one can construct the $\Omega$=constant sequence. This sequence intersects the stability region at two points and determines the maximum and minimum values of all physical quantities describing the structure of rotating neutron stars, such as the mass, radius, moment of inertia, angular momentum, quadrupole moment etc. Thus this procedure allows one to estimate the range of all the quantities for the $\Omega$=constant sequence. The same procedure is valid if there is a necessity to construct the constant baryon (rest) mass sequence. In analogy to the previous case, the constant rest mass sequence also intersects the stability region at two points. Here again one can estimate the upper and lower bounds of all the quantities corresponding to the constant rest mass sequence. The so-called spin-up and spin-down effects emerge as a result of this sequence. Of course, there are other crucial applications of the stability region related to the post-merger epoch of neutron stars, binary systems, gravitational waves, the physics of gamma-ray bursts, etc. For more details see Ref. [@belvedere2014jkps] and references therein.
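Since the secular-instability boundaries shown in Fig. \[fig:MtotvsRtot\] are available in closed form, Eqs. (\[eq:SecularG\]) and (\[eq:SecularL\]) can be evaluated directly. The following minimal sketch tabulates the two fitted boundaries over their stated ranges of validity, with the coefficients copied from those equations.

```python
import numpy as np

def m_sec_gcn(r_eq_km, m_max0=2.67):
    """Boundary mass [M_sun] of Eq. (eq:SecularG), global charge neutrality;
    quoted range of validity: 12.38 km <= R_eq <= 12.66 km."""
    return 21.22 - 6.68 * m_max0 - (77.42 - 28.0 * m_max0) * (r_eq_km / 10.0) ** (-6.08)

def m_sec_lcn(r_eq_km, m_max0=2.70):
    """Boundary mass [M_sun] of Eq. (eq:SecularL), local charge neutrality;
    quoted range of validity: 12.71 km <= R_eq <= 13.06 km."""
    return 20.51 - 6.35 * m_max0 - (80.98 - 29.02 * m_max0) * (r_eq_km / 10.0) ** (-5.71)

for label, fit, r_lo, r_hi in (("GCN", m_sec_gcn, 12.38, 12.66),
                               ("LCN", m_sec_lcn, 12.71, 13.06)):
    radii = np.linspace(r_lo, r_hi, 5)
    masses = fit(radii)
    print(label, " ".join(f"{m:.3f}" for m in masses))
```

At the lower end of each radius range the fits return approximately the static maximum masses ($\approx 2.67$ and $2.70\,M_\odot$), while at the upper end they approach the corresponding maximum rotating masses, in agreement with Table \[tab:MRfmax\].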
Moment of inertia {#sec:6} ================= ![Total moment of inertia as a function central density for globally (red) and locally (blue) neutral non-rotating neutron stars.[]{data-label="fig:inertiarho"}](mominrho.eps){width="0.75\hsize"} The moment of inertia $I$ of relativistic stars can be computed from the relation $$\label{eq:I} I = \frac{J}{\Omega}\;,$$ where $J$ is the angular momentum and $\Omega$ is angular velocity, as before, are related via Eq. (\[eq:Jomega\]). Since $J$ is a first-order quantity and so proportional to $\Omega$, the moment of inertia given by Eq. (\[eq:I\]) does not depend on the angular velocity and does not take into account deviations from the spherical symmetry. This implies that Eq. (\[eq:J\]) gives the moment of inertia of the non-rotating unperturbed seed object. In order to find the perturbation to $I$, say $\delta I$, the perturbative treatment has to be extended to the next order $\Omega^3$, in such a way that $I=I_0+\delta I =(J_0+\delta J)/\Omega$, becomes of order $\Omega^2$, with $\delta J$ of order $\Omega^3$ . In this chapter we keep the solution up to second order and therefore we proceed to analyze the behavior of the moment of inertia for the static configurations. In any case, even the fastest observed pulsars rotate at frequencies much lower than the Keplerian rate, and under such conditions we expect that the moment of inertia can be approximated with high accuracy by the one of the corresponding static configurations. In Fig. \[fig:inertiarho\] we show the behavior of the total moment of inertia, i.e. $I=I_{\rm core}+I_{\rm crust}$, with respect to the central density for both globally and locally neutral static neutron stars. We can see from Fig. \[fig:inertiarho\] that the total moment of inertia is quite similar for both global and local charge neutrality cases. This is due to the fact that the globally neutral configurations differ from the locally ones mostly in the structure of the crust, which however contributes much less than the neutron star core to the total moment of inertia. In order to study the single contribution of the core and the crust to the moment of inertia of the neutron star, we shall use the integral expression for the moment of inertia. Multiplying Eq. (\[eq:baromega\]) by $r^3$ and taking the integral of it we obtain $$\begin{aligned} I(r)&=-\frac{2}{3}\int_0^r r^3 \frac{d j}{dr}\frac{\bar{\omega}(r)}{\Omega}dr=\frac{8\pi}{3}\int_0^r r^4 ({\cal E}+P) e^{(\lambda-\nu)/2}\frac{\bar{\omega}(r)}{\Omega}dr\;,\end{aligned}$$ where the integration is carried out in the region of interest. Thus, the contribution of the core, $I_{\rm core}$, is obtained integrating from the origin up to the radius of the core, and the contribution of the crust, $I_{\rm crust}$, integrating from the base of the crust to the total radius of the neutron star. Deformation of the neutron star {#sec:7} =============================== In this section we explore the deformation properties of the neutron star. The behavior of the eccentricity, the rotational to gravitational energy ratio, as well as the quadrupole moment, are investigated as a function of the central density and rotation frequency of the neutron star. 
![Eccentricity (\[eq:eccentricity\]) as a function of central density for the Keplerian sequence in both global (red) and local (blue) charge neutrality cases.[]{data-label="fig:eccvsrho"}](eccrho.eps){width="0.75\hsize"} Eccentricity {#sec:4.5.1} ------------ The degree of deformation of the neutron star can be estimated with its eccentricity $$\label{eq:eccentricity} \epsilon=\sqrt{1-\left(\frac{R_p}{R_{\rm eq}}\right)^2}\;,$$ where $R_p$ and $R_{\rm eq}$ are the polar and equatorial radii of the rotating deformed configuration. Thus, $\epsilon=0$ corresponds to the spherical limit and $0<\epsilon<1$ corresponds to oblate configurations. We can see in Fig. \[fig:eccvsrho\] that for larger central densities the neutron stars decrease their oblateness and the configurations tend to be more spherical. ![image](lcn_shape_two_nuc){width="0.48\columnwidth"} ![image](lcn_shape_six_nuc){width="0.48\columnwidth"} ![image](gcn_shape_two_nuc){width="0.48\columnwidth"}![image](gcn_shape_six_nuc){width="0.48\columnwidth"} The shape of rotating neutron stars becomes less oblate with increasing central density (see Figs. \[fig:lcn\_shape\], \[fig:gcn\_shape\] for details). The size of the core initially increases and then, after reaching its maximum, decreases with increasing central density. The thickness of the crust in both the global and local neutrality cases gradually decreases with increasing central density. However the radii of the crusts behave similarly to the radius of the core. Close to the maximum rotating mass, even though the configurations rotate at the Keplerian rate, the shape becomes almost spherical, though still oblate (see the right panels of Figs. \[fig:lcn\_shape\], \[fig:gcn\_shape\] for details). Thus, the system becomes more gravitationally bound. ![image](Rrhoccgcn){width="0.75\columnwidth"} ![image](Rrhociolcn){width="0.75\columnwidth"} In Figs. \[fig:gcn\_Rrho\] and \[fig:lcn\_Rrho\] we illustrate the dependence of the static and rotating radii on the central density for both the local and global charge neutrality cases. For small central densities the radius of the core can be smaller than the thickness of the crust. For larger central densities the core radius increases and the thickness of the crust decreases. Rotational to gravitational energy ratio {#sec:4.5.2} ---------------------------------------- Another property of the star related to its centrifugal deformation is the ratio between the gravitational energy and the rotational energy of the star. The former is given by Eq. (\[eq:Wrot\]), whereas the latter is $$T = \frac{1}{2} I \Omega^2+O(\Omega^4).$$ In Fig. \[fig:ToverWrho\] we plot the dependence of the ratio $T/|W|$ on the central density. As the central density increases so does the angular velocity, hence the ratio $T/|W|$ also increases. ![Rotational to gravitational binding energy ratio versus central density along the Keplerian sequence both for the global (red) and local (blue) charge neutrality.[]{data-label="fig:ToverWrho"}](ToverW.eps){width="0.75\hsize"} Angular momentum and quadrupole moment {#sec:4.4.3} -------------------------------------- ![image](Jrho){width="0.75\columnwidth"} ![image](JOmg){width="0.75\columnwidth"} The angular momentum versus central density is shown in Fig. \[fig:glcnJ\]. Here the angular momentum is similar in both the global and local neutrality cases. In contrast, the dependence of the angular momentum on the angular velocity, shown in Fig. \[fig:glcn\_JOmg\], is nonlinear. In Fig.
\[fig:Quadrupolerho\] we show the quadrupole moment $Q$, given by Eq. (\[eq:Q\]), as a function of the central density for both globally and locally neutral neutron stars along the Keplerian sequence. We have normalized the quadrupole moment $Q$ to the quantity $M R^2$ of the non-rotating configuration with the same central density. ![Total quadrupole moment versus central density along the Keplerian sequence both for the global (red) and local (blue) charge neutrality cases. The quadrupole moment $Q$ is here in units of the quantity $M R^2$ of the static configuration with the same central density.[]{data-label="fig:Quadrupolerho"}](QMRrho.eps){width="0.75\hsize"} Observational constraints {#sec:8} ========================= According to observations, the most recent and stringent constraints on the mass-radius relation of neutron stars are provided by pulsar data through the values of the largest mass, the largest radius, the highest rotational frequency, and the maximum surface gravity. The above mass-radius relations, together with the constraints indicated by Tr[ü]{}mper (2011) [@trumper2011], are shown in Fig. \[fig:constraints\]: ![Observational constraints on the mass-radius relation given by Ref. [@trumper2011] and the theoretical mass-radius relation presented in Refs. [@2012NuPhA.883....1B; @Boshkayev2012thesis; @belvedere2014]. The red curves represent the configuration with global charge neutrality, while the blue curves represent the configuration with local charge neutrality. The pink-red line and the light-blue line represent the secular axisymmetric stability boundaries for the globally neutral and the locally neutral case, respectively. The red and blue solid curves represent the Keplerian sequences and the red and blue dashed curves represent the static cases.[]{data-label="fig:constraints"}](MR_constr.eps){width="0.75\hsize"} Up to now the largest neutron star mass measured with high precision is the mass of the 39.12 millisecond pulsar PSR J0348+0432, $M=2.01 \pm 0.04 M_\odot$ [@2013Sci...340..448A]. The largest radius is given by the lower limit to the radius of RX J1856-3754, as seen by an observer at infinity, $R_\infty = R [1-2GM/(c^2 R)]^{-1/2} > 16.8$ km [@trumper04]; it gives the constraint $2G M/c^2 >R-R^3/(R^{\rm min}_\infty)^2$, where $R^{\rm min}_\infty=16.8$ km. The constraint on the maximum surface gravity is obtained by fitting the Chandra data of the low-mass X-ray binary X7 with a neutron star of $M=1.4M_\odot$; it turns out that the radius of the star satisfies $R=14.5^{+1.8}_{-1.6}$ km at 90$\%$ confidence level, corresponding to $R_\infty = [15.64,18.86]$ km [@heinke06]. The maximum rotation rate of a neutron star has been found to be $\nu_{\rm max} = 1045 (M/M_\odot)^{1/2}(10\,{\rm km}/R)^{3/2}$ Hz [@lattimer2004]. The fastest observed pulsar is PSR J1748-2446ad with a rotation frequency of 716 Hz [@hessels06], which results in the constraint $M \geq 0.47 (R/10\,{\rm km})^3 M_\odot$. From a practical standpoint, in order to include the above observational constraints in the mass-radius diagram it is convenient to rewrite them for a given range of the radius (for instance, 6 km $\leq$ R $\leq$ 22 km) as follows:\ 1. The maximum mass: $$\label{maxmass} \frac{M}{M_{\odot}}=2.01.$$ 2. The maximum surface gravity: $$\frac{M}{M_{\odot}}<2.4\times 10^5\frac{c^2}{G}\frac{R}{M_{\odot}}.$$ 3.
The lower limit for the radius: $$\label{eq:lowrad} \frac{M}{M_{\odot}}=\frac{10^5}{2}\frac{c^2}{G}\frac{R}{M_{\odot}}\left(1-\frac{R^2}{(R^{\rm min}_\infty)^2}\right).$$ 4. In order to include the maximum rotation rate in the rotating mass-radius relation one should construct a constant frequency sequence for the fastest spinning pulsar at 716 Hz. For the sake of generality, we can simply require that equilibrium models are bounded by the Keplerian sequence (see Refs. [@belvedere2014; @cipolletta2015] for details). In all the expressions above (\[maxmass\]-\[eq:lowrad\]) the mass is normalized with respect to the solar mass $M_{\odot}$ and the radius is expressed in km. In Fig. \[fig:constraints\] we superposed the observational constraints introduced by Tr[ü]{}mper [@trumper2011] with the theoretical mass-radius relations presented here and in Belvedere et al. [@2012NuPhA.883....1B; @belvedere2014] for static and uniformly rotating neutron stars. Any realistic mass-radius relation should pass through the area delimited by the solid black, the dotted-dashed black, the dotted curves and the Keplerian sequences. From here one can clearly see that the above observational constraints show a preference for stiff EoS that provide the largest maximum masses for neutron stars. From the above constraints one can infer that the radius of a canonical neutron star of mass $M = 1.4M_{\odot}$ is strongly constrained to $R\geq12$ km, disfavoring at the same time strange quark matter stars. It is evident from Fig. \[fig:constraints\] that the mass-radius relations for both the static and the rotating case presented here are consistent with all the observational constraints. In Table \[tab:MRprediction\] we show the radii predicted by our mass-radius relation, both for the static and the rotating case, for a canonical neutron star as well as for the most massive neutron stars discovered, namely, the millisecond pulsar PSR J1614–2230 [@demorest2010], $M=1.97 \pm 0.04 M_\odot$, and the most recent PSR J0348+0432, $M=2.01 \pm 0.04 M_\odot$ [@2013Sci...340..448A].

  $M (M_\odot)$   $R^{J=0}$ (km)   $R^{J\neq0}_{\rm eq}$ (km)
  --------------- ---------------- ----------------------------
  1.40            12.313           13.943
  1.97            12.991           14.104
  2.01            13.020           14.097

  : Radii for a canonical neutron star of $M=1.4 M_\odot$ and for PSR J1614–2230 [@demorest2010], $M=1.97 \pm 0.04 M_\odot$, and PSR J0348+0432 [@2013Sci...340..448A], $M=2.01 \pm 0.04 M_\odot$. These configurations are computed under the constraint of global charge neutrality and for a density at the edge of the crust equal to the neutron drip density. The nuclear parametrization NL3 has been used.

\[tab:MRprediction\] Along with the observational constraints one should also take into account the theoretical constraints on the mass-radius relations of neutron stars [@2016arXiv160607804B]. Indeed, only theoretical estimates provide the upper and lower bounds for all the quantities describing the properties of neutron stars that are to be measured from observations. Concluding remarks {#sec:9} ================== The equilibrium configurations of static and uniformly rotating neutron stars in both the global and local charge neutrality cases have been constructed. To achieve this goal the Hartle approach has been applied to the seed static solution, derived from the integration of the Einstein-Maxwell-Thomas-Fermi equations [@2012NuPhA.883....1B].
All physical quantities such as the static and rotating masses, polar and equatorial radii, eccentricity, angular momentum, moment of inertia, quadrupole moment, rotational kinetic energy and gravitational binding energy have been calculated as functions of the central density and the angular velocity of the neutron star. In order to investigate only stable configurations of rotating neutron stars, the Keplerian mass-shedding limit and the secular axisymmetric instability have been analyzed. This allowed one to construct the stability region, inside whose boundary all stable uniformly rotating neutron stars can be found. In addition, fitting formulas have been obtained for the secular instability boundary in Eqs. (\[eq:SecularG\]) and (\[eq:SecularL\]) for global and local charge neutrality, respectively. With this analysis the maximum mass and maximum rotation frequency of the neutron star have been established. In order to favor or disfavor some models of neutron stars, the current observational constraints on the mass-radius relations related to the largest observed mass, the maximum surface gravity, the largest radius, and the maximum rotation frequency have been analyzed. All these constraints are of paramount importance not only in the physics of neutron stars, but also in nuclear physics, to test theoretical hypotheses and assumptions made in the construction of the equations of state. As a result, all observations favor stiff equations of state, as indicated in Ref. [@yakovlev2016]. Finally, the results of this chapter have immediate astrophysical implications in the physics of compact objects, gravitational waves, short and long gamma-ray bursts, and X-ray phenomena occurring in the accretion disks around neutron stars, such as quasi-periodic oscillations. Combining both the theory of compact objects and observational data from different phenomena, one can infer information not only on the properties and parameters of neutron stars, but also constrain the equations of state and thus probe nuclear physics theories [@2012NuPhA.883....1B; @belvedere2014; @belvedere2014jkps; @2016arXiv160607804B; @yakovlev2016]. Acknowledgements {#acknowledgements .unnumbered} ---------------- This work was supported by program No F.0679 of grant No 0073 and the grant for the university best teachers-2015 of the Ministry of Education and Science of the Republic of Kazakhstan.

Antoniadis, J., Freire, P. C. C., Wex, N., Tauris, T. M., Lynch, R. S., van Kerkwijk, M. H., Kramer, M., Bassa, C., Dhillon, V. S., Driebe, T., Hessels, J. W. T., Kaspi, V. M., Kondratiev, V. I., Langer, N., Marsh, T. R., McLaughlin, M. A., Pennucci, T. T., Ransom, S. M., Stairs, I. H., van Leeuwen, J., Verbiest, J. P. W., Whelan, D. G., Apr. 2013. A Massive Pulsar in a Compact Relativistic Binary. Science 340, 448.
Belvedere, R., Pugliese, D., Rueda, J. A., Ruffini, R., Xue, S.-S., Jun. 2012. Neutron star equilibrium configurations within a fully relativistic theory with strong, weak, electromagnetic, and gravitational interactions. Nuclear Physics A 883, 1–24.
Belvedere, R., Rueda, J. A., Ruffini, R., Jan. 2013. Neutron Star Cores in the General Relativistic Thomas-Fermi Treatment. International Journal of Modern Physics: Conference Series 23, 185–192.
Belvedere, R., Boshkayev, K., Rueda, J. A., Ruffini, R., Jan. 2014. Uniformly rotating neutron stars in the global and local charge neutrality cases.
Nuclear Physics A 921, 33–59.
Belvedere, R., Rueda, J. A., Ruffini, R., Sep. 2014. Static and rotating neutron stars fulfilling all fundamental interactions. Journal of the Korean Physical Society 65 (6), 897–902.
Benhar, O., Ferrari, V., Gualtieri, L., Marassi, S., Aug. 2005. Perturbative approach to the structure of rapidly rotating neutron stars. Phys. Rev. D 72 (4), 044028.
Berti, E., White, F., Maniopoulou, A., Bruni, M., Apr. 2005. Rotating neutron stars: an invariant comparison of approximate and numerical space-time models. MNRAS 358, 923–938.
Bini, D., Boshkayev, K., Ruffini, R., Siutsou, I., 2013. Equatorial circular geodesics in the Hartle-Thorne spacetime. Il Nuovo Cimento C 36, 31.
Boguta, J., Bodmer, A. R., Dec. 1977. Relativistic calculation of nuclear matter and the nuclear surface. Nuclear Physics A 292, 413–428.
Boshkayev, K., Quevedo, H., Ruffini, R., Sep. 2012. Gravitational field of compact objects in general relativity. Phys. Rev. D 86 (6), 064043.
Boshkayev, K., Rotondo, M., Ruffini, R., Mar. 2012. On Magnetic Fields in Rotating Nuclear Matter Cores of Stellar Dimensions. International Journal of Modern Physics Conference Series 12, 58–67.
Boshkayev, K., Nov. 2012. Rotating White Dwarfs and Neutron Stars in General Relativity. Ph.D. Thesis, http://padis.uniroma1.it/bitstream/10805/1934/1/Thesis%20of%20Kuantay%20Boshkayev.pdf
Boshkayev, K., Rueda, J. A., Muccino, M., Jun. 2016. Theoretical and observational constraints on the mass-radius relations of neutron stars. ArXiv e-prints: 1606.07804.
Cipolletta, F., Cherubini, C., Filippi, S., Rueda, J. A., Ruffini, R., 2015. Fast Rotating Neutron Stars with Realistic Nuclear Matter Equation of State. Phys. Rev. D 92 (2), 023007.
Demorest, P. B., Pennucci, T., Ransom, S. M., Roberts, M. S. E., Hessels, J. W. T., Oct. 2010. A two-solar-mass neutron star measured using Shapiro delay. Nature 467, 1081–1083.
Friedman, J. L., Ipser, J. R., Sorkin, R. D., Feb. 1988. Turning-point method for axisymmetric stability of rotating relativistic stars. ApJ 325, 722–724.
Friedman, J. L., Parker, L., Ipser, J. R., May 1986. Rapidly rotating neutron star models. ApJ 304, 115–139.
Haensel, P., Potekhin, A. Y., Yakovlev, D. G. (Eds.), 2007. Neutron Stars 1: Equation of State and Structure. Vol. 326 of Astrophysics and Space Science Library.
Hartle, J. B., Dec. 1967. Slowly Rotating Relativistic Stars. I. Equations of Structure. ApJ 150, 1005.
Hartle, J. B., Oct. 1973. Slowly Rotating Relativistic Stars. IX: Moments of Inertia of Rotationally Distorted Stars. Ap&SS 24, 385–405.
Hartle, J. B., Sharp, D. H., Jan. 1967. Variational Principle for the Equilibrium of a Relativistic, Rotating Star. ApJ 147, 317.
Hartle, J. B., Thorne, K. S., Sep. 1968. Slowly Rotating Relativistic Stars. II. Models for Neutron Stars and Supermassive Stars. ApJ 153, 807.
Heinke, C. O., Rybicki, G. B., Narayan, R., Grindlay, J. E., Jun. 2006.
A Hydrogen Atmosphere Spectral Model Applied to the Neutron Star X7 in the Globular Cluster 47 Tucanae. ApJ 644, 1090–1103.
Hessels, J. W. T., Ransom, S. M., Stairs, I. H., Freire, P. C. C., Kaspi, V. M., Camilo, F., Mar. 2006. A Radio Pulsar Spinning at 716 Hz. Science 311, 1901–1904.
Klein, O., Jul. 1949. On the Thermodynamical Equilibrium of Fluids in Gravitational Fields. Reviews of Modern Physics 21, 531–533.
Lalazissis, G. A., König, J., Ring, P., Jan. 1997. New parametrization for the Lagrangian density of relativistic mean field theory. Phys. Rev. C 55, 540–543.
Lattimer, J. M., Prakash, M., 2004. The Physics of Neutron Stars. Science 304, 536.
Oppenheimer, J. R., Volkoff, G. M., Feb. 1939. On Massive Neutron Cores. Physical Review 55, 374–381.
Rotondo, M., Rueda, J. A., Ruffini, R., Xue, S.-S., Jul. 2011. The self-consistent general relativistic solution for a system of degenerate neutrons, protons and electrons in $\beta$-equilibrium. Physics Letters B 701, 667–671.
Rueda, J. A., Ruffini, R., Xue, S.-S., Dec. 2011. The Klein first integrals in an equilibrium system with electromagnetic, weak, strong and gravitational interactions. Nuclear Physics A 872, 286–295.
Shapiro, S. L., Teukolsky, S. A., 1983. Black Holes, White Dwarfs, and Neutron Stars: The Physics of Compact Objects.
Sorkin, R., Oct. 1981. A Criterion for the Onset of Instability at a Turning Point. ApJ 249, 254.
Sorkin, R. D., Jun. 1982. A Stability Criterion for Many Parameter Equilibrium Families. ApJ 257, 847.
Stergioulas, N., Jun. 2003. Rotating Stars in Relativity. Living Reviews in Relativity 6, 3.
Stergioulas, N., Friedman, J. L., May 1995. Comparing models of rapidly rotating relativistic stars constructed by two numerical methods. ApJ 444, 306–311.
Takami, K., Rezzolla, L., Yoshida, S., Sep. 2011. A quasi-radial stability criterion for rotating relativistic stars. MNRAS 416, L1–L5.
Tolman, R. C., Apr. 1930. On the Weight of Heat and Thermal Equilibrium in General Relativity. Physical Review 35, 904–924.
Tolman, R. C., Feb. 1939. Static Solutions of Einstein’s Field Equations for Spheres of Fluid. Physical Review 55, 364–373.
Torok, G., Bakala, P., Stuchlik, Z., Cech, P., Mar. 2008. Modeling the Twin Peak QPO Distribution in the Atoll Source 4U 1636-53. Acta Astronomica 58, 1–14.
Trümper, J. E., Jul. 2011. Observations of neutron stars and the equation of state of matter at high densities. Progress in Particle and Nuclear Physics 66, 674–680.
Trümper, J. E., Burwitz, V., Haberl, F., Zavlin, V. E., Jun. 2004. The puzzles of RX J1856.5-3754: neutron star or quark star? Nuclear Physics B Proceedings Supplements 132, 560–565.
Weber, F., Glendenning, N. K., May 1992. Application of the improved Hartle method for the construction of general relativistic rotating neutron star models. ApJ 390, 541–549.
Yakovlev, D. G., 2016. General relativity and neutron stars. International Journal of Modern Physics A 31 (2 & 3), 1641017.

[^1]: E-mail address: kuantay@mail.ru

[^2]: We use the spacetime metric signature (+,-,-,-) and geometric units $G=c=1$ unless otherwise specified.
{ "pile_set_name": "ArXiv" }
--- abstract: 'In the current work, we have formulated the optimal bit-allocation problem for a scalable codec of images or videos as a constrained vector-valued optimization problem and demonstrated that there can be many optimal solutions, called Pareto optimal points. In practice, the Pareto points are derived via the weighted sum scalarization approach. An important question that arises is whether all the Pareto optimal points can be derived using the scalarization approach. The present paper provides a sufficient condition on the rate-distortion function of each resolution of a scalable codec to address the above question. The result indicates that if the rate-distortion function of each resolution is strictly decreasing and convex and the Pareto points form a continuous curve, then all the optimal Pareto points can be derived by using the scalarization method.' author: - | Wen-Liang Hwang\ Institute of Information Science, Academia Sinica, Taiwan title: 'A Theorem on Multi-Objective Optimization Approach for Bit Allocation of Scalable Coding' --- Introduction {#sec0} ============ Scalable coding (SC) involves producing from an image or a video (also called the coding object) a single bit-stream that meets users' requirements for different resolutions of the image or the video [@Sko2001; @Ye14]. In SC, the bit-stream is usually organized into subset bit-streams with various resolutions of the coding object. The subset bit-streams are generally correlated by prediction methods to enhance coding efficiency [@Lu13; @Zhongbo12]. The coding efficiency can also be improved if the bit-allocation, which distributes an available amount of bits among the resolutions, is optimized [@Sullivan13; @Wie2003b; @Kaaniche14]. In scalable coding studies, the usual assumption is that the solution of the bit-allocation optimization problem is either better or at least no worse than any other alternative. However, this assumption is only correct if all the users demand the same resolution and the coding object is compressed for that resolution. For such a case, the optimization problem can be solved for that particular resolution, and all the users can receive the best service simultaneously from the coding system. However, for SC, where a single bit-stream is designed to serve many users with various resolution demands, the performance criteria for different resolutions clearly conflict. As a result, the assumption that an optimum bit-stream can be achieved which would produce the best performance simultaneously for all the resolutions is generally incorrect. Specifically, it is very unlikely that a bit-allocation which optimizes one resolution will also optimize the other resolutions. The top subgraph of Figure \[Uncomparable\] shows how two different bit-allocations have been assigned to support three spatial resolutions, where the left-most node supports quarter common intermediate format (QCIF), the left-most and the middle nodes support CIF, and the three nodes together support high definition (HD). The same number of bits has been assigned to the QCIF node; therefore, the distortion comparison between the two bit-allocations concerns CIF and HD. The bottom subgraph of Figure \[Uncomparable\] shows the distortions of CIF and HD for the two bit-allocations. On comparing the distortions of the two bit-allocations for both CIF and HD, it can be inferred that one bit-allocation is better for CIF, but worse for HD, whereas the other is better for HD but not for CIF.
Figure \[Uncomparable\] thus demonstrates that it is not always possible for a bit-allocation procedure to generate a bit-stream that can simultaneously achieve the best performance for all the resolutions. Furthermore, since we cannot determine that one resolution is more important than another, the performance of any two bit-allocations is, in general, incomparable. Since a scalable codec serves multiple resolutions simultaneously, the performance of a bit-allocation cannot be measured with a single objective function. Instead, it is measured with a multi-objective (multi-criteria), vector-valued function, where each component of the objective represents the performance of one resolution. The notion of an optimal solution in a multi-objective problem is referred to as Pareto optimality [@Par; @Ehr; @Eic]. Intuitively, an optimal solution (called a Pareto point) reaches equilibrium in the objective vector space in the sense that any improvement of a participant can only be obtained if there is deterioration of at least one other participant. Therefore, at the equilibrium no movement can gain the consensus of all the participating parties. Since the Pareto points cannot be ordered and compared, it cannot be determined which point is better or worse than the others. In SC, the participating parties are the resolutions, and the objective space is the space of the performance of the resolutions. The multi-criteria perspective is also supported by the weighted sum scalarization method, where the optimal bit-allocation is obtained by minimizing the weighted sum of the distortions of the resolutions: $$\min_{\underline{b}\in \Omega} \sum_{i=0}^{N-1} w_i g_i(\underline{b}), \label{liter}$$ where $\underline{b}\in \Omega$ is a feasible bit-allocation vector (or bit-allocation profile), and $w_i$ and $g_i$ are the non-negative weight and the distortion of resolution $i$, respectively. By varying the values of the weights $w_i$, solving (\[liter\]) yields different Pareto optimal points. In general, the solutions of (\[liter\]) form a subset of the Pareto optimal points. Thus, the solutions of (\[liter\]) cannot cover all the performance points that a scalable coding method can achieve. Meanwhile, the set of Pareto optimal solutions for scalable coders is generally large, and if computational cost is a concern, the performance comparison of bit-allocation methods is usually carried out at a few Pareto points [@Ohm04; @SchwarzMW07; @SchwarzW07; @Sul1998; @Sullivan12; @Chakareski13; @Liu10; @WangNash14]. The weight vector associated with (\[liter\]) is either given or derived based on users' preferences [@Peng12SVC; @Peng12Wavelet]. In the SC literature, solving the bit-allocation problem has mainly been based on modelling the rate-distortion (R-D) function $g_i(\underline{b})$ [@NJ84; @Usevitch96; @Mal1998; @Woods99; @Taubman00; @schaar01; @He2002]. The performance comparison therefore mainly concerns the accuracy and efficiency of the rate-distortion models at some particular Pareto points. Since the Pareto points derived using the weighted sum scalarization approach are widely used in SC to compare bit-allocation methods and rate-distortion models, we were motivated to derive the conditions under which the scalarization approach can cover all the Pareto points.
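To make Eq. (\[liter\]) concrete, the following small Python sketch selects, for a given weight vector, the bit-allocation minimizing the weighted sum of distortions over a discrete candidate set. The construction is our own toy example: a chain-structured dependency and exponential R-D-like distortion models are assumed for simplicity, and none of the names correspond to an existing codec or library.

```python
import itertools
import numpy as np

B = 12          # total bit budget (toy units)
N = 3           # three layers / resolutions; layer i feeds resolution i

# Hypothetical per-resolution distortion models: resolution i uses the bits of
# layers 0..i (a chain-structured DAG), with an exponential R-D-like decay.
def distortions(b):
    rates = np.cumsum(b)                      # r_i = bits on the path 0..i
    return np.exp(-0.4 * rates)               # g_i, smaller is better

def weighted_sum_allocation(w):
    """Exhaustively minimize sum_i w_i g_i(b) over integer allocations with sum(b) <= B."""
    best, best_val = None, np.inf
    for b in itertools.product(range(B + 1), repeat=N):
        if sum(b) > B:
            continue
        val = float(np.dot(w, distortions(np.array(b))))
        if val < best_val:
            best, best_val = b, val
    return best, best_val

for w in ([1.0, 0.0, 0.0], [0.3, 0.3, 0.4], [0.0, 0.0, 1.0]):
    b_star, val = weighted_sum_allocation(np.array(w))
    print(w, "->", b_star, "objective", round(val, 4))
```

Each weight vector yields one point of the achievable performance; as discussed above, sweeping the weights only traces a subset of the Pareto points in general.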
The main result is shown in Theorem 2, which states that if the R-D function of each resolution is a strictly decreasing convex function and the Pareto points form a continuous curve, then all the Pareto points can be derived by using the scalarization approach. This result was derived by formulating the SC’s bit-allocation problem as a multi-objective optimization problem defined on a directed acyclic graph (DAG) representing the coding dependency of a codec. A discrete version of the theorem is also presented. The main contributions of the current study are: 1) the bit-allocation problem for SC has been formulated as a multi-objective optimization problem, whose optimal bit-allocations form a set of Pareto points; 2) the rate-distortion (R-D) curve of each resolution of an SC has been characterized so that all the (weakly) Pareto optimal points can be derived by using the weighted sum scalarization approach. The rest of the paper is organized as follows. In Section \[DAGsection\], we present the prediction structure of SC using a DAG. In Section \[secmop\], we formulate the optimal bit-allocation problem of SC on a DAG and use the Pareto optimal points to characterize the solutions of the problem. Section \[RDModel\] contains the main results, which characterize all the Pareto points from the R-D function of each resolution of a scalable coding method by using the scalarization approach. Section \[con\] presents the concluding remarks.\ [**Notations**]{}. We use underline to indicate a vector; for example, $x$ is a scalar and $\underline{x}$ is a vector. Let $\underline{x} = [x_i]^T$ and $\underline{y}=[y_i]^T$ be two vectors. The following operations are defined based on the vector notation.\ 1. $\underline{x} \in R_{+}^N$ (the cone of nonnegative orthant in $R^N$) if $x_i \geq 0$ for all $i$.\ 2. $\underline{x} < \underline{y}$ if $x_i \leq y_i$ for all $i$, and there is a $j$ such that $x_j < y_j$.\ 3. $\underline{x} \leq \underline{y}$ if for all $i$ such that $x_i \leq y_i$.\ 4. $\underline{x} \ll \underline{y}$ if $x_i < y_i$ for all $i$.\ 5. $\underline{x}^T$ is the transpose of the vector $\underline{x}$. Directed Graph Model for Data Dependency {#DAGsection} ======================================== In SC, a coding object is usually divided into multiple coding segments. The layers are the basic coding segments in SC that support spatial and quality scalability in an image and spatial, temporal, and quality scalability in a video. To remove the considerable redundancy existing between the layers, various kinds of data prediction methods have been adopted. In video, the success of a coding method relies crucially on whether a prediction method can truly reflect the correlation that exists between the layers. The predictive coding structure can be represented by a directed graph where a coding segment is represented as a node and an arc indicates the prediction from one coding segment to another. For bit allocation, we require the graph to have the following two properties: the graph should be acyclic, and the graph should be connected from the source node (i.e., any node is reachable from the source node). The first property states that the graph has no cycle. Because a cycle can create an infinite number of ways to represent a coding segment for a bit-allocation, we decided to avoid such a scenario.
For example, a cycle between nodes A and B indicates that the coding result of A can be used to predict that of B and the result of B can then be used to predict and modify the coding result of A. This prediction from A to B and B to A can repeat an infinite number of times for a bit-allocation. The second property implies that the coding object at a node can be reconstructed based on the information on the path from the source to that node. First, a DAG is formed based on a scalable coder where the basic coding segment is a layer, and the prediction is applied to layers. Let the number of layers of the scalable coder be $N$, denoted from $0$ to $N-1$. We use $G = (V,A)$ to represent the DAG with node set $V$ and arc set $A$, where the nodes correspond to the layers and the arcs to the dependencies between the layers. $G$ has a single source node (node $0$) that denotes the base layer of SC. Arc $(i \rightarrow j)\in A$ indicates that node $j$ depends on node $i$. If we associate the (layer) node $i$ with the resolution $i$, then the number of nodes in $G$ is the number of resolutions. To reproduce the coding object at resolution $i$, we use the layers required for that resolution and their dependencies, corresponding to the smallest connected sub-graph, denoted as $\pi_i^g$, of $G$ containing all the paths from node $0$ to node $i$. Let $p(i)$ denote the parent nodes of node $i$ in $\pi_i^g$. The reconstructed object at resolution $i$ depends on the reconstructed objects at the resolutions of $p(i)$. Figure \[DAG\] illustrates a DAG representation of a scalable codec that supports five resolutions, where the base resolution is at node $0$. Let us take H.264/SVC[^1] as an example [@SchwarzMW07]. In H.264/SVC, there are temporal prediction, spatial prediction, and quality prediction that can remove redundancy between the adjacent temporal layers, spatial layers, and quality layers, respectively. The temporal prediction can coexist with spatial or quality prediction, but the spatial and quality predictions cannot both be applied to predict the same layer. Therefore, a temporal node can be directed from another temporal node, and simultaneously from either a quality or a spatial node. Depending upon the application’s environment, the coding structure, which specifies the dependency between the layers, is described in the configuration file. Figures \[SVCGraph1\] and \[SVCGraph2\] show the DAG models corresponding to two coding structures of H.264/SVC. Multi-Objective Bit-Allocation Problem {#secmop} ====================================== The bit-stream of an SC is generated to support scalability in various dimensions. This suggests that the bit-allocation procedure can be regarded as a vector-valued function that maps a bit-allocation vector to a vector of resolution-wise distortions. Let $G$ be the DAG constructed from the coding dependency of an SC with $N$ layers (coding segments), represented by $0$ to $N-1$, and $N$ resolutions, also represented by $0$ to $N-1$. Let $b$ be the bit budget and $b_i$ be the number of bits assigned to layer $i$. Then, the bit-allocation vector $\underline{b} = [b_i]_{i=0}^{N-1} \in R^{N}_+$ satisfies $\sum_{i=0}^{N-1} b_i \le b$ and $b_i \ge 0$. Let $\pi_i^g$ denote the sub-graph of $G$ for resolution $i$. If there is more than one prediction path from resolution $0$ to resolution $i$, then $\pi_i^g$ represents the union of the paths.
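The construction of $G$ and of the sub-graphs $\pi_i^g$ can be made concrete with a small sketch. The graph below is a hypothetical five-layer example (it is not the exact structure of Figure \[DAG\], whose arcs are not reproduced here), and the helper names are ours rather than part of any existing codec software.

```python
from collections import defaultdict

# Hypothetical layer-dependency DAG: node 0 is the base layer; an arc (i, j)
# means layer j is predicted from layer i.
arcs = [(0, 1), (0, 2), (1, 3), (2, 3), (2, 4)]
nodes = range(5)

children = defaultdict(set)
parents  = defaultdict(set)
for i, j in arcs:
    children[i].add(j)
    parents[j].add(i)

def reachable(start, adj):
    """All nodes reachable from `start` following `adj` (depth-first)."""
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u])
    return seen

def is_acyclic():
    # A node reachable from one of its own children would close a cycle.
    return all(u not in reachable(v, children) for u in nodes for v in children[u])

def pi(i):
    """Nodes of pi_i^g: the union of all paths from the base layer 0 to node i,
    i.e. nodes reachable from 0 that can also reach i (node i included)."""
    down = reachable(0, children)
    up = reachable(i, parents)
    return sorted(down & up)

assert is_acyclic()
b = [4, 3, 2, 2, 1]                       # a sample bit-allocation profile
for i in nodes:
    sub = pi(i)
    print(f"resolution {i}: pi_i = {sub}, r_i = {sum(b[j] for j in sub)} bits")
```

For a chain-structured codec the sub-graph $\pi_i^g$ simply collects layers $0,\dots,i$; in the general DAG case it is the intersection of the nodes reachable from the base layer with the ancestors of node $i$ (including $i$ itself), as computed above.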
If $g$ denotes the distortion of the reconstructed coding object with respect to the original object $f$ and $E(f,G,b)$ denotes the procedure of allocating $b$ bits to object $f$ with graph $G$, we have $$\begin{aligned} E(f, G,b ): \underline{b}\rightarrow [g_0(\underline{\pi}_0(\underline{b})), \cdots,g_{N-1}(\underline{\pi}_{N-1}(\underline{b}))]^T,\nonumber \label{performance}\end{aligned}$$ where $\underline{\pi}_i(\underline{b})$ denotes the bit-allocation profile of the bit-allocation $\underline{b}$ assigned to the nodes of sub-graph $\pi_i^g$, and $g_i(\underline{\pi}_i(\underline{b}))$ measures the distortion[^2] of the reconstructed coding object at resolution $i$. Then, the bit-allocation problem can be formulated as the following constrained vector-valued optimization problem: $$\begin{aligned} \begin{cases} \min_{\underline{b}} \;\; [g_0(\underline{\pi}_0(\underline{b})), \cdots, g_{N-1}(\underline{\pi}_{N-1}(\underline{b}))]^T \\ \hspace{0.5in} b_i \ge 0, \;\;\; i = 0, \cdots, N-1; \\ \hspace{0.5in} \sum_{i=0}^{N-1} b_i \leq b, \end{cases} \label{bitallocate0}\end{aligned}$$ where the bits allocated to the sub-graph $\pi_i^g$ are $\sum_{j \in \pi_i^g} b_j$, which is the total number of bits allocated to the layers that support the resolution. We use $\Omega$ to denote the set of feasible bit-allocation vectors of (\[bitallocate0\]). Since $\Omega$ is the intersection of half-spaces and hyperplanes, $\Omega$ is a convex set. To lighten the notation, let us define the vector-valued distortion $\underline{g}_{\Omega}(\underline{b})$ as a feasible distortion (the distortion generated by a feasible coding path in SC): $$\begin{aligned} \underline{g}_{\Omega}(\underline{b}) = [g_0(\underline{\pi}_0(\underline{b})), \cdots, g_{N-1}(\underline{\pi}_{N-1}(\underline{b}))]^T \text{ when } \underline{b} \in \Omega. \label{gfunction}\end{aligned}$$ We also denote the feasible distortion region, the distortions derived by all the feasible coding paths, as $$\underline{g}(\Omega) = \{ \underline{g}_{\Omega}(\underline{b})\}. \label{gfunction1}$$ The optimum bit-allocation $\underline{b}^*$ can be defined as the bit-allocation that yields the smallest distortion in each resolution, [*i.e*]{}. $\underline{g}_{\Omega}(\underline{b}^*) \le \underline{g}_{\Omega}(\underline{b})$ for all $\underline{b} \in \Omega$. In other words, the optimum bit-allocation is the minimum of the problem in (\[bitallocate0\]). Unfortunately, as shown in Figure \[Optimumpoint\], the existence of such an optimum bit-allocation vector is uncommon. In general, we cannot compare the distortion vectors of any two feasible bit-allocations. Two feasible distortions can only be compared when they are partially ordered with respect to $R_{+}^{N}$, [*i.e*]{}. $\underline{g}_{\Omega}(\underline{b}_1) \leq \underline{g}_{\Omega}(\underline{b}_2)$ if and only if $\underline{g}_{\Omega}(\underline{b}_2) - \underline{g}_{\Omega}(\underline{b}_1) \geq \underline{0}_{N}$. By virtue of the partial ordering, there are actually many optimal (minimal) bit-allocation solutions with respect to $R_{+}^N$; for this reason, the bit-allocation problem for SC does not satisfy the conventional assumption that a single optimum bit-allocation exists. Nevertheless, the optimal solutions can be derived from the study of the multi-objective optimization problem. The concept of optimal solutions of a multi-objective optimization problem with respect to the nonnegative orthant cone $R_{+}^N$ was first proposed by Pareto in $1896$ [@Par].
Pareto defined an optimal solution as a point in the feasible space from which it is impossible to move, even slightly, in a way that obtains the consensus of all individual participants. In other words, an optimal solution is an equilibrium position in the sense that any small displacement departing from the position necessarily has the effect of increasing the values of some individual functions while decreasing those of the other functions. In honor of Pareto, these equilibrium positions are today called Pareto optimal points. Pareto Optimal Bit-Allocations ------------------------------ The Pareto optimal solution deals with the case in which a set of feasible objective vector-values does not have an optimum element. The Pareto optimal solution and the weakly Pareto optimal solution for the bit-allocation problem are defined as follows. A bit-allocation $\underline{b}^*$ is Pareto optimal if there is no $\underline{b} \in \Omega$ such that $\underline{g}_{\Omega}(\underline{b}) < \underline{g}_{\Omega}(\underline{b}^*)$. This definition signifies $$(\underline{g}_{\Omega}(\underline{b}^*) - R_{+}^N) \cap \underline{g}(\Omega) = \{\underline{g}_{\Omega}(\underline{b}^*)\}, \label{Pareto}$$ where $\underline{g}_{\Omega}(\underline{b}^*) - R_{+}^N$ is the Minkowski sum[^3] of $\underline{g}_{\Omega}(\underline{b}^*)$ and $R_{-}^N$. The set of Pareto bit-allocations is denoted as ${\cal B}(\Omega) = \{\underline{b}^* | \underline{b}^* \text{ satisfies } (\ref{Pareto})\}$. In addition, the set of Pareto optimal points is denoted as $$Pareto(\underline{g}(\Omega)) = \{\underline{g}_{\Omega}(\underline{b})|\; \underline{b} \in {\cal B}(\Omega)\}.$$ The bit-allocation $\underline{b}^* \in \Omega$ is called a weakly Pareto bit-allocation if there is no $\underline{b} \in \Omega$ such that $\underline{g}_{\Omega}(\underline{b}) \ll \underline{g}_{\Omega}(\underline{b}^*)$. In other words, $$(\underline{g}_{\Omega}(\underline{b}^*) - int(R_{+}^N)) \cap \underline{g}(\Omega) = \emptyset,$$ where $int(R_{+}^N)$ is the interior of $R_{+}^N$ and $\emptyset$ is the empty set. The set of weakly Pareto bit-allocations is denoted as ${\cal B}_w(\Omega)$ and the set of weakly Pareto optimal points is the image of ${\cal B}_w(\Omega)$: $$Pareto_w(\underline{g}(\Omega)) = \{\underline{g}_{\Omega}(\underline{b})|\; \underline{b} \in {\cal B}_w(\Omega)\}.$$ A Pareto optimal bit-allocation is a weakly Pareto bit-allocation because, for a bit-allocation $\underline{b}^*$, if there is no $\underline{b}$ such that $\underline{g}_{\Omega}(\underline{b}) < \underline{g}_{\Omega}(\underline{b}^*)$, then, obviously, there is no $\underline{b}$ such that $\underline{g}_{\Omega}(\underline{b}) \ll \underline{g}_{\Omega}(\underline{b}^*)$. Figure \[WeakPeratoPoint\] illustrates the Pareto optimal and weakly Pareto optimal points for a bi-criteria example. The Scalarization Approach {#secSA} -------------------------- The weighted sum scalarization approach, which transforms a vector-valued optimization problem into a scalar-valued optimization one, is widely used to find the (weakly) Pareto optimal points of a multi-objective optimization problem [@Ehr; @Eic].
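Before describing the weighted sum scalarization approach in detail, the two definitions above can be checked mechanically on a finite set of feasible distortion vectors. The sketch below is a toy illustration only; the distortion values and helper names are ours and carry no meaning beyond the example.

```python
import numpy as np

def dominates(a, b):          # a < b : componentwise <= with at least one strict <
    return np.all(a <= b) and np.any(a < b)

def strictly_dominates(a, b): # a << b : strict improvement in every component
    return np.all(a < b)

def pareto_points(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

def weakly_pareto_points(points):
    return [p for p in points if not any(strictly_dominates(q, p) for q in points)]

# Toy feasible distortions g_Omega(b) for a bi-criteria example.
g = [np.array(v, float) for v in
     [(1.0, 9.0), (2.0, 6.0), (2.0, 7.0), (4.0, 4.0), (7.0, 2.0), (9.0, 2.0), (8.0, 8.0)]]

print("Pareto        :", [tuple(p) for p in pareto_points(g)])
print("weakly Pareto :", [tuple(p) for p in weakly_pareto_points(g)])
```

The brute-force sets obtained in this way provide a reference against which the points produced by the weighted sum scalarization approach, described next, can be compared.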
By virtue of the approach, the optimization problem in (\[bitallocate0\]) is transformed into solving $$\min_{\underline{b}} \underline{w}^T \underline{g}_{\Omega}(\underline{b}) = \min_{\underline{b}\in \Omega} \sum_{i=0}^{N-1} w_i \; g_i(\underline{\pi}_i(\underline{b})), \label{scaleprob}$$ where $\underline{w} = ([w_i]_{i=0}^{N-1})^T$ is the weight vector with $w_i \ge 0$ for each $i$ and $\sum_{i=0}^{N-1} w_i =1$, $\underline{g}_{\Omega}(\underline{b})$ is a feasible distortion, and $g_i(\underline{\pi}_i(\underline{b}))$, defined in (\[gfunction\]), is a feasible distortion at resolution $i$. As shown in Figure \[Scalarization\], the optimum bit-allocation occurs when the hyperplane tangential to $\underline{g}(\Omega)$ has the smallest intercept among all the parallel hyperplanes that intersect $\underline{g}(\Omega)$. Let $\underline{b}^*$ be the optimum bit-allocation of (\[scaleprob\]) with the weight vector $\underline{w}$. We denote by $\underline{y}(\underline{b}^*) = ([y_i(\underline{b}^*)]_{i=0}^{N-1})^T$ the point that satisfies the equation $$\sum_{i=0}^{N-1} w_iy_i(\underline{b}^*) = \min_{\underline{b} \in \Omega} \sum_{i=0}^{N-1} w_i \; g_i(\underline{\pi}_i(\underline{b})) \label{opt}$$ and define the set of solutions of (\[opt\]) for all normalized weight vectors as $$S_0 = \{\underline{y}(\underline{b}^*) | \text{ there is $\underline{w} \ge \underline{0}$ with $\sum_{i}w_i = 1$ so that $\underline{y}(\underline{b}^*)$ satisfies (\ref{opt})} \}.$$ In general, $S_0$ is a subset of the Pareto points. As shown in Figure \[Scalarization\], the Pareto point $\underline{a}$ is not in $S_0$. The main result for the weighted sum scalarization approach for solving the multi-objective optimization problem is the equivalence of $S_0$ and the weakly Pareto optimal points when $\underline{g}(\Omega) + R_{+}^N$ is a convex set. Figure \[peter\] illustrates an example where $\underline{g}(\Omega)$ is not convex, but $\underline{g}(\Omega) + R_{+}^N$ is a convex set. The result is stated through the following theorem. **Theorem 1** [@Ehr]. If $\underline{g}(\Omega) + R^N_{+}$ is a convex set, then $S_0 = Pareto_w(\underline{g}(\Omega))$. The theorem indicates that if $\underline{g}(\Omega) + R^N_{+}$ is a convex set, then the scalarization approach determines all, and only, the weakly Pareto points and weakly Pareto bit-allocations of $\underline{g}(\Omega)$. Main Results {#RDModel} ============ Since it is important and insightful to have all alternatives available for decision makers to choose which Pareto point to operate on, the primary purpose here is to derive a sufficient condition so that $\underline{g}(\Omega) + R_{+}^N$ is a convex set. The distortion space at a resolution is defined as all the feasible distortions that the resolution can generate from a given bit budget. As shown in Figure \[rd\], if the bit budget is $b$, then the distortion at resolution $i$ is defined as the set, $$\{g_i(\underline{\pi}_i(\underline{b})) | g_i(\underline{\pi}_i(\underline{b})) \text{ is the $i$-th component of a } \underline{g}_{\Omega}(\underline{b}) \in \underline{g}(\Omega)\},$$ where $\underline{g}_{\Omega}(\underline{b})$ and $\underline{g}(\Omega)$ are defined in (\[gfunction\]) and (\[gfunction1\]), respectively, and $\underline{\pi}_i(\underline{b})$ is defined in (\[performance\]) as the bit-allocation profile of resolution $i$ in the DAG.
Hereafter, let bit-rate $r_i$ denote the total number of bits in the bit profile $\underline{\pi}_i(\underline{b})$ assigned to resolution $i$ in the DAG. Note that many bit-allocation profiles may assign the same total number of bits $r_i$ at resolution $i$. Let $D_i(r_i)$ denote the rate-distortion (R-D) function of $r_i$ at resolution $i$. The R-D function is the lower envelope formed by all the distortions at resolution $i$ that can be obtained by coding an image or a video with bit-rate $r_i$. The main result is summarized in Theorem 2, which indicates that the convexity of the set $\underline{g}(\Omega) + R_{+}^N$ can be established from the R-D function of each resolution of a scalable coder. Lemma 1 indicates that $\underline{g}(\Omega) + R_{+}^N$ is equal to $Pareto_w(\underline{g}(\Omega)) + R_{+}^N$. [**Lemma 1**]{}. $$\begin{aligned} \underline{g}(\Omega) + R_{+}^N & = & Pareto_w(\underline{g}(\Omega)) + R_{+}^N. \label{wgeq}\end{aligned}$$ [***Proof***: ]{}\ Clearly, $\underline{g}(\Omega) + R_{+}^N \supseteq Pareto_w(\underline{g}(\Omega)) + R_{+}^N$, as $Pareto_w(\underline{g}(\Omega))$ is a subset of $\underline{g}(\Omega)$. To show the other direction, let $\underline{d}$ be a point in $\underline{g}(\Omega) + R_{+}^N$ with bit-allocation $\underline{b} = [b_i]_{i=0}^{N-1}$. Then, it is clear that $\sum_{i=0}^{N-1} b_i \le b$. There is a weakly Pareto point $\underline{d}^w$, with bit-allocation $\underline{b}^w = [b_i^w]_{i=0}^{N-1}$ satisfying $\sum_{i=0}^{N-1} b_i^w = b$, such that $\underline{d}^w \le \underline{d}$; hence $\underline{d} \in Pareto_w(\underline{g}(\Omega)) + R_{+}^N$. Therefore, $Pareto_w(\underline{g}(\Omega)) + R_{+}^N \supseteq \underline{g}(\Omega) + R_{+}^N $. ***End of Proof***. Under mild assumptions on the distortion space $\underline{g}(\Omega)$ and the R-D functions, the lemma below indicates that any feasible distortion $\underline{g}_{\Omega}(\underline{b})$ can be represented by the R-D functions. [**Lemma 2**]{}. Let the feasible distortion space $\underline{g}(\Omega)$ be a compact region and let $D_i(r_i)$ be the R-D curve of resolution $i$ with $r_i \le b$. If $\{D_i(r_i)\}$ are strictly decreasing convex functions, then there are one-to-one and onto functions $\{q_i\}$ that map the $i$-th component $g_i(\underline{\pi}_i(\underline{b}))$ of any feasible distortion to the bit-rate in $[0,b]$ so that $$g_i(\underline{\pi}_i(\underline{b})) = D_i(q_i(g_i(\underline{\pi}_i(\underline{b})))) \text{ for all resolutions $i$ and all feasible bit-allocations $\underline{b}$}. \label{qfunction}$$ Meanwhile, $q_i$ is a strictly concave function.\ [***Proof***: ]{}\ Without loss of generality, we use a two-resolution example to sketch the main concept of the proof. Figure \[mapping\] illustrates the example, where the minimum and the maximum distortions with bit budget $b$ for resolution $1$ are $A$ and $B$, respectively. The mapping $q_1$ is a one-to-one and onto mapping of the vertical segment $[B,A]$ at $b$ in the right sub-graph to the bit-rates $[0,b]$. The horizontal dashed line in the left sub-figure shows how the distortion of resolution $1$ varies for a fixed distortion of resolution $2$. The dashed line intersects the distortion space at an interval with end points at $C$ and $D$. Since $D_1$ is a strictly decreasing convex function, as shown in the right sub-figure, the interval $[C, D]$ has a unique corresponding curve in $D_1$ and the domain of the curve is defined from $q_1(D)$ to $q_1(C)$.
On the other hand, a similar argument implies that the mapping $q_2$ is a one-to-one and onto mapping of the distortion at resolution $2$ to the bit-rates in the domain of the R-D function $D_2$. This shows that any distortion point in $\underline{g}(\Omega)$ can be represented based on the R-D functions and the mappings $q_1$ and $q_2$. Since $q_1$ is the inverse function of the strictly convex function $D_1$, $q_1$ is a strictly concave function [@Boy]. This can also be observed in the right sub-figure of Figure \[mapping\], where the function $q_1$ maps the interval $[A,B]$ to $[b, 0]$. Mathematical induction can then be used to extend the proof to cases with more than two resolutions. ***End of Proof***.\ The following lemma indicates that if the weakly Pareto points are continuous, and $\underline{a}$ and $\underline{b}$ are two weakly Pareto points, then any weakly Pareto point from $\underline{a}$ to $\underline{b}$ must be located inside or on the boundary of the axis-aligned (minimum) bounding box of $\underline{a}$ and $\underline{b}$[^4]. [**Lemma 3**]{}. If the weakly Pareto points form a continuous curve (surface) and $[a_i]_{i=0}^{N-1}$ and $[b_i]_{i=0}^{N-1}$ are any two weakly Pareto points, then any weakly Pareto point from $[a_i]_{i=0}^{N-1}$ to $[b_i]_{i=0}^{N-1}$ is either inside or on the boundary of the axis-aligned minimum bounding box of $[a_i]$ and $[b_i]$ and can be represented as $[p_i(t)]_{i=0}^{N-1}$, where $t \in [0,1]$, and $$p_i(t) = a_i + \alpha_i(t) (b_i - a_i) = (1 - \alpha_i(t)) a_i + \alpha_i(t) b_i, \label{param}$$ where $\alpha_i(t)$ is continuous, $\alpha_i(t) \in [0, 1]$, $\alpha_i(0) = 0$, and $\alpha_i(1) = 1$. [***Proof***: ]{}\ We prove this lemma by mathematical induction on the dimension of the distortion space $\underline{g}(\Omega)$ with coordinate axes $[g_0, \cdots, g_{N-1}]$. For a two-dimensional distortion space, let $[a_0, a_1]$ and $[b_0, b_1]$ be two weakly Pareto points and let $B^2$ denote the axis-aligned minimum bounding box of $[a_0, a_1]$ and $[b_0, b_1]$. Since the weakly Pareto points between $[a_0, a_1]$ and $[b_0, b_1]$ are continuous, if there is a $[c_0, c_1]$ inside $B^2$ such that either the horizontal line, $g_1 = c_1$, or the vertical line, $g_0 = c_0$, intersects the continuous Pareto curve at a point $[d_0, d_1]$ that is outside $B^2$, then one of the weakly Pareto points $[d_0, d_1]$, $[a_0, a_1]$, and $[b_0, b_1]$ would not be a weakly Pareto point, depending on the location of the intersection point, as shown in Figure \[boundingbox\]. Therefore, all the weakly Pareto points between $[a_0, a_1]$ and $[b_0, b_1]$ must be inside or on $B^2$ and, hence, can be represented as (\[param\]). Let us assume that the lemma is true up to dimension $n-1$. Let $[a_i]_{i=0}^{n-1}$ and $[b_i]_{i=0}^{n-1}$ be two weakly Pareto points in an $n$-dimensional distortion space with coordinates $[g_0, \cdots, g_{n-1}]$, and let $B^n$ be the axis-aligned minimum bounding box of $[a_i]_{i=0}^{n-1}$ and $[b_i]_{i=0}^{n-1}$. Then, for any point $[c_i]_{i=0}^{n-1}$ inside $B^n$, there are $n$ axis-aligned hyperplanes, $g_0= c_0$, $\cdots$, $g_{n-1} = c_{n-1}$. Without loss of generality, let us take the hyperplane $g_{n-1} = c_{n-1}$. This hyperplane intersects the continuous Pareto curve within a $(n-1)$-dimensional axis-aligned minimum bounding box $B^{n-1}$ of $[a_0,\cdots, a_{n-2}, c_{n-1}]$ and $[b_0,\cdots, b_{n-2}, c_{n-1}]$.
Let $[d_0, \cdots, d_{n-2}, c_{n-1}]$ be an intersection point; then, by mathematical induction, $[d_0, \cdots, d_{n-2}, c_{n-1}]$ must be inside or on the bounding box $B^{n-1}$. As a result, the point $[d_0, \cdots, d_{n-2}, c_{n-1}]$ is also inside or on the bounding box $B^n$. Since $[c_i]_{i=0}^{n-1}$ is an arbitrary point inside $B^n$, we conclude that the lemma is true for dimension $n$. ***End of Proof***.\ [**Theorem 2**]{}. Let the feasible region $\underline{g}(\Omega)$ be a compact region, $D_i(r_i)$ be the R-D function of resolution $i$ with $r_i \le b$, and $q_i$ be the mapping derived in Lemma 2. If $\{D_i(r_i)\}$ are strictly decreasing convex functions and if the weakly Pareto points of $\underline{g}(\Omega)$ form a continuous curve, then $\underline{g}(\Omega) + R_{+}^N$ is a convex set. [***Proof***:]{}\ By Lemmas 1 and 2, for any two points in $\underline{g}(\Omega) + R_{+}^N$, $[u_i]_{i=0}^{N-1}$ and $[v_i]_{i=0}^{N-1}$, we can find two weakly Pareto points $\underline{D}^0 = [D_i(r_i^0)]_{i=0}^{N-1}$ and $\underline{D}^1 = [D_i(r_i^1)]_{i=0}^{N-1}$ with $b \ge r_i^0 \ge q_i(u_i)$ and $b \ge r_i^1 \ge q_i(v_i)$ such that $$\underline{D}^0 \le [u_i]_{i=0}^{N-1} \text{ and } \underline{D}^1 \le [v_i]_{i=0}^{N-1}. \label{eqn}$$ To simplify the notation, we let $a_i = D_i(r_i^0)$ and $b_i = D_i(r_i^1)$. The continuous functions $\{\alpha_i(t)\}$ have the domain $t \in [0,1]$ and the range $\alpha_i(t) \in [0,1]$, with end points $\alpha_i(0)= 0$ and $\alpha_i(1) = 1$, so that $p_i(0) = a_i$ and $p_i(1) = b_i$. Since the weakly Pareto points form a continuous curve, according to Lemma 3, any weakly Pareto point $[p_i(t)]_{i=0}^{N-1}$ between the Pareto points $\underline{D}^0$ and $\underline{D}^1$ can be represented using $\{\alpha_i(t) \}$ as $$p_i(t) = a_i + \alpha_i(t) (b_i - a_i) = (1 - \alpha_i(t)) a_i + \alpha_i(t) b_i.$$ As $t$ varies from $0$ to $1$, $p_i(t)$ varies continuously from $a_i$ to $b_i$. By Lemma 2, we have $$(1 - \alpha_i(t)) a_i + \alpha_i(t) b_i = D_i(q_i((1 - \alpha_i(t)) a_i + \alpha_i(t) b_i )).$$ Since $D_i$ is decreasing and convex and $q_i$ is concave, $D_i(q_i)$ is a convex function [@Boy]. Therefore, $$\begin{aligned} D_i(q_i((1 - \alpha_i(t)) a_i + \alpha_i(t) b_i )) & \le & (1 - \alpha_i(t)) D_i(q_i(a_i)) + \alpha_i(t)D_i(q_i(b_i)) \\ & = & (1 - \alpha_i(t)) a_i + \alpha_i(t) b_i, \label{convex}\end{aligned}$$ where the inequality and equality are derived from the definition of a convex function and from Lemma 2, respectively. Since $a_i = D_i(r_i^0)\le u_i$ and $b_i = D_i(r_i^{1}) \le v_i$, from Equations (\[eqn\]) and (\[convex\]), we have $$D_i(q_i((1 - \alpha_i(t)) a_i + \alpha_i(t) b_i )) \le (1 - \alpha_i(t)) a_i + \alpha_i(t) b_i \le (1 - \alpha_i(t)) u_i + \alpha_i(t) v_i. \label{distor1}$$ Since $[D_i(q_i((1 - \alpha_i(t)) a_i+ \alpha_i(t) b_i ))]_{i=0}^{N-1}$ for $t \in [0,1]$ are weakly Pareto points of $\underline{g}(\Omega)$, Equation (\[distor1\]) implies that the points lying on the line segment connecting $[u_i]_{i=0}^{N-1}$ and $[v_i]_{i=0}^{N-1}$ are in $\underline{g}(\Omega) + R_{+}^N$. Since $\underline{u} =[u_i]_{i=0}^{N-1}$ and $\underline{v} = [v_i]_{i=0}^{N-1}$ are any two points in $\underline{g}(\Omega) + R_+^N$, we can conclude that $\underline{g}(\Omega) + R_+^N$ is a convex set. ***End of the proof***.\ Figure \[mainresult\] illustrates a two-resolution example of the above theorem.
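A quick numerical sanity check of this result in a discrete, two-resolution toy setting is given below. The distortion models, budget, and weight grid are our own hypothetical choices, and the sketch only exercises the statement empirically; it is not part of the formal argument.

```python
import numpy as np

B = 30
# Toy, strictly decreasing and convex distortion models (our own construction):
# resolution 0 uses only the base-layer bits b0, while resolution 1 benefits more
# from its own enhancement bits b1 than from the base-layer bits.
g0 = lambda b0, b1: 10.0 * np.exp(-0.15 * b0)
g1 = lambda b0, b1: 12.0 * np.exp(-(0.05 * b0 + 0.15 * b1))

alloc = [(b0, b1) for b0 in range(B + 1) for b1 in range(B + 1 - b0)]
g = np.array([[g0(*a), g1(*a)] for a in alloc])

# Brute-force weakly Pareto points (no feasible point strictly better in every component).
weak = {tuple(p) for p in g if not any(np.all(q < p) for q in g)}

# Weight sweep of the scalarized problem: log-spaced ratios w0/w1 to cover the front.
swept = set()
for s in np.logspace(-4, 1, 400):
    w = np.array([s, 1.0])
    swept.add(tuple(g[np.argmin(g @ w)]))

print(f"{len(weak)} weakly Pareto points, {len(swept & weak)} recovered by the sweep")
```

With the convex, strictly decreasing models above, the weight sweep recovers the weakly Pareto set up to the granularity of the sweep, in line with Theorems 1 and 2; replacing them with non-convex models typically leaves part of the front unreachable by scalarization.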
Theorem 2 provides a sufficient condition, stated in terms of the R-D curve of each resolution and the distortion space, under which $\underline{g}(\Omega) + R_{+}^N$ is convex. Therefore, according to Theorem 1, all weakly Pareto optimal points can then be derived by using the weighted sum scalarization approach. In practice, the feasible bit-allocation space $\Omega$ and the feasible distortion space $\underline{g}(\Omega)$ of SC are discrete. Since the $r_i$ are discrete, $\tilde D_i(r_i)$, called the continuous extension of $D_i(r_i)$, can be defined as a continuous function of $r_i$ which contains $D_i(r_i)$ with $r_i \in Z_{+}$ and $r_i \in [0, b]$. Meanwhile, the distortion $\underline{\tilde g}(\Omega)$, called the continuous extension of the discrete point set $\underline{g}(\Omega)$, can be defined as a compact set which contains $\underline{g}(\Omega)$ so that all weakly Pareto points of $\underline{g}(\Omega)$ are also weakly Pareto points of $\underline{\tilde g}(\Omega)$. The following corollary is the discrete version of Theorem 2. [**Corollary 1**]{}. Let $\tilde D_i(r_i)$ and $\underline{\tilde g}(\Omega)$ be the continuous extensions of the discrete function $D_i(r_i)$ and the discrete point set $\underline{g}(\Omega)$, respectively. If $\{\tilde D_i(r_i)\}$ are strictly decreasing convex functions and if all the weakly Pareto points of $\underline{\tilde g}(\Omega)$ form a continuous curve (surface), then all weakly Pareto points of $\underline{g}(\Omega)$ can be derived using the weighted sum scalarization approach. [***Proof***:]{}\ According to Theorem 2, $\underline{\tilde g}(\Omega) + R_+^N$ is a convex set. Therefore, all weakly Pareto points of $\underline{\tilde g}(\Omega)$ can be derived by the scalarization approach. Since the weakly Pareto points of $\underline{g}(\Omega)$ are a subset of those of $\underline{\tilde g}(\Omega)$, all weakly Pareto points of $\underline{g}(\Omega)$ can be derived using the scalarization approach. ***End of the proof***.\ Conclusions {#con} =========== To conclude, we represented the prediction structure that removes the redundancy in scalable coding (SC) as a directed acyclic graph and formulated the optimal bit-allocation problem on the graph as a multi-criteria optimization problem. In general, there can be many optimal solutions (called Pareto points), but the performances of those solutions are incomparable. In SC, the weighted sum scalarization approach is a popular way to derive Pareto points. Since the Pareto points derived via the weighted sum scalarization approach are a subset of all Pareto points, it is important to present the conditions under which all the Pareto points can be derived through the scalarization approach. Our main results showed that if the rate-distortion (R-D) function of each resolution of an SC method is strictly decreasing and convex and the weakly Pareto points form a continuous curve, then all the Pareto optimal solutions can be derived through the scalarization approach.\ [**[Acknowledgement]{}**]{}: Wen-Liang Hwang would like to express his gratitude to Mr. Jinn Ho, Mr. Chia-Chen Lee, and Dr. Guan-Ju Peng. Without their assistance, this paper could not have been finished. [^1]: Currently, the scalable scheme of H.265 is inherited from H.264/SVC [@Sullivan12]. [^2]: A main goal of SC is to maximize the peak-signal-to-noise-ratio (PSNR) at each resolution. PSNR is $10\log_{10} \frac{255^2}{MSE} dB,$ where $MSE$ is the reconstruction error.
Thus, maximizing PSNR of a resolution can be regarded as minimizing $\log MSE$ at the resolution. [^3]: Minkowski sum: $S + T = \{ s + t | s\in S \text{ and } t \in T\}$. [^4]: The axis-aligned minimum bounding box for a given point set is its minimum enclosing box subject to the constraint that the edges of the box are parallel to the coordinate axes.
{ "pile_set_name": "ArXiv" }
--- abstract: 'Cosmological Gamma-Ray Bursts (GRBs) offer a unique perspective to probe the evolution of distant galaxies. We discuss one of the multiple benefits of this approach, i.e. the detection of transient edges in the GRB prompt phase emission. These absorption features can be used to directly derive the redshift of GRBs and their host galaxies without the need of any optical spectroscopic follow-up.' address: 'Service d’Astrophysique, CEA–Saclay, 91191 Gif-sur-Yvette' author: - 'Le Floc’h, E.' - 'Duc, P.-A.' - 'Mirabel, I.F.' title: 'Deriving the redshift of distant galaxies with Gamma-Ray Burst transient edges' --- GRBs and their hosts: a new approach to galaxy evolution ======================================================= Gamma-ray bursts (GRBs) are now regarded as one of the most promising tools to probe star formation in the early Universe. There is indeed increasing evidence that long GRBs originate from the core collapse of short-lived massive stars, and they are thus considered as direct tracers of active starbursts. The advantages of this approach are numerous: - GRBs are not attenuated by intervening columns of gas and dust, and probe unobscured star-forming regions and dusty starbursts alike. They can therefore trace the history of star formation in the Universe without any bias due to dust obscuration; - GRBs are likely detectable up to very high redshift because they are beamed into relativistic jets; - The spectroscopic redshift of GRBs and their host galaxies can be directly obtained using the features observed in the spectra of GRBs and their transient counterparts at longer wavelengths. So far, three different techniques have been used to derive these redshifts from the intrinsic emission of GRBs and their afterglows, including the detection of 1) absorption features in the spectra of GRB optical transients, 2) iron K emission lines in X-ray afterglows, and 3) transient absorption edges in the GRB prompt phase emission. We examine this third issue hereafter.
Even though GRB990705 is so far the only burst in which such a transient edge has been observed (which raises the question of whether particular ionization states of the circum-burst environment are required to detect these absorptions), it lies among the brightest GRBs ever detected with the BeppoSAX satellite. This suggests that transient edges could be a more common feature of GRB spectra, and highlights the breakthrough that more sensitive X-ray detectors would bring. Future missions such as the ECLAIRs experiment (Barret 2002) could indeed be entirely dedicated to studying the GRB prompt emission, and may provide a systematic detection of these absorption lines.
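As a simple numerical illustration of how such an edge yields the redshift directly (not part of the original analysis; the rest-frame energy of the neutral-iron K edge is assumed here to be about 7.1 keV):

```python
# Redshift from a transient absorption edge: z = E_rest / E_obs - 1.
E_rest_keV = 7.1   # neutral-iron K absorption edge in the rest frame (assumed value)
E_obs_keV = 3.8    # observed edge energy in GRB990705 (Amati et al. 2000)
z = E_rest_keV / E_obs_keV - 1.0
print(f"z ~ {z:.2f}")  # ~0.87, compatible with z = 0.86 +/- 0.17 and the host redshift z = 0.8424
```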
{ "pile_set_name": "ArXiv" }
--- abstract: 'The Left-Right symmetric extension of the Standard Model with Higgs isospin triplets can provide neutrino masses via a TeV scale seesaw mechanism. The doubly charged Higgs bosons $H^{\pm\pm}_L$ and $H^{\pm\pm}_R$ induce lepton flavour violating decays $\tau^\pm \to lll$ at tree-level via a coupling which is related to the Maki-Nakagawa-Sakata matrix $(V_{\rm MNS})$. We study the magnitude and correlation of $\tau^\pm \to lll$ and $\mu\to e\gamma$ with specific assumptions for the origin of the large mixing in $V_{\rm MNS}$ while respecting the stringent bound for $\mu\to eee$. It is also shown that an angular asymmetry for $\tau^\pm \to lll$ is sensitive to the relative strength of the $H^{\pm\pm}_L$ and $H^{\pm\pm}_R$ mediated contributions and provides a means of distinguishing models with doubly charged Higgs bosons.' --- =10000 [**hep-ph/yymmnnn\ KEK–TH–1106\ October 2006**]{} 1.cm [ ]{} 0.8cm [**.3cm A.G. Akeroyd$^{1,3,4}$[^1], Mayumi Aoki$^1$[^2] and Yasuhiro Okada$^{1,2}$[^3]** ]{} [***1: Theory Group, KEK, 1-1 Oho,\ Tsukuba, Ibaraki, 305-0801 Japan*** ]{} [***2: The Graduate University for Advanced Studies (Sokendai),\ 1-1 Oho, Tsukuba, Ibaraki, 305-0801 Japan*** ]{} [***3: Department of Physics,\ National Cheng Kung University, Tainan, 701 Taiwan*** ]{} [***4: National Center for Theoretical Sciences,\ Taiwan\ ***]{} 0.5cm 1.0cm [**PACS index: 13.35.-r, 14.60.Pq, 14.80.Cp**]{}\ [**Keywords : Higgs boson, Neutrino mass and mixing, Lepton flavour violation**]{} Introduction ============ In recent years there has been increasing evidence that neutrinos oscillate and possess a small mass below the eV scale [@Fukuda:1998mi]. This revelation necessitates physics beyond the Standard Model (SM), which could manifest itself at the CERN Large Hadron Collider (LHC) and/or in low energy experiments which search for lepton flavour violation (LFV) [@Kuno:1999jp]. Consequently, models of neutrino mass generation which can be probed at present and forthcoming experiments are of great phenomenological interest. Massive neutrinos may be accommodated by adding a $SU(2)_L$ singlet (“sterile”) right-handed neutrino $\nu_R$ to the SM Lagrangian together with the corresponding Dirac mass term. In order to obtain masses of the eV scale, the Yukawa coupling of the neutrinos to the SM Higgs boson would need to be at least 6 orders of magnitude smaller than the electron Yukawa coupling. Moreover, there would be no observable phenomenological consequences aside from neutrino oscillations. More appealing frameworks for neutrino mass generation can be found if neutrinos are of the Majorana type. The celebrated seesaw mechanism [@Minkowski:1977sc] ascribes the smallness of the neutrino mass to the large scale of the unobserved [*heavy*]{} right-handed neutrinos ($N_R$). Dirac mass terms of the order of the top quark mass would require $N_R\sim 10^{14}$ GeV, a scale which is far beyond the reach of any envisioned collider. Reducing the scale of $N_R$ to the order of a few TeV would require Dirac mass terms of the order of MeV, which constitutes a mild fine-tuning with respect to magnitude of the charged lepton masses. However, such a choice would permit the mechanism to be probed at future high-energy colliders. 
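For orientation, a rough numerical sketch of this statement, using the naive one-generation seesaw relation $m_\nu \sim m_D^2/M_R$ with purely illustrative numbers:

```python
# Naive one-generation seesaw estimate: m_nu ~ m_D^2 / M_R (all masses converted to eV).
m_D_eV = 1.0e6  # Dirac mass term of O(MeV), as quoted in the text
for M_R_TeV in (1, 3, 10):
    m_nu_eV = m_D_eV**2 / (M_R_TeV * 1.0e12)
    print(f"M_R = {M_R_TeV:>2} TeV  ->  m_nu ~ {m_nu_eV:.2f} eV")
```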
This “low energy seesaw mechanism” may be implemented in Left-Right (LR) symmetric models [@Pati:1974yy] with Higgs triplet representations [@Mohapatra:1979ia], in which the mass matrix for $N_R$ is given by the product of a new Yukawa coupling $h_{ij}$ and a triplet vacuum expectation value (vev) $v_R$. The mass scale of the other new particles in the model (e.g. the new gauge bosons $W_R,Z_R$ and doubly charged Higgs bosons $H^{\pm\pm}_{L,R}$) is also determined by $v_R$, resulting in a rich phenomenology at the LHC if $v_R\sim {\rm TeV}$ [@Gunion:1989in],[@Deshpande:1990ip]. The Yukawa coupling $h_{ij}$ mediates many low energy LFV processes. In this paper we consider the impact of $H^{\pm\pm}_{L,R}$ on the branching ratio (BR) of the LFV decays $\tau\to l_i l_j l_k$ and $\mu\to e\gamma$ in the context of the LR symmetric model [@Lim:1981kv],[@Cirigliano:2004mv]. Experimental prospects for $\mu\to e\gamma$ are bright with the imminent commencement of the MEG experiment which will probe BR$\sim 10^{-13}\to 10^{-14}$, two to three orders of magnitude beyond the current upper limit [@Grassi:2005ac]. At the $e^+e^-$ B factories limits of the order BR($\tau\to l_i l_j l_k) <10^{-7}$ with $\sim 90$ fb$^{-1}$ [@Yusa:2004gm],[@Aubert:2003pc] have been obtained utilizing direct $e^+e^-\to \tau^+\tau^-$ production. Simulations of the detection prospects at a proposed high luminosity $e^+e^-$ B factory with ${\cal L}$=5$\to 50$ ab$^{-1}$ anticipate sensitivity to BR$\sim 10^{-8}\to 10^{-9}$ [@Hashimoto:2004sm]. Additional searches can be performed at the LHC where $\tau$ leptons are copiously produced from the decays of $W,Z,B,D$, with anticipated sensitivities to BR$\sim 10^{-8}$ [@Santinelli:2002ea]. In the LR symmetric model $H^{\pm\pm}_{L,R}$ mediate $\tau\to l_i l_j l_k$ at [*tree-level*]{} due to an effective 4-Fermi charged lepton interaction proportional to $h_{\tau i}^\ast h_{jk}/M^2_{H^{\pm\pm}}$. Hence such a model can comfortably accommodate BRs of order $10^{-7}\to 10^{-9}$ which will be probed at current and forthcoming experiments. For the loop induced decays $\mu\to e\gamma$ and $\tau\to l\gamma$ the dominant contribution in the LR symmetric model originates from diagrams involving $H^{\pm\pm}_{L,R}$. In the LR model one has BR($\tau\to lll)\gg$ BR$(\tau\to l\gamma)$, which contrasts with the general expectation BR$(\tau\to l\gamma)\gg $BR($\tau\to lll$) for models in which the tree-level $\tau\to l_il_jl_k$ interaction is absent (for scenarios where BR$(\tau\to l\gamma)\sim $BR($\tau\to lll$) is possible see [@Dedes:2002rh]). Due to the larger backgrounds for the search for $\tau\to l\gamma$, the experimental sensitivity to BR$(\tau\to l\gamma)$ at the $e^+e^-$ B factories is expected to be inferior to that for $\tau\to l_il_jl_k$ [@Hashimoto:2004sm]. Consequently the hierarchy BR($\tau\to lll)\gg$ BR$(\tau\to l\gamma)$ in the LR model affords more promising detection prospects. The presence of the above tree-level 4-Fermi interaction would also mediate the decay $\mu\to eee$ for which there is a strict bound ($< 10^{-12}$ [@Bellgardt:1987du]) at least three orders of magnitude stronger than the anticipated experimental sensitivity to $\tau\to lll$. Hence obtaining BR($\tau\to lll)> 10^{-9}$ together with compliance of the above bound on BR($\mu\to eee$) restricts the structure of $h_{ij}$. 
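To give a feeling for the numbers involved, the sketch below uses the rough single-$H^{\pm\pm}$ tree-level estimate BR$(\tau\to lll)\approx |h_{\tau l}^\ast h_{ll}|^2/(4G_F^2M_{H^{\pm\pm}}^4)\times$BR$(\tau\to\mu\nu\bar{\nu})$ with illustrative couplings and masses; this is only an approximation to the full expression used later, which includes both $H^{\pm\pm}_L$ and $H^{\pm\pm}_R$ and the symmetry factor for identical final-state leptons:

```python
G_F = 1.166e-5        # Fermi constant in GeV^-2
BR_TAU_MUNUNU = 0.17  # BR(tau -> mu nu nu), approximate

def br_tau_3l(h_product, M_H_GeV):
    """Rough single doubly-charged-Higgs tree-level estimate (see caveats above)."""
    return h_product**2 / (4.0 * G_F**2 * M_H_GeV**4) * BR_TAU_MUNUNU

for M_TeV in (1, 3, 5):
    print(f"M_H = {M_TeV} TeV, |h* h| = 0.05  ->  BR(tau -> lll) ~ "
          f"{br_tau_3l(0.05, 1.0e3 * M_TeV):.1e}")
```

With these inputs the estimate ranges from a few $\times 10^{-7}$ down to about $10^{-9}$, i.e. precisely the window probed by the B factories and the LHC.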
In the LR model we consider a specific ansatz for $h_{ij}$ motivated by the observed pattern of the neutrino mixing angles, and perform a numerical study of the magnitude of BR$(\tau\to l_il_jl_k)$. Observation of BR($\tau\to lll)> 10^{-9}$ would be a spectacular signal of LFV and could be readily accommodated by a tree-level 4-Fermi coupling such as $h_{\tau i}^\ast h_{jk}/M^2_{H^{\pm\pm}}$ in the LR model. However, other models which contain a $H^{\pm\pm}_L$ or $H^{\pm\pm}_R$ with an analogous $h_{ij}$ leptonic Yukawa coupling can cause a similar enhancement of BR($\tau\to lll)$. If a signal were established for $\tau\to lll$ the angular distribution of the leptons can act as powerful discriminator of the models [@Kitano:2000fg]. Such studies could be carried out at a high luminosity $e^+e^-$ B factory [@Hashimoto:2004sm]. Our work is organized as follows. In section 2 the manifest LR symmetric model is briefly reviewed. In section 3 a numerical analysis of BR$(\tau\to lll)$ and BR($\mu\to e\gamma$) is presented. In section 4 we discuss how to discriminate the LR model from other models which contain a $H^{\pm\pm}$ by means of angular asymmetries in the LFV decays. Conclusions are contained in section 5. Left-Right Symmetric Model ========================== The Left-Right (LR) symmetric model is an extension of the Standard Model (SM) based on the gauge group $SU(2)_R \otimes SU(2)_L \otimes U(1)_{B-L}$. The LR symmetric model has many virtues, e.g. i) the restoration of parity as an original symmetry of the Lagrangian which is broken spontaneously by a Higgs vev, and ii) the replacement of the arbitrary SM hypercharge $Y$ by the theoretically more attractive $B-L$. Although the Higgs sector is arbitrary, a theoretically and phenomenologically appealing way to break the $SU(2)_R$ gauge symmetry is by invoking Higgs isospin triplet representations. Such a choice conveniently allows the implementation of a low energy seesaw mechanism for neutrino masses. The vev of the neutral member of the right-handed triplet ($v_R$) can be chosen to give a TeV scale Majorana mass term for the right-handed neutrinos, while the bidoublet Higgs fields provide the small Dirac mass, leading to light masses for the observed neutrinos. The above LR symmetric model predicts several new particles, among which the new gauge bosons $W_R$, $Z_R$ and doubly charged scalars $H^{\pm\pm}_L$, $H^{\pm\pm}_R$ [@Gunion:1996pq] have impressive discovery potential at hadron colliders if $v_R={\cal O}$ (1-10) TeV due to their large cross-sections and/or low background signatures. Experiments which search for LFV decays of $\mu$ and $\tau$ provide a complementary way of probing the LR symmetric model. A comprehensive study of $\mu\to e\gamma$, $\mu\to eee$ and $\mu\to e $ conversion in the present model was performed in [@Cirigliano:2004mv]. However, the recent termination of the MECO ($\mu\to e$ conversion) experiment together with no immediate improvement for the SINDRUM collaboration limit BR($\mu\to eee)<10^{-12}$ [@Bellgardt:1987du] leaves $\mu\to e\gamma$ as the only means of testing the LR model in LFV processes involving $\mu$ in the near future [@Grassi:2005ac]. An alternative probe of the LR model which was not developed in [@Cirigliano:2004mv] are the LFV decays $\tau\to l_il_jl_k$. Although the experimental sensitivity is inferior to that for the above processes involving $\mu$, the decays $\tau\to l_il_jl_k$ have the virtue of probing many combinations of the triplet Higgs-lepton-lepton Yukawa $h_{ij}$ couplings. 
We introduce various strutures for the arbitrary $h_{ij}$ motivated by the currently preferred bi-large mixing form of the Maki-Nakagawa-Sakata [@Maki:1962mu] (MNS) matrix. Should a signal for $\tau\to lll$ and/or $\mu\to e\gamma$ be observed, the angular distribution of the final state leptons can provide a means of distinguishing models with $H^{\pm\pm}$. We now briefly introduce the LR model and present the relevant formulae for the numerical discussion in Section 3. For a detailed introduction we refer the reader to [@Duka:1999uc]. The quarks and leptons are assigned to multiplets with quantum numbers ($T_L, T_R, B-L$): \[eq:matter\] Q\_[iL]{}&=&( [c]{} u\^\_i\ d\^\_i )\_[L]{}: (1/2:0:1/3),  Q\_[iR]{}=( [c]{} u\^\_i\ d\^\_i )\_[R]{}: (0:1/2:1/3)  ,\ L\_[iL]{}&=&( [c]{} \^\_i\ l\^\_i )\_[L]{}: (1/2:0:-1),  L\_[iR]{}=( [c]{} \^\_i\ l\^\_i )\_[R]{}: (0:1/2:-1) . Here $i=1,2,3$ denote generation number. The spontaneous symmetry breaking to $U(1)_{em}$ occurs through the Higgs mechanism. The Higgs sector consists of a bidoublet Higgs field, $\Phi$, and two triplet Higgs fields, $\Delta_L$ and $\Delta_R$: =( [cc]{} \_1\^0 & \_2\^+\ \_1\^- & \_2\^0 )&:& (1/2:1/2:0)  ,\ \_[L]{}=( [cc]{} \_[L]{}\^+/ & \_[L]{}\^[++]{}\ \_[L]{}\^0 & -\_[L]{}[\^+]{}/ ): (1:0:2)  , \_[R]{}&=&( [cc]{} \_[R]{}\^+/ & \_[R]{}\^[++]{}\ \_[R]{}\^0 & -\_[R]{}[\^+]{}/ ): (0:1:2)  . The vevs for these fields are as follows: &=&( [cc]{} \_1/ & 0\ 0 & \_2/ ),   \_[L]{}=( [cc]{}0& 0\ v\_[L]{}/ & 0 )  ,   \_[R]{}=( [cc]{}0& 0\ v\_[R]{}/ & 0 )  . The gauge groups $SU(2)_R$ and $U(1)_{B-L}$ are spontaneously broken at the scale $v_R$. Phenomenological considerations require $v_R \gg \kappa =\sqrt{\kappa_1^2+\kappa_2^2}\sim \frac{2M_{W_1}}{g}$ (EW scale). The vev $v_L$ does not play a role in the breaking of the gauge symmetries and is constrained to be small ($v_L < 8$ GeV) in order to comply with the measurement of $\rho=M_Z\cos\theta_W/M_W\sim 1$. The LR model predicts six neutral Higgs bosons, two singly charged Higgs bosons, and two doubly charged Higgs bosons. The Lagrangian is required to be the invariant under the discrete left-right symmetry: $Q_L \leftrightarrow Q_R$ , $L_L \leftrightarrow L_R$ , $\Delta_L \leftrightarrow \Delta_R$ , $\Phi \leftrightarrow \Phi^\dag$. This ensures equal gauge couplings ($g_L=g_R=g$) for $SU(2)_L$ and $SU(2)_R$. The leptonic Yukawa interactions are as follows: -[L]{}\^[yuk]{}=|L\_L(y\_D+y\_D)L\_R +iy\_M(L\_L\^TC\_2\_LL\_L+L\_R\^TC\_2\_RL\_R)+h.c. . Here $\tilde \Phi\equiv\tau_2\Phi^\ast\tau_2$; $y_D,\tilde y_D$ are Dirac type Yukawa coupling; $y_M$ is a $3\times 3$ Majorana type Yukawa coupling matrix which will lead to Majorana neutrino masses (see below) and is the primary motivation for introducing the Higgs triplet representations $\Delta_L$ and $\Delta_R$. Invariance under the left-right discrete symmetry gives $y_D=y_D^\dag$, $\tilde y_D=\tilde y_D^\dag$ and $y_M=y_M^T$. Redefinitions of the fields $L_L$ and $L_R$ enable $y_M$ to be taken as real, positive and diagonal, while maintaining $y_D=y_D^\dag$, $\tilde y_D=\tilde y_D^\dag$. Hereafter $y_M$ is taken in this diagonal basis. The $3\times 3$ mass matrix for charged leptons is: M\_l= (y\_D\_2+y\_D\_1) , which is diagonalized by the unitary matrices, $V_L^l$ and $V_R^l$, as: V\_L\^[l]{} M\_lV\_R\^l=diag(m\_e, m\_, m\_) . The Lagrangian for the neutrino masses is: -[L]{}\_[mass]{}=(|n\_L M\_n\_R+|n\_RM\_\^n\_L) , where $n_L=(\nu_L,\nu_R^c)^T$ and $n_R=(\nu_L^c, \nu_R)^T$ with the definition of $\nu_R^c =C(\bar {\nu_R})^T$. 
The $6 \times 6$ mass matrix for the neutrinos can be written in the block form: M\_=( [cc]{} M\_L & m\_D\ m\_D\^T & M\_R ). \[eq:Mnu\] Each entry is given by: m\_D &=&[1]{}(y\_D\_1+[y\_D \_2]{}) ,\ M\_R&=&y\_M v\_R  ,\ M\_L&=&y\_M v\_L  . The neutrino mass matrix is diagonalized by a $6\times 6$ unitary matrix $V$ as $V^TM_\nu V=M_\nu^{diag}=diag(m_1,m_2,m_3,M_1,M_2,M_3)$, where $m_i$ and $M_i$ are the masses for neutrino mass eigenstates: VV\_[L]{}\^ &V\_[L]{}\^[’]{}\ V\_[R]{}\^[’]{} &V\_[R]{}\^  . The small neutrino masses $m_i$ are generated by the Type II seesaw mechanism. Obtaining eV scale neutrino masses with $y_M={\cal O} (0.1-1)$ requires $M_L$ (and consequently $v_L$) to be eV scale. However, the minimization of the Higgs potential leads to a relationship among the vevs, $v_L\sim \gamma\kappa^2/v_R$, where $\gamma$ is a function (introduced in [@Deshpande:1990ip]) of scalar quartic couplings $\beta_i$ and $\rho_i$. For natural values of $\beta_i$ and $\rho_i$ one has $\gamma\sim 1$ and thus $v_L$ would be ${\cal O} (1-10)$ GeV for $v_R\sim$ TeV. Reducing $v_L$ to the eV scale to order to comply with the observed neutrino mass scale would require severe fine-tuning $\gamma < 10^{-7}$. In LR model phenomenology it is standard to set $\beta_i=0$ (and hence $\gamma=0$) which ensures $v_L=0$. Henceforth we will take $v_L=0$ for which the masses of the light neutrinos arise from Type I seesaw mechanism and are approximately $m_i\sim m_D^2/M_R$. [^4] In order to realize the low energy ($\sim {\cal O}(1-10)$ TeV) scale for the right-handed Majorana neutrinos, the Dirac mass term $m_D$ should be [$\cal O$]{} (MeV), which for $\kappa_2\sim 0$ corresponds to $y_D\sim 10^{-6}$ (i.e. comparable in magnitude to the electron Yukawa coupling). There are two physical singly charged Higgs bosons, $H^\pm_1$ and $H^\pm_2$, which are linear combinations of the singly charged scalar fields residing in $\Phi,\Delta_L$ and $\Delta_R$. The leptonic couplings $\tilde y_D$ of $H_2^\pm$ (which is essentially composed of $\phi_1^\pm$ and $\phi_2^\pm$) are of order $m_l/m_W$ and can be neglected compared to leptonic Yukawa couplings for the triplet field $H^\pm_1\sim \delta_L^\pm$ which are unrelated to fermion masses and may be sizeable. The interaction of $H_1^\pm$ with leptons is as follows (where $N_L=V^Tn_L$, $N_R=V^\dagger n_R$, $N=N_L+N_R=N^c$ and $l=l_R+l_L$ are the neutrino and charged lepton fields respectively in the mass eigenstate basis, and $P_{L,R}=(1\mp \gamma_5)/2$): \_[H\_1\^]{} &=&  . \[eq:singly\] The LFV interactions of leptons with doubly charged Higgs bosons (where $H^{\pm\pm}_L=\delta^{\pm\pm}_L, H^{\pm\pm}_R=\delta^{\pm\pm}_R$ for $v_L=0$) are given by: \_[H\^\_[L,R]{}]{} &=& \[eq:doubly\]   . The LFV coupling matrices in Eq.(\[eq:singly\]) and Eq.(\[eq:doubly\]) are respectively given by: h&=& ( [c]{} V\_L\^[T]{}\ V\_L\^[’ T]{} ) M\_RV\_L\^l ,\ h\_L&=& V\_L\^[l T]{}M\_RV\_L\^l , \[eq:hL\]\ h\_R&=& V\_R\^[l T]{}M\_RV\_R\^l . \[eq:hR\] Note that $\tilde h$ is a $6\times 3$ matrix and $h_L$ and $h_R$ are $3\times 3$ matrices. The mass matrix for the charged vector bosons is: \_W\^2=[g\^24]{}( [cc]{} \^2 & -2\_1\_2\ -2\_1\_2 & \^2+2v\_R\^2 )  . This is diagonalized via the mixing angle $\xi=-\tan^{-1}\left(2\kappa_1\kappa_2/v_R^2\right)/2$ with the eigenvalues $M_{W_{1,2}}^2=g^2\left(\kappa^2+v_R^2 \mp \sqrt{v_R^4+4\kappa_1^2\kappa_2^2}\right)/4$: W\_L = W\_1 + W\_2, W\_R = - W\_1 + W\_2  . 
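As a quick numerical cross-check of the diagonalization above (a sketch with illustrative inputs only: $g=0.65$, $\kappa_1=246$ GeV, $\kappa_2=10$ GeV and $v_R=5$ TeV are assumptions, not fits):

```python
import numpy as np

# Charged gauge boson mass-squared matrix, (g^2/4) [[kappa^2, -2 k1 k2], [-2 k1 k2, kappa^2 + 2 v_R^2]],
# and the closed-form eigenvalues M^2_{W_{1,2}} = g^2 (kappa^2 + v_R^2 -/+ sqrt(v_R^4 + 4 k1^2 k2^2)) / 4.
g, k1, k2, vR = 0.65, 246.0, 10.0, 5000.0   # GeV; illustrative values
kappa2 = k1**2 + k2**2
M2 = (g**2 / 4.0) * np.array([[kappa2,          -2.0 * k1 * k2],
                              [-2.0 * k1 * k2,   kappa2 + 2.0 * vR**2]])
eig = np.sort(np.linalg.eigvalsh(M2))
root = np.sqrt(vR**4 + 4.0 * k1**2 * k2**2)
closed = np.sort([g**2 * (kappa2 + vR**2 - root) / 4.0,
                  g**2 * (kappa2 + vR**2 + root) / 4.0])
print("M_W1, M_W2 [GeV]:", np.round(np.sqrt(eig), 1))   # ~80 GeV and ~2.3 TeV here
print("matches the closed form:", np.allclose(eig, closed))
```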
The strong experimental constraint on the mixing angle ($\xi< 10^{-3}$) [@Yao:2006px] enforces one of $\kappa_1,\kappa_2$ to be small if $v_R={\cal O}$ (TeV). Neglecting such small mixing between $W_1$ and $W_2$, the LFV interactions with the gauge bosons are as follows: \_[CC]{} &=& { l W\_[2  ]{}\^+ + N W\_[2  ]{}\^-\ &&+ l W\_[1  ]{}\^+ + N W\_[1  ]{}\^- }  , where $K_L$ and $K_R$ are the $6\times 3$ LFV coupling matrices which are respectively written as: K\_L= [c]{} V\_L\^ V\_L\^l\ V\_L\^[’]{} V\_L\^l [c]{} V\_[MNS]{}\^\ V\_L\^[’]{} V\_L\^l  ,     K\_R= [c]{} V\_R\^[’]{} V\_R\^l\ V\_R\^ V\_R\^l  . \[eq:K\] The upper $3\times 3$ block in $K_L$ can be identified as the hermitian conjugate of the MNS matrix $V_{\rm MNS}$ on neglecting ${\cal O}(\frac{m_D}{M_R})$ contributions. Manifest LR Symmetric Model --------------------------- In the LR model the mixing matrices for the left and right fermions are in general not equal e.g. for the lepton sector $V_L^l\ne V_R^l$. The special case of $V_L^l= V_R^l$ is referred to as the “Manifest LR symmetric model” and arises in either of the following scenarios: i) both $\kappa_1$ and $\kappa_2$ are real, or ii) one of $\kappa_1$ and $\kappa_2$ is identically zero. In our numerical analysis we will set $\kappa_2=0$, which has the virtue of eliminating $W_L-W_R$ mixing and in some cases (for specific forms of the Higgs potential) is required to suppress FCNCs and preserve unitarity in the LR model [@Gunion:1989in]. In the Manifest LR symmetric model one has the additional constraint $m_D=m_D^\dag$ which must be respected when evaluating the magnitude of the LFV processes. A further important consequence of the Manifest LR symmetric model is the relationship $h_L=h_R\equiv h$ which can be derived from Eq.(\[eq:hL\]) and Eq.(\[eq:hR\]). Using $K_L$ and $K_R$ in Eq.(\[eq:K\]), the LFV couplings for the interactions of leptons with singly and doubly charged Higgs bosons are as follows: = K\_L\^\*h ,       h = K\_R\^T M\_\^[diag]{} K\_R  . \[eq:hcouplings\] At leading order in $m_D/M_R$, one may express $h$ by: h\_[i j]{} = \_[n=[heavy]{}]{} ( K\_R )\_[n i]{} ( K\_R )\_[n j]{}  , \[eq:hcouplings1\] x\_n = ([M\_nv\_R]{})\^2  . Effective Lagrangian and branching ratios for the LFV processes --------------------------------------------------------------- ### 4-lepton interactions The effective Lagrangian for 4-lepton interactions is as follows: &=& (h\^)\_[mi]{}(h)\_[jk]{} { ( l\_m\^P\_Ll\_[k]{} )( l\_i\_P\_Ll\_[j]{} ) + ( l\_m\^P\_Rl\_[k]{} )( l\_i\_P\_Rl\_[j]{} ) }  . The branching ratio for $\tau \to l_il_jl_k$ is given by: BR(l\_il\_jl\_k)=|h\^\_[l\_i]{}h\_[l\_jl\_k]{}|\^2 (+ )BR() . \[BRtaulll\] Here $S$=1 (2) for $j=k$ ($j\ne k$). In the LR model one may express $h_{ij}$ in terms of $K_RK_R$ via Eq.(\[eq:hcouplings1\]). However, the above form Eq.(\[BRtaulll\]) can be applied to other models with $H_L^{\pm\pm}$ and $H_R^{\pm\pm}$ for which no identity Eq.(\[eq:hcouplings1\]) exists. ### $\mu \to e\gamma$ The effective Lagrangian for $\mu \to e \gamma$ is as follows: =-{m\_A\_R\^P\_Le F\_ +m\_A\_L\^P\_Re F\_+h.c. }  . $A_L$ receives contributions from $W_2-N_i$ and $H^{\pm\pm}_R$ and is given by [@Cirigliano:2004mv]: A\_L & = & \_[n=[heavy]{}]{} ( K\_R\^)\_[e n]{} ( K\_R )\_[n ]{}    , \[mu\_ey\_AL\] where S\_3 (x)=-+{(1-x+x)+1  } . $A_R$ receives contributions from $H^{\pm\pm}_L$ and $H_1^\pm$ and is given explicitly by [@Cirigliano:2004mv]: A\_R & = & \_[n=[heavy]{}]{} ( K\_R\^)\_[e n]{} ( K\_R )\_[n ]{}   x\_n  . 
\[mu\_ey\_AR\] The branching ratio for $\mu \to e\gamma$ is given by: [^5] BR(e) =384\^2e\^2(|A\_L|\^2+|A\_R|\^2) . Numerical analysis for BR($\tau\to lll)$ and BR($\mu\to e\gamma$) ================================================================= The LFV decays $\tau\to lll$ are the analogy of $\mu\to eee$ and provide sensitive probes of the $h_{ij}$ couplings in the LR model. Mere observation of such a decay would constitute a spectacular signal of physics beyond the SM. There are six distinct decays for $\tau^+\to lll$ (likewise for $\tau^-$): $\tau^+\to \mu^+\mu^+\mu^-$, $\tau^+\to e^+e^+e^-$, $\tau^+\to \mu^+\mu^+e^-$, $\tau^+\to \mu^+\mu^-e^+$, $\tau^+\to e^+e^+\mu^-$, $\tau^+\to e^+e^-\mu^+$. Searches for all six decays have been performed by BABAR (91 fb$^{-1}$) [@Aubert:2003pc] and BELLE (87 fb$^{-1}$) [@Yusa:2004gm]. Upper limits of the order BR($\tau\to lll) < 2\times10^{-7}$ were derived. Although these limits are several orders of magnitude weaker than the bound BR$(\mu\to eee)< 10^{-12}$, they have the virtue of constraining many combinations of the $h_{ij}$ couplings in the context of the LR model. Moreover, greater sensitivity to BR($\tau\to lll$) is expected from forthcoming experiments. A proposed Super B Factory anticipates sensitivity to BR($\tau\to lll)\sim 10^{-8}$ and $10^{-9}$ for 5 ab$^{-1}$ and 50 ab$^{-1}$ respectively [@Hashimoto:2004sm]. At the LHC, $\tau$ can be copiously produced from several sources (from $B/D$ decay and direct production via $pp\to W\to \tau\nu$, $pp\to Z\to \tau^+\tau^-$) and sensitivity to BR$(\tau\to lll)>10^{-8}$ is claimed [@Santinelli:2002ea]. Such low BRs can be reached due to the very small SM background. In contrast, the background to $\tau\to l\gamma$ is non-negligible and might prevent the B factories from probing below BR$\sim 10^{-8}$. In addition, one expects BR$(\tau\to lll) \gg {\rm BR}(\tau\to l\gamma)$ in the LR symmetric model and thus the former decay is the more effective probe. We note that other rare LFV $\tau$ decays involving quark final states will not arise in the LR model since $h_{ij}$ mediates processes involving leptons only. Hence we shall only focus on $\tau\to lll$. In our numerical analysis only the stringent constraint from $\mu\to eee$ is imposed. Other constraints on $h_{ij}$ (e.g. the anomalous magnetic moment of $\mu$ ($g-2$), Bhabha scattering and other LFV processes - see [@Cuypers:1996ia]) are considerably weaker and are neglected. The magnitude of $h_{ij}$ cannot be predicted from the neutrino oscillation data alone since it is related to the physics at SU(2$)_R$ breaking scale. However, $h_{ij}$ also crucially depends on the mixing matrix in the charged lepton sector $V^l_L$: h=V\_L\^[lT]{}M\_RV\_L\^l  . \[h\_coup\] We have neglected the ${\cal O}(\frac{m_D}{M_R})$ contribution and used the convention that $M_R$ is diagonal and positive. Since $V_L$ also enters the MNS matrix: V\_[MNS]{}=V\_L\^[l]{}V\_L\^ , \[MNS:V\_L\] we will introduce 4 distinct structures for $V^l_L$ motivated by the bi-large mixing form of $V_{\rm MNS}$ and perform a quantitative analysis of the magnitude of $h_{ij}$ (and consequently BR($\tau\to lll$)) in the LR model. 
In order to establish our formalism we first explicitly present the standard parametrisation of the MNS matrix: V\_[MNS]{}&=& ( [ccc]{}1&0&0\ 0&c\_[23]{}&s\_[23]{}\ 0&-s\_[23]{}&c\_[23]{}\ ) ( [ccc]{}c\_[13]{}&0&s\_[13]{}e\^[-i]{}\ 0&1&0\ -s\_[13]{}e\^[i]{}&0&c\_[13]{}\ ) ( [ccc]{}c\_[12]{}&s\_[12]{}&0\ -s\_[12]{}&c\_[12]{} &0\ 0&0&1 )\ &=&U(\_[23]{})U(\_[13]{})U(\_[12]{}) . \[eq:MNS\] Here $s_{ij}$ $(c_{ij})$ represents $\sin\theta_{ij}$ $(\cos\theta_{ij})$ and the unitary matrices $U(\theta_{23})$, $U(\theta_{13})$, and $U(\theta_{12})$ are responsible for mixing between 2-3, 1-3, and 1-2 elements respectively. The angles $\theta_{12}$ and $\theta_{23}$ are measured with relatively good accuracy in the solar and atmospheric neutrino oscillation experiments respectively. The solar and KamLAND reactor neutrino oscillation experiments [@Aharmim:2005gt],[@Araki:2004mb] provide the following constraints on the mixing angle $\theta_{12}$ and the mass-squared difference of $\Delta m_{12}^2=m_2^2-m_1^2$: $ \sin^2\theta_{12}\sim 0.31\ , ~\Delta m_{12}^2\sim 8\times 10^{-5} ~{\rm eV}^2\ . $ The mixing angle $\theta_{23}$ and the mass-squared difference $\Delta m_{13}^2$ measured in the atmospheric neutrino oscillation are as follows [@Ashie:2005ik],[@Aliu:2004sq]: $ \sin^22\theta_{23}\sim 1.0\ , ~|\Delta m_{13}^2|\sim 2.6\times 10^{-3}~{\rm eV}^2\ . $ An upper bound on the remaining angle $\theta_{13}$ has been obtained from the CHOOZ and Palo Verde reactor neutrino oscillation experiments [@Apollonio:2002gd],[@Boehm:2001ik]: $ \sin\theta_{13} \lsim 0.2\ . $ The ignorance of the sign of $\Delta m_{13}^2$ and the absolute neutrino mass scale leads to the following three neutrino mass patterns which are consistent with current oscillation data: Normal hierarchy (NH) $m_1 < m_2 \ll m_3$; Inverted hierarchy (IH) $m_3 \ll m_1 < m_2$; Quasi degeneracy (DG) $m_1 \sim m_2 \sim m_3$. Data from WMAP [@Spergel:2006hy] provides the following constraint on the sum of the light neutrino masses: $\sum_{i=1,2,3} m_i < 2$ eV. However, LFV processes in the LR model do not depend sensitively on the neutrino mass pattern. In order to perform our numerical analysis of the magnitude of BR$(\tau\to lll)$ and BR$(\mu\to e\gamma)$ we introduce the following four specific cases: CASE $~~~V_L^{l\dag}$ $V_L^\nu $ ------ ----------------------------------- --------------------------------- I $-iV_{\rm MNS}$ $iI$ II $ -iU(\theta_{23})U(\theta_{13})$ $iU(\theta_{12})$ III $-iU(\theta_{23})$ $iU(\theta_{13})U(\theta_{12})$ IV $-iI$ $iV_{\rm MNS}$ Here $I$ represents a unit matrix. In CASE I (IV) both large mixings in $V_{\rm MNS}$ originate from the charged lepton (neutrino) mixing matrix. Each case has distinct ways of satisfying the stringent bound BR($\mu\to eee)<10^{-12}$. In our numerical analysis we will assume multi-TeV scale masses for $H^{\pm\pm}_L$ and $H^{\pm\pm}_R$ which renders direct detection improbable at the LHC. For $M_{H^{\pm\pm}_L}$,$M_{H^{\pm\pm}_R}<1 $ TeV the LHC has excellent discovery prospects in the channels $H^{\pm\pm}\to l^\pm_i l^\pm_j$ [@Gunion:1996pq] (especially for $l_{i,j}=e,\mu$). Observation of $H^{\pm\pm}_{L,R}$ (which would provide a measurement of $M_{H^{\pm\pm}}$) together with a signal for $\tau\to l_il_jl_k$ would permit a measurement of the coupling combination $|h_{\tau i}^\ast h_{jk}|$. We wish to study the magnitude of BR($\tau\to lll$) and BR($\mu\to e\gamma$) in the parameter space with a phenomenologically acceptable neutrino mass matrix. 
In the LR model the light neutrino masses arise from the seesaw mechanism and are approximately as follows: m\_=-m\_DM\_R\^[-1]{}m\_D\^T . At the leading order, $m_\nu$ is diagonalized by $V_L^\nu$: m\_&&V\_L\^m\_\^[diag]{}V\_L\^[T]{} , \[seesaw\] where $m_\nu^{diag}=diag(m_1,m_2,m_3)$. The Dirac mass matrix $m_D$ depends on an arbitrary Yukawa coupling $y_D$. As advocated in [@Casas:2001sr], it is beneficial to parametrize a general seesaw type matrix such that the arbitrary $m_D$ is replaced by potential observables i.e. the heavy and light neutrino masses. We will apply the formalism of [@Casas:2001sr] in our numerical analysis, with the additional constraint that the manifest LR model requires the Dirac mass matrix for both the neutrinos and charged leptons to be a hermitian matrix: m\_D=m\_D\^ . \[eq:mD\_hermitian\] We introduce a complex orthogonal matrix $R$ which satisfies $R^TR=1$ and parametrize the neutrino Dirac mass matrix $m_D$ as follows: m\_D=-iV\_[L]{}\^R\^T . \[eq:mD\] The LFV processes are evaluated in the parameter space where there exists an $R$ matrix which satisfies the condition Eq.(\[eq:mD\_hermitian\]). This condition guarantees a phenomenologically acceptable neutrino mass matrix and perturbative Yukawa coupling $y_D$. In our calculation, the Majorana phases in the MNS matrix are neglected while the CP conserving cases for the Dirac phase ($\delta=0$ or $\pi$) are taken into account for simplicity. We will comment on the case of the CP violating Dirac phase in section 3.5. Neglecting CP violation, the condition Eq.(\[eq:mD\_hermitian\]) requires that $V_L^\nu$ is purely imaginary, while $R$ is a real matrix. We stress that the LFV processes do not depend on the actual structure of $R$. However, proving the existence of an $R$ matrix for each of the four cases (I, II, III, IV) ensures the validity of our numerical analysis. Numerical results: CASE I ------------------------- The bi-large mixing originates from the charged lepton sector. We parametrize the $R$ matrix as follows: R=U(\^R\_[23]{})U(\^R\_[13]{})U(\^R\_[12]{}) . \[eq:Rmatrix\] The explicit form in CASE I is given by: \^R\_[12]{}=\^R\_[23]{}=\^R\_[13]{}=0 , which leads to $R=I$. In this case $m_D=\sqrt{m_\nu^{diag}}\sqrt{M_R}$, and thus $m_D=m_D^\dag$ is automatically satisfied. 
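A minimal numerical sketch of this CASE I construction (illustrative light and heavy spectra; the Casas-Ibarra-type form $m_D=-iV_L^\nu\sqrt{m_\nu^{diag}}\,R^T\sqrt{M_R}$ is assumed here, which for $R=I$ and $V_L^\nu=iI$ reduces to $\sqrt{m_\nu^{diag}}\sqrt{M_R}$ as stated in the text):

```python
import numpy as np

m_light = np.diag([1.0e-3, 9.0e-3, 5.1e-2])   # eV, a normal-hierarchy-like spectrum (assumption)
M_R     = np.diag([1.0e12, 2.0e12, 4.0e12])   # eV, i.e. heavy neutrinos at 1, 2, 4 TeV (assumption)
V_L_nu  = 1j * np.eye(3)                      # CASE I
R       = np.eye(3)                           # CASE I

# Element-wise sqrt is adequate here because all matrices are diagonal.
m_D = -1j * V_L_nu @ np.sqrt(m_light) @ R.T @ np.sqrt(M_R)

print("m_D hermitian (manifest LR condition):", np.allclose(m_D, m_D.conj().T))
print("seesaw reproduces V_L^nu m^diag V_L^nu^T:",
      np.allclose(-m_D @ np.linalg.inv(M_R) @ m_D.T, V_L_nu @ m_light @ V_L_nu.T))
print("m_D diagonal [MeV]:", np.round(np.real(np.diag(m_D)) / 1.0e6, 3))  # 0.03-0.45 MeV here
```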
(200,200) (70,75)[(10,20)]{} (200,200) (90,75)[(10,20)]{} (200,200) (110,75)[(10,20)]{} (200,200) (70,75)[(10,20)]{} (200,200) (90,75)[(10,20)]{} (200,200) (110,75)[(10,20)]{} (200,200) (70,75)[(10,20)]{} (200,200) (90,75)[(10,20)]{} (200,200) (110,75)[(10,20)]{} Using Eq.(\[h\_coup\]) the elements of $h_{ij}$ in CASE I are as follows: &&-v\_Rh\_[e]{}=c\_[12]{}s\_[12]{}c\_[23]{}c\_[13]{}(M\_2-M\_1)c\_[13]{}s\_[13]{}s\_[23]{}(M\_3-s\_[12]{}\^[2]{}M\_2-c\_[12]{}\^[2]{}M\_1), \[h\_emu\]\ &&-v\_Rh\_[e]{}= -c\_[12]{}s\_[12]{}s\_[23]{}c\_[13]{}(M\_2-M\_1) c\_[23]{}c\_[13]{}s\_[13]{}(M\_3-s\_[12]{}\^[2]{}M\_2-c\_[12]{}\^[2]{}M\_1),\ &&-v\_Rh\_=c\_[23]{}s\_[23]{}(c\_[13]{}\^[2]{}M\_3-c\_[12]{}\^[2]{}M\_2-s\_[12]{}\^[2]{}M\_1)c\_[12]{}s\_[12]{}s\_[13]{}((s\_[23]{}\^[2]{}-c\_[23]{}\^[2]{})M\_2+M\_1)\ &&              +c\_[23]{}s\_[23]{}s\_[13]{}\^[2]{}(s\_[12]{}\^[2]{}M\_[2]{}-c\_[12]{}\^[2]{}M\_1),\ &&-v\_Rh\_[ee]{}=c\_[13]{}\^[2]{}(c\_[12]{}\^[2]{}M\_1+s\_[12]{}\^[2]{}M\_2)+s\_[13]{}\^[2]{}M\_3 , \[h\_ee\]\ &&-v\_Rh\_=c\_[23]{}\^[2]{}(s\_[12]{}\^[2]{}M\_1+c\_[12]{}\^[2]{}M\_2)+s\_[23]{}\^[2]{}c\_[13]{}\^[2]{}M\_3 2c\_[12]{}s\_[12]{}c\_[23]{}s\_[23]{}s\_[13]{}(M\_2-M\_1)\ &&              +s\_[23]{}\^[2]{}s\_[13]{}\^[2]{}(c\_[12]{}\^[2]{}M\_1+s\_[12]{}\^[2]{}M\_2). \[h\_mumu\] Here the upper (lower) sign is for Dirac phase $\delta=0$ ($\pi$). It is clear that the off-diagonal elements in the $h$ couplings vanish when all the heavy neutrinos are degenerate in mass, i.e. $M_1=M_2=M_3$. The strict bound on BR$(\mu\to eee)$ requires $|h_{e\mu}| \ll 1$ which leads to the following conditions: &[(i)]{}& M\_1 M\_2 ,      s\_[13]{}(M\_3-M\_2) 0 , \[eq:CASEI\_i\]\ &[(ii)]{}& s\_[13]{} M\_3 c\_[12]{}s\_[12]{}(M\_2-M\_1)+s\_[13]{}(s\_[12]{}\^2M\_2+c\_[12]{}\^2M\_1),      = . \[eq:CASEI\_ii\] In (i) both terms contributing to $h_{e\mu}$ are zero, while in (ii) there a cancellation which ensures $|h_{e\mu}| \ll 1$. We show the branching ratios for all six $\tau^+ \to lll$ decays against BR$(\mu^+ \to e^+ \gamma)$ for $s_{13}=0$ in Fig.\[fig:1\]. For simplicity we assume degeneracy for the masses of the heavy particles: $M_{W_2}=M_{H_{L}^{\pm\pm}}=M_{H_{R}^{\pm\pm}}=M_{H^\pm_1}=3 {\rm ~TeV} $. In our numerical analysis we vary the heavy neutrino masses randomly in the range 1 TeV $\leq M_i \leq$ 5 TeV, with distributions that are flat on a logarithmic scale. This range is consistent with the vacuum stability condition for $v_R$ given in [@Mohapatra:1986pj]. In Fig.\[fig:1\] (a) ((b), (c)) the light and the dark points respectively denote the branching ratios for $\tau^+ \to e^-e^+e^+$ and $\tau^+ \to \mu^-\mu^+\mu^+$ ( $\tau^+\to e^-e^+\mu^+$ and $\tau^+\to \mu^-e^+e^+$, $\tau^+\to \mu^-e^+\mu^+$ and $\tau^+\to e^-\mu^+\mu^+$). We impose the experimental constraint BR$(\mu\to eee)<10^{-12}$ which prevents a large mass difference between $M_1$ and $M_2$ when $s_{13}=0$, as shown in condition (i) of Eq.(\[eq:CASEI\_i\]). Among the six $\tau^+\to lll$ decay modes the branching ratios of $\tau^+\to \mu^-\mu^+\mu^+$ and $\tau^+\to \mu^- e^+e^+$ can reach the anticipated sensitivity ($> 5\times 10^{-9}$) of a future B factory. Such branching ratios are realized when the mass splitting $M_3-M_2$ assumes larger values because $h_{\mu\tau}$ increases with it. On the other hand, the $\tau\to e$ transition is suppressed because $|h_{e\mu}|=|h_{e\tau}|$ for $s_{13}=0$, and $|h_{e\mu}|$ is necessarily small in order to comply with the severe constraint from $\mu\to eee$. 
This can be seen for the light points in Fig.\[fig:1\] (a) where BR$(\tau \to eee)$ is proportional to BR$(\mu\to eee)$. Moreover, there is a strong correlation BR$(\tau\to eee)\sim 10\times {\rm BR}(\mu\to e\gamma)$. BR($\mu\to e\gamma$) can be large as $10^{-14}$, which is within the sensitivity of MEG experiment. Figs.\[fig:2\] and \[fig:3\] show the branching ratios for $\tau \to lll$ for $s_{13}=0.2$ with the Dirac phase $\delta=0$ and $\pi$ respectively, imposing the constraint BR$(\mu\to eee)<10^{-12}$. For the other parameters we take the same values as in Fig.\[fig:1\]. When $\delta=0$ (Fig.\[fig:2\]) all LFV processes are predicted to be small because of the small mass differences between the heavy neutrinos as shown in the condition (i). However, there is still the possibility of observing $\mu\to e \gamma$ at MEG experiment. On the other hand, taking $\delta=\pi$ (Fig.\[fig:3\]) results in observable branching ratios for $\tau\to lll$. In this scenario one has $|h_{e\mu}|<|h_{e\tau}|$. Therefore the equality of $|h_{e\mu}|$ and $|h_{e\tau}|$ in Fig.\[fig:1\] is broken, which enables enhancement of BR($\tau^+ \to e^-e^+e^+$) and BR($\tau^+ \to e^-\mu^+\mu^+$) with simultaneous suppression of the $\mu \to e$ transition. Moreover, BR$(\mu\to e\gamma)$ can be large, resulting in multiple signals of LFV processes. Numerical results: CASE II --------------------------- In this case the large mixing for the atmospheric angle originates from the charged lepton sector, while the large solar angle originates from the neutrino sector. The $R$ matrix (given in Eq.(\[eq:Rmatrix\])) which satisfies the condition $m_D=m_D^\dag$ is: \_[12]{}\^R&=&\_[12]{}  ,\ \_[23]{}\^R&=&\_[13]{}\^R=0 . The explicit form for $h_{ij}$ is obtained from Eq.(\[h\_emu\]) - (\[h\_mumu\]) by taking $s_{12}=0$. The condition for suppressing $|h_{e\mu}|$ is: s\_[13]{}(M\_3-c\_[12]{}\^2M\_1) 0  , \[II\_i\] which requires $s_{13}\simeq 0$ or $M_1\simeq M_2\simeq M_3$. In the latter case none of the $\tau\to lll$ decays are measurable, as in CASE I with $s_{13}=0.2$ and $\delta=0$. On the other hand, $h_{e\mu}$ and $h_{e\tau}$ are zero when $s_{13}=0$, which results in vanishing branching ratios for $\tau^+ \to e^-l^+l^+, \mu^- e^+ \mu^+$ and $\mu\to e\gamma$. Fig.\[fig:4\] shows the branching ratios for $\tau^+ \to \mu^-\mu^+\mu^+$ and $\tau^+ \to \mu^- e^+e^+$ against the heaviest neutrino mass $M_3$. Clearly the branching ratios increase with $M_3$ and for $M_3>3$ TeV observable rates are attained. (200,200) (45,75)[(100,20)]{}    (200,200) (70,75)[(10,20)]{} (200,200) (90,75)[(10,20)]{} (200,200) (110,75)[(10,20)]{} When $s_{13}\ne 0$ the magnitudes of each entry in $h_{ij}$ are the same for $\delta=0$ and $\pi$. We show in Fig.\[fig:5\] the branching ratios for $\tau\to lll$ decay against BR$(\mu\to e\gamma)$ for $s_{13}=0.2$. In order to satisfy the condition in Eq.(\[II\_i\]) there cannot be large splittings among the masses of the heavy neutrinos. In this scenario only BR$(\mu\to e\gamma)$ reaches the future experimental sensitivity. Numerical Results: CASE III --------------------------- In CASE III the constraint from $\mu\to eee$ is satisfied automatically since $h_{e\mu}=0$ (obtained by setting $s_{12}=s_{13}=0$ in Eq.(\[h\_emu\]). 
We obtain $R$ as follows by treating $\theta_{13}$ as a perturbation: \_[12]{}\^R&=& \_[12]{}  ,\ \_[23]{}\^R&=&s\_[13]{}  ,\ f&=&M\_3+{ (+)s\_[12]{}s\_[12]{}\^R+(+)c\_[12]{}c\_[12]{}\^R }\ &&+m\_3  ,\ g&=&c\_[12]{}s\_[12]{}{(+m\_1)c\_[12]{}\^[R2]{}-(+m\_2)s\_[12]{}\^[R2]{} }\ &&+{(m\_2s\_[12]{}\^2-m\_1c\_[12]{}\^2)+(s\_[12]{}\^2-c\_[12]{}\^2) } c\_[12]{}\^Rs\_[12]{}\^R\ &&+{(+M\_3)s\_[12]{}c\_[12]{}\^R-(+M\_3)c\_[12]{}s\_[12]{}\^R }  ,\ \_[13]{}\^R&=&\ &&  . CASE III predicts the same results as CASE II for the branching ratios of the LFV processes with $s_{13}=0$ (Fig.\[fig:4\]). Numerical results: CASE IV -------------------------- In this case the bi-large mixing originates from the neutrino sector. We checked the existence of $R$ numerically. It is clear that none of the LFV processes are observed since $h_{ij}$ is a diagonal matrix. Summary ------- The numerical results of CASES I, II, III and IV for $M_{W_2}=M_{H^{\pm\pm}_L}=M_{H^{\pm\pm}_R}=M_{H_1^\pm}=3$ TeV are qualitatively summarized in Table 1. [@[width0.8pt]{}c|ll @[width0.8pt]{} c|cc|cc@[width0.8pt]{}]{} CASE &$\sin\theta_{13}$ &$\delta$ &$\mu^+ \to e^+\gamma$ &$\tau^+ \to e^-e^+e^+ $&$\tau^+ \to e^-\mu^+\mu^+ $ &$\tau^+\to \mu^- e^+e^+$ &$\tau^+\to \mu^-\mu^+\mu^+$\ & $0$ & & $\surd$ & & & $\surd$ &$\surd$\ I& 0.2,&0 & $\surd$ & & & &\ & 0.2,&$\pi$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ &$\surd$\ & 0 & & & & & $\surd$ &$\surd$\ II& 0.2,&0 & $\surd$ & & & &\ & 0.2,&$\pi$ & $\surd$ & & & &\ & 0 & & & & & $\surd$ &$\surd$\ III& 0.2,&0 & & & & $\surd$ &$\surd$\ & 0.2, &$\pi$ & & & & $\surd$ &$\surd$\ & 0 & & &&&&\ IV& 0.2,&0 &&&&&\ & 0.2,&$\pi$ &&&&&\ \[result\_sum\] It is clear that the number of observable rates for the LFV decays $\tau\to lll$ and $\mu\to e\gamma$ depends sensitively on the origin of the bi-large mixing in $V_{\rm MNS}$, with up to five signals being possible in the optimum scenario (CASE I with $\sin\theta_{13}=0.2$, $\delta=\pi$). Hence future searches for $\tau\to lll$ and $\mu\to e\gamma$ provide insight into the structure of $h_{ij}$ in the LR model. Both BR($\tau\to lll$) and BR($\mu\to eee$) are inversely proportional to the fourth power of $M_{H^{\pm\pm}}$ (when $M_{H^{\pm\pm}} =M_{H^{\pm\pm}_L}=M_{H^{\pm\pm}_R}$) as shown in Eq.(\[BRtaulll\]). Reducing $M_{W_2}$, $M_{H^{\pm\pm}_L}$, and $M_{H^{\pm\pm}_R}$ in all the figures would cause enhancement of BR($\tau\to lll$) and BR($\mu\to eee$) while maintaining the correlation. However, BR($\mu\to eee$) vanishes in CASE II with $s_{13}=0$ and CASE III, and hence the strong bound BR($\mu\to eee)<10^{-12}$ is automatically satisfied for any $M_{H^{\pm\pm}}$. In these latter cases the current experimental limits for BR($\tau\to lll$) can be reached and thus an upper bound on the heaviest neutrino mass $M_3$ can be derived ([*e.g.*]{} Fig.\[fig:4\] with $M_{W_2}=M_{H^{\pm\pm}}=1.5$ TeV gives BR($\tau\to \mu\mu\mu) > 10^{-7}$ for $M_3 > 1.8$ TeV). We note here that the presented numerical results are for the scenario of all phases in $V_{\rm MNS}$ taken to be zero. We now briefly discuss the effect of including a non-zero Dirac phase ($\delta\ne 0$) in our analysis [^6]. In CASE I and CASE II the presence of $\delta\ne 0$ would not affect Eq.(\[eq:mD\]) since $U(\theta_{13}$) in Eq.(\[eq:MNS\]) (which contains $\delta$) only appears in $V_L^l$ and not in $V_L^\nu$. Thus the solution for the $R$ matrix in the CP conserving case ($\delta=0$ or $\pi$) also holds for the case of $\delta\ne 0$. 
Since $V^l_L$ (which determines the LFV rates) has some dependence on $\delta$ we would expect some changes in our numerical results. For example, we have calculated the branching ratios for the LFV processes for $\delta=\pi/2$ and $3\pi/2$ with $s_{13}=0.2$ in CASE I and II. For all cases BR$(\mu\to e\gamma) < 2\times 10^{-14}$. Maximum values of $10^{-9}$ were found for BR$(\tau^+\to\mu^-\mu^+\mu^+ )$, BR$(\tau^+\to\mu^- e^+e^+ )$, and BR$(\tau^+\to e^-\mu^+\mu^+ )$, with smaller ($<10^{-10}$) values for BR$(\tau^+\to e^-e^+e^+ )$, BR$(\tau^+\to e^- e^+\mu^+ )$ and BR$(\tau^+\to \mu^- e^+\mu^+ )$. When $\delta \ne 0$ there is another possibility to suppress BR$(\mu\to eee)$ by cancellation in $h_{ee}$, because the sign of the term $s_{13}^2M_3$ in Eq.(\[h\_ee\]) flips for $\delta=\pi/2$ and $3\pi/2$. However, the cancellation is not so significant because of the small factor $s_{13}^2$. In CASE III and IV the presence of $\delta\ne 0$ increases the number of free parameters in $V_L^\nu$ and no general solution for the $R$ matrix can be found. However we expect that Eq.(\[eq:mD\_hermitian\]) will be satisfied in specific regions in the parameter space of $M_i$, and for CASE III some observable LFV rates might still be possible. $P$ odd Asymmetry for $\tau\to l_il_jl_k$ and $\mu\to e\gamma$ ============================================================== From the preceding sections it is evident that both BR$(\tau\to lll$) and BR($\mu\to e\gamma$) can be enhanced to experimental observability in the LR symmetric model. In particular, BR$(\tau\to lll)>10^{-8}$ would be a signal suggestive of models which can mediate the decay at tree-level e.g. the LR model via virtual exchange of $H^{\pm\pm}_{L,R}$ [^7]. In order to compare the LR model with other models we introduce the Higgs Triplet Model and Zee-Babu Model in section 4.1 and 4.2. These models provide a different mechanism for neutrino mass generation, and can enhance both $\tau\to l_il_jl_k$ and $\mu\to e\gamma$ to experimental observability by virtual exchange of $H^{\pm\pm}_L$ or $H^{\pm\pm}_R$. If a signal were established for any of the six decays $\tau\to lll$, further information on the underlying model can be obtained by studying the angular distribution of the leptons. In section 4.3 we show how the $P$ odd asymmetry for both decays may act as a powerful discriminator of the three models under consideration. Higgs Triplet Model ------------------- In the Higgs Triplet Model (HTM) [@Schechter:1980gr] a single $I=1$, $Y=2$ complex $SU(2)_L$ triplet is added to the SM. No right-handed neutrino is introduced, and the light neutrinos receive a Majorana mass proportional to the left-handed triplet vev $(v_L)$ leading to the following neutrino mass matrix: $${\cal M}_{\nu}=\sqrt{2}v_{L} \left( \begin{array}{ccc} h_{ee} & h_{e\mu} & h_{e\tau} \\ h_{\mu e} & h_{\mu\mu} & h_{\mu\tau} \\ h_{\tau e} & h_{\tau \mu} & h_{\tau\tau} \end{array} \right) \; .$$ In the HTM $h_{ij}$ is directly related to the neutrino masses and mixing angles as follows: $$h_{ij}=\frac{1}{\sqrt{2}v_L}V_{_{\rm MNS}}diag(m_1,m_2,m_3) V_{_{\rm MNS}}^T\ . \label{hij}$$ Formally, this expression for $h_{ij}$ is equivalent to that in CASE I with the replacements $(m_1,m_2,m_3)\to (M_1,M_2,M_3)$ and $v_L\to v_R$. In Eq.(\[hij\]) $v_L$ is a free parameter and is necessarily non-zero (unlike $v_L$ in the LR symmetric model) in order to generate neutrino masses. 
Its magnitude may lie anywhere in the range eV $< v_{L} <$ 8 GeV, where the lower limit arises from the requirement of a perturbative $h_{ij}$ satisfying Eq.(\[hij\]) and the upper limit is derived from maintaining $\rho\sim 1$. In the HTM there is no $H^{\pm\pm}_R$ and so $\tau\to l_il_jl_k$ is mediated solely by $H^{\pm\pm}_L$. Obtaining BR($\tau\to l_il_jl_k)>10^{-8}$ with $m_{H^{\pm\pm}}= 1 $ TeV requires $|h_{\tau i}^\ast h_{jk}|>10^{-3}$. Due to the ignorance of $v_L$ the magnitude of $h_{ij}$ cannot be predicted and so the HTM can only [*accommodate*]{} observable BRs for $\tau\to l_il_jl_k$. However, unlike the LR symmetric model, the HTM provides predictions for the ratios of $\tau\to l_il_jl_k$ [@Chun:2003ej] with are distinct for each of the various neutrino mass patterns NH, IH and DG. The necessary suppression of BR$(\mu\to eee$) relative to BR$(\tau\to lll$) requires $h_{e\mu}$ to be sufficiently small. As in CASE I Eq.([\[eq:CASEI\_ii\]]{}), this is arranged by invoking a cancellation between two terms contributing to $h_{e\mu}$, one depending on $\theta_{13}$ and the other depending on both $\theta_{12}$ and $r=\Delta m^2_{12}/\Delta m^2_{13}$ [@Chun:2003ej],[@Kakizaki:2003jk]. Observation of $\tau\to lll$ would restrict $\theta_{13}$ to a narrow interval which can be predicted in terms of $\theta_{12}$ and $r$. For the case of NH neutrinos $\theta_{13}\sim \sqrt r$ while for IH and DG neutrinos $\theta_{13}\sim r$. Since current oscillation data suggests $r \approx 0.04$, the value of $\theta_{13}$ need to ensure small $h_{e\mu}$ is of the order $0.1$ for NH and $0.01$ for IH and DG. Two loop radiative singlet Higgs model (Zee-Babu model) ------------------------------------------------------- Neutrino mass may be absent at the tree-level but is generated radiatively via higher order diagrams involving $L=2$ scalars. In the Zee-Babu model (ZBM) $SU(2)_L$ singlet charged scalars $H^{\pm\pm}_R$ and $H^\pm_L$ are added to the SM Lagrangian [@Zee:1985id],[@Babu:2002uu] with the following Yukawa couplings: $${\cal L}_{Y} = f_{ij}\left( L^{T a}_{iL} C L^b_{jL} \right) \epsilon_{ab} H^+_L \ + \ h'_{ij} \left( l^T_{iR} C l_{jR} \right) H^{++}_R + h.c. $$ No right-handed neutrino is introduced. A Majorana mass for the light neutrinos arises at the two loop level in which the lepton number violating trilinear coupling $\mu H^\mp_L H^\mp_L H^{\pm\pm}_R$ plays a crucial role. The explicit form for ${\cal M}_{\nu}$ is as follows: \_ = ( [ccc]{} \^2 \_ + 2 ’ \_ + ’\^2 \_ , & \_ + ’ \_ - ’ \_[e ]{} & -\_ -’ \_ - \^2 \_[e]{}\ & - ’\^2 \_[e ]{}  , & - ’ \_[e]{}\ . & \_ -2 ’ \_[e]{} + ’\^2 \_[ee]{}  , & -\_ -\_[e]{} + ’ \_[e]{}\ & & + ’ \_[ee]{}\ . & . & \_ + 2 \_[e]{} + \^2 \_[ee]{} ) , where $\epsilon=f_{e\tau}/f_{\mu\tau}$, $\epsilon'=f_{e\mu}/f_{\mu\tau}$, $\omega_{ij}=h_{ij}m_i m_j$ ($m_i,m_j$ are charged fermion masses), $h_{ij}=h'_{ij}(2h'_{ij})$ for $i=j$ ($i\ne j$) and $\zeta$ is given by: $$\zeta=\frac{8\mu f^2_{\mu\tau}\tilde I}{(16\pi^2)^2m_{H^\pm}^2} \ .$$ Here $\tilde I$ is a dimensionless quantity of ${\cal O}(1)$ originating from the loop integration. Clearly the expression for ${\cal M}_{\nu}$ differs from that in the HTM and involves 9 arbitrary couplings. Since the model predicts one massless neutrino (at the two-loop level), quasi-degenerate neutrinos are not permitted (unlike the HTM) and only NH and IH mass patterns can be accommodated. The $f$ couplings (contained in $\epsilon$ and $\epsilon'$) are directly related to the elements of ${\cal M}_{\nu}$. 
In the scenario of NH, $\epsilon\approx \epsilon'\approx \tan\theta_{12}/\sqrt 2$ and $\sin\theta_{13}$ is close to zero. Since $\epsilon,\epsilon'<1$ one may neglect those terms in ${\cal M}_{\nu}$ which are proportional to the electron mass (i.e. $\omega_{ee},\omega_{e\mu}, \omega_{e\tau}$). This simplification leads to the following prediction [@Babu:2002uu]: $h_{\mu\mu}:h_{\mu\tau}: h_{\tau\tau}\approx 1:m_\mu/m_\tau:(m_\mu/m_\tau)^2$. In the case of IH, large values are required for $\epsilon,\epsilon'(>5)$, and thus neglecting $\omega_{ee},\omega_{e\mu}, \omega_{e\tau}$ in ${\cal M}_{\nu}$ may not be entirely justified. However, if such terms are neglected [@Babu:2002uu] then the above prediction for the ratio of $h_{\mu\mu}:h_{\mu\tau}:h_{\tau\tau}$ also approximately holds for the case of IH. In the ZBM there is no $H^{\pm\pm}_L$ and so $\tau\to l_il_jl_k$ is mediated solely by $H^{\pm\pm}_R$. Another significant difference with the HTM is that eV scale neutrino masses requires $f$, $h_{\mu\mu}\sim 10^{-2}$, and thus LFV decays cannot be suppressed arbitrarily if the 2-loop diagram is solely responsible for the generation of the neutrino mass matrix. Such relatively large couplings are necessary since a rough upper bound on $\zeta$ (which is a function of model parameters) can be derived. In contrast, $v_L$ in the HTM is arbitrary and eV scale neutrino masses can be accommodated even with $h_{ij}\sim 10^{-10}$. The requirement that $f$, $h_{\mu\mu}\sim 10^{-2}$ suggests that BR($\mu\to e\gamma$) and BR$(\tau\to \mu\mu\mu$) could be within range of upcoming experiments [@AristizabalSierra:2006gb]. Since $h_{ee}$, $h_{e\mu}$ and $h_{e\tau}$ may be treated as free parameters (essentially unrelated to the neutrino mass matrix) the necessary suppression of $\mu\to eee$ can be obtained by merely choosing $h_{e\mu}$ and/or $h_{ee}$ very small. Observable rates for BR($\tau\to eee$) can be arranged by appropriate choice of $h_{ee}$ and $h_{e\tau}$. Sensitivity of $P$ odd asymmetry to $H^{\pm\pm}_L$ and $H^{\pm\pm}_R$ --------------------------------------------------------------------- Angular distributions of LFV decays can act as a powerful discriminator of models of new physics. The predictions for $\mu^+\to e_L^+\gamma$ and $\mu^+\to e_R^+\gamma$ depend on the chirality structure of LFV interactions and so in general would be model dependent. Ref.[@Okada:1999zk] defined various $P$ odd and $T$ odd asymmetries for $\mu\to e\gamma$ and $\mu\to eee$ and performed a numerical analysis in the context of supersymmetric $SU(5)$ and $SO(10)$. Analogous asymmetries were defined for $\tau^\pm\to l^\pm\gamma$ and $\tau^\pm\to lll$ in [@Kitano:2000fg]. In this section we apply the general formulae introduced in Refs.[@Kitano:2000fg],[@Okada:1999zk] to the three models of interest which all contain $H^{\pm\pm}$. For the decay $\mu^+\to e^-e^+e^+$ with polarized $\mu^+$, one defines $\theta_{e^-}$ as the angle between the polarization vector of $\mu^+$ and the direction of the $e^-$, the latter taken to be the $z$ direction. The $P$ odd asymmetry ${\cal A}_{P}$ is defined as an asymmetry in the $\theta_{e^-}$ distribution. In contrast, for $\tau$ produced in the process $e^+e^-\to \tau^+\tau^-$ the helicity of the $\tau$ in the LFV decay $\tau\to lll$ is not known initially. Consequently, the experimental set up is sensitive to both $\tau^+_L\to lll$ or $\tau^+_R\to lll$. However, by exploiting the spin correlation of the pair produced $\tau$ (i.e. 
$e^+e^-\to \tau^+_L\tau^-_R,\tau^+_R\tau^-_L$) information on the helicity of the LFV decaying $\tau$ can be obtained by studying the angular and kinematical distributions of the non-LFV decay of the other $\tau$ in the $\tau\to lll$ event. For illustration we shall always take the non-LFV decay mode as $\tau\to \pi\nu$, although such an analysis can also be performed for other main decay modes such as $\tau\to \rho\nu,a_1\nu,l\nu\overline \nu$. In the notation of [@Kitano:2000fg] the effective 4-Fermi interaction for $\tau^+\to \mu^-\mu^+\mu^+$ mediated by $H^{++}_{L,R}$ is as follows: $${\cal L}=\frac{-4G_F}{\sqrt 2}\left\{g_3 (\overline \tau\gamma^\mu P_R\mu)(\overline\mu\gamma_\mu P_R\mu) +g_4(\overline \tau\gamma^\mu P_L\mu)(\overline\mu\gamma_\mu P_L\mu)\right\}\ . $$ The differential cross-section for the events $\tau^+\to \mu^-\mu^+\mu^+$ (LFV) and $\tau^-\to \pi^-\nu$ (non-LFV) is: $$\begin{aligned} &&d\sigma (e^+ e^- \to \tau^+ \tau^- \to \mu^- \mu^+ \mu^+ + \pi^- \nu) \nonumber \\ && = \sigma (e^+ e^- \to \tau^+ \tau^-) BR (\tau^- \to \pi^- \nu) \left( \frac{m_\tau^5 G_{\rm F}^2}{128 \pi^4}/\Gamma \right) \frac{d \cos \theta_\pi}{2}\ dx_1 \ dx_2\ d\cos \theta \ d\phi \nonumber \\ && \hspace*{1cm} \times \left[ X - \frac{s-2m_\tau^2}{s+2 m_\tau^2} \left \{ Y \cos \theta \right \} \cos \theta_\pi \right] \ , \label{eq.taulll_cross}\end{aligned}$$ where $$X=(|g_3| ^2+|g_4|^2)\alpha_1(x_1,x_2);\,\,Y=(|g_3| ^2-|g_4|^2)\alpha_1(x_1,x_2) \ .$$ Here $\alpha_1(x_1,x_2)$ is a function of the energy variables $x_1=2E_1/m_{\tau}$ and $x_2=2E_2/m_{\tau}$ where $E_1(E_2)$ is the energy of $\mu^+$ with larger (smaller) energy in the rest frame of $\tau^+$: \_1 (x\_1, x\_2) = 8(2-x\_1-x\_2)(x\_1+x\_2-1)  . The angles $\theta$ and $\phi$ specify the decay plane of $\tau^+\to \mu^-\mu^+\mu^+$ relative to the production plane of $e^+e^-\to \tau^+\tau^-$ in the $\tau^+$ rest frame. The angle $\theta_\pi$ is the angle between the direction of momentum of $\tau^-$ and $\pi^-$ in the $\tau^-$ rest frame. For a detailed discussion we refer the reader to [@Kitano:2000fg]. It is clear that $Y$ determines the angular dependence of Eq.(\[eq.taulll\_cross\]) and thus $Y$ is a measure of the $P$ odd asymmetry for $\tau\to \mu\mu\mu$. For our quantitative study of its magnitude we define: $${\cal A}(\tau\to \mu\mu\mu)=\frac{|g_3|^2-|g_4|^2}{|g_3|^2+|g_4|^2} \ .$$ Clearly ${\cal A}(\tau\to \mu\mu\mu)=0$ $(\pm 1)$ corresponds to zero (maximal) asymmetry. 
| | HTM ($H^{\pm\pm}_L$) | ZBM ($H^{\pm\pm}_R$) | LR ($H^{\pm\pm}_{L,R}$) |
|---|---|---|---|
| $-\frac{4G_F}{\sqrt{2}}g_3$ | 0 | $\frac{h_{\mu\mu}h_{\tau \mu}^\ast}{M^2_{H^{\pm\pm}_R}}$ | $\frac{h_{\mu\mu}h_{\tau \mu}^\ast}{M^2_{H^{\pm\pm}_R}}$ |
| $-\frac{4G_F}{\sqrt{2}}g_4$ | $\frac{h_{\mu\mu}h_{\tau \mu}^\ast}{M^2_{H^{\pm\pm}_L}}$ | 0 | $\frac{h_{\mu\mu}h_{\tau \mu}^\ast}{M^2_{H^{\pm\pm}_L}}$ |

: Expressions for $g_3$, $g_4$ in the three models

![(a) ${\cal A}(\tau\to lll)$ and (b) BR($\tau\to \mu\mu\mu$) in the plane ($M_{H^{\pm\pm}_L}$, $M_{H^{\pm\pm}_R}$) []{data-label="asym_br"}](tau_asym.eps "fig:"){width="6.5cm"} ![(a) ${\cal A}(\tau\to lll)$ and (b) BR($\tau\to \mu\mu\mu$) in the plane ($M_{H^{\pm\pm}_L}$, $M_{H^{\pm\pm}_R}$) []{data-label="asym_br"}](tau_br.eps "fig:"){width="6.5cm"}

The expressions for $g_3$ and $g_4$ in the three models under consideration are given in Table 2. In the manifest LR symmetric model $h_{ij}$ cancels out in the expression for ${\cal A}(\tau\to lll)$ leaving a simple dependence on $M_{H^{\pm\pm}_L}$ and $M_{H^{\pm\pm}_R}$ which applies to all six $\tau\to lll$ decays: $${\cal A}(\tau\to lll)=\frac{1/M_{H^{\pm\pm}_R}^4-1/M_{H^{\pm\pm}_L}^4}{1/M_{H^{\pm\pm}_R}^4+1/M_{H^{\pm\pm}_L}^4} \ .$$ In Fig.\[asym\_br\] (a) ${\cal A}(\tau\to lll)$ is plotted in the plane ($M_{H^{\pm\pm}_L}, M_{H^{\pm\pm}_R}$). Clearly the case of degeneracy ($M_{H^{\pm\pm}_L}=M_{H^{\pm\pm}_R}$) gives ${\cal A}(\tau\to lll)=0$, while $M_{H^{\pm\pm}_L} > M_{H^{\pm\pm}_R}$ ($M_{H^{\pm\pm}_L} < M_{H^{\pm\pm}_R}$) results in positive (negative) ${\cal A}(\tau\to lll)$. For the HTM and ZBM the asymmetry is maximal, being $-1$ and $+1$ respectively. Consequently, ${\cal A}(\tau\to lll)$ in the LR symmetric model may differ significantly from the corresponding value in models with only a $H^{\pm\pm}_L$ (HTM) or $H^{\pm\pm}_R$ (ZBM). Thus ${\cal A}(\tau\to lll)$ has the potential to discriminate between the various models with a $H^{\pm\pm}$. In addition, in the context of the LR model a measurement of ${\cal A}(\tau\to lll)$ provides important information on the ratio $M_{H^{\pm\pm}_L}/M_{H^{\pm\pm}_R}$, which could assist the direct searches for $H^{\pm\pm}_L$ and $H^{\pm\pm}_R$ at high energy colliders. For comparison we show in Fig.\[asym\_br\] (b) the dependence of BR$(\tau\to \mu\mu\mu)$ on $M_{H^{\pm\pm}_L}$ and $M_{H^{\pm\pm}_R}$ for $|h_{\mu\mu}h_{\tau\mu}^\ast |=0.05$. Of the order of 50 $\tau\to lll$ events would be needed to distinguish ${\cal A}(\tau\to lll)=+1$ from $-1$. A high luminosity upgrade of the existing B factories anticipates up to $10^{10}$ $\tau^+\tau^-$ pairs and thus BR$(\tau\to lll)>10^{-8}$ would allow measurements of ${\cal A}(\tau\to lll)$. From Fig.\[asym\_br\] (a) it is clear that an LR model with $M_{H^{\pm\pm}_L}\ll M_{H^{\pm\pm}_R}$ ($M_{H^{\pm\pm}_R}\ll M_{H^{\pm\pm}_L}$) will give an almost maximal ${\cal A}(\tau\to lll)$ and consequently would be difficult to distinguish from the HTM (ZBM).
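The mass dependence of the LR-model asymmetry can be tabulated in a few lines. The sketch below keeps only the $1/M^4$ scaling of the tree-level couplings; all constant prefactors, phase-space factors and the flavour factor $|h_{\mu\mu}h_{\tau\mu}^\ast|$ are dropped, so the quoted rate is a relative quantity rather than a branching ratio. The grid values are arbitrary and serve only to reproduce the sign pattern of Fig.\[asym\_br\] (a).

```python
import itertools

def lr_asymmetry(m_L: float, m_R: float) -> float:
    """A(tau -> lll) in the manifest LR model: depends only on the mass ratio."""
    rL, rR = 1.0 / m_L**4, 1.0 / m_R**4
    return (rR - rL) / (rR + rL)

# Coarse grid in TeV, loosely echoing the plane of Fig. [asym_br](a).
masses = [0.5, 1.0, 2.0, 4.0]
for m_L, m_R in itertools.product(masses, masses):
    # Relative (unnormalised) rate ~ |g3|^2 + |g4|^2 at fixed flavour factor;
    # overall normalisation and phase space are omitted here.
    rel_rate = 1.0 / m_L**4 + 1.0 / m_R**4
    print(f"M_L={m_L:>4} TeV  M_R={m_R:>4} TeV  "
          f"A={lr_asymmetry(m_L, m_R):+.2f}  rate~{rel_rate:.3f}")
```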
However, if a signal were also observed for $\mu^+\to e^+\gamma$, the analogous $P$ odd asymmetry, ${\cal A}(\mu\to e\gamma$), can serve as an additional discriminator: $${\cal A}(\mu\to e\gamma)=\frac{|A_L|^2-|A_R|^2}{|A_L|^2+|A_R|^2} \ .$$ In CASE I, Fig.\[fig:3\] there is a parameter space for BR($\tau\to \mu\mu\mu)\approx 10^{-8}$ and BR($\mu\to e\gamma)\approx 10^{-12}$), which might provide sufficient events for both asymmetries to be measured. In contrast to $\tau\to \mu\mu\mu$, the loop induced decay $\mu\to e\gamma$ can be mediated by $H^{\pm\pm}_{L,R}$, $H^\pm_1$ and $W^\pm_R$ in the LR symmetric model. One may write a simplified formula for $A_L$ and $A_R$ (written explicitly in Eqs. (\[mu\_ey\_AL\]) and (\[mu\_ey\_AR\])), where $a,b,c,d$ are functions of masses and the heavy neutrino mixing matrix $K_R$: $$\begin{aligned} A_L&=&a~(M_{H^{\pm\pm}_R},K_R)+ b~(M_{W^\pm_R},K_R,M_i) \ , \nonumber \\ A_R&=& c~(M_{H^{\pm\pm}_L},K_R)+d~(M_{H^{\pm}_1},K_R) \ .\end{aligned}$$ In the LR model usually $a,c \gg b,d$ and so the dominant contribution to ${\cal A}(\mu\to e\gamma)$ arises from $H^{\pm\pm}_L$ and $H^{\pm\pm}_R$. Hence in LR models one expects ${\cal A}(\mu\to e\gamma)\sim {\cal A}(\tau\to lll$). This is shown in Fig.\[LRmodel\_asym\] where ${\cal A}(\tau\to \mu\mu\mu$) is plotted against ${\cal A}(\mu\to e\gamma)$ for CASE I with $\sin \theta_{13}=0.2$ and $\delta=\pi$. The plotted points correspond to observable rates for both LFV decays (taken to be $10^{-9} \le$ BR($\tau \to \mu\mu\mu) \le 10^{-7}$ and $10^{-14} \le$ BR($\mu \to e\gamma) \le 10^{-11}$), within the following parameter region: $M_{W_2}=3$ TeV, 1 TeV $\leq M_i \leq 5$ TeV, 2 TeV $\leq M_{H^{\pm\pm}_L}=M_{H^\pm_1}\ne M_{H^{\pm\pm}_R} \leq$ 4 TeV. Each point also satisfies the constraint BR$(\mu\to eee) < 10^{-12}$. Clearly the vast majority of the points are close to the line ${\cal A}(\mu\to e\gamma)={\cal A}(\tau\to lll) $, showing that the diagrams involving $H^{\pm\pm}_L$ and $H^{\pm\pm}_R$ give the dominant contribution over most of the parameter space. The asymmetries differ sizeably only when $M_i$ and $M_{W_R}$ are considerably smaller than $M_{H^{\pm\pm}_L}$ and $M_{H^{\pm\pm}_R}$. In the HTM one has: $$\begin{aligned} A_L&=&0 \ ,\nonumber \\ A_R&=&c~(M_{H^{\pm\pm}_L},h_{ij})+d~(M_{H^{\pm}_L},h_{ij}) \ .\end{aligned}$$ and thus ${\cal A}(\mu\to e\gamma$) is maximal. In the ZBM: $$\begin{aligned} A_L&=&a~(M_{H^{\pm\pm}_R},h_{ij}) \ , \nonumber \\ A_R&=&d~(M_{H^{\pm}_L},f_{ij}) \ .\end{aligned}$$ ${\cal A}(\mu\to e\gamma$) may take any value since the masses of $H^\pm_L$ and $H^{\pm\pm}_R$ are unrelated and $h_{ij}\ne f_{ij}$ in general. The allowed ranges of ${\cal A}(\tau\to lll)$ and ${\cal A}(\mu\to e\gamma)$ in the three models under consideration are summarized in Table 3. It is clear that if signals for both $\tau\to lll$ and $\mu\to e\gamma$ are observed, the corresponding asymmetries may act as a powerful discriminator of the models. 
| | HTM ($H^{\pm\pm}_L$) | ZBM ($H^{\pm\pm}_R$) | LR ($H^{\pm\pm}_{L,R}$) |
|---|---|---|---|
| ${\cal A}(\mu \to e \gamma$) | $-1$ | $- 1< {\cal A} < +1$ | $ -1< {\cal A} < +1$ |
| ${\cal A}(\tau\to lll$) | $-1$ | $+1$ | $ -1< {\cal A} < +1$ |

: $P$ odd asymmetries ${\cal A}(\tau\to lll)$ and ${\cal A}(\mu\to e\gamma)$ in the three models

Conclusions
===========

The Left-Right symmetric extension of the Standard Model with TeV scale breaking of $SU(2)_R$ via a right handed Higgs isospin triplet vacuum expectation value provides an attractive explanation for neutrino masses via the seesaw mechanism. The doubly charged scalars $H^{\pm\pm}_L$ and $H^{\pm\pm}_R$ with mass of order TeV mediate the LFV decays $\tau \to lll$ at tree-level via a Yukawa coupling $h_{ij}$ which is related to the Maki-Nakagawa-Sakata matrix $(V_{\rm MNS})$. We introduced four ansatz for the origin of the bi-large mixing in $V_{\rm MNS}$ which satisfy the stringent bound BR($\mu\to eee)<10^{-12}$ in distinct ways. A numerical study of the magnitude and correlation of BR($\tau^\pm \to lll$) and BR($\mu\to e\gamma$) was performed. It was shown that the number of observable rates for such LFV decays depends sensitively on the origin of the bi-large mixing in $V_{\rm MNS}$, with multiple LFV signals being possible in specific cases. If a signal for $\tau \to lll$ were observed we showed how the definition of an angular asymmetry provides information on the relative strength of the contributions from $H^{\pm\pm}_L$ and $H^{\pm\pm}_R$. Such an asymmetry may also be used to distinguish the LR symmetric model from other models which contain either $H^{\pm\pm}_L$ or $H^{\pm\pm}_R$ and thus predict maximal asymmetries.

Acknowledgements {#acknowledgements .unnumbered}
================

Y.O was supported in part by the Grant-in-Aid for Science Research, Ministry of Education, Science and Culture, Nos. 16081211, 13135225 and 17540286. A.G.A was supported by National Cheng Kung University grant OUA 95-3-2-057.

[99]{} Y. Fukuda [*et al.*]{} \[Super-Kamiokande Collaboration\], Phys. Rev. Lett.  [**81**]{}, 1562 (1998). Y. Kuno and Y. Okada, Rev. Mod. Phys.  [**73**]{}, 151 (2001). P. Minkowski, Phys. Lett. B [**67**]{}, 421 (1977); T. Yanagida, in [*Proceedings of the Workshop on Unified Theory and Baryon Number of the Universe*]{}, edited by O. Sawada and A. Sugamoto (KEK, 1979) p.95; M. Gell-Mann, P. Ramond, and R. Slansky, in [*Supergravity*]{}, edited by P. van Nieuwenhuizen and D. Freedman (North Holland, Amsterdam, 1979). J. C. Pati and A. Salam, Phys. Rev. D [**10**]{}, 275 (1974); R. N. Mohapatra and J. C. Pati, Phys. Rev. D [**11**]{}, 2558 (1975). G. Senjanovic and R. N. Mohapatra, Phys. Rev. D [**12**]{}, 1502 (1975). R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett.  [**44**]{}, 912 (1980). J. F. Gunion, J. Grifols, A. Mendez, B. Kayser and F. I. Olness, Phys. Rev. D [**40**]{}, 1546 (1989). N. G. Deshpande, J. F. Gunion, B. Kayser and F. I. Olness, Phys. Rev. D [**44**]{}, 837 (1991). C. S. Lim and T. Inami, Prog. Theor. Phys.  [**67**]{}, 1569 (1982); M. L. Swartz, Phys. Rev. D [**40**]{}, 1521 (1989). V. Cirigliano, A. Kurylov, M. J. Ramsey-Musolf and P. Vogel, Phys. Rev. D [**70**]{}, 075007 (2004). M. Grassi \[MEG Collaboration\], Nucl. Phys. Proc. Suppl.  [**149**]{}, 369 (2005). Y. Yusa [*et al.*]{} \[Belle Collaboration\], Phys. Lett. B [**589**]{}, 103 (2004). B. Aubert [*et al.*]{} \[BABAR Collaboration\], Phys. Rev. Lett.
[**92**]{}, 121801 (2004). S. Hashimoto [*et al.*]{}, “Letter of intent for KEK Super B Factory,” KEK-REPORT-2004-4; A. G. Akeroyd [*et al.*]{} \[SuperKEKB Physics Working Group\], arXiv:hep-ex/0406071. R. Santinelli, eConf [**C0209101**]{}, WE14 (2002) \[Nucl. Phys. Proc. Suppl.  [**123**]{}, 234 (2003)\] \[arXiv:hep-ex/0210033\]. A. Dedes, J. R. Ellis and M. Raidal, Phys. Lett. B [**549**]{}, 159 (2002); K. S. Babu and C. Kolda, Phys. Rev. Lett.  [**89**]{}, 241802 (2002); G. Cvetic, C. Dib, C. S. Kim and J. D. Kim, Phys. Rev. D [**66**]{}, 034008 (2002) \[Erratum-ibid. D [**68**]{}, 059901 (2003)\]; A. Brignole and A. Rossi, Phys. Lett. B [**566**]{}, 217 (2003); Y. F. Zhou, J. Phys. G [**30**]{}, 783 (2004); S. Kanemura, T. Ota and K. Tsumura, Phys. Rev. D [**73**]{}, 016006 (2006); P. Paradisi, JHEP [**0602**]{}, 050 (2006); E. Arganda and M. J. Herrero, Phys. Rev. D [**73**]{}, 055003 (2006); C. H. Chen and C. Q. Geng, Phys. Rev. D [**74**]{}, 035010 (2006). U. Bellgardt [*et al.*]{} \[SINDRUM Collaboration\], Nucl. Phys. B [**299**]{}, 1 (1988). R. Kitano and Y. Okada, Phys. Rev. D [**63**]{}, 113003 (2001). J. F. Gunion, C. Loomis and K. T. Pitts, eConf [**C960625**]{}, LTH096 (1996) \[arXiv:hep-ph/9610237\]; K. Huitu, J. Maalampi, A. Pietila and M. Raidal, Nucl. Phys. B [**487**]{}, 27 (1997); J. Maalampi and N. Romanenko, Phys. Lett. B [**532**]{}, 202 (2002); G. Azuelos, K. Benslama and J. Ferland, J. Phys. G [**32**]{}, 73 (2006); A. G. Akeroyd and M. Aoki, Phys. Rev. D [**72**]{}, 035011 (2005). Z. Maki, M. Nakagawa and S. Sakata, Prog. Theor. Phys.  [**28**]{}, 870 (1962). P. Duka, J. Gluza and M. Zralek, Annals Phys.  [**280**]{}, 336 (2000). K. Kiers, M. Assis, D. Simons, A. A. Petrov and A. Soni, Phys. Rev. D [**73**]{}, 033009 (2006). F. Cuypers and S. Davidson, Eur. Phys. J. C [**2**]{}, 503 (1998); O. M. Boyarkin, G. G. Boyarkina and T. I. Bakanova, Phys. Rev. D [**70**]{}, 113010 (2004). B. Aharmim [*et al.*]{} \[SNO Collaboration\], Phys. Rev. C [**72**]{}, 055502 (2005). T. Araki [*et al.*]{} \[KamLAND Collaboration\], Phys. Rev. Lett.  [**94**]{}, 081801 (2005). Y. Ashie [*et al.*]{} \[Super-Kamiokande Collaboration\], Phys. Rev. D [**71**]{}, 112005 (2005). E. Aliu [*et al.*]{} \[K2K Collaboration\], Phys. Rev. Lett.  [**94**]{}, 081802 (2005). M. Apollonio [*et al.*]{} \[CHOOZ Collaboration\], Eur. Phys. J. C [**27**]{}, 331 (2003). F. Boehm [*et al.*]{}, Phys. Rev. D [**64**]{}, 112001 (2001). D. N. Spergel [*et al.*]{}, arXiv:astro-ph/0603449. J. A. Casas and A. Ibarra, Nucl. Phys. B [**618**]{}, 171 (2001); J. R. Ellis, J. Hisano, M. Raidal and Y. Shimizu, Phys. Rev. D [**66**]{}, 115013 (2002). W. M. Yao [*et al.*]{} \[Particle Data Group\], J. Phys. G [**33**]{}, 1 (2006). A. Czarnecki and E. Jankowski, Phys. Rev. D [**65**]{}, 113004 (2002). R. N. Mohapatra, Phys. Rev. D [**34**]{}, 909 (1986). W. F. Chang and J. N. Ng, Phys. Rev. D [**71**]{}, 053003 (2005). J. Schechter and J. W. F. Valle, Phys. Rev. D [**22**]{}, 2227 (1980); G. B. Gelmini and M. Roncadelli, Phys. Lett. B [**99**]{}, 411 (1981). E. J. Chun, K. Y. Lee and S. C. Park, Phys. Lett. B [**566**]{}, 142 (2003). M. Kakizaki, Y. Ogura and F. Shima, Phys. Lett. B [**566**]{}, 210 (2003). A. Zee, Nucl. Phys. B [**264**]{}, 99 (1986); K. S. Babu, Phys. Lett. B [**203**]{}, 132 (1988). K. S. Babu and C. Macesanu, Phys. Rev. D [**67**]{}, 073010 (2003). D. Aristizabal Sierra and M. Hirsch, JHEP [**0612**]{}, 052 (2006). Y. Okada, K. i. Okumura and Y. Shimizu, Phys. Rev. D [**61**]{}, 094001 (2000). 
[^1]: akeroyd@mail.ncku.edu.tw [^2]: mayumi.aoki@kek.jp [^3]: yasuhiro.okada@kek.jp [^4]: For an alternative approach which maintains the Type II seesaw mechanism and obtains $v_L\sim $eV by means of horizontal symmetries see [@Kiers:2005vx]. [^5]: In our numerical analysis we do not include a suppression factor of $\sim 15\%$ arising from electromagnetic corrections [@Czarnecki:2001vf]. [^6]: The presence of Majorana phases in $V_{\rm MNS}$ would effectively change the relative phase of the heavy neutrino masses $M_i$ in Eqs.(41) to (45). In such a scenario small $|h_{ee}|$ can arise from a cancellation among the terms in Eq.(44), which provides an additional way to suppress BR$(\mu \to eee)$ in CASE I. We performed an explicit numerical analysis and found a pattern of lepton flavour violation different from the case of $|h_{e\mu}|\sim 0$. In particular, BR($\tau^+ \to e^-e^+\mu^+$) and BR($\tau^+ \to \mu^-e^+\mu^+$) can be enhanced to observable rates. We thank the referee for bringing this scenario to our attention. [^7]: For models with large extra dimensions see [@Chang:2005ag].
{ "pile_set_name": "ArXiv" }
--- abstract: | We firstly prove the completeness of the category of crossed modules in a modified category of interest. Afterwards, we define pullback crossed modules and pullback objects that are both obtained by pullback diagrams with extra structures on certain arrows. These constructions unify many corresponding results for the cases of groups, commutative algebras and can also be adapted to various algebraic structures.\ **Keywords:** Modified category of interest, crossed module, object, limit.\ **MSC(2010):** 18D05, 17A30, 18A30, 18A35. address: - 'Department of Mathematics and Computer Science, Eskişehir Osmangazi University, Turkey.' - 'Department of Mathematics, Mehmet Akif Ersoy University, Burdur, Turkey.' author: - 'Kadİr Emİr$^*$' - Selİm Çetİn date: 'Received: , Accepted: .' title: Limits in Modified Categories of Interest --- [^1] **Introduction** ================ The notion of category of interest was introduced to unify various properties of algebraic structures. The main idea is due to Higgins [@Hig] and the definition is improved by Orzech [@Orz]. As indicated in [@CDL4; @CDL5; @Loday1; @Lo3; @LoRo; @Orz], many algebraic categories are the essential examples of category of interest. However the categories of cat$^{1}$-objects of Lie (associative, Leibniz, etc.) algebras are not. Because of this issue, the authors of [@BCDU] introduced a new type of this notion, called [*modified category of interest*]{} that satisfies all axioms of the former notion except one, which is replaced by a new and modified axiom. The main examples are those, which are equivalent to the categories of crossed modules in the categories of groups, (commutative) algebras, dialgebras, Lie and Leibniz algebras, etc. See [@bau; @CMDA; @dede; @lue; @Por] for more examples. Crossed modules were introduced by Whitehead in [@W1] as a model of homotopy 2-types and used to classify higher dimensional cohomology groups. The notion of crossed module is also defined for various algebraic structures. However the definition of crossed modules in modified categories of interest unifies all of these definitions. As an equivalent model of homotopy 2-types, cat$^{1}$-groups are introduced by Loday in [@Lswf]. This notion and the corresponding equivalence is also adapted to many algebraic structures, as well as to modified category of interest [@ESA]. In this paper, we firstly prove that the category of crossed modules in a modified category of interest $\C$ is finitely complete. This unifies a number of constructions given in [@Nizar]. Afterwards, we define pullback crossed modules and pullback objects in $\C$ that are both obtained by pullback diagrams with extra categorical structures on certain arrows. These definitions will unify the constructions and results given in [@Alp1; @Alp2; @Brown]. Moreover, one can adapt them to many different algebraic structures such as Lie algebras, Leibniz algebras, dialgebras, etc. **Preliminaries** ================= In this section, we recall some notions from [@BCDU; @Lswf; @ESA]. Modified Category of Interest ----------------------------- Let $\mathbb{C}$ be a category of groups with a set of operations $\Omega$ and with a set of identities $\mathbb{E}$, such that $\mathbb{E}$ includes the group identities and the following conditions hold. If $\Omega_{i}$ is the set of $i$-ary operations in $\Omega$, then: 1. $\Omega=\Omega_{0}\cup\Omega_{1}\cup\Omega_{2}$; 2. 
the group operations (written additively : $0,-,+$) are elements of $\Omega_{0}$, $\Omega_{1}$ and $\Omega_{2}$ respectively. Let $\Omega _{2}^{\prime}=\Omega_{2}\setminus\{+\}$, $\Omega_{1}^{\prime}=\Omega _{1}\setminus\{-\}.$ Assume that if $\ast\in\Omega_{2}$, then $\Omega _{2}^{\prime}$ contains $\ast^{\circ}$ defined by $x\ast^{\circ}y=y\ast x$ and assume $\Omega_{0}=\{0\}$; 3. for each $\ast\in\Omega_{2}^{\prime}$, $\mathbb{E}$ includes the identity $x\ast(y+z)=x\ast y+x\ast z$; 4. for each $\omega\in\Omega_{1}^{\prime}$ and $\ast\in\Omega _{2}^{\prime}$, $\mathbb{E}$ includes the identities $\omega(x+y)=\omega (x)+\omega(y)$ and *either* the identity $\omega(x\ast y)=\omega(x)\ast \omega(y)$ *or* the identity $\omega(x\ast y)=\omega(x)\ast y$. Denote by $\Omega'_{1S}$ the subset of those elements in $\Omega'_1$, which satisfy the identity $\omega(x \ast y) = \omega(x) \ast y $, and by $\Omega''_1$ all other unary operations, i.e. those which satisfy the first identity from (d). Let $C$ be an object of $\mathbb{C}$ and $x_{1},x_{2},x_{3}\in C$: 1. $x_{1}+(x_{2}\ast x_{3})=(x_{2}\ast x_{3})+x_{1}$, for each $\ast\in\Omega_{2}^{\prime}$. 2. For each ordered pair $(\ast,\overline{\ast} )\in\Omega_{2}^{\prime}\times\Omega_{2}^{\prime}$ there is a word $W$ such that: $$\begin{aligned} (x_{1}\ast x_{2}) \ \overline{\ast} \ x_{3}=W \big( x_{1}(x_{2}x_{3}),x_{1}(x_{3} x_{2}),(x_{2}x_{3})x_{1}, (x_{3}x_{2})x_{1}, \\ x_{2}(x_{1}x_{3}),x_{2}(x_{3}x_{1}),(x_{1}x_{3})x_{2} ,(x_{3}x_{1})x_{2} \big), \end{aligned}$$ where each juxtaposition represents an operation in $\Omega_{2}^{\prime}$. A category of groups with operations $\mathbb{C}$ satisfying conditions (a)-(f) is called a **modified category of interest**, or **MCI** for short. As indicated in [@BCDU], the difference between this definition and that of the original **category of interest** is the modification of the second identity in (d). According to this definition every category of interest is also a modified category of interest. Let $A,B$ be two objects of $\C$. A map $f \colon A \to B$ is called a morphism of $\C$ if it satisfies: $$\begin{aligned} f(a + a') & = f (a) + f(a') , \\ f(a \ast a') & = f(a) \ast f(a') , \end{aligned}$$ for all $a,a' \in A$, $\ast\in\Omega_{2}^{\prime}$ and also commutes with all $w \in \Omega_{1}^{\prime}$. \[example1\] The categories of groups, algebras, commutative algebras, Lie algebras, Leibniz algebras, dialgebras are all (modified) categories of interest. The categories  $\mathbf{Cat}^{\mathbf{1}}\mathbf{Ass}$, $\mathbf{Cat} ^{\mathbf{1}}\mathbf{Lie}$, $\mathbf{Cat}^{\mathbf{1}}\mathbf{Leibniz}$, i.e. the categories of associative algebras, Lie algebras and Leibniz algebras are the examples of modified categories of interest, which are not categories of interest (see [@BCDU] for details). From now on, $\mathbb{C}$ will denote an arbitrary but fixed modified category of interest. Let $B\in\mathbb{C}$. A subobject of $B$ is called an ideal if it is the kernel of some morphism. In other words, $A$ is an ideal of $B$ if and only if $A$ is a normal subgroup of $B$ and $a\ast b\in A,$ for all $a\in A$, $b\in B$ and $\ast \in\Omega_{2}^{\prime}$. Let $A,B\in\mathbb{C}$. An extension of $B$ by $A$ is a sequence: $$\begin{aligned} \label{split} \xymatrix{0\ar[r]&A\ar[r]^-{i}&E\ar[r]^-{p}&B\ar[r]&0} \end{aligned}$$ where $p$ is surjective and $i$ is the kernel of $p$. We say that an extension is split if there exists a morphism $s \colon B \to E$ such that $p s = 1_B$. 
The split extension induces an action of $B$ on $A$ corresponding to the operations of $\C$ with: $$\begin{aligned} b\cdot a&=s(b)+a-s(b), \\ b\ast a&=s(b)\ast a, \end{aligned}$$ for all $b\in B$, $a\in A$ and $\ast\in\Omega'_{2}.$ Actions defined by the previous equations are called *derived actions* of $B$ on $A$. Remark that we use the notation $`` \ast \text{''}$ to denote both the star operation and the star action. Given an action of $B$ on $A,$ a semi-direct product $A\rtimes B$ is a universal algebra, whose underlying set is $A\times B$ and the operations are defined by: $$\begin{aligned} \omega(a,b) & = (\omega\left( a\right) ,\omega\left( b\right) ),\\ (a^{\prime},b^{\prime})+(a,b) & = (a^{\prime}+b^{\prime}\cdot a,b^{\prime }+b),\\ (a^{\prime},b^{\prime})\ast(a,b) & = (a^{\prime}\ast a+a^{\prime}\ast b+b^{\prime}\ast a,b^{\prime}\ast b), \end{aligned}$$ for all $a,a^{\prime}\in A,$ $b,b^{\prime}\in B$, $\ast\in\Omega_{2}^{\prime}$. An action of $B$ on $A$ is a derived action if and only if $A\rtimes B$ is an object of $\mathbb{C}$. Denote a general category of groups with operations of a modified category of interest $\C$ by $\C_G$. A set of actions of $B$ on $A$ in $\mathbb{C}_{G}$ is a set of derived actions if and only if it satisfies the following conditions: 1. $0\cdot a=a$, 2. $b\cdot(a_{1}+a_{2})=b\cdot a_{1}+b\cdot a_{2}$, 3. $(b_{1}+b_{2})\cdot a=b_{1}\cdot(b_{2}\cdot a)$, 4. $b\ast(a_{1}+a_{2})=b\ast a_{1}+b\ast a_{2}$, 5. $(b_{1}+b_{2})\ast a=b_{1}\ast a+b_{2}\ast a$, 6. $(b_{1}\ast b_{2})\cdot(a_{1}\ast a_{2})=a_{1}\ast a_{2}$, 7. $(b_{1}\ast b_{2})\cdot(a\ast b)=a\ast b$, 8. $a_{1}\ast(b\cdot a_{2})=a_{1}\ast a_{2}$, 9. $b\ast(b_{1}\cdot a)=b\ast a$, 10. $\omega(b\cdot a)=\omega(b)\cdot\omega(a)$, 11. $\omega(a\ast b)=\omega(a)\ast b = a \ast \omega(b)$ for any $\omega \in \Omega'_{1S}$, and $\omega(a \ast b)=\omega(a) \ast \omega (b)$ for any $\omega \in \Omega''_{1}$, 12. $x\ast y+z\ast t=z\ast t+x\ast y$, for each $\omega\in\Omega_{1}^{\prime}$, $\ast\in\Omega_{2}^{\prime}$, $b$, $b_{1}$, $b_{2}\in B$, $a,a_{1},a_{2}\in A$; and for $x,y,z,t\in A\cup B$ whenever both sides of the last condition are defined. Crossed Modules --------------- A crossed module $(C_{1},C_{0},\partial)$ in $\mathbb{C}$ is given by a morphism $\partial \colon C_{1} \to C_{0}$ with a derived action of $C_{0}$ on $C_{1}$ such that: - $\begin{array}{l} \partial(c_{0}\cdot c_{1})=c_{0}+\partial(c_{1})-c_{0} \\ \partial(c_{0}\ast c_{1})=c_{0}\ast\partial(c_{1}) \end{array}$ - $\begin{array}{l} \partial(c_{1})\cdot c_{1}^{\prime}=c_{1} +c_{1}^{\prime}-c_{1} \\ \partial(c_{1})\ast c_{1}^{\prime}=c_{1}\ast c_{1}^{\prime} \end{array}$ for all $c_{0}\in C_{0}$, $c_{1},c_{1}^{\prime}\in C_{1}$, $\ast\in\Omega_{2}^{\prime}$. A morphism between two crossed modules $(C_{1},C_{0},\partial )\to(C_{1}^{\prime},C_{0}^{\prime},\partial^{\prime})$ is a pair $(\mu_{1},\mu_{0})$ of morphisms $\mu_{0} \colon C_{0}\to C_{0}^{\prime}$, $\mu_{1} \colon C_{1}\to C_{1}^{\prime}$, such that the diagram: $$\xymatrix @R=40pt@C=40pt{ C_1 \ar[r]^{\d} \ar[d]_{\mu_1} & C_0 \ar[d]^{\mu_0} \\ C'_1 \ar[r]_{\d'} & C'_0 }$$ commutes and: $$\begin{aligned} \mu_{1}(c_{0}\cdot c_{1})&=\mu_{0}(c_{0})\cdot\mu_{1}(c_{1}) \, ,\\ \mu_{1}(c_{0}\ast c_{1})&=\mu_{0}(c_{0})\ast\mu_{1} (c_{1}) \, , \end{aligned}$$ for all $c_{0}\in C_{0}$, $c_{1}\in C_{1}$ and $\ast\in\Omega_{2}^{\prime}$. Crossed modules and their morphisms form the category of crossed modules in $\C$ that will be denoted by $\mathbf{XMod}$. 
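Before the classical examples below, here is a small, self-contained sanity check of these two conditions in the simplest group case (where $\Omega_{2}^{\prime}$ is empty and the additive notation is ordinary group multiplication): the inclusion of $A_3$ into $S_3$ with the conjugation action. The example and the brute-force check are ours, added only to make the axioms concrete.

```python
from itertools import permutations

def compose(p, q):          # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):                # parity of a permutation via inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

S3 = list(permutations(range(3)))
A3 = [p for p in S3 if sign(p) == +1]                      # normal subgroup
act = lambda g, n: compose(compose(g, n), inverse(g))      # conjugation action
bdry = lambda n: n                                         # inclusion A3 -> S3

# First condition (equivariance): bdry(g . n) = g bdry(n) g^-1
assert all(bdry(act(g, n)) == compose(compose(g, bdry(n)), inverse(g))
           for g in S3 for n in A3)
# Second condition (Peiffer identity): bdry(n) . n' = n n' n^-1
assert all(act(bdry(n), m) == compose(compose(n, m), inverse(n))
           for n in A3 for m in A3)
print("A3 -> S3 with conjugation satisfies both crossed-module conditions.")
```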
[@JFM2] A crossed module of groups is given by a group homomorphism $\d\colon E \to G$, together with an action $\tr$ of $G$ on $E$ such that (for all $e,f \in E$ and $g \in G$): - $\d(g \tr e) =g \,\d(e) \, g^{-1}$, - $\d(e) \tr f=e\, f \, e^{-1}$. [@JFM2] A crossed module of Lie algebras is given by a Lie algebra homomorphism $\d\colon \mathfrak{e} \to \mathfrak{g}$, together with an action $\tr$ of $\mathfrak{g}$ on $\mathfrak{e}$ such that (for all $e,f \in \mathfrak{e}$ and $g \in \mathfrak{g}$): - $\d(g \tr e) = [g ,\d(e)]$, - $\d(e) \tr f= [e, f]$. Note that $\tr$ denotes the group action and the Lie algebra action respectively in the previous examples.

[Cat]{}$^{1}$ Objects
---------------------

Let $S$ be a subobject of $R$. A cat$^{1}$-object $(e;s,t:R \to S)$ in $\mathbb{C}$ is given by an object $R$ together with the morphisms $ s,t \colon R \to S$ and $e \colon S \to R$ satisfying the following conditions: - $ s e = {\mathrm{id}}_S$ and $t e = {\mathrm{id}}_S$, - $x\ast y=0$, $x+y-x-y=0$, for all $\ast\in\Omega_{2}^{\prime}$ and $x \in \ker s$, $y \in \ker t$. Let $C=\left( e;s,t:R\rightarrow S\right) $ and $C^{\prime }=\left( e^{\prime };s^{\prime },t^{\prime }:R^{\prime }\rightarrow S^{\prime }\right) $ be two cat$^1$-objects. A morphism $\left( \phi ,\varphi \right) \colon C \to C'$ is a tuple which consists of morphisms $\phi :R\rightarrow R^{\prime }$ and $\varphi :S\rightarrow S^{\prime }$ such that the following diagram commutes: $$\diagram R\ddto\ddto<.5ex>^{s}_{t} \rrto^{\phi} && R'\ddto \ddto<.5ex>^{s'}_{t'}\\ \\ S \rrto_{\varphi} \uutol^{{e}} && S' \uutor_{e'} \enddiagram$$ Cat$^1$-objects and their morphisms form the category of cat$^1$-objects in $\C$ that will be denoted by $\mathbf{Cat^1}$. We denote any cat$^1$-object in $\C$ by $(R,S)$ for short. [@jas] A cat$^{1}$-Leibniz algebra consists of a Leibniz algebra $L$, a sub Leibniz algebra $M$ and Leibniz algebra homomorphisms: $ s, t \colon L \to M$ and $e \colon M \to L$ such that: - $ s e = {\mathrm{id}}_M$ and $t e = {\mathrm{id}}_M$, - $[x, y]=0=[y, x]$, for all $x \in \ker s$, $y \in \ker t$. A cat$^1$-dialgebra consists of a dialgebra [@Loday1] $D$, a sub dialgebra $F$ and dialgebra homomorphisms: $ s,t \colon D \to F$ and $e \colon F \to D$ such that: - $ se={\mathrm{id}}_F$ and $te={\mathrm{id}}_F$, - $x\sol y=0=y \sol x$, $x \sag y=0=y \sag x $, for all $x \in \ker s$, $y \in \ker t$. The categories $\mathbf{XMod}$ and $\mathbf{Cat}^\mathbf{1} $ are equivalent. Let $(C_{1},C_{0},\partial )$ be a crossed module in $\mathbb{C}$. Consider the corresponding semi-direct product $C_{1}\rtimes C_{0}$ induced from the action of $C_{0}$ on $C_{1}$. By using the morphisms $s,t \colon C_{1}\rtimes C_{0}\to C_{0}$ and $e \colon C_0 \to C_{1}\rtimes C_{0}$ defined by $s(c_{1},c_{0})=c_{0}$, $t(c_{1},c_{0})=\d (c_{1})+c_{0}$ and $e(c_0)=(0,c_0)$, we obtain a cat$^1$-object. This yields the functor $\mathbf{C^1} \colon \mathbf{XMod} \to \mathbf{Cat}^{\mathbf{1}}$. See [@ESA] for the converse.

**Limits in MCI**
=================

The cartesian product $P \times R$ is the product object of $P$ and $R$ in $\C$, with the projection morphisms satisfying the universal property. Suppose that $\alpha :P\rightarrow S$ and $\beta :R\rightarrow S$ are two morphisms in $\C$. Then the subobject of the cartesian product: $$\begin{aligned} P\times _{S}R=\left\{ \left( p,r\right) \mid \alpha \left( p\right) =\beta \left( r\right) \right\},\end{aligned}$$ the [*fiber product*]{}, defines the pullback of $\alpha, \beta$.
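For finite examples the fiber product can be computed by direct enumeration. The following sketch is a toy instance of ours (not taken from the paper) with cyclic groups: it builds $P\times_S R$ for two homomorphisms $\alpha\colon \mathbb{Z}_4\to\mathbb{Z}_2$ and $\beta\colon \mathbb{Z}_6\to\mathbb{Z}_2$, both reduction modulo 2, and checks that the pullback square commutes and that the result is closed under the group operation.

```python
from itertools import product

# Toy instance: alpha: Z4 -> Z2 and beta: Z6 -> Z2, both reduction mod 2.
P, R = range(4), range(6)
alpha = lambda p: p % 2
beta = lambda r: r % 2

fiber = [(p, r) for p, r in product(P, R) if alpha(p) == beta(r)]

# The two projections of the pullback square.
pi1 = lambda pr: pr[0]
pi2 = lambda pr: pr[1]

# Sanity checks: the square commutes and the fiber product is closed under +.
assert all(alpha(pi1(x)) == beta(pi2(x)) for x in fiber)
add = lambda x, y: ((x[0] + y[0]) % 4, (x[1] + y[1]) % 6)   # componentwise, mod 4 and mod 6
assert all(add(x, y) in fiber for x in fiber for y in fiber)
print(f"|P x_S R| = {len(fiber)}")   # 12 of the 24 pairs survive
```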
Therefore a modified category of interest $\C$ has products and pullbacks which guarantees the existence of equalizer objects. Briefly, suppose that we have two parallel morphisms $f,g \colon P \to R $. Their equalizer is defined as $\mathrm{Eq}(f,g)=\{ x \in P \mid f(x)=g(x) \}$. Consequently, we can say that $\C$ has all finite limits since it has both products and equalizers. Thus $\C$ is finitely complete. Limits in Category of Crossed Modules in MCI -------------------------------------------- The category of crossed modules in $\C$ with fixed codomain $X$ forms a full subcategory of $\mathbf{XMod}$ that is denoted by $\mathbf{XMod/X}$. These kind of crossed modules will be called [*crossed $X$-modules*]{}. \[main1\] Given two crossed modules $(P,S, \alpha)$ and $(R,S,\beta)$ there is a crossed module: $$\begin{aligned} \partial :P\times _{S}R\rightarrow S \, , \end{aligned}$$ where $\partial \left( p,r\right) =\alpha \left( p\right) =\beta \left( r\right) $ and the action of $S$ on $P\times _{S}R$ is defined by: $$s \cdot \left( p,r\right) =\left( s \cdot p, s \cdot r \right) , \quad \quad \quad s \ast \left( p,r\right) =\left(s \ast p,s\ast r\right) .$$ The action given above is well-defined and the action conditions are already satisfied. Moreover $\partial :P\times _{S}R\rightarrow S$ is a morphism of $\C$ since: $$\begin{aligned} \partial \left( \left( p,r\right) + \left( p^{\prime },r^{\prime }\right) \right) & = \partial \left( p + p^{\prime },r + r^{\prime }\right) \\ & = \alpha \left( p + p^{\prime }\right) \\ & = \alpha \left( p\right) + \alpha \left( p^{\prime }\right) \\ & = \partial \left( p,r\right) + \partial \left( p^{\prime },r^{\prime }\right) . \end{aligned}$$ Similarly we have: $$\begin{aligned} \partial \left( \left( p,r\right) \ast \left( p^{\prime },r^{\prime }\right) \right) = \partial \left( p,r\right) \ast \partial \left( p^{\prime},r^{\prime }\right) , \end{aligned}$$ for all $\left( p,r\right) , (p',r') \in P\times _{S}R$. Also $\d$ commutes with all $w \in \Omega_{1}^{\prime}$ since: $$\begin{aligned} \d \big( w(p,r) \big) & = \d \big( w(p) , w(r) \big) = \alpha \big( w(p) \big) = w \big( \alpha (p) \big) = w \big( \d (p,r) \big). 
\end{aligned}$$ Finally, $\partial$ satisfies the crossed module conditions: - $$\begin{aligned} \partial \left( s \cdot \left( p,r\right) \right) & = \partial \left(s \cdot p,s \cdot r\right) = \alpha ( s \cdot p) = s + \alpha \left( p\right) - s = s + \partial \left( p,r\right) - s , \\ \partial \left(s \ast \left( p,r\right) \right) & = \partial \left( s \ast p, s \ast r \right) = \alpha \left(s \ast p\right) = s \ast \alpha \left( p\right) = s \ast \partial \left( p,r\right) , \end{aligned}$$ - $$\begin{aligned} \partial \left( p^{\prime },r^{\prime }\right) \cdot \left( p,r\right) & = \alpha \left( p^{\prime }\right) \cdot \left( p,r\right) \\ & = \left(\alpha \left( p^{\prime }\right) \cdot p , \alpha \left( p^{\prime }\right) \cdot r \right) \\ & = \left(\alpha \left( p^{\prime }\right) \cdot p , \beta \left( r^{\prime }\right) \cdot r \right) \\ & = \left( p' + p - p', r' + r - r' \right) \\ & = \left( p^{\prime },r^{\prime }\right) + \left( p,r\right) - (p',r') , \end{aligned}$$ $$\begin{aligned} \partial \left( p^{\prime },r^{\prime }\right) \ast \left( p,r\right) & = \alpha \left( p^{\prime }\right) \ast \left( p,r\right) \\ & = \left(\alpha \left( p^{\prime }\right) \ast p , \alpha \left( p^{\prime }\right) \ast r \right) \\ & = \left(\alpha \left( p^{\prime }\right) \ast p , \beta \left( r^{\prime }\right) \ast r \right) \\ & = \left( p' \ast p,r' \ast r \right) \\ & = \left( p^{\prime },r^{\prime }\right) \ast \left( p,r\right) , \end{aligned}$$ for all $\left(p,r\right) , (p',r') \in P\times _{S}R$ and $s\in S$. \[inducedxmod\] Let $(\alpha,{\mathrm{id}}) \colon (P,X,\gamma) \to (S,X,\partial')$ be a crossed module morphism. Then there exists a crossed module $(P,S,\alpha)$ where the action of $S$ on $P$ are defined along $\d'$, namely: $$\begin{aligned} s \cdot p = \d'(s) \cdot p , \quad s \ast p = \d'(s) \ast p . \end{aligned}$$ Since $(\alpha,{\mathrm{id}})$ is a crossed module morphism, the diagram: $$\xymatrix{ P \ar[dr]^-{\alpha} \ar[dd]_{\gamma}& \\ & S \ar[dl]^-{\partial'} \\ X & }$$ commutes; namely $\alpha (x \cdot p) = x \cdot \alpha(p)$ and $\alpha (x \ast p) = x \ast \alpha(p)$, for all $x \in X$ and $p \in P$. Thus: - $$\begin{aligned} \alpha(s \cdot p) & = \alpha \big( \d'(s) \cdot p \big) = \d'(s) \cdot \alpha(p) = s + \alpha(p) - s \, , \\ \alpha(s \ast p) & = \alpha \big( \d'(s) \ast p \big) = \d'(s) \ast \alpha(p) = s \ast \alpha(p) \, , \end{aligned}$$ - $$\begin{aligned} \alpha(p) \cdot p' & = \d'(\alpha(p)) \cdot p' = \gamma(p) \cdot p' = p + p' - p \, , \\ \alpha(p) \ast p' & = \d'(\alpha(p)) \ast p' = \gamma(p) \ast p' = p \ast p' \, , \end{aligned}$$ for all $s \in S$ and $p,p' \in P$. \[comp\] If $(A,B,\d)$ and $(B,C,\d')$ are crossed modules such that $C$ acts on $A$ in a compatible way with $B$ (i.e. $(\d' b \cdot a) = b \cdot a$), then $(A,C,\d' \d)$ becomes a crossed module as well, see [@Nizar] for details. Suppose that we have crossed module morphisms: $$(\alpha ,{\mathrm{id}}) \colon (P,X,\gamma) \to (S,X,\d') \, \text{ and } \,(\beta ,{\mathrm{id}}) \colon (R,X,\delta) \to (S,X,\d').$$ Then there exists a crossed module: $$\begin{aligned} P\times _{S}R \to X \, , \end{aligned}$$ which leads to the pullback object in $\mathbf{XMod/X}$. By using crossed module morphisms $(\alpha ,{\mathrm{id}})$ and $(\beta ,{\mathrm{id}})$, we get the following morphisms of $\C$: $$\begin{aligned} \alpha \colon P \to S \, \text{ and } \, \beta \colon R \to S. 
\end{aligned}$$ We already know that the pullback of these morphisms in $\C$ are defined by the fiber product $P\times _{S}R$ that makes the following diagram commutative and satisfies the universal property: $$\xymatrix{ & P\times _{S}R \ar[dl]_-{\pi _{1}} \ar[dr]^-{\pi _{2}} & \\ P \ar[dr]_-{\alpha} & & R \ar[dl]^-{\beta} \\ &S& }$$ By using Lemma \[inducedxmod\], $\alpha$ and $\beta$ turn into crossed modules, thus we get a crossed module $\d \colon P\times _{S}R \to S$ in the sense of Lemma \[main1\]. Moreover, $\d' \colon S \to X$ is already a crossed module and $X$ acts on $P\times _{S}R $ in a natural way. Therefore by using Remark \[comp\], we get the crossed module: $$\begin{aligned} \d' \d \colon P\times _{S}R \to X , \end{aligned}$$ which leads to the pullback object in the category of crossed $X$-modules. All fitting into the diagram: $$\xymatrix{ & P\times _{S}R \ar[dl]_-{\pi _{1}} \ar[dr]^-{\pi _{2}} \ar[dd]^{\d} & \\ P \ar[dr]_-{\alpha} \ar@/_1.25pc/[ddr]_{\gamma} & & R \ar[dl]^-{\beta} \ar@/^1.25pc/[ddl]^{\delta} \\ &S \ar[d]^{\d'} & \\ & X &}$$ The category of crossed ${X}$-modules has an initial object $0 \to X$ and a terminal object ${\mathrm{id}}\colon X \to X$. Consequently, one can construct the product object as a pullback of the morphisms: $$\xymatrix{ \mathcal{X} \ar[dr] & & \mathcal{X'} \ar[dl] \\ & 1 &}$$ where $\mathcal{X,X'}$ are two crossed $X$-modules and $1$ is the terminal object. This yields the following: Given two crossed modules $\alpha \colon P\rightarrow S$ and $\beta \colon R\rightarrow S$ in a modified category of interest $\C$, their product is the crossed module $\partial :P\times _{S} R\rightarrow S$. Thus, we have proved the following theorem: The category $\mathbf{XMod/X}$ is finitely complete. As a consequence of this section, one can obtain the completeness of the categories of crossed X-modules of groups, (commutative) algebras, Lie and Leibniz algebras, dialgebras, etc. **Pullback Crossed Modules** ============================ \[def1\] For a given crossed module $(P,R,\partial)$ and a morphism $\phi \colon S \to R$ in $\mathbb{C}$, the pullback crossed module is defined as a crossed module morphism: $$\begin{aligned} (\phi',\phi) \colon \phi^{\star}(P,R,\d) \to (P,R,\d) \, , \end{aligned}$$ where the crossed module: $$\begin{aligned} \phi ^{\star }(P,R,\partial )=\left( \phi ^{\star }(P),S,\partial ^{\star }\right) \end{aligned}$$ satisfies the following universal property. 
For any crossed module morphism: $$\left( f,\phi \right) \colon \left( X,S,\mu \right) \rightarrow (P,R,\partial ) \, ,$$ there exists a unique crossed module morphism: $$\left( f^{\star },{\mathrm{id}}_{S}\right) :\left( X,S,\mu \right) \rightarrow \left( \phi ^{\star }(P),S,\partial ^{\star }\right)$$ such that the following diagram commutes: $$\begin{aligned} \label{cat1} \xymatrix@R=40pt@C=40pt{ & & & (X,S,\mu ) \ar[d]^{(f,\phi)} \ar@{-->}[dlll]_{(f^{\star },{\mathrm{id}}_{S})} \\ \ \left( \phi ^{\star }(P),S,\partial^{\star }\right) \ar[rrr]_{(\phi',\phi)} & & & (P,R,\partial) } \end{aligned}$$ In other words, it can be seen as a pullback [@Adamek] diagram: $$\begin{aligned} \label{cat2} \xymatrix@R=20pt@C=20pt{ X \ar[dd]_{\mu} \ar[rr]^{f} \ar@{.>}[dr]|-{f^{\star}} & & P \ar[dd]^{\partial} \\ &\phi^{\star}(P)\ar[ur]_{\phi ^{\prime }}\ar[dl]^{\partial^{\star}}& \\ S \ar[rr]_{\phi} & & R }\end{aligned}$$ In order to give a particular construction for the pullback crossed module, let $(P, R, \d)$ be a crossed module and $\phi :S\rightarrow R$ be a morphism in $\mathbb{C}$. Define: $$\phi ^{\star }(P)=P\times _{R}S=\left\{ \left( p,s\right) \mid \partial \left( p\right) =\phi \left( s\right) \right\} ,$$and define the morphism $\partial ^{\star }:\phi ^{\star }(P)\rightarrow S$ by: $$\begin{aligned} \partial ^{\star }\left( p,s\right) =s.\end{aligned}$$ There exists an action of $S$ on $\phi ^{\star }(P)$ defined by: $$\begin{array}{ccl} S \times \phi ^{\star }(P) & \rightarrow & \phi ^{\star }(P) \\ \left(t, \left( p,s\right)\right) & \mapsto &t \cdot \left( p,s\right) =\left( \phi \left( t \right) \cdot p , t + s - t \right) , \end{array}$$ and: $$\begin{array}{ccl} S \times \phi ^{\star }(P) & \rightarrow & \phi ^{\star }(P) \\ \left(t, \left( p,s\right)\right) & \mapsto &t \ast \left( p,s\right) =\left( \phi \left( t \right) \ast p , t \ast s \right) .\end{array}$$ Then $(\phi^{\star }(P) , S , \partial ^{\star })$ defines a crossed module since: - $$\begin{aligned} \partial ^{\star }\left(t \cdot \left( p,s\right) \right) & = \partial ^{\star }\left( \phi \left( t \right)\cdot p , t + s - t \right) = t + s - t = t + \partial ^{\star }\left( p,s\right) - t \, , \\ \partial ^{\star }\left(t \star \left( p,s\right) \right) & = \partial ^{\ast }\left( \phi \left( t \right)\ast p , t \ast s \right) = t \ast s = t \ast \partial ^{\star }\left( p,s\right) \, , \end{aligned}$$ - $$\begin{aligned} \partial ^{\star }\left( p^{\prime },s^{\prime }\right) \cdot \left( p,s\right) & = s' \cdot \left( p,s\right) \\ & = \left( \phi \left( s^{\prime }\right) \cdot p ,s' + s - s' \right) \\ & = \left( \partial \left( p^{\prime }\right)\cdot p ,s' + s - s' \right) \\ & = \left( p' + p - p', s' + s - s' \right) \\ & = \left( p',s'\right) + \left( p, s\right) - (p',s') \, , \end{aligned}$$ $$\begin{aligned} \partial ^{\star }\left( p^{\prime },s^{\prime }\right) \ast \left( p,s\right) & = s' \ast \left( p,s\right) \\ & = \left( \phi \left( s^{\prime }\right) \ast p ,s'\ast s \right) \\ & = \left( \partial \left( p^{\prime }\right)\ast p ,s' \ast s\right) \\ & = \left( p'\ast p, s'\ast s \right) \\ & = \left( p',s'\right) \ast \left( p, s\right) , \end{aligned}$$ for all $(p,s) \, (p',s') \in \phi^{\star} (P)$ and $t \in S$. This construction satisfies the universal property. 
Consider the crossed module morphism: $$\left( \phi ^{\prime },\phi \right) :\left( \phi ^{\star }(P),S,\partial ^{\star }\right) \rightarrow (P,R,\partial ) \, ,$$where $\phi ^{\prime }:\phi ^{\star }(P)\rightarrow P$ is defined by $\phi ^{\prime }\left( p,s\right) =p.$ Suppose that $\left( X,S,\mu \right) $ is a crossed module and the tuple: $$\begin{aligned} \label{com2} \left( f,\phi \right) :\left( X,S,\mu \right) \rightarrow (P,R,\partial )\end{aligned}$$ is a crossed module morphism. Define: $f^{\star }:X\rightarrow \phi ^{\star }(P)$ by $f^{\star }(x)=\left( f\left( x\right) ,\mu \left( x\right) \right).$ Then: $$\left( f^{\star },{\mathrm{id}}_{S}\right) :\left( X,S,\mu \right) \rightarrow \left( \phi ^{\star }(P),S,\partial ^{\star }\right)$$ becomes a crossed module morphism. In fact the diagram: $$\xymatrix @R=20pt@C=20pt{ X\ar[r]^{\mu} \ar[d]_{f^{\star }} & S \ar[d]^{{\mathrm{id}}_{S}} \\ \ \phi ^{\star }(P) \ar[r]_-{\partial^{\star }} & S }$$ is commutative since: $$\begin{aligned} \label{com1} \begin{split} \partial ^{\star }f^{\star }(x) = \partial ^{\star }\left( f\left( x\right) ,\mu \left( x\right) \right) = \mu \left( x\right) = {\mathrm{id}}_{S} \mu \left( x\right) , \end{split} \end{aligned}$$ and also: $$\begin{aligned} f^{\star }\left( s\cdot x\right) & = \left( f\left( s\cdot x \right) ,\mu \left( s\cdot x\right) \right) \\ & = \left(\phi \left( s\right) \cdot f\left( x\right) ,s \cdot \mu \left( x\right) \right) \\ & = s \cdot \left( f\left( x\right) ,\mu \left( x\right) \right) \\ & = {\mathrm{id}}_{S}\left( s\right) \cdot f^{\star }(x) ,\end{aligned}$$ $$\begin{aligned} f^{\star }\left( s\ast x\right) & = \left( f\left( s\ast x \right) ,\mu \left( s\ast x\right) \right) \\ & = \left(\phi \left( s\right) \ast f\left( x\right) ,s \ast \mu \left( x\right) \right) \\ & = s \ast \left( f\left( x\right) ,\mu \left( x\right) \right) \\ & = {\mathrm{id}}_{S}\left( s\right) \ast f^{\star }(x) ,\end{aligned}$$ for all $s\in S$ and $x\in X$. Moreover we have: $$\begin{aligned} \label{com3} \begin{split} \phi ^{\prime }f^{\star }(x) & = \phi ^{\prime }\left( f\left( x\right) ,\mu \left( x\right) \right) = f\left( x\right) \end{split}\end{aligned}$$ that makes diagram commutative. In other words, pullback diagram commutes since: - $\partial ^{\star }f^{\star }(x) = \mu \left( x\right)$ from , - $\phi \mu = \partial f$ since is a crossed module morphism, - $\phi' f^{\star} = f$ from . Finally, we need to prove that $(f^{\star},{\mathrm{id}})$ is unique in . Suppose that: $$\begin{aligned} \left( f^{\star\star },{\mathrm{id}}_{S}\right) :\left( X,S,\mu \right) \rightarrow \left( \phi ^{\star }(P),S,\partial ^{\star }\right) \end{aligned}$$ is a crossed module morphism with the same property as $(f^{\star},{\mathrm{id}})$. We get: $$\begin{aligned} \partial ^{\star }f^{\star\star }(x) =f(x) , \quad \quad \partial ^{\star }f^{\star\star }(x)=\mu(x) ,\end{aligned}$$ for all $x \in X$ which implies: $$\begin{aligned} f^{\star\star}(x)=(p,s)=(f(x),\mu(x))=f^{\star}(x),\end{aligned}$$ and proves that $(f^{\star},{\mathrm{id}})$ is unique. Therefore we have the following: We get a functor in $\C$: $$\begin{aligned} \phi^{\star} \colon \mathbf{XMod/{R} \to XMod/{S} } . \end{aligned}$$ Moreover, let $(P,R,\partial )$ be a crossed module and $\phi \colon S \to R $ be a morphism in $ \C $. 
We have the pullback diagram: $$\xymatrix @R=40pt@C=40pt{ \phi ^{\star }(P) \ar[r]^{\phi ^{\prime }} \ar[d]_{\partial^{\star }} & P \ar[d]^{\partial} \\ \ S \ar[r]_\phi & R }$$ Given an object $R$ and a normal subobject $N$ of $R$, then $(N,R,\d)$ is a crossed module where $\d$ is the inclusion map. Suppose that $\phi :S\rightarrow R$ is a morphism. Then the pullback crossed module is defined by: $$\begin{aligned} \phi ^{\star }\left( N\right) & = \left\{ \left( n,s\right) \mid \partial \left( n\right) =\phi \left( s\right) \text{, }n\in N\text{, }s\in S\right\} \\ & \cong \left\{ s\in S\mid \phi \left( s\right) =n,\text{ }n\in N\right\} \\ & = \phi ^{-1}\left( N\right) , \end{aligned}$$ and the pullback diagram is: $$\xymatrix @R=40pt@C=40pt{ \phi ^{-1}\left( N\right) \ar[r]^-{\phi ^{\prime }} \ar[d]_{\partial^{\star }} & N \ar[d]^{\partial} & \\ \ S \ar[r]_\phi & R & }$$ where the preimage $\phi ^{-1}\left( N\right) $ is a normal subobject of $S$. In particular, if $N=\left\{ 0 \right\} $, then: $$\phi ^{\star }\left( \left\{ 0\right\} \right) \cong \left\{ s\in S\mid \phi \left( s\right) =0\right\} =\ker \phi .$$ So kernels are particular cases of pullback crossed modules. **Pullback Cat$^1$-Objects** ============================ The definition of pullback object along a morphism is similar to that for crossed modules given in Definition \[def1\]. For a given object $(R,S)$ and a morphism $\phi \colon Q \to S$ in $\mathbb{C}$, we require a object $\phi ^{\star }(R,S)=\left( \phi ^{\star}(R),Q\right)$ to fill the pullback diagrams: $$\begin{aligned} \label{cat3} \xymatrix@R=40pt@C=40pt{ & & & (P,Q ) \ar[d]^{(\varphi,\phi)} \ar@{-->}[dlll]_{(\psi,{\mathrm{id}}_{S})} \\ \ \left( \phi ^{\star}(R),Q\right) \ar[rrr]_{(\pi,\phi)} & & & (R,S) } \end{aligned}$$ and $$\begin{aligned} \label{cat4} \xymatrix{ P \ddto\ddto<.5ex>^{s^{\prime }}_{t^{\prime }} \ar[rr]^{\varphi} \ar@{.>}[dr]|-{\psi} & & R \ddto\ddto<.5ex>^{s}_{t} \\ &\phi^{\star}(R)\ar[ur]_{\pi}\dlto\dlto<.5ex>^{s^{\star}}_{t^{\star}} & \\ Q \ar[rr]_{\phi} & &S } \end{aligned}$$ Note that we do not include the embedding morphisms in the above diagrams for the sake of simplicity. In order to give a particular construction for the pullback object, let $\left( e;s,t:R\rightarrow S\right) $ be a object and $\phi :Q\rightarrow S$ be a morphism. Define: $$\phi ^{\star }\left( e;s,t:R\rightarrow S\right) = \left( e^{\star };s^{\star },t^{\star }:\phi ^{\star }(R)\rightarrow Q\right) ,$$ where: $$\phi ^{\star }(R)=\left\{ \left( q_{1},r,q_{2}\right) \in Q\times R\times Q\mid \phi \left( q_{1}\right) =s\left( r\right) ,\text{ }\phi \left( q_{2}\right) =t\left( r\right) \right\}$$ is a subobject of $Q \times R \times Q$. Define the morphisms: $$\begin{aligned} s^{\star }\left( q_{1},r,q_{2}\right) =q_{1}, \quad t^{\star }\left(q_{1},r,q_{2}\right) =q_{2}, \quad e^{\star }\left( q\right) =\left( q,e\phi \left( q\right) ,q\right).\end{aligned}$$ It is easily verified that $s^{\star}e^{\star }=t^{\star }e^{\star}= {\mathrm{id}}_{Q}$. Moreover, let $\left( q_{1}^{\prime },r_{1},q_{1}\right) \in \ker s^{\star }$ and $\left( q_{2},r_{2},q_{2}^{\prime }\right) \in \ker t^{\star }.$ Then: $$s^{\star }\left( q_{1}^{\prime },r_{1},q_{1}\right) = 0_Q \quad \text{and} \quad t^{\star }\left( q_{2},r_{2},q_{2}^{\prime }\right) =0_Q,$$ which implies $q_{1}^{\prime }=q'_2 = 0_Q$, hence we get $r_1 \in \ker s$ and $r_2 \in \ker t$. 
Therefore: $$\begin{aligned} \left( q_{1}^{\prime },r_{1},q_{1}\right) \ast \left( q_{2},r_{2},q_{2}^{\prime }\right) & =\left( 0_Q \ast q_{2},r_{1}\ast r_{2},q_{1}\ast 0_Q \right) \\ & =\left( 0_Q , r_{1} \ast r_2 , 0_Q \right) \\ & = ( 0_Q , 0_R , 0_Q) ,\end{aligned}$$ and: $$\begin{aligned} \left( q_{1}^{\prime },r_{1},q_{1}\right) + \left( q_{2},r_{2},q_{2}^{\prime }\right) & =\left( 0_Q + q_{2},r_{1} + r_{2},q_{1} + 0_Q \right) \\ & =\left( q_2 , r_{2} + r_1 , q_1 \right) \\ & = \left( q_{2},r_{2},q_{2}^{\prime }\right) + \left( q_{1}^{\prime },r_{1},q_{1}\right) ,\end{aligned}$$ which implies: $$\begin{aligned} \left( q_{1}^{\prime },r_{1},q_{1}\right) + \left( q_{2},r_{2},q_{2}^{\prime }\right) - \left( q_{1}^{\prime },r_{1},q_{1}\right) - \left( q_{2},r_{2},q_{2}^{\prime }\right) = 0_{\phi^{\ast}(R)}.\end{aligned}$$ Consequently, we get the object structure: $$\left( e^{\star};s^{\star},t^{\star}:\phi ^{\star}(R)\rightarrow Q\right).$$ Define the morphism: $$\begin{array}{cccl} \pi : & \phi ^{\star}(R) & \rightarrow & R \\ & \left( q_{1},r,q_{2}\right) & \mapsto & \pi \left( q_{1},r,q_{2}\right) =r.\end{array}$$ Since: $$\begin{aligned} \phi s^{\star}\left( \left( q_{1},r,q_{2}\right) \right) & =\phi \left( q_{1}\right) =s\left( r\right) =s\pi \left( q_{1},r,q_{2}\right) , \\ \phi t^{\star}\left( \left( q_{1},r,q_{2}\right) \right) & =\phi \left( q_{2}\right) =t\left( r\right) =t\pi \left( q_{1},r,q_{2}\right) , \\ \pi e^{\star }\left( q\right) & =\pi \left( q,e\phi \left( q\right) ,q\right) =e\phi \left( q\right),\end{aligned}$$ for all $\left( q_{1},r,q_{2}\right) \in \phi ^{\star }(R)$, $q\in Q$, the following diagram commutes: $$\diagram \phi ^{\star }(R)\ddto \ddto<.7ex>^{s^{\star}}_{t^{\star}} \rrto^{\pi} && R\ddto<-.8ex>_{t} \ddto<-.1ex>^{s}\\ \\ Q \rrto_{\phi} \ar@/^1.25pc/[uu]^{e^{\star}} && S \ar@/_1.25pc/[uu]_{e} \enddiagram$$ Hence $(\pi,\phi)$ becomes a object morphism. Now we need to prove the universal property. Let: $$\begin{aligned} \left( \varphi ,\phi \right) \colon \left( e^{\prime };s^{\prime },t^{\prime }:P\rightarrow Q\right) \rightarrow \left( e;s,t:R\rightarrow S\right) \end{aligned}$$ be any morphism such that the following diagram commutes: $$\diagram P\ddto\ddto<.5ex>^{s^{\prime }}_{t^{\prime }} \rrto^{\varphi} && R\ddto \ddto<.5ex>^{s}_{t}\\ \\ Q \rrto_{\phi} \uutol^{{e^{\prime }}} && S \uutor_{e} \enddiagram$$ Define $\psi :P\rightarrow \phi ^{\star}(R)$ by $ \psi \left( p\right) =\left( s^{\prime }\left( p\right) ,\varphi \left( p\right) ,t^{\prime }\left( p\right) \right)$. 
Then: $$\left( \psi ,{\mathrm{id}}_{Q}\right) \colon \left( e^{\prime };s^{\prime },t^{\prime }:P\rightarrow Q\right) \rightarrow \left( e^{\star };s^{\star },t^{\star}:\phi ^{\star }(R)\rightarrow Q\right)$$becomes a object since: $$\begin{aligned} s^{\star }\psi \left( p\right) & =s^{\star }\left( s^{\prime }\left( p\right) ,\varphi \left( p\right) ,t^{\prime }\left( p\right) \right) =s^{\prime }\left( p\right) ={\mathrm{id}}_{Q}s^{\prime }\left( p\right) , \\ t^{\star }\psi \left( p\right) & =t^{\star}\left( s^{\prime }\left( p\right) ,\varphi \left( p\right) ,t^{\prime }\left( p\right) \right) =t^{\prime }\left( p\right) ={\mathrm{id}}_{Q}t^{\prime }\left( p\right) ,\end{aligned}$$and also: $$\begin{aligned} \psi e^{\prime }\left( q\right) & =\left( s^{\prime }e^{\prime }\left( q\right) ,\varphi e^{\prime }\left( q\right) ,t^{\prime }e^{\prime }\left( q\right) \right) =\left( q,e\phi \left( q\right) ,q\right) =e^{\star } {\mathrm{id}}_{Q}\left( q\right) ,\end{aligned}$$ for all $p \in P$ and $q \in Q$. Moreover we have: $$\begin{aligned} \pi \psi \left( p\right) & =\pi \left( s^{\prime }\left( p\right) ,\varphi \left( p\right) ,t^{\prime }\left( p\right) \right) =\varphi \left( p\right),\end{aligned}$$ which makes commutative and leads to the pullback diagram . The uniqueness of $\left( \pi ,\phi \right) $ can be proven analogously to the crossed module case given in the previous section. **Conclusion** ============== Consequently, we get the commutativity of the following diagram (up to isomorphism) for a fixed morphism $\phi$ of $\C$. $$\xymatrix @R=40pt@C=40pt{ \mathbf{XMod} \ar[r]^{\phi^{\star}} \ar[d]_{\mathbf{C^{1}}} & \mathbf{XMod} \ar[d]^{\mathbf{C^{1}}} \\ \mathbf{Cat^{1}} \ar[r]_{\phi^{\star}} & \mathbf{Cat^{1}} }$$ Another main outcome of the paper is the following: One can obtain pullback crossed modules and pullback objects in many well-known algebraic categories listed in Example \[example1\], such as category of groups, (commutative) algebras, dialgebras, Lie algebras and Leibniz algebras, etc. For instance, if we consider the cases of category of groups and commutative algebras, we lead to the constructions given in [@Alp1; @Alp2; @Brown]. **Acknowledgments** {#acknowledgments .unnumbered} =================== The authors are thankful to Enver Önder Uslu, Murat Alp and the anonymous referee for their invaluable comments and suggestions. [10]{} J. Ad[á]{}mek, H. Herrlich, and G.E. Strecker, , Pure and applied mathematics. Wiley, (1990). M. [Alp]{}, Pullbacks of crossed modules and cat-1 groups, , 22:273–281, (1998). M. [Alp]{}, Pullbacks of crossed modules and cat-1 commutative algebras, , 30:237–246, (2006). , Crossed extensions of algebras and Hochschild cohomology, *Homology Homotopy Appl., 4*, No. 2, (2002). Actions in MCI with application to crossed modules, *Theory Appl. Categ., 30*, No. 25, 882-908, (2015). R. [Brown]{} and C.D. [Wensley]{}. , 1, (1995). Crossed modules for Leibniz n-algebras, *Forum Mathematicum, 20*, No. 5, 841-858, (2008). , More on crossed modules of Lie, Leibniz, associative and diassociative algebras, arxiv.org/1508.01147. , Actor of an alternative algebra, arXiv:math.RA/0910.0550v1, (2009). , Left-right Noncommutative Poisson algebras, *Cent. Eur. J. Math., 12*, No. 1, 57-78, (2014). , A non-abelian two-dimensional cohomology for associative algebras, *Bull. Amer. Math. Soc., 72*, No. 6, 1044-1050, (1966). J. [Faria Martins]{}, [Crossed modules of Hopf algebras and of associative algebras and two-dimensional holonomy]{}, *[J. Geom. 
Phys.]{}*, [99]{}, 68–110, (2016). Groups with multiple operators, *Proc. London Math. Soc., 3*, 366-416, (1956). Spaces with finitely many non-trivial homotopy groups, *J. Pure Appl. Algebra, 24*, 179-202, (1982). J.-L. [Loday]{}, , , 7–66. Berlin: Springer, (2001). , Algèbres ayant deux opérations associatives (digèbres), *C. R. Acad. Sci. Paris Sér. I Math., 321*, 141-146, (1995). , Trialgebras and families of polytopes, Homotopy Theory: Relations with Algebraic Geometry, Group Cohomology, and Algebraic K-theory, *Contemp. Math. 346*, Amer. Math. Soc., Providence, RI, 369-398, (2004). Non-abelian cohomology of associative algebras, *Quart. J. Math. Oxford Ser., 19*, No. 2, 150-180, (1968). Obstruction theory in algebraic categories I and II, *J. Pure Appl. Algebra, 2*, 287-314 and 315-340, (1972). , Extensions, crossed modules and internal categories in categories of groups with operations, *Proc. Edinburgh Math. Soc., (2) 30*, No. 3, 373-381, (1987). N.M. [Shammu,]{} , (1987). E.Ö. [Uslu]{}, S. [Çetin]{} and A.F. [Arslan]{}, On crossed modules in modified categories of interest, *[Math. Commun.]{}*, [22]{}, 103–121, (2017). J.H.C. Whitehead, Combinatorial homotopy II, *Bull. Amer. Math. Soc.*, [55]{}, 453–496, (1949). [^1]: $^*$Corresponding author
{ "pile_set_name": "ArXiv" }
--- abstract: | There is an ongoing debate on personalization, adapting results to the unique user exploiting a user’s personal history, versus customization, adapting results to a group profile sharing one or more characteristics with the user at hand. Personal profiles are often sparse, due to cold start problems and the fact that users typically search for new items or information, necessitating a back-off to customization, but group profiles often suffer from accidental features brought in by the unique individual contributing to the group. In this paper we propose a generalized group profiling approach that teases apart the exact contribution of the individual user level and the ‘abstract’ group level by extracting a latent model that captures all, and only, the essential features of the whole group. Our main findings are the following. First, we propose an efficient way of group profiling which implicitly eliminates the general and specific features from users’ models in a group and takes out the abstract model representing the whole group. Second, we employ the resulting models in the task of contextual suggestion. We analyse different grouping criteria and we find that group-based suggestions improve the customization. Third, we see that the granularity of groups affects the quality of group profiling. We observe that the grouping approach should strike a balance between the level of customization and group size. author: - | Mostafa Dehghani$^1$ $\qquad$ Hosein Azarbonyad$^2$ $\qquad$ Jaap Kamps$^1$ $\qquad$ Maarten Marx$^2$\ \ \ {dehghani,h.azarbonyad,kamps,maartenmarx}@uva.nl bibliography: - 'ref.bib' title: Generalized Group Profiling for Content Customization --- [**Keywords:**]{} Group Profiling, Contextual Suggestion, Content Customization. Introduction ============ Context is pervasive on the modern web, due to cloud-based and mobile applications, making every information access interaction part of an eternal user session. Effective ways to leverage this context are key to further enhancing the user experience, both in terms of better quality of results and in terms of easier ways to articulate complex information needs. This requires both effective ways of personalization to an individual user as well as customization to a profile based on groups of users. For group level analysis, there is a need for extracting a group profile that captures the essence of the group, separate from the sum of the profiles of its individual members. This profile should be “specific” enough to distinguish the preferences of the group from other groups, and at the same time, “general” enough to capture all shared tastes, expectations, and similarities of its members. Group profiling can help understand both explicit groups, like Facebook groups, and implicit groups, like groups extracted by community detection algorithms. There is a wide range of applications for group profiling, like understanding social structures [@Tang:2011], network visualization, recommender systems [@Hu:2014; @Shang:2014; @Amer-Yahia], and direct marketing [@Custers:2003]. One of the important applications of group profiling is in the content customization problem. Content customization is the process of tailoring content to individual users’ characteristics or preferences. However, using individual preferences for content customization is not always possible.
For example sometimes there is a new user in the system with no historical interactions and no rich information about the preferences, or sometimes the user is not able to determine his/her preferences explicitly. In these situations, group based content customization would be beneficial to suggest content to the user based on the preferences of the groups that the user belongs to. The main aim of this paper is *[to develop a language model for a group of users based on the group preferences, which contains all, and only, essential shared commonalities of the group members, and to employ such a group profile to customize content suggestion for individual users]{}*. We break this down into three concrete research questions: 1. *[How to estimate models for group profiles capturing exactly the shared commonalities of their members?]{}* 2. *[How effective are group profiles to customize content suggestion for individual users?]{}* 3. *[How does the user’s group granularity affect the quality of the group’s profile?]{}* There is various research done on the task of group profiling which, given the individual attributes and preferences, aims to find out group-level shared preferences [@Senot:2011; @Masthoff:2011]. @Tang:2011 presented three different methods for group profiling: *Aggregation*, which tries to find features that are shared by the whole group; *Differentiation*, which tries to extract features that can help to differentiate one group from others; and *Egocentric differentiation*, which tries to extract features that can help to differentiate members of one group from the neighbour members. In recent work, @Hu:2014 proposed a deep-architecture model to learn a high level representation of group preferences. For group recommendation there is research on building a model of a group by forming a linear combination of the individual models [@Jameson:2007]. Some of them construct the group’s preference model on the basis of individual preference models, using a notion of distance between preference models [@Yu:2006]. Some approaches try to divide the group into several categories of homogeneous users and specify the preference model for each subgroup. Then they create the group model as a weighted average of the subgroup models, with the weights reflecting the importance of the subgroups [@Ardissono:2003]. In this paper, we assume that the model of a user’s preferences is a mixture of general, specific and his/her group preferences, and we try to extract the latent group model. We utilize the extracted profile in the task of contextual suggestion to improve content customization based on a user’s group memberships. The rest of the paper is structured as follows. First, in Section \[sec:gp\], we explain our approach for group profiling in detail. Section \[sec:exp\] presents the results of our experiments on the task of group-based contextual suggestion as well as some analysis on the effect of group granularity on the quality of group profiling. Finally, Section \[sec:con\] concludes the paper and discusses possible extensions and future work. Group Profiling {#sec:gp} =============== In this section, we investigate our first research question: “[How to estimate models for group profiles capturing exactly the shared commonalities of their members?]{}”  Group profiling refers to the task of extracting a descriptive model for a group of users which addresses the particular aspects that bind group members together. 
The goal of the proposed method for group profiling is to extract the latent common language model of the group which represents the shared group preferences. We assume that there are three different models from which the group members sample to express their preferences: *group* model, *general* model, and *specific* model. The group model is supposed to represent the shared preferences of the group. The general model represents the general terms that anybody may use (very common observed terms), and specific model represents the terms that are attached to an individual user’s preferences but not others (partially observed terms). Each model is a distribution over terms. We consider collection language model $\theta_c$ as the general model: $$p(t|\theta_c) = \frac{c(t,C)}{\sum_{t' \in V} c(t',C)}$$ This way, terms that are well explained in the collection model get high probability and are considered as general terms. To estimate the probability of terms given the specific model, $\theta_s$, we use the Equation \[eq:Spec\] and normalize all the probabilities to form a distribution: $$P(t|\theta_s) = \sum_{u_i\in G} \bigg(P(t|\theta_{u_i}) \prod_{\substack{u_j\in G \\ j \neq i}} (1-P(t|\theta_{u_j}))\bigg) \cdot idf_G(t) \label{eq:Spec}$$ where ${u_i}$ is the user $i$ in group $G$, and $\theta_{u_i}$ is the language model representing the user’s preferences. Here, $idf_g(t)$ represents inverse document frequency of term $t$ in group $G$. Equation \[eq:Spec\] calculates probability of term $t$ to be an specific term. To this end, it simply considers the probability of a term to be important in one of the user models but not others, marginalized over all user models as well as considering group document frequency. In this way, terms that are well explained in only one user model but not others get high probability and are considered as specific terms. Based on the generative model, each term in a user model is generated by sampling from a mixture of these three models—group, general (or collection), specific—independently. Thus, the probability of generating term $t$ in user model $u$ would be: $$p(t|u) = \lambda_{u,g} p(t|\theta_g) + \lambda_{u,c} p(t|\theta_c) + \lambda_{u,s} p(t|\theta_s),$$ where $\lambda_{u,x}$ stands for $p(\theta_x|u)$ which is the probability of choosing model $\theta_x$ given the user $u$. The goal is to fit the log-likelihood model of generating all terms in the user models to discover the exact term distribution of the group model, $\theta_g$. Let $G = \{u_1, \ldots u_n\}$ be a group of users. The log-likelihood of the group would be: $$\log p(G|\Lambda) = \sum_{u \in G}\sum_{t \in V} c(t,u) \log \big(\sum_{x\in\{g,c,s\}}\lambda_{u,x} p(t|\theta_x)\big),$$ where $c(t,u)$ is the frequency of term $t$ in user model $u$ and $\Lambda$ determines the set of all parameters: $$\Lambda = \{\lambda_{u,c}, \lambda_{u,s}, \lambda_{u,g} \}_{u \in G} \cup \{\theta_g\}$$ As we have mentioned, we estimate $\theta_c$ and $\theta_s$ based on the collection language model as well as the patterns of terms occurrences in the documents and we make them fixed in the model. Finally, to fit our model, we estimate the parameters using the maximum likelihood (ML) estimator. So we solve the following problem: $$\Lambda^* = {\arg\!\max}_\Lambda p(G|\Lambda)$$ Assuming $X_{u,t}\in \{g,c,s\}$ as a hidden variable indicating which model has been used to generate term $t$ in user model $u$, we can compute the parameters efficiently using Expectation-Maximization (EM) algorithm. 
The stages of the EM algorithm are as follows: E-Step: $$p(X_{u,t} = x) = \frac{\lambda_{u,x} p(t|\theta_x)}{\sum_{x' \in \{g,c,s\}}\lambda_{u,x'} p(t|\theta_{x'})}$$ M-Step: $$p(t|\theta_g) = \frac{\sum_{u \in G}c(t,u) p(X_{u,t} = g)}{\sum_{t' \in V}\sum_{u \in G}c(t',u) p(X_{u,t'} = g)}$$ $$\lambda_{u,x} = \frac{\sum_{t \in V}c(t,u) p(X_{u,t} = x)}{\sum_{x' \in \{g,c,s\}}\sum_{t \in V}c(t,u) p(X_{u,t} = x')}$$ After convergence of the EM algorithm, all the parameters are estimated, including the group model, $\theta_g$, which is a distribution over terms representing the shared group preferences, as well as $\lambda_{u,g}$, $\lambda_{u,c}$, and $\lambda_{u,s}$ for each user $u$, which determine the contribution of each model to each user’s preferences. In this section, we proposed a generative user model as a mixture of group, general (or collection), and specific models, and showed that the latent distribution of the group model over terms can be extracted as the group’s profile. Experiments {#sec:exp} =========== In this section, we present our experiments to evaluate the effectiveness of the estimated language model of groups in the task of contextual suggestion. Furthermore, we analyse the effect of group granularity on group profiling. We first explain the data collection used in our experiments and then present the evaluation results. Data Collection {#sec:dataset} --------------- In this research, we have made use of the TREC 2015 contextual suggestion[^1] Batch task dataset. Contextual suggestion is the task of searching for complex information needs that are highly dependent on both context and user interests. The dataset contains the information from 207 users, including their age, gender, and a set of rated places or activities as the user preferences (rates are in the range of -1 to 4). The task is to generate a list of ranked suggestions from a set of candidate attractions, given the user information as well as some information about the context, including location of trip, trip season, trip type, trip duration, and the type of group the person is travelling with. For each user, we consider rated suggestions that are annotated with rates of more than 2 as relevant. Furthermore, we generate the user language models as a mixture of their relevant preferences considering the rates. Based on the information in the dataset, we divide users into several groups. Groupings are based on the users’ information and context information. Table \[tbl:stat\] presents the grouping criteria, the groups, and the number of users in each group. Group Profiling for Contextual Suggestion {#sec:groupprofiling} ----------------------------------------- In this section, we investigate our second research question: “[How effective are group profiles to customize content suggestion for individual users?]{}” We generate group-based rankings of suggestions to evaluate the quality of group profiles in content customization. To this end, one of the grouping approaches given in Table \[tbl:stat\] is chosen, e.g. based on users’ age. Then we estimate the language model of each group employing the approach explained in Section \[sec:gp\]. Afterward, given the information of the request, i.e. the user information and context information, the group to which the user belongs is selected, and based on the similarity between the language model of the selected group and the language model of a candidate, the ranked list of suggestions is generated.
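A minimal sketch of the estimation procedure of Section \[sec:gp\] is given below. The toy term counts, the add-one smoothing used as a stand-in for the collection model, and the uniform initialisation are illustrative choices of this sketch, not details taken from the paper.

```python
# Sketch of the E-/M-steps above for extracting the latent group model theta_g.
import numpy as np

rng = np.random.default_rng(0)
V, U = 50, 8                                   # vocabulary size, users in the group
counts = rng.poisson(2.0, size=(U, V)).astype(float)   # toy term counts c(t, u)

# Fixed models: a smoothed proxy for the collection (general) model and the
# specific model built as in the equation for P(t|theta_s).
p_c = counts.sum(axis=0) + 1.0
p_c /= p_c.sum()
user_lm = (counts + 1e-9) / (counts + 1e-9).sum(axis=1, keepdims=True)  # P(t|theta_u)
idf = np.log((U + 1) / ((counts > 0).sum(axis=0) + 1))
p_s = np.array([user_lm[i] * np.prod(1.0 - np.delete(user_lm, i, axis=0), axis=0)
                for i in range(U)]).sum(axis=0) * idf
p_s /= p_s.sum()

p_g = np.full(V, 1.0 / V)                      # group model, to be estimated
lam = np.full((U, 3), 1.0 / 3)                 # mixing weights lambda_{u,(g,c,s)}

for _ in range(200):                           # EM iterations
    models = np.stack([p_g, p_c, p_s])                    # (3, V)
    joint = lam[:, :, None] * models[None, :, :]          # E-step: (U, 3, V)
    resp = joint / joint.sum(axis=1, keepdims=True)       # P(X_{u,t} = x)
    num_g = (counts * resp[:, 0, :]).sum(axis=0)          # M-step: group model
    p_g = num_g / num_g.sum()
    lam = (counts[:, None, :] * resp).sum(axis=2)         # M-step: mixing weights
    lam /= lam.sum(axis=1, keepdims=True)

print(np.argsort(-p_g)[:10])                   # most characteristic group terms
print(lam.round(2))                            # per-user contributions of (g, c, s)
```

In the contextual-suggestion setting just described, the estimated group model $p_g$ would then be compared with the language models of candidate attractions to produce the group-based ranking.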
Beside the group-based ranking, we generate a ranked list of suggestions based on the preferences of the user as a baseline. To do so, a language model is estimated as the mixture of the model of user preferences regarding their ratings and based on the similarity of the preferences language model and the candidate language model, a ranked list is generated. Furthermore, according to the explanation in Section \[sec:gp\], the contribution of each of *specific*, *group*, and *general* models in each user model is learned as the model parameters, i.e. $\lambda_{u,s}$, $\lambda_{u,g}$, and $\lambda_{u,c}$. Having these parameters empowers us to efficiently combine the group-based model with the preferences-based model for content customization. To this end, we smooth the preferences-based model of user with both the group model and the general model using JM-smoothing employing the learn parameters. To evaluate the quality of the combination, we have done experiments considering different grouping criteria. Figure \[fig:Chart1\] presents the performance of employing different grouping approaches for group-based suggestion as well as preferences-based suggestion. The combinations of preferences-based suggestion and group-based suggestion are also reported. As can be seen, among the group-based strategies, suggestions based on the duration of the trip is the most effective strategy. Also age of the user and the type of the group the user travels with, are rather important while type of the trip is not so important. This could be due to the fact that most of the time, the user’s interests and beloved attractions do not change based on the type of trip which could be “business” or “holiday”. On the other hand, combining the preferences-based suggestions with group-based suggestions in all grouping strategies leads to improvement. This means in case of incompleteness of user’s profile, customizing the content based on the groups that user belongs to, implicitly fills the missing information and improves the performance of suggestions. However, this depends on the quality of the groups profiles that should reflect essential common (not general, not specific) characteristics of the groups. Effect of Group Granularity {#sec:gg} --------------------------- In this section, we investigate our third research question: “[How does the user’s group granularity affect the quality of the group’s profile?]{}” In the grouping stage, sometimes users can be grouped based on different levels of granularity. For example, having the age of users, discretization can be done using binning with different sizes of bin. In this section, we analyse the effect of granularity of groups, and consecutively the size of the groups with a fixed volume of train data, on the quality of group profiling. We have selected “age” of users as the grouping criterion and tried different bin sizes for discretization: 5 years, 10 years, 20 years and 40 years. Figure \[fig:Chart2\] shows the quality of groups profiles on different levels of granularity and consequently on different sizes of groups in the task of contextual suggestion. Each point in the figure represents a group of users and its position determines its size and the performance of group-based contextual suggestion for the users within the group. Moreover the horizontal lines represent overall performance of different levels of granularity. As can be seen, since the number of sample users is limited fine-grained grouping leads to having smaller groups. 
So small number of samples affects the group profiling quality and slightly decreases the performance. While coarse-grained grouping leads to having large groups that leads to not being able to adequately customize the group profile. In our dataset, 10 years granularity for “age” has the best performance since the formed groups are big enough so that the group profiling approach is able to estimate high quality models, and they are small enough so that the group profiles are easily distinguishable which leads to a more effective customization. Conclusions {#sec:con} =========== In this paper, we dealt with the problem of group profiling. The main aim of this paper was *[to develop a language model for a group of users based on the group preferences, which contains all, and only, essential shared commonalities of the group members, and to employ such a group profile to customize content suggestion for individual users]{}* Our first research question was: *[How to estimate models for group profiles capturing exactly the shared commonalities of their members?]{}*  We proposed to consider each user preferences as a mixture of general, specific and its group preferences and estimated the latent group preferences as the shared concerns among all group members. Our second research question was: *[How effective are group profiles to customize content suggestion for individual users?]{}*  We utilized the proposed group profiling approach for the task of contextual suggestion, and our experimental results showed that considering group-based suggestions along with user preferences-based suggestions can improve the content customization. Our third research question was: *[How does the user’s group granularity affect the quality of the group’s profile?]{}*We designed an experiment to investigate how group granularity may affect the group profiling quality. We found that the grouping approach should result in groups that are big enough to enable group profiling to infer high quality models and small enough to enable the extracted model to make customization for group members. As the future work, we are going to find a way for learning how to employ group-based suggestions on the basis of different grouping criteria simultaneously. For example, how to combine suggestions based on the age with suggestions based on the gender. A further development would be to evaluate the proposed group profiling approach on other tasks and other kinds of data including non-textual data. [**Acknowledgments** ]{} This research is funded in part by the European Community’s FP7 (project meSch, grant \# 600851) and the Netherlands Organization for Scientific Research (WebART project, NWO CATCH \# 640.005.001; ExPoSe project, NWO CI \# 314.99.108; DiLiPaD project, NWO Digging into Data \# 600.006.014). [^1]: <https://sites.google.com/site/treccontext/trec-2015>
{ "pile_set_name": "ArXiv" }
--- abstract: 'The differential conductance in a suspended few-layer graphene sample is found to exhibit a series of quasi-periodic sharp dips as a function of bias at low temperature. We show that they can be understood within a simple model of dynamical Coulomb blockade where energy exchanges take place between the charge carriers transmitted through the sample and a dissipative electromagnetic environment with a resonant phonon mode strongly coupled to the electrons.' author: - 'A. Chepelianskii' - 'P.Delplace' - 'A.Shailos' - 'A.Kasumov' - 'R.Deblock' - 'M.Monteverde' - 'M.Ferrier' - 'S.Guéron' - 'H.Bouchiat' title: 'Phonon assisted dynamical Coulomb blockade in a thin suspended graphite sheet.' --- One of the great challenges of molecular electronics is to access electron-phonon coupling at the single molecule level. Mechanically tunable atomic break junctions with trapped small molecules such as ($H_2$, $D_2$, $H_2O$) have been shown to exhibit a spectroscopic signature of their characteristic phonon modes [@VanRuyten]. The signature of phonons is also spectacular in the Coulomb blockade regime for a molecular single electron transistor: the typical resonant tunneling peaks as a function of gate or source-drain voltage are surrounded by satellites, which correspond to the emission or absorption of one or several phonons. Specific vibrational modes were identified in this way in fullerenes and suspended carbon nanotubes [@park; @dekkervib; @vanderzant]. Theoretical models [@flensberg03a; @flensberg03b; @mitra04] were developed to describe these vibrational side bands in molecular transistors, involving either a quantum or classical treatment of the electron phonon coupling. In all these investigations the single level spacing within the molecule is larger than the energy of the vibration modes coupled to the molecule, so that only a single molecular level needs to be considered. In the present work we investigate the opposite limit of a mesoscopic dot where both the single level spacing and the Coulomb charging energy are smaller than the energy of the phonon mode considered. Moreover the transmission of the electrodes corresponds to an intermediate tunneling regime described by the physics of dynamical Coulomb blockade (DCB). The samples are micron-size, few-atomic-layer graphite foils suspended between two platinum electrodes. The differential conductance exhibits at low temperature around zero bias a power law increase characteristic of DCB through the contacts. The graphite foil sample itself constitutes the electromagnetic environment. More remarkably, on the thinnest sample (with 30 graphene layers) a series of periodic replicas of this Coulomb blockade anomaly was detected at multiples of 20 meV, corresponding approximately to the lowest energy out of plane optical mode in graphite (ZO')[@rubio]. These sharp dips were not observed on two control graphite samples which were likewise suspended, had similar resistance and lateral dimensions but were more than 30 times thicker. We analyse these results with a simple model, inspired by [@mitra04], of a mesoscopic island connected to electrodes via tunnel barriers. We model the island by a continuous electronic spectrum coupled to a single phonon mode, which leads to an oscillating transmission of the barriers at the contacts [@mitra04].
This model can also be solved using the so called $P(E)$ theory developed by Ingold and Nazarov [@ingold] to describe energy exchanges of a Coulomb blocked tunnel junction with a dissipative electromagnetic environment and presents a striking analogy with the behavior of a tunnel junction coupled to an electromagnetic resonator in series with an ohmic environment as investigated by [@devoret95]. We finally deduce from the field dependence of these dips an order of magnitude for the relevant electron-phonon coupling parameter in the system. The samples were prepared by exfoliation of a highly oriented pyrolytic graphite (HOPG) single crystal and deposition across a slit etched in a silicon nitride membrane separating two Pt metallic contacts. The number of graphene layers was estimated from transmission electron microscopy pictures, see fig.\[samplepanddIdVhighV\] for the thinnest sample, which contains between 25 and 35 layers. The electrical contacts were obtained just by pressing the sample onto the electrodes. Thus the sample resistance, $ R_t= 100 k\Omega$ at 4.2 K, mainly consists of the resistance at the contact, and increases as the temperature is reduced. The square resistance of the graphite layer itself is not expected to be larger than $5 k\Omega$, the maximal resistance of a single graphene sheet. The differential conductance $dI/dV$ was either measured by modulating the voltage bias and measuring the current modulation by standard lock-in detection or deduced from the differential resistance obtained by applying a small ac current of typical amplitude 1 to 10nA superimposed to a dc current bias. The dc voltage drop V through the sample was then deduced by integration of $dV/dI =f(I)$. The triangular shape of $dI/dV =f(V)$ observed at high bias (above 0.15 V, see fig.\[samplepanddIdVhighV\]) can be related to the linear dependence of the density of states $\nu(E$), characteristic of the band structure of graphene as well as of graphite at high enough energy [@bandstructurewallace]: Indeed, the electronic transmission between the graphite sample and the underlying electrodes is low, so that the voltage drop occurs mainly at the contacts. The differential conductance can then be written as: $$dI/dV \propto \Gamma_L \Gamma_R( \alpha \nu(EF +\alpha eV) + (1-\alpha) \nu (EF - (1-\alpha)eV)) \label{asym}$$ where $\Gamma_L$ and $\Gamma_R$ are the transmissions of the left and right contacts respectively. The parameter $1/2 \leq \alpha \leq 1$ characterizes the asymmetry of the contact resistances and voltage drop, $\alpha = 1/2$ corresponds to symmetrical contacts with $V/2$ voltage drop at each contact. The asymmetry observed between positive and negative bias is attributed to a combination of a slight doping of the sample together with some asymmetry in the transmission of the electrodes. We now focus on the conductance at low voltage (below 0.12 V) which exhibits a pronounced dip at zero bias. This behavior is characteristic of Coulomb blockade through a small capacitance tunnel junction in series with a dissipative electromagnetic environment which can exchange energy with the tunneling quasi particles on a scale much smaller than the charging energy. This yields the so called Dynamical Coulomb Blockade (DCB) [@ingold]. 
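Before turning to the low-bias DCB regime, a small numerical sketch of the high-bias expression (\[asym\]) may be helpful; it evaluates the differential conductance for two weakly transmitting contacts with an asymmetric voltage drop and a linear density of states. The value $\alpha = 0.75$ is the one estimated in the text, while $E_F$, $\Delta$ and the overall scale are purely illustrative parameters of this sketch.

```python
# Sketch of Eq. (asym): dI/dV for asymmetric contacts and a linear DOS.
import numpy as np

def dos(E, nu0=1.0, Delta=0.4):
    # linear density of states nu(E) = nu0 (1 + |E| / Delta), energies in volts
    return nu0 * (1.0 + np.abs(E) / Delta)

def didv(V, alpha=0.75, EF=0.02, GLGR=1.0):
    # Eq. (asym): dI/dV ~ GL*GR [ alpha*nu(EF + alpha eV) + (1-alpha)*nu(EF - (1-alpha) eV) ]
    return GLGR * (alpha * dos(EF + alpha * V)
                   + (1 - alpha) * dos(EF - (1 - alpha) * V))

V = np.linspace(-0.5, 0.5, 1001)
G = didv(V)
# A finite EF (slight doping) together with alpha != 1/2 produces the asymmetric,
# triangular dI/dV(V) discussed above.
print("bias of the conductance minimum:", V[np.argmin(G)], "V")
```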
The differential conductance data are expected to follow a scaling behavior as a function of bias and temperature: $$G(V) = dI/dV = T^z f(eV/k_BT) \label{scaling}$$ with $\lim_{x\rightarrow 0} f(x)= \mathrm{const}$ and $\lim_{x\rightarrow \infty} f(x)=x ^z$, and the exponent $z$ is expected to be $\alpha^2 R/R_Q$ in the case of two asymmetric junctions, where R is the resistance of the environment and $R_Q= h/2 e^2$ is the resistance quantum. The data shown in fig.\[scalingdynamicCB\] yields $z$ of the order of $0.25\pm 0.05 $. Such a power law dependence was also found in the two other thicker samples, with similar exponents. It is thus reasonable to assume that the dissipative ohmic environment in the present case is constituted by the few top graphene layers of the graphite samples. Note also that a similar behavior was already observed on multiwall carbon nanotubes with low conductance contacts [@bachtold]. ![ Bias dependence of the differential conductance measured on a suspended thin foil of graphite. Inset: a) transmission electron microscopy picture of the sample. b) Side view taken at high resolution from which it is possible to estimate the number of graphene layers of the order of 30. \[samplepanddIdVhighV\]](photo.eps){width="9cm"} ![Differential conductance in the vicinity of zero bias measured on a thick suspended graphite sample measured at several temperatures between 300 mK (lower curve) and 1 K (upper curve). The continuous line is a power law of exponent 0.25. The data can be rescaled according to eq.\[scaling\] with z =0.25. Full circles: temperature dependence of the zero bias resistance. \[scalingdynamicCB\]](figscaling.eps){width="9cm"} More remarkably, as shown in fig. \[peaksg\], is the bias dependence of the differential conductance measured on the thinnest sample investigated, which is 10 nm thick and contains thirty graphene layers. It exhibits a series of eight sharp dips resembling the zero bias one and nearly equally spaced by $20\pm 2$ mV. Their amplitude decreases with increasing voltage except for the broader dip at 50 mV which can be decomposed into two overlapping negative peaks centered around 40 and 60 mV as suggested by the data taken at 1 nA ac excitation. The energy scale of 20 meV does not correspond to any simple electronic energy scale in the sample, whose charging energy is in the meV range and level spacing in the $10\mu eV$ range. On the other hand the lowest energy optical phonon in graphite (the so-called ZO') has an energy of 15 meV [@rubio]. This mode, which emerges from the out of plane transverse acoustic mode of graphene [@rubio; @xray], is only present in graphite and corresponds to the two neighboring, non equivalent, planes vibrating in phase opposition along the c axis. This phonon mode has been observed experimentally in graphite by inelastic X-ray scattering [@xray] and scanning tunneling spectroscopy [@vitali] with an energy of $15 \pm 1$ meV. The observed peak positions at multiples of 20 mV instead of 15 mV can be attributed to the asymmetry of the contacts which corresponds to the parameter $\alpha \simeq 0.75$ in eq.\[asym\]. These dips are only observed on the 10 nm thick foil and were not detected on the two thicker (more than 100 nm) samples. This can be understood considering that the conversion from electric energy (depending only on the resistance of the tunneling barriers) into mechanical vibrations leads to an induced vibration amplitude inversely proportional to the number of layers in the graphite foil.
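As a quick consistency check of the assignment of the dips to the ZO' mode, the following lines evaluate the bias positions expected from the asymmetric voltage division of eq.\[asym\]: a phonon of energy $E_{ph}$ is emitted when the energy gained at the dominant contact, $\alpha e V$, reaches a multiple of $E_{ph}$, so dips are expected at $V_k = k E_{ph}/(\alpha e)$.

```python
# Expected dip positions for alpha = 0.75 and the ZO' phonon at 15 meV.
alpha = 0.75                 # contact asymmetry estimated in the text
E_ph_meV = 15.0              # ZO' energy (15 +/- 1 meV)
dips_mV = [k * E_ph_meV / alpha for k in range(1, 9)]
print(dips_mV)               # 20, 40, 60, ... mV: spacing ~20 mV, as observed
```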
The suspended character of the sample is also essential, since interaction with a substrate suppresses considerably the amplitude of induced vibrations, as already demonstrated on carbon nanotubes [@dekkervib]. Note that STM spectroscopy on bulk graphite [@vitali] also reveals inelastic contributions due to plasmons which are not detected in the present experiment. In order to explain the data more quantitatively we extended the work of Mitra et al. [@mitra04] on the phonon assisted Coulomb staircase observed in the transport through fullerene molecules. The coupling between the ZO’-phonon mode and the electrons in the graphite sample is described using a Holstein Hamiltonian [@Holstein]. In the absence of disorder this Hamiltonian $H_G$ reads: $$\begin{aligned} H_{G} = \sum_k \epsilon_k c_k^+ c_k + \lambda \; \hbar \omega \sum_k c_k^+ c_k (b^+ + b) + \hbar \omega \; b^+ b\end{aligned}$$ where $c^+_k$ and $c_k$ are the fermionic electron creation and annihilation operators in momentum space, $\epsilon_k$ the electronic energy, and $b^+$ and $b$ the creation and annihilation operators of the bosonic ZO’-phonon of frequency $\omega$. In contrast with previous work [@mitra04], the electronic energy level spacing is small compared to the phonon energy $\hbar \omega$. The parameter $\lambda $ is the dimensionless electron phonon coupling constant which we assume to be of the order of unity like in carbon nanotubes [@lambda]. The coupling to the leads is then described in a dynamical Coulomb-blockade formalism [@ingold], by a Hamiltonian of the form $H_T = \sum_{k,k'} T_{k,k'} a_k c^+_{k'} e^{-i \phi} + hc$. Here $a_k$ are the electron annihilation operators in the leads, $T_{k,k'}$ are the tunnel amplitudes and $\phi$ is a phase operator describing the electromagnetic environment of the junction. ![ (a) Differential conductance measured at 70 mK on the thin graphene foil depicted in fig.\[samplepanddIdVhighV\], for two current excitations, 1 nA (thin line) and 10 nA (bold line). Note the sharp dips nearly equally spaced by 20 mV, except the broad dip centered around 50 mV which can be decomposed into two overlapping dips centered around 40 and 60 mV as suggested by the data taken at 1 nA ac excitation. (b) Theoretical fit for different phonon temperatures with the following parameters deduced from the geometry of the sample and the differential conductance data at low bias described by DCB: $\alpha = 0.75$, $R_T = 125 {\rm k \Omega}$, $\Delta = 200 {\rm mV}$, $e^2/C = \hbar \omega / 4$. The environment is described with a resistance $R / R_Q = 1$. The only free adjustable parameter is $\lambda = 0.7$. \[peaksg\]](comptheographene.eps){width="9cm"} The Hamiltonian $H_G$ can be diagonalized with a canonical Lang-Firsov transformation: $b' = e^{-S} b e^{S}$, $H_G' = e^{-S} H_G e^{S}$ where $S = \lambda \sum_k c_k^+ c_k (b - b^+)$. In the limit when the charging energy of the sample is negligible [@mitra04] the transformed Hamiltonian reads simply $H_G = \sum_k \epsilon_k c_k^+ c_k + \hbar \omega \; b^+ b$ where we have omitted the primes for the transformed operators. In the transformed basis the transfer Hamiltonian $H_T$ is given by: $$\begin{aligned} H_T = \sum_{k,k'} T_{k,k'} a_k c^+_{k'} e^{-i \phi + \lambda (b - b^+)} + hc \label{eqeph}\end{aligned}$$ This expression is obtained by expanding the product $e^{-S} H_T e^{S}$ under the assumption that the environment phase $\phi$ commutes with the phonon operators.
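Before evaluating the full $P(E)$ function, a minimal sketch of what Eq. (\[eqeph\]) implies at zero temperature may be useful: the displaced-phonon factor $e^{\lambda(b-b^+)}$ redistributes the tunneling weight over phonon sidebands with the standard Poissonian (Franck-Condon) weights $P_k = e^{-\lambda^2}\lambda^{2k}/k!$ (the finite-temperature generalisation appears below). The value $\lambda = 0.7$ is the one used in the theoretical fit of the text; the code itself is only an illustration.

```python
# Franck-Condon weights of the phonon sidebands at T = 0 for lambda = 0.7.
import math

lam = 0.7
P_k = [math.exp(-lam ** 2) * lam ** (2 * k) / math.factorial(k) for k in range(6)]
# P_k is the probability that a tunneling electron emits k phonons of energy
# hbar*omega, i.e. the relative weight of the k-th replica of the zero-bias anomaly.
print([round(p, 3) for p in P_k])   # [0.613, 0.3, 0.074, 0.012, 0.001, 0.0]
```

The rapid decay of these weights with $k$ is consistent with the decreasing amplitude of the successive dips observed in fig.\[peaksg\].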
Equation (\[eqeph\]) shows that the coupling to phonons essentially changes the phase operator of the junction. As a result the current through the junction can be expressed with an effective $P(E)$ function describing the probability for electrons to lose an energy $E$ in a tunnel transition, as in the usual DCB theory. Since the electromagnetic environment and phonon operators commute, this function can be expressed as a convolution $$\begin{aligned} P(E) = \int d E' P_{env}(E') P_{ph}(E - E') \label{PE}\end{aligned}$$ where $P_{env}(E)$ is the probability of emitting a photon of energy $E$ in the $RC$ environment of the junction and $P_{ph}$ is the probability of emitting a phonon in the sample. The probability distribution $P_{ph}$ can be obtained by noticing that the corresponding phase operator $i \lambda (b - b^+)$ is analogous to that of an electromagnetic $LC$ circuit with resonant frequency $\frac{1}{\sqrt{LC}} = \omega$ [@ingold] (this result can also be obtained directly by tracing out the phonon degrees of freedom in eq.\[eqeph\]): $$\begin{aligned} P_{ph}(E) = e^{-\lambda^2 \coth( \frac{\beta \hbar \omega}{2} )} \sum_k \delta( E - k \hbar \omega ) e^{k \beta \hbar \omega / 2} I_k\left( \frac{\lambda^2}{ \sinh( \beta \hbar \omega / 2 ) } \right) \label{PEPH}\end{aligned}$$ where $\beta$ is the inverse of the phonon temperature $T_{ph}$. Using Eqs. (\[PE\]) and (\[PEPH\]) and standard expressions for the current as a function of $P(E)$ [@ingold] it is possible to compute the current through the sample. For example in the case of symmetric contacts ($\alpha = 1/2$) and constant density of states, $d I/ d V = \frac{1}{R_T} \int dE \left( P(e V / 2 - E) + P(-e V / 2 - E) \right)$ where $R_T $ is the tunnel resistance of each contact. Note the similarity with the DCB in a tunnel junction in series with an LC resonator [@devoret95]. The case of an energy-dependent density of states and of asymmetric contacts can readily be included by straightforward generalization of this expression according to eq.\[asym\]. For comparison with the experiments we assume that the density of states of graphite is of the form: $\nu(E) = \nu_0 ( 1 + \frac{|E|}{\Delta} )$. This formula is exact for bilayer graphene with $ \Delta \approx 400$ meV [@Bilayer] and $|E|< \Delta $. We also take the values of $R_T$, the environment resistance $R$ and the charging energy deduced from the geometry and the conductance data at low bias. The only free adjustable parameter is $\lambda = 0.7$. As shown in fig.\[peaksg\] the agreement with experimental data is only qualitative, especially at zero temperature, where theory predicts zero conductance at zero bias, which is not observed experimentally. Better agreement is found when a finite phonon temperature is introduced. Since the sample is suspended the populations of phonons are supposed to be strongly out of equilibrium. We have tentatively introduced a phonon temperature increasing linearly with bias which leads to better agreement with experimental data. However, surprisingly, the best fit is obtained by imposing a bias-independent phonon temperature which does not seem a priori very physical. We have also in our model neglected the electronic temperature which is expected to be in the K range. Moreover we have not included the expected broadening of the phonon modes (even at very low temperature) due to the strong coupling to electrons. This may explain why the best fit corresponds to a finite phonon temperature. ![Evolution of the differential conductance dips with magnetic field. 
Insets: field dependence of the amplitude and position of the dips labeled 1, 2 and 3. \[magneticfield\]](peakfield.eps){width="9cm"} In the following we discuss the magnetic field dependence of the dips in the differential conductance (fig. \[magneticfield\]). They vary both in amplitude and position between 0 and 5 T. Whereas the first two dips and the fourth one are shifted toward lower frequency with increasing magnetic field (as seen both at positive and negative bias), the third peak is shifted toward higher frequency (we consider here the negative bias data since the second and third dips at positive bias can barely be resolved as discussed above). The amplitude of the dips (see inset in fig.\[magneticfield\]) may decrease or increase with magnetic field (peaks 2 and 3), or vary in a non-monotonic way (peaks 0 and 1). The relative shifts of the dips with magnetic field, by a typical amount of 5 to 20$\%$, are of the same order of magnitude as the relative variations of their magnitudes. We attribute these effects to the field-dependent density of states of the graphite foil in the field range where Shubnikov-de Haas oscillations just start to show up. Even if these observations are not yet understood in detail they indicate a strong electron phonon coupling and justify the value of $\lambda = 0.7$, since a relationship such as $\Delta \omega(B) / \omega(B= 0) = \lambda ^2 \omega \nu(E_F) \Delta G(B) / G(B=0)$ between the typical magnetic field dependence of the phonon frequency and the magnetoconductance is expected to hold [@fuchs]. Strong electron-phonon coupling has already been reported in graphene concerning in-plane optical modes [@castroneto07]. In the present case however, strong electron phonon coupling between transverse ZO’ vibrations and strongly anisotropic transport in the thin graphite layer is not straightforward but can be understood if the electrical contacts between the two electrodes and the graphite foil take place through distinct graphene monolayers, which is highly probable. In conclusion we have shown evidence of sharp differential conductance dips in a suspended thin layer of graphite with 30 graphene layers. These dips can be interpreted within a simple model of dynamical Coulomb blockade with an environment strongly coupled to the lowest energy optical phonon mode ZO’ of graphite. The magnetic field dependence of the effect corroborates this interpretation. Acknowledgments: We acknowledge M. Kociak for the transmission electron microscopy pictures, J.N. Fuchs and M. Goerbig for fruitful discussions on the electron-phonon coupling in graphite. [99]{} A. Levy Yeyati and J.M. van Ruitenbeek, Les Houches Session LXXXI “Nanophysics: Coherence and Transport”, H. Bouchiat et al., eds. (Elsevier, Amsterdam, 2005) 495-535. O. Tal, M. Krieger, B. Leerink, and J.M. van Ruitenbeek, Phys. Rev. Lett. [**100**]{} (2008). J. Park, A.N. Pasupathy, J.I. Goldsmith, C. Chang, Y. Yaish, J.R. Petta, M. Rinkoski, J.P. Sethna, H. Abruna, P.L. McEuen, and D.C. Ralph, Nature [**417**]{}, 722 (2002). H. Park, J. Park, A.K.L. Lim, E.H. Anderson, A.P. Alivisatos, and P.L. McEuen, Nature [**407**]{}, 57 (2000). B. J. LeRoy, S. G. Lemay, J. Kong and C. Dekker, Nature [**432**]{}, 372 (2004). S. Sapmaz, P. Jarillo-Herrero, Ya.M. Blanter, C. Dekker, and H. S. J. van der Zant, Phys. Rev. Lett. [**96**]{}, 026801 (2006). Karsten Flensberg, Phys. Rev. [**B 68**]{}, 205323 (2003). Stephan Braig and Karsten Flensberg, Phys. Rev. [**B 68**]{}, 205324 (2003). A. Mitra, I. Aleiner, and A. J.
Millis, Phys. Rev. [**B 69**]{}, 245302 (2004). L. Wirtz and A. Rubio, Solid State Comm. [**131**]{}, 141 (2004). G.L. Ingold and Y.V. Nazarov, in “Single Charge Tunneling”, edited by H. Grabert and M.H. Devoret, NATO ASI, Ser. B, Vol. 294 (Plenum, New York, 1991), cond-mat/0508728. T. Holst, C. Urbina, D. Esteve and M. Devoret, Phys. Rev. Lett. [**73**]{}, 3455 (1994). P.R. Wallace, Phys. Rev. [**71**]{}, 622 (1947). A. Bachtold, M. de Jonge, K. Grove-Rasmussen, P. L. McEuen, M. Buitelaar, and C. Schönenberger, Phys. Rev. Lett. [**87**]{}, 166801 (2001). M. Mohr, J. Maultzsch, E. Dobardžic, S. Reich, I. Miloševic, M. Damnjanovic, A. Bosak, M. Krisch, and C. Thomsen, Phys. Rev. B [**76**]{}, 035439 (2007). L. Vitali et al., Phys. Rev. [**B 69**]{}, 121414 (2004). T. Holstein, Ann. Phys. (N.Y.) [**8**]{}, 325 (1959); [**8**]{}, 343 (1959). M.S. Dresselhaus, G. Dresselhaus, and P.C. Eklund, Science of Fullerenes and Carbon Nanotubes, Academic Press, San Diego (1996). T. Ando, J. Phys. Soc. Japan [**76**]{}, 104711 (2007). J.N. Fuchs (private communication). A.H. Castro Neto and Francisco Guinea, Phys. Rev. [**B**]{} 75, 045404 (2007).
{ "pile_set_name": "ArXiv" }
--- abstract: 'We propose a description of granular matter which is based on a distribution of dry friction coefficients. Using such a concept and a simple one-dimensional packing of grains we solve the silo problem. The friction coefficients at contacts are determined both by the geometry of the packing configuration and by the stress distribution in the medium. Within such an approach the Janssen coefficient $k_{J} $ is determined and its dependence on the particle-particle and boundary-particle friction coefficients is obtained. We also investigate the conditions for the appearance of a maximum in the pressure distribution with depth when an overweight is placed on top. As an outcome of our work we propose a general framework for the description of granular matter as a continuous medium which is characterized by the field of the dry friction tensor.' author: - 'K. S. Glavatskiy' - 'V. L. Kulinskii' title: '$\mu$-model for the statics of dry granular medium' --- Introduction {#introduction .unnumbered} ============ The properties of granular matter differ drastically from those of other continuous media like solids, liquids and gases. Despite its mechanical nature the problem of description of granular media is still an open problem. Some arguments put forward recently [@du95; @kadanoff/rmp/1999] even question the possibility of the description of such media based on the hydrodynamic approach. The latter is usually applied to continuous media with short-range interparticle forces. But the nonconservative nature of the friction force hampers direct application of such an approach based on local balance equations for physical quantities like momentum, energy, entropy, etc. In such a complex situation with the dynamics, the static properties of the granular medium are easier to investigate. There are several characteristic static phenomena, which any theory should explain. They are connected with the static distribution of the pressure along a silo. Namely, the deviation from the Pascal law (PL) and the nonmonotonic dependence of the apparent mass on the overweight on the top of a silo. Let us review them briefly. In contrast with conventional fluids, the “hydrostatic” pressure in granular media saturates at the finite value $p_0$ over a characteristic depth $\lambda$, and can be described as $$\label{janss} p(z) = p_0 \left(1-\exp(-z/\lambda)\right)\,,\quad \lambda = \frac{R}{2\mu k_J}$$ where $R$ is the width of the silo, $\mu$ is the static friction coefficient between the grains and the walls of the silo, and $k_J$ is the so-called Janssen coefficient. The first explanation of this behavior originated from the Janssen model [@janssen] (see also [@degennes/rmp/1999]), based on two simple assumptions: i) the linear relation between the $\sigma _{xz} $ and $\sigma_{xx} $ components of the stress tensor, which is analogous to the Coulomb-Amonton law for dry friction, and ii) the linear ratio between the other two components of the stress tensor: $\sigma _{xx} = k_{J} \sigma _{zz}$. Since then, many approaches to the statics of granular media have been proposed, either giving the grounds for the Janssen relations [@ovralezclement/pre/2003; @clement/epj/2005], or building the theory without using them [@pitman/pre/1998; @socolar/pre/1998; @coppersmith/pre/1996]. One of the simplest approaches treats the granular medium within the framework of linear isotropic elasticity.
Indeed, the Janssen coefficient $k_{J}$ might be expressed through the Poisson ratio of the medium, and numerics recovers the Janssen relations between the components of the stress tensor [@clement/epj/2005]. Detailed comparisons with experimental results even allow one to extract the relation between the Poisson ratio and the granular packing fraction [@clement/epj/2005]. However, such a picture is hardly adequate for granular media of rigid particles. In addition, it is hard to control the realization of the Coulomb threshold condition everywhere at the walls, which, obviously, influences the experimental results [@vanel/inbook/98]. Other approaches do not exploit the elastic nature of the grains, but use the Coulomb-Amonton law for the stress tensor components inside the medium or at the walls [@cates/prl/2000]. To cope with the indeterminacy of the problem, the granular medium is assumed to be at the verge of Coulombic failure everywhere in the bulk. However, this assumption about fully mobilized friction seems to be too restrictive [@degennes/rmp/1999], since the frictional forces can be varied provided that the medium stays at rest. In order to simplify the problem, which includes randomness in the distribution of such coefficients, specific assumptions about the geometrical distribution of the grains should be made, e.g. within some lattice models [@coppersmith/pre/1996; @socolar/pre/1998]. Such models give results which are consistent with continuum theories for the average stresses. A particularly interesting issue is the weight distribution in a silo with some overweight on top. Experiments show a maximum in the pressure distribution with depth, in contradiction with predictions of the simple Janssen model [@cates/prl/2000]. Interestingly, this maximum appears almost at the same depth at which the pressure in the silo without the overweight changes by a factor of $e$. Although some of the mentioned approaches do not contradict experiment, the physical reasons for this phenomenon are still unclear. The aim of this paper is to propose a description of granular media which incorporates an additional dry friction tensor field. In the simplest 1D geometry considered below it reduces to the distribution of the dry friction coefficients. We show that in this case it is possible to get the expression for the Janssen coefficient. Besides, we are able to explain the weight distributions in experiments with overweight as a result of the inhomogeneity in the friction coefficient distribution. The existence of a characteristic scale for such an inhomogeneity leads to a parametric dependence of the relation “apparent mass - filled mass” on the ratio of two length scales: the characteristic inhomogeneity length and the saturation length for the bulk pressure. The structure of the paper is as follows. In Section \[section1\] we introduce the $\mu $-model and illustrate it on a simple 1D granular packing, which resembles a real granular packing in a silo. The results and comparisons are presented in Section \[section2\] of the paper. We discuss the force distribution with and without overweight for the simple packing and also show how the Janssen coefficient $k_{J}$ can be evaluated from microscopic characteristics of the medium. In Section \[section3\] we discuss the grounds of the $\mu$-model using the results of Section \[section2\]. Conclusions are given in Section \[conclusion\]. $\mu$-model for simple 1D packing {#section1} ================================= The grains at contact are subjected to a static friction force.
The value of such a force is determined both by the geometry of the contact surfaces and by the stress at the contact. The value of the friction force lies between 0 and a maximum value; the latter is described by the Coulomb-Amonton law for static friction. The exact value is determined by the condition that the grain stays at rest. Since a grain is in contact with its neighbors at several points, the net force is the sum of the reaction forces at each contact. Because of this, a granular system cannot be described using a deterministic approach: there are more unknown variables than equations. Granular matter can realize different force configurations even if the packing configuration is the same. A reaction force acts between two grains in contact. It is expedient to split it into two components: normal (the normal reaction force) and tangential (the friction force). For each point of contact we can write $$\label{eq1} F_{\tau} = \mu \cdot F_{n}\,,$$ where the coefficient $\mu $ is different for different contact points but its value lies in the interval $[0,\mu _{f} ]$, where $\mu _{f} $ is the static friction coefficient, which determines the static friction angle. Obviously these coefficients for each contact point together with the normal stress determine the force configuration of the system. In the continuous limit, the set of these coefficients transforms into the tensor field $\mu $. Such a tensor field becomes an additional characteristic of a granular medium. To illustrate the idea we apply this model to the simple granular packing shown in Fig. \[fig1\]. It is a regular pseudo-1D packing of identical rigid spheres in a vertical chute of a proper width. We choose a 1D model in order to get explicit analytical results. Though such models seem oversimplified, they grasp the main feature of granular systems, namely jamming. Also they are widely used in modelling avalanches within the framework of SOC [@soc]. In particular, as is shown in [@du95], 1D systems show nontrivial dynamics due to the nonpotential character of the interparticle interactions. Moreover, according to its definition (\[eq1\]), the coefficient of friction $\mu$ does not depend on the dimension of the system. Thus it can be defined both for the grains and for mesoscopic regions of the granular matter, giving rise to the introduction of the $\mu$-field. ![Pseudo one-dimensional packing of identical spheres[]{data-label="fig1"}](fig1) Denote the normal force between grains $i$ and $i+1$ by $P_{i,i + 1}$, and between the grain $i$ and the wall by $N_{i}$. Let us denote the friction coefficients for each “grain-grain” pair by $\mu _{i,i + 1} $, and the ones for the “grain-wall” pair by $\sigma _{i} $. The set of values $P_{i,i + 1}$, $N_{i}$, $\mu _{i,i + 1}$, and $\sigma _{i}$ determines the force configuration of the system. The geometrical configuration of the system is represented by the angles $\alpha _{i,i + 1} $ between the normal to the contact surface of two grains and the vertical. The force equilibrium for each grain can be written as $$\label{eq2} \begin{array}{l} N_{i} - P_{i - 1,i} \vartheta_{i-1,i} - P_{i,i + 1} \vartheta_{i,i + 1} = 0 \\ - m_{i} g + \sigma _{i} N_{i} - P_{i - 1,i} \theta_{i-1,i} + P_{i,i + 1} \theta_{i,i + 1} = 0 \,. 
\end{array}$$ where $$\begin{array}{ll} \vartheta_{i,i + 1} = \sin \alpha_{i,i + 1} - \mu_{i,i + 1} \cos \alpha_{i,i + 1} \,,\\ \theta_{i,i + 1} = \cos \alpha _{i,i + 1} + \mu _{i,i + 1} \sin \alpha _{i,i + 1} \end{array}$$ We omit the equation for the moment of force in order to illustrate how the $\mu$-approach works. The 1D nature of the system permits writing the expression for the force between two particles, namely: $$\label{eq3} P_{i,i + 1} = \frac{g}{\vartheta_{i,i + 1}}\sum\limits_{k = 0}^{i} \frac{m_{k} T_{k+1,i}}{\eta _{k,k + 1} + \sigma_{k}} + P_{0}\, T_{0,i}\, \frac{\vartheta_{ - 1,0}}{\vartheta_{i,i + 1} } \,,$$ where $$T_{i_{1},i_{2}} = \prod\limits_{n = i_{1}}^{i_{2}} \frac{\eta_{n - 1,n} - \sigma _{n}}{\eta_{n,n + 1} + \sigma_{n}}$$ and $$\eta_{i,i + 1} = \frac{1 + \mu_{i,i + 1} \tan\alpha_{i,i + 1}}{\tan\alpha_{i,i + 1} - \mu_{i,i + 1}}$$ and $P_{0}$ is the overweight. Equation (\[eq3\]) allows one to analyze the load distribution through the silo with and without overweight for a given distribution of the contact friction coefficients $\mu_{i,i + 1}$. Results of the $\mu$-model for simple 1D packing {#section2} ================================================ It is well known that the force distribution in granular media depends not only on the silo height but also on the history and method of preparation of the granular sample [@vanel/pre/1999; @howell99c]. Within the approach proposed such a method can be modelled, e.g. by the distribution of the coefficients $\sigma_{i}$, which reflects the stresses at the walls. Based on Eq. (\[eq3\]), the exact microscopic solution of the model problem, we can model such a feature by the set of coefficients $\{\mu_{i,i + 1},\sigma _{i}\}$ and angles $\{\alpha _{i,i + 1}\}$. These data determine the geometrical and stress configuration of the system. The question of the deviation of the static pressure distribution in granular media from the Pascal law is of particular interest since it is a characteristic feature of such materials. Note that Eq. (\[eq3\]) reduces to the Pascal law, with the pressure being proportional to the depth, if the walls are absolutely smooth, i.e. $\sigma _{i} = 0$. This limiting case is in correspondence with the fact that the deviation from the PL is due to the nonlinear dependence of the tangential component of the stress along the wall. Other configurations give a deviation from the PL. We investigate some of them numerically. To compare our results with known experimental data we take random distributions of the friction coefficients $\sigma_{i}~\in~ [0.25, 0.27]$ and $\mu_{i,i + 1}\in [0.63, 0.73]$ with the limiting values chosen as the best fits to the simulation values of [@clement/epj/2005], and $\alpha _{i,i + 1} = \pi / 4$. Statistical averaging was performed over 100 configurations for 50 grains, each one of mass 10 g. The resulting dependence between apparent and filled mass is shown in Fig. \[fig2\]. ![Dependence of the apparent mass on the filled mass from Eq. (\[eq3\]) with random distributions of $\mu_{i,i+1}\in [0.63,0.73]$ and $\sigma _{i}\in [0.25,0.27]$, and $\alpha_{i,i+1}=\pi /4$, $m_{i}=10$ g. Data are comparable with results of Ref. 
[@clement/epj/2005].[]{data-label="fig2"}](fig2.eps) Force distribution in the silo without an overweight in the continuous limit {#sec2/1} ------------------------------------------------------------------------ Another relatively simple case is the “homogeneous” granular packing with $\mu_{i,i+1}=\mu$, $\sigma_{i}=\mu_{w}$ and $\alpha_{i,i+1}=\alpha$. In this case one can take the continuous limit in Eq. (\[eq3\]). For a silo without an overweight ($P_0=0$), one gets $$\label{ph} P(\tilde {h}) = \rho g\lambda \left( 1 - e^{ - \tilde {h} / \lambda} \right)\,,$$ with the characteristic length $$\label{lambda} \lambda = \frac{d}{\vartheta \cdot(\eta + \mu _{w} ) \cdot \zeta \cdot \ln \left( \frac{\eta + \mu _{w}}{\eta - \mu _{w}} \right)}$$ where $\tilde{h}$ is the effective height, $\tilde {h} = h \cdot \vartheta \cdot (\mu _{w} + \eta )$, the configuration parameters are $$\eta = \frac{1 + \mu \tan\alpha}{\tan\alpha - \mu} \,,\quad \vartheta = \sin \alpha - \mu \cos \alpha \,,\quad \zeta = \frac{1 + \sin \alpha}{\cos \alpha}$$ and $d$ is the chute size. So, we can see how the Janssen result can be obtained from the microscopic approach. Note that if $\tan\alpha = \mu $, Eq. (\[eq3\]) again reduces to the PL, since the grains do not lean against the wall. In other words, the static friction between grains is enough to keep them at rest without any wall: $$\label{ph1} P(h) = \rho gh \cdot \cos \alpha\,\,.$$ Our result can be compared with the Janssen formula (\[janss\]). Formula (\[janss\]) was obtained for a vertical cylindrical chute. To compare it with the results of our model it should be obtained for a parallelepiped chute, which is infinite in one direction and has the profile shown in Fig. \[fig1\]. The calculation for this case transforms formula (\[janss\]) into the following: $$\label{myjanss} p(z) = p_0 \left(1-\exp(-z/\lambda)\right)\,,\quad \lambda = \frac{d}{2 \mu k_J}$$ In the “homogeneous” state the analogue of the Janssen coefficient $k_{J}$ can be deduced via comparison of Eqs. (\[ph\]) and (\[lambda\]) with the result (\[myjanss\]): $$\label{kj} k_{J} = \frac{\vartheta \cdot \zeta}{2} \cdot (\eta / \mu _{w} + 1) \cdot \ln \left( \frac{\eta / \mu _{w} + 1}{\eta / \mu _{w} - 1} \right).$$ Equation (\[kj\]) relates the Janssen coefficient with the “microscopic” characteristics of the granular packing. In Fig. \[fig3\] we illustrate the dependence of $k_{J} $ on the friction at the wall ($\mu _{w}$) for different values of the internal friction $\mu$. ![The Janssen coefficient $k_{J}$ as a function of the friction $\mu _{w} $ on the walls from Eq. (\[kj\]) for different $\mu $.[]{data-label="fig3"}](fig3.eps) Force distribution in the silo with overweight {#sec2/2} ---------------------------------------------- The influence of the $\mu$ distribution on the distribution of the pressure can also be shown by considering the system when an overweight is present. For our numerical studies of the force distribution in the silo with overweight we use $\sigma_{i}=0.25$ and an overweight $P_{0}=80.8$ g, which correspond to the grain-wall friction coefficient and overweight of Ref. [@clement/epj/2005]. The vessel contains 50 grains, each of mass 10 g. The resulting dependence between apparent and filled mass is shown in Fig. \[fig4\]. ![Dependence of the apparent mass on the filled mass. Bottom curve: force distribution without overweight. Middle curve: force distribution with 80.8 g overweight for $\mu_{i,\,i+1}=0.65$. Top curve: force distribution with 80.8 g overweight for $\mu_{i,\,i+1}=0.65-0.1 e^{-\,0.5\, i}$. 
For all curves $\sigma_{i} = 0.25$, $\alpha _{i,i + 1} = \pi / 4$, $m_{i} = 10$ g. Data are comparable with results of Ref. [@clement/epj/2005]. []{data-label="fig4"}](fig4.eps) The bottom curve shows the force distribution in the “homogeneous” medium, where all $\mu_{i,\,i+1}=\mu_{0}$, without an overweight. As one can see from the previous section, this curve is nothing but the Janssen exponential distribution. In the case of an overweight present in such a “homogeneous” medium, the force distribution is described by the middle curve. This result was also predicted by Janssen but it contradicts the experiment. Since the force configuration is determined by the distribution of the coefficients $\mu_{i,\,i+1}$, we suppose that the latter has the same functional character as that for the pressure without overweight. The grounds for this assumption will be given in the next section. Choosing the $\mu$-distribution as follows: $$\label{mui} \mu_{i,\,i+1} = \mu_0 + b e^{-\,c\, i}\,,$$ we adjust the parameters $b,c$ so as to achieve the best fit (the top curve in Fig. \[fig4\]) to the data of Ref. [@clement/epj/2005]. Thus we can see that the maximum in the force distribution in the silo with overweight can be obtained if the friction coefficients are changed in the same way as the pressure changes without an overweight. Discussion of the $\mu$-model {#section3} ============================= The results obtained in the previous section for the simplified 1D model within the framework of the $\mu$-model can be extended further, since the final results do not contain any microscopic characteristics of such a specific model. Indeed, the proposed approach allows us to switch the description of the granular media from the consideration of the force network to the distribution of friction coefficients, which in the continuous limit transforms into the field of the tensor $\mu$. Such a field becomes an additional characteristic of granular media which cannot be obtained within the common approach (e.g. from Newton’s equations). Additional statistical arguments about how this field is distributed should be used. As one can see from Sec. \[section2\], we modelled the distribution of $\mu$ in two different ways. The first was a uniform distribution $\mu=const$, and we showed how such an assumption conforms with previous theoretical results and experimental data. The other distribution was of exponential form, and here we give the grounds for such a choice. Spatial distribution of the $\mu$-field {#sec3/1} --------------------------------------- Let us consider the distribution of pressure in a silo with additional weight on its top. Suppose that the overweight is implemented by another silo of the same material, in which the pressure has reached its saturated value. Thus, the considered part of the granular medium actually belongs to the region where the pressure has reached its saturated value. Therefore the pressure in this part must be equal to the overweight. This reasoning is confirmed by Janssen’s model [@janssen] and our results, in which it is assumed that the friction either at the wall or in the bulk is constant. This is in obvious contradiction with the experiment [@ovralezclement/pre/2003], which shows that there is a maximum in the pressure dependence on depth. It is possible to get such a nonmonotonic dependence of the pressure on overweight in some theoretical approaches (see e.g. [@pitman/pre/1998]), but there is no clear explanation why it appears.
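To make the preceding discussion concrete, the following sketch (not the authors' code) iterates the grain-by-grain vertical balance implied by Eq. (\[eq2\]), written for the vertical load transmitted below each grain; for a common contact angle this is equivalent to Eq. (\[eq3\]). It can be used to explore how a depth-dependent profile such as Eq. (\[mui\]) and an overweight reshape the apparent-mass curves of Fig. \[fig4\]; treating the overweight simply as a vertical load on the topmost grain is an extra simplification of this sketch, so the absolute numbers are not expected to match the figure.

```python
# Grain-by-grain load transmission for a given mu-profile (illustrative sketch).
import numpy as np

def eta(mu, alpha):
    # eta = (1 + mu tan(alpha)) / (tan(alpha) - mu), cf. the definition after Eq. (eq3)
    t = np.tan(alpha)
    return (1.0 + mu * t) / (t - mu)

def bottom_load(mu, sigma=0.25, alpha=np.pi / 4, m=10.0, overweight=0.0):
    """Vertical load (in gram-weight) transmitted below grain i, i = 0..N-1."""
    W, out = overweight, []
    for i in range(len(mu)):
        e_up = eta(mu[i - 1] if i > 0 else mu[0], alpha)   # contact above grain i
        e_dn = eta(mu[i], alpha)                           # contact below grain i
        # vertical balance: W_i (1 + sigma/eta_dn) = W_{i-1} (1 - sigma/eta_up) + m g
        W = (W * (1.0 - sigma / e_up) + m) / (1.0 + sigma / e_dn)
        out.append(W)
    return np.array(out)

i = np.arange(50)
profiles = {"constant mu":              np.full(50, 0.65),
            "exponential mu, Eq.(mui)": 0.65 - 0.1 * np.exp(-0.5 * i)}
for name, mu in profiles.items():
    for Q in (0.0, 80.8):
        W = bottom_load(mu, overweight=Q)
        print(f"{name:25s} overweight={Q:5.1f} g : "
              f"W[0]={W[0]:6.1f} g, W[-1]={W[-1]:6.1f} g, max={W.max():6.1f} g")
```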
In addition, those theoretical approaches are based on the assumption $\mu = const$, which is adequate only near the Coulomb threshold. As we can see from the experiment, there is a “screening” region of size $\lambda$. In the absence of an overweight the pressure increases within this region, and when an overweight is present the pressure has a maximum there. Under overloading, the stress configuration within this region changes more drastically than in the bulk. Within the proposed approach this can be described, in general, by the spatial distribution of the coefficients $\mu_{i,i+1}$ and $\sigma_{i}$. In the continuum approach this corresponds to some dependence $\mu(z)$ in the bulk and $\sigma(z)$ at the boundary. Since $\mu$ characterizes the stress configuration, its spatial behaviour must be similar to that of the pressure in the system without an overweight. This assumption can be viewed as the first approximation in an expansion of the spatially distributed function in a series of successive approximations. Indeed, the function $\mu(z)$ can be written as $$\label{mu0} \mu(z) = \mu_0 + \mu_1(z) + \mu_2(z) + \ldots\,\,.$$ Here $\mu_0$ is the constant part of $\mu$, which is used in most theoretical descriptions. Within the proposed approach, $\mu_1(z)$ varies in the same way as the pressure without an overweight. Then $\mu_2$ takes into account the change in pressure in a granular medium where $\mu(z)= \mu_0 + \mu_1(z)$, etc. As one can see from the previous section, to reproduce the experimental results it is enough to model the distribution of $\mu_{i,i+1}$ as $$\label{mui1} \mu_{i,i+1} = \mu_0 + b e^{-c i}\,,$$ omitting higher-order approximations. We take $\mu_0$ as the bulk value; the coefficient $b$ governs the gap between the bulk and the boundary value of $\mu$, while both $b$ and $c$ control how rapidly $\mu$ increases. For the parameter values found in Sec. \[section2\], $\mu$ saturates very quickly, so there is a very thin region where $\mu$ changes from $0.55$ to $0.65$ (see Fig. \[fig5\]).

![Dependence of the spatially distributed $\mu$ on the depth in units of filled mass.[]{data-label="fig5"}](fig5.eps)

But, as one can see from Fig. \[fig4\], this region where $\mu$ varies is enough to change the pressure distribution from the flat curve to the curve with a maximum. There is no maximum in the pressure dependence on depth when $\mu=const$. This result, predicted by the Janssen model, is confirmed by the physical arguments given above. The maximum appears when $\mu$ is distributed within the bulk in the way described. To reproduce the experimental results, the region where $\mu$ changes in this way must be narrow. Thus effectively only the near-boundary values of $\mu$ change. Note that we have an essential difference in the pressure dependencies only if an overweight is present. When there is no overweight, the pressure dependence for constant $\mu$ and that for varying $\mu$ are almost the same. We see that the presence of an overweight reveals the real distribution of $\mu$. So we can conclude the following: a) the value of $\mu$ is not constant and has some distribution; b) such a distribution can be revealed only by the presence of an overweight, and this distribution is essential only near the boundary. The last fact can be explained in the following way. When we fill the silo with granular material, some stress configuration is established in it. If we then put an overweight on top of the silo, that configuration breaks down and a new one is established.
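The thinness of the region over which $\mu$ varies (Fig. \[fig5\]) can be read off directly from Eq. (\[mui1\]). A minimal Python sketch, using the values quoted in the caption of Fig. \[fig4\] ($\mu_0 = 0.65$, $b = -0.1$, $c = 0.5$) and assuming that the layer index $i = 0$ lies at the boundary where the overweight acts (an indexing convention not stated explicitly in the text), is:

```python
import numpy as np

# Spatially distributed friction coefficient of Eq. (mui1), with the values
# quoted in the caption of Fig. 4 (top curve): mu_0 = 0.65, b = -0.1, c = 0.5.
mu0, b, c = 0.65, -0.1, 0.5
i = np.arange(50)                       # the silo contains 50 grains
mu = mu0 + b * np.exp(-c * i)

# number of layers over which mu still deviates from its bulk value
# by more than 1 per cent of |b|
n_boundary = int(np.argmax(np.abs(mu - mu0) < 0.01 * abs(b)))
print(mu[:5])        # approx. [0.55, 0.589, 0.613, 0.628, 0.636]
print(n_boundary)    # ~10 layers: the varying-mu region is thin
```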
This reconfiguration happens because the near-boundary layer feels the overweight and reacts to it. The other layers do not feel the overweight in any essential way because of jamming in the upper layers. In other words, filling the silo to a height $2h$ at once is not the same as filling it to a height $h$ first and then adding another $h$. This also illustrates how the stress distribution in a granular medium depends on the history of the packing creation.

Macroscopic parameters {#sec3/2}
----------------------

The previous considerations allow us to conclude that there must be at least two characteristic scales which characterize the state of the silo under overloading. Following [@clement/epj/2005], we built rescaled dependencies (see Fig. \[fig6\]).

![Rescaled dependencies of the apparent mass on the filled mass for different values of the parameter $b$, which changes from $-0.2$ to $0$ in steps of $0.02$. The bottom curve corresponds to $b=0$, the top one corresponds to $b=-0.2$. The overweight and the saturated pressure equal $80.32$ g. The apparent and filled masses are expressed in overweight units. Other parameters are the same as for Fig. \[fig4\].[]{data-label="fig6"}](fig6.eps)

As one can see, the bottom curve corresponds to $\mu=const$, because $b=0$ is equivalent to $\mu_{i,i+1}=\mu_0$. For other values of $b$, $\mu$ is not constant and thus the apparent mass is not constant either. The dependence of the apparent mass on the filled mass has a maximum, whose value depends on $b$ as shown in Fig. \[fig7\].

![Dependence of the maximum of the apparent mass on the gap over which $\mu$ changes, in the case of an $80.83$ g overweight.[]{data-label="fig7"}](fig7.eps)

One can see from this that there is no universal rescaled curve for different parameters. This fact was also mentioned in [@clement/epj/2005]. Within the proposed approach this is a consequence of the existence of an additional scale parameter $\lambda_\mu$. Such a scale becomes evident in the presence of an overweight if $\mu$ is not constant but is distributed within the bulk. According to the proposed distribution of $\mu$, $\lambda_\mu$ depends on $b$ and $c$, i.e. on the gap over which $\mu$ changes and on the steepness of the $\mu$ saturation. The dependencies shown in Fig. \[fig6\] must be parameterized using this scale. In other words, this parameter splits Janssen’s curve if the silo is overloaded.

Conclusion {#conclusion}
==========

We propose a description of granular media at rest based on the introduction of a spatial distribution for the contact coefficients $\mu$ of dry friction. With the help of a simple static model we investigate the distribution of the weight in the silo. It is shown that in the case without an overweight and with a homogeneous distribution of the friction coefficients the Janssen result is recovered. In the case of an overweight we predict a maximum of the apparent mass as a function of the filled one, which is observed in experiments. It is important that the nature of such a maximum is related to the inhomogeneity in the spatial distribution of the dry friction coefficients. Such a distribution is formed due to the jamming of the grains in the upper layers, which bear most of the overload. We put forward physical arguments which allow one to obtain such a distribution of the friction coefficients by taking into account the inhomogeneity of the pressure distribution with height. Note that these results are obtained without any assumption about the elasticity of the grains, such as is made in quasielastic approaches [@clement/epj/2005].
In addition, within the proposed approach it is possible to explain the absence of a simple rescaling law for the overshooting effect by the presence of two characteristic lengths. The first is the Janssen length $\lambda_J$, which characterizes the distribution of pressure; the other is the scale of the spatial distribution of the dry friction coefficient. In the continuous limit, the development of the proposed approach implies the introduction of a tensor field for the dry friction. Such a field becomes an additional characteristic of a granular medium, which is determined by both the geometrical and the stress configurations. The possibility of such a description is due to the fact that the interparticle interactions in granular media are of short-range character [@ll7]. This gives grounds to expect that at least the statics of a granular medium should be describable within the traditional framework of general elasticity theory with proper modifications.

The authors thank Dr. K. Shundyak for numerous discussions about the results. His critical remarks were also important for the final form of the paper.
{ "pile_set_name": "ArXiv" }
[EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH]{} CERN-OPEN-2006-065\ November 20, 2006 [**Quantitative Analysis of the\ Publishing Landscape in High-Energy Physics**]{} Salvatore Mele[^1]$^{,}$[^2], David Dallman, Jens Vigen, Joanne Yeomans\ CERN, CH-1211, Genève 23, Switzerland\ [**Abstract**]{} World-wide collaboration in high-energy physics (HEP) is a tradition which dates back several decades, with scientific publications mostly coauthored by scientists from different countries. This coauthorship phenomenon makes it difficult to identify precisely the “share” of each country in HEP scientific production. One year’s worth of HEP scientific articles published in peer-reviewed journals is analysed and their authors are uniquely assigned to countries. This method allows the first correct estimation on a [*pro rata*]{} basis of the share of HEP scientific publishing among several countries and institutions. The results provide an interesting insight into the geographical collaborative patterns of the HEP community. The HEP publishing landscape is further analysed to provide information on the journals favoured by the HEP community and on the geographical variation of their author bases. These results provide quantitative input to the ongoing debate on the possible transition of HEP publishing to an Open Access model. Introduction ============ High-energy physics (HEP) is commonly regarded as one of the most international and collaborative scientific disciplines. Over the last six decades, large experiments at accelerators of ever-increasing energy brought together first dozens, then hundreds and now thousands of scientists from an increasingly wider spectrum of countries. Furthermore, theoretical HEP predates by a long time present-day cross-border communication as a truly global enterprise. This endeavour was fostered by a long-standing tradition of scientific exchange, regular gatherings and long-term visits to several major centres of attraction by scientists. As a consequence of this well-established and thriving cross-border tradition, coauthorship of HEP articles by scientists affiliated to institutes in different countries is the norm rather than the exception. At the same time this coauthorship phenomenon complicates bibliometric studies aimed at evaluating the relative contributions of different countries to the production of HEP articles. This article presents an analysis of the distribution of HEP authorship over several countries and institutes, taking into account the coauthorship phenomenon on a [*pro rata*]{} basis. This analysis is based on one year’s worth of HEP articles, selected as presented in Section \[sec:2\]. Section \[sec:2bis\] explains the data-analysis procedure and discusses some bibliometric results. Results on the geographical distribution of HEP authorship are presented in Section \[sec:3\] and then interpreted in Section \[sec:4\] in terms of global collaborative patterns. The publishing landscape is investigated in Section \[sec:5\], which identifies the journals most used by HEP authors. Section \[sec:6\] presents additional results on the breakdown of the author base of the leading HEP journals among different countries; the distribution over different journals of the HEP scientific production of several countries and institutes is also discussed. These results are particularly relevant as they constitute a quantitative basis for the ongoing debate on the possible transition of HEP publishing to an Open Access model [@taskForce]. 
No assessment of the economic implications of such a transition is possible without clear and uncontroversial data on the contributions of different countries to HEP scientific publishing, which are presented here for the first time.

Data Sample {#sec:2}
===========

The [*preprint culture*]{} in HEP pioneered the free distribution of scientific results. For decades, theoretical physicists and scientific collaborations, eager to disseminate their findings in a way faster than the distribution of scholarly publications, printed and mailed hundreds, even thousands, of copies of their manuscripts before submitting them to peer-reviewed journals. This preprint culture tended, however, to favour the large laboratories and universities that could afford mailing large numbers of preprints while receiving comprehensive regular mailings [@luisella]. The spread of the Internet and the inception of the [arXiv]{} repository [@arXiv] ushered in a new era for the preprint culture, offering all scientists a level playing field. In its current implementation, [arXiv]{} allows researchers to submit their preprints and browse or receive regular feeds on recent submissions in their area of interest [@arxivurl]. The [arXiv]{} repository and its mirrors collect the [*corpus*]{} of HEP articles, classified into four categories:

- [hep-ex]{}, for high-energy experimental physics;

- [hep-lat]{}, for studies of lattice field theory;

- [hep-ph]{}, for particle phenomenology;

- [hep-th]{}, for string, conformal and field theory.

The attribution of articles to a particular category is performed by the authors themselves at submission time. The system supports cross-referencing, while multiple submission is frowned upon, so that no double counting of the same article from two categories is expected in the following analysis. This analysis is based on all preprints submitted to [arXiv]{} in the year 2005 and classified in one of the four HEP categories. Owing to its widespread preprint culture, this sample represents a faithful snapshot of HEP peer-reviewed scientific literature. As in many other disciplines, HEP results are often presented in preliminary form at international conferences or workshops before being officially released in the form of a publication in a peer-reviewed journal. Results are then often summarised at other conferences in the following years. Preprints usually appear describing these conference contributions, and therefore [arXiv]{} stores multiple, albeit different, entries corresponding to different phases of the life-cycle of a scientific result. To avoid this form of multiple counting of the same piece of work, the following analysis is restricted to preprints subsequently published in peer-reviewed journals. This requirement also removes lecture notes, theses and other unpublished material submitted to [arXiv]{} but not relevant for this analysis. The data on which this analysis is based are extracted from the [SPIRES]{} database [@spires] hosted at SLAC, the Stanford Linear Accelerator Center in California, and jointly compiled together with DESY, the Deutsches Elektronen-Synchrotron in Hamburg, and FNAL, the Fermi National Accelerator Laboratory in Illinois. This database is chosen as it has complete coverage of the HEP articles in [arXiv]{} and in addition includes publication information.
As an example, the sample of preprints submitted to the [hep-ex]{} category in [arXiv]{} during 2005, and subsequently published, is obtained with the following query: [FIND EPRINT HEP-EX/05\# AND PS P AND NOT TYPE C\ AND NOT TYPE L AND NOT TYPE B AND NOT TYPE T]{} Conference articles, lecture notes, theses and books are explicitly removed from the search. The samples for the other three [arXiv]{} categories are obtained [*mutatis mutandis*]{}. ------ ------- ------- --------------- ------- ------- --------------- ------- ------- --------------- ------- ------- --------------- ------- ------- --------------- Year $N_S$ $N_P$ $\varepsilon$ $N_S$ $N_P$ $\varepsilon$ $N_S$ $N_P$ $\varepsilon$ $N_S$ $N_P$ $\varepsilon$ $N_S$ $N_P$ $\varepsilon$ 2005 854 338 40% 663 246 37% 3918 2207 56% 3238 2225 69% 8673 5016 58% 2004 885 349 39% 586 261 45% 4138 2534 61% 3357 2361 70% 8966 5505 61% 2003 771 287 37% 575 227 39% 3964 2381 60% 3275 2428 74% 8585 5323 62% 2002 885 293 33% 583 218 37% 4245 2383 56% 3333 2482 74% 9046 5376 59% 2001 819 328 40% 574 218 38% 4228 2499 59% 3181 2305 72% 8802 5350 61% 2000 735 324 44% 508 235 46% 4124 2390 58% 3144 2259 72% 8511 5208 61% 1999 666 317 48% 588 244 41% 4076 2602 64% 2825 2180 77% 8155 5343 66% 1998 406 231 57% 623 282 45% 3807 2442 64% 2774 2061 74% 7610 5016 66% 1997 325 192 59% 548 227 41% 3615 2305 64% 2865 1990 69% 7353 4714 64% 1996 166 82 49% 558 248 44% 3327 2149 65% 2626 1924 73% 6677 4403 66% 1995 158 99 63% 437 228 52% 2990 2008 67% 2347 1704 73% 5932 4039 68% 1994 67 35 52% 447 202 45% 2500 1714 69% 2349 1639 70% 5363 3590 67% 1993 $-$ $-$ $-$ 374 209 56% 1762 1275 72% 2084 1460 70% 4220 2944 70% 1992 $-$ $-$ $-$ 321 180 56% 755 559 74% 1378 1080 78% 2454 1819 74% 1991 $-$ $-$ $-$ 4 3 75% $-$ $-$ $-$ 302 228 75% 306 231 75% ------ ------- ------- --------------- ------- ------- --------------- ------- ------- --------------- ------- ------- --------------- ------- ------- --------------- \[tab:0\] Data Analysis {#sec:2bis} ============= Table 1 presents the numbers of hits obtained by the [SPIRES]{} query in the four categories and their sum for the year 2005 as well as the entire historical record. A total of 5016 articles are selected for the year 2005. The total numbers of submissions for each [arXiv]{} category obtained with queries such as: [FIND EPRINT HEP-EX/05\#]{} are also presented in Table 1 together with their sum. The difference with the sample considered in this article is composed of conference articles and unpublished material. The ratios of the numbers of published articles to the numbers of [arXiv]{} submissions is also presented in Table 1. The historical evolution of the numbers in Table 1 is interesting: early years show a gradual increase in the number of submissions, consistent with the gradual adoption of the system, while numbers for later years are consistent with a plateau structure with year-to-year variations of a few percentage points. The queries on which this article is based were performed in the second half of October 2006 and one could argue that some preprints submitted in late 2005 could have still been in the editorial process and would not therefore have yet appeared in peer-reviewed journals. If the five-year period $2000-2004$ is used to predict the number of articles extracted by the query for the year 2005, this is just 6% above the number actually observed, leading to the conclusion that no large systematic bias affects the size of the sample under consideration. 
There are no reasons to believe that any sizable systematic effect from a small fraction of “undiscovered” articles would affect the relative contributions of different countries presented in the following. Figure 1 presents the distribution among the four different [arXiv]{} categories of the 5016 articles on which this analysis is based. Experimental results account for just 6.7% of the total. ![Distribution by [arXiv]{} category of the sample used in this analysis, corresponding to 5016 preprints submitted in the year 2005 and subsequently published in peer-reviewed journals.](apd){width="\textwidth"} \[fig:0\] ![Distributions of the number of authors of [hep-lat]{}, [hep-ph]{} and [hep-th]{} articles used in this analysis. The distributions are normalised to unit area and their mean is indicated.](averages3){width="75.00000%"} \[fig:1\] A first bibliometric result extracted from this study is the distribution of the number of authors per article. Figure 2 presents the distribution of the number of authors of each article in the three non-experimental classes [hep-lat]{}, [hep-ph]{} and [hep-th]{}. The average number of authors for the three classes are 3.6, 2.9 and 2.3, respectively. The average number of authors for the sum of the three classes is 2.6. The average number of authors for the [hep-ex]{} class is about 290. The distribution of the number of authors is biased by the fact that a dozen large experimental collaborations appear several times in the data sample. The breakdown of the considered [arXiv:hep-ex]{} sample into different experiments is shown in Figure 3. Implications of the large number of authors in experimental collaborations are discussed in Reference [@iupap]. ![Number of articles from the large experimental collaborations submitted to [ arXiv:hep-ex]{} in 2005 and subsequently published in peer-reviewed journals. The “Other” category comprises collaborations which published less than 4 articles as well as articles with less than 40 authors. The total number of articles is 338.](papersPerCollaboration){width="\textwidth"} \[fig:coll\] Unfortunately, as of today, no database allows an automatic extraction of bibliographic information concerning author affiliations for HEP articles at the level needed for this analysis. Therefore each article satisfying the query had to be inspected to perform a manual classification of the authors according to their affiliation. The output format of [SPIRES]{} partly alleviates this problem as author affiliations are often readable off the standard web-based output of the queries without having to access the article metadata on a publisher’s web site or the full-text version in [ arXiv]{}. Author affiliations were classified into 22 classes, listed in the first column of Table 2. European, American and Asian countries are singled out according to their contribution to the global HEP scientific production, down to a lower limit of about 1%. The contribution from CERN, the world’s largest HEP laboratory, is shown separately. The remaining countries are divided into two classes: CERN Member States[^3] and the remaining countries. As the vast majority of HEP in Italy is funded by INFN, the Istituto Nazionale di Fisica Nucleare, its contribution has been considered [*in lieu*]{} of the Italian one. Italian authors without an INFN affiliation are counted in the “Other Member States” category. 
As mentioned above, medium- and long-term visits of authors to different institutes and major laboratories is the staple diet of the HEP collaborative soul. As a consequence, authors of HEP articles often have multiple affiliations. Three principles to assign authors with multiple affiliations to a single class are followed in the order they are presented below. 1. If one of the multiple affiliations of an author is a HEP laboratory, the author is assigned to that laboratory in the case of CERN, or to the host nation of the laboratory in the other cases. 2. If only one of the multiple affiliations of an author corresponds to one of the countries explicitly singled out for the analysis, the author is assigned to that country. 3. If more than one of the multiple affiliations of an author corresponds to one of the countries explicitly singled out for the analysis, the author is assigned to a country or institution, according to an indicator which takes into account their [*pro-capita*]{} Gross Domestic Product and their expected share of the HEP scientific production. Distribution of the HEP Production by Country {#sec:3} ============================================= The first result of this analysis is the calculation of the share of HEP publications authored by each of the 22 countries and institutions into which the authors are classified. For each article in one of the four [arXiv]{} categories, each of the 22 countries and institutions is attributed a fraction of the article corresponding to the number of authors associated to that country, divided by the total number of authors. The sum of these fractions over all the articles of an [arXiv]{} category, divided by the total number of articles in that category, defines the share of a particular country or institution. The results are listed in Table 2 for the four [arXiv]{} categories as well as for their average. Figure 4 presents the distribution of the HEP scientific production over different countries. To our knowledge, this is the first result on the distribution of the HEP scientific literature by country where the phenomenon of coauthorship is taken into account. It is interesting to combine the results presented in Table 2 into the three largest sections of HEP authorship: CERN and its Member States, the United States, and the remaining countries. These results are presented in Table 3 for the four [arXiv]{} classes and their average. Figures 5 and 6 show a summary of the distributions of HEP authorship for the [arXiv]{} classes and their average, respectively. hep-ex hep-lat hep-ph hep-th Average --------------------- -------- --------- -------- -------- --------- CERN 0.9% 1.1% 1.7% 1.1% 1.3% Germany 6.3% 19.5% 10.3% 6.5% 8.8% UK 6.4% 6.3% 6.6% 8.5% 7.4% INFN 11.0% 5.8% 5.6% 5.3% 5.8% France 4.1% 2.0% 3.3% 3.2% 3.2% Spain 0.8% 1.2% 3.5% 2.6% 2.8% Switzerland 1.2% 1.1% 1.2% 0.9% 1.1% Sweden 0.2% 1.2% 0.8% 1.0% 0.9% Portugal 0.3% 0.5% 1.4% 0.5% 0.9% Netherlands 0.6% 0.5% 0.5% 1.4% 0.9% Other Member States 3.5% 3.3% 6.7% 7.9% 6.8% Russia 5.1% 3.5% 5.6% 4.0% 4.8% Israel 0.3% 0.8% 0.9% 1.3% 1.0% United States 40.2% 30.0% 22.8% 22.3% 24.1% Canada 1.8% 1.7% 2.0% 3.6% 2.7% Brazil 0.7% 0.8% 1.9% 3.8% 2.6% India 0.4% 2.0% 2.7% 3.0% 2.6% Japan 6.3% 9.2% 6.4% 8.4% 7.4% China 6.4% 2.3% 6.6% 2.6% 4.6% Korea 1.1% 0.2% 1.8% 2.0% 1.8% Taiwan 1.1% 0.5% 1.6% 0.8% 1.2% Other Countries 1.1% 6.5% 6.0% 9.3% 7.2% : Distribution of HEP scientific literature over different countries and institutions for the four HEP [arXiv]{} classes and their average. 
\[tab:1\] hep-ex hep-lat hep-ph hep-th Average ---------------------- -------- --------- -------- -------- --------- CERN & Member States 35.5% 42.3% 41.6% 38.8% 40.0% United States 40.2% 30.0% 22.8% 22.3% 24.1% Other Countries 24.3% 27.7% 35.6% 38.9% 35.9% : Distribution of HEP scientific production over three geographical groups for the four HEP [arXiv]{} classes and their average. \[tab:2\] ![image](papersPerCountry){width="\textheight"} \[fig:2c\] -- -- -- -- \[fig:2b\] ![Distribution of HEP scientific production over three geographical groups.](apc1){width="\textwidth"} \[fig:2a\] Collaborative Patterns in HEP {#sec:4} ============================= The data sample under investigation allows a study of the collaborative patterns in HEP in order to answer a natural question: which groups of countries and institutions collaborate? A simplified approach to address this question is chosen, in which only three large groups of authors are considered, according to their affiliation to one of three sections of HEP authorship: CERN and its Member States, the United States, and the remaining countries. Results from more complex analyses of other data samples focusing on author-to-author collaborative networks are presented in Reference [@colla]. Each article is assigned to one of seven mutually-exclusive classes: 1. all the authors are associated to CERN or any of its Member States; 2. all the authors are associated to the United States; 3. no authors are associated to CERN, its Member States or the United States; 4. some authors are associated to CERN or one of its Member States and some to the United States, but none to any other country; 5. some authors are associated to CERN or one of its Member States and some to other countries, but none to the United States; 6. some authors are associated to the United States and some to other countries but none to CERN or any of its Member States; 7. at least one author is associated to CERN or one of its Member States, one to the United States and one to some other country. Figure 7 presents the fraction of HEP articles in each of these seven classes while Figure 8 shows the results for the four separate [arXiv]{} disciplines. Distribution of HEP Publications among Journals {#sec:5} =============================================== The 5016 articles considered in this study appeared in 89 different peer-reviewed journals. The distribution of articles over the different journals is presented in Table 4 for the four different HEP disciplines and their global average, which is also shown in Figure 9. Only the 11 journals with a share above 1% are considered in Table 4 and Figure 9. However, the share of Nuclear Instruments and Methods in Physics Research (NIM) is also singled out. The contribution to this journal is interesting as this title is the reference journal for instrumentation in HEP. The low share of this journal in the total is due to the reduced contribution of experimental HEP to the total production compared to the theoretical and phenomenological studies, as presented in Figure 1. However, the low percentage of instrumentation articles among the total amount of experimental articles, 2.7%, is also due to the far less widespread culture of self-archiving results in [arXiv]{} in the HEP instrumentation community. A direct inspection of articles published in NIM in 2005 revealed about 30% of articles of potential interest for HEP instrumentation which had not been submitted to [arXiv]{}, neither before nor after publication. 
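For reference, the [*pro rata*]{} attribution used throughout this analysis amounts to fractional counting of authorships. The following Python sketch illustrates the bookkeeping behind the country shares and the per-journal geographical breakdowns; the data structures (a list of records with an author-class list and a journal name) are hypothetical stand-ins for the actual [SPIRES]{} metadata, and the multiple-affiliation rules described above are assumed to have been applied already.

```python
from collections import defaultdict

def pro_rata_shares(articles):
    """Fractional ('pro rata') counting of authorships.

    Each record carries 'classes' (one country/institution class per author)
    and 'journal'.  Returns overall country shares (normalised to the number
    of articles) and the per-journal geographical breakdown (normalised per
    journal, as in Table 5)."""
    country = defaultdict(float)
    by_journal = defaultdict(lambda: defaultdict(float))
    for art in articles:
        classes = art['classes']
        w = 1.0 / len(classes)              # each author contributes 1/N_authors
        for c in classes:
            country[c] += w
            by_journal[art['journal']][c] += w
    n = len(articles)
    shares = {c: v / n for c, v in country.items()}
    journal_shares = {j: {c: v / sum(d.values()) for c, v in d.items()}
                      for j, d in by_journal.items()}
    return shares, journal_shares

# toy example (hypothetical records, not actual SPIRES data)
demo = [{'classes': ['CERN', 'Germany', 'Germany'], 'journal': 'Phys. Lett.'},
        {'classes': ['United States'], 'journal': 'Phys. Rev.'}]
shares, journal_shares = pro_rata_shares(demo)
```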
\[fig:3a\] \[fig:3b\] ![image](papersPerJournal){width="\textheight"} \[fig:4\] An analysis of the results in Table 4 shows that 83% of HEP articles are published in just six journals: Physical Review (A through E); Journal of High Energy Physics (JHEP); Physics Letters (A and B); Nuclear Physics (A and B); Physical Review Letters and the European Physical Journal (A and C). Journal Publisher hep-ex hep-lat hep-ph hep-th Average -------------------- ------------------ -------- --------- -------- -------- --------- Phys. Rev. APS 31.7% 52.8% 41.5% 19.7% 31.7% JHEP SISSA $-$ 14.2% 10.0% 31.8% 19.2% Phys. Lett. Elsevier 21.3% 15.9% 16.4% 11.6% 14.6% Nucl. Phys. Elsevier 1.2% 6.5% 7.3% 10.7% 8.4% Phys. Rev. Lett. APS 29.0% 2.4% 4.4% 1.8% 4.8% Eur. Phys. J. Springer 10.7% 2.0% 7.0% 1.0% 4.3% J. of Phys. IOP $-$ 0.8% 2.1% 3.1% 2.3% Mod. Phys. Lett. World Scientific 1.2% 0.8% 2.3% 2.6% 2.3% Int. J. Mod. Phys. World Scientific 0.3% 1.6% 1.4% 2.3% 1.8% Class. Quan. Grav. IOP $-$ $-$ 0.1% 3.8% 1.7% JCAP SISSA $-$ $-$ 1.0% 1.3% 1.0% NIM Elsevier 2.7% $-$ 0.1% $-$ 0.2% Others $-$ 2.1% 2.8% 6.5% 10.2% 7.7% : Distribution of HEP articles over different journals for the four HEP [arXiv]{} classes and their average. Only journals with a total share above 1% are considered, with the exception of Nuclear Instruments and Methods in Physics Research (NIM). The remaining 77 journals are grouped under “Others”. The publishers of the different journals are also indicated. \[tab:3\] These six journals are published by just four publishers: the American Physical Society, Elsevier, SISSA and Springer, as detailed in Table 4. It is interesting to split the corpus of HEP scientific literature discussed in this article according to the publisher of the journal in which the article appeared. The results are presented in Figure 10. A total of 87% of HEP articles are published by the same four publishers listed above. ![Distribution of HEP articles over different publishers. A total of 87% of HEP articles are published by four publishers: APS, Elsevier, SISSA and Springer.](papersPerPublisher){width="90.00000%"} \[fig:5\] Geographical Analysis of HEP Journals {#sec:6} ===================================== The quantitative information on the different countries and institutions contributing to each of the HEP articles considered in this analysis allows the estimation of the geographical distribution of the authors for each of the 12 journals listed in Table 4. The analysis of Section \[sec:3\] is repeated for each journal and the results are presented in Table 5 for all 22 countries and institutions considered in this article, as well as their grouping into three sections: CERN and its Member States, the United States, and the remaining countries. Figures 11 and 12 present these results in graphical form, with the contributions from CERN and its Member States grouped. In addition to the geographical distribution of the authors for the major HEP journals, it is interesting to identify the most popular journals of the single countries and institutions considered in this analysis. To extract this information, all articles with at least one author from a given country or institution are first selected. Then, the fraction of authorship of this country or institution is calculated for each article. This fraction is assigned to the journal where the article appeared. The sum of all these fractions for each journal provides a score of the popularity of the journal. 
If the sum of these scores is used to measure the total HEP scientific production of the country, it can be used to normalise each score and obtain the fractions of the HEP production of the country in the different journals. The results of this study are presented in Table 6 for each of the 22 countries and institutions discussed in this article. The last three lines of Table 6 present the results summed over three groups: CERN and its Member States, the United States and the remaining countries. The results for these groups are presented in Figure 13. Figures 14 and 15 present results for some European countries and institutions, and Figure 16 presents results for some of the remaining countries.

Conclusions, with a Note on Open Access
=======================================

This article presents the results of the first bibliometric study of HEP publishing which accounts for the widespread phenomenon of coauthorship. The share of HEP scientific results published by several countries and institutions is correctly calculated and provides interesting insight into the collaborative patterns within the HEP community. The publishing landscape of HEP is further analysed to provide information on the journals most used by the HEP community and on the geographical distribution of their authors. It is interesting to put these results into the wider context of a possible transition of HEP publishing to an Open Access model [@taskForce]. The finding that 83% of HEP articles are published in just six journals and that 87% of the articles appear in journals published by just four publishers is particularly interesting. It demonstrates that the number of partners to be engaged with in a debate on a change of the HEP publishing model is relatively small. The worldwide collaborative patterns in HEP, which are quantified in this article, suggest that once a limited number of countries embrace an Open Access publishing model, a “domino effect” is likely to spread this policy to other countries, through coauthorship links. Last, but not least, the assessment of the relative contribution to the worldwide production of HEP scientific results which takes into account the coauthorship phenomenon, presented in Table 2 and Figure 4, might constitute the basis for a model where each country or institution would contribute their “fair share” towards the financial cost of Open Access publishing.

Acknowledgments {#acknowledgments .unnumbered}
===============

The idea behind this analysis came up in many discussions with Rüdiger Voss and Gigi Rolandi on the topic of Open Access. We are indebted to Sandrine Reyes and Susanne Schäfer for their help in the compilation of the data set and to our colleagues at SLAC and elsewhere for maintaining and operating [SPIRES]{}.

[99]{} R. Voss [[*et al.*]{}]{}, [*Report of the Task Force on Open Access Publishing in Particle Physics*]{}, 2006.\ [http://cdsweb.cern.ch/search.py?recid=966160&ln=en]{} L. Goldschmidt-Clermont, [*Communication Patterns in High-Energy Physics*]{}, 1965;\ published in High Energy Physics Libraries Webzine, issue 6, March 2002.\ [http://library.cern.ch/HEPLW/6/papers/1/]{} P. Ginsparg, [*First Steps Towards Electronic Research Communication*]{} Computers in Physics [**8**]{} (1994) 390.\ Additional material can be found at [http://people.ccmr.cornell.edu/\~ginsparg/blurb/]{} H.
Aihara [[*et al.*]{}]{}, [*Report by the Working Group on Authorship in Large Scientific Collaborations in Experimental High Energy Physics*]{}, IUPAP-C11, 2005.\ [http://www.iupap.org/commissions/c11/reports/WG\_authorship\_100105.pdf]{} M.E.J. Newmann, [*The structure of scientific collaboration networks*]{} Proc. Nat. Acad. Sci. U.S.A. [**98**]{} (2001) 404 \[arxiv:cond-mat/0007214\];\ X. Liu [[*et al.*]{}]{}, [*Co-authorship networks in the digital library research community*]{} Information Processing & Management [**41**]{} (2005) 1462 \[arxiv:cs.DL/0502056\];\ M.A. Rodriguez, [*A Multi-Graph to Support the Scholarly Communication*]{} \[arxiv:cs.DL/0601121\]. ----------------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- CERN 0.7% 2.0% 1.5% 2.5% 0.7% 2.4% 1.3% 0.4% 0.3% 1.4% 2.0% 0.3% 1.1% Germany 7.2% 9.3% 9.3% 13.5% 9.0% 14.1% 6.1% 4.2% 5.4% 6.5% 7.8% 1.6% 8.5% UK 6.1% 10.4% 6.5% 7.8% 5.1% 9.8% 10.6% 3.5% 5.1% 16.6% 12.6% 16.0% 4.4% INFN 4.5% 6.9% 5.8% 10.3% 5.4% 5.4% 4.6% 4.0% 5.3% 4.3% 4.4% 14.6% 5.7% France 2.5% 2.3% 4.0% 5.3% 2.8% 5.4% 3.4% 1.7% 2.2% 1.0% 4.8% 3.0% 5.0% Spain 2.6% 4.8% 2.3% 2.2% 2.9% 1.8% 2.3% 0.9% 1.0% 1.8% 5.3% 0.2% 1.8% Switzerland 0.6% 1.3% 1.9% 1.8% 0.9% 0.5% 2.2% $-$ $-$ $-$ 2.3% 0.2% 0.5% Sweden 0.6% 1.6% 1.0% 1.2% 0.7% 1.0% $-$ 0.4% $-$ $-$ 4.9% $-$ 0.6% Portugal 1.1% 0.5% 1.0% 0.6% 1.2% 0.2% 1.7% 1.2% $-$ 2.3% $-$ $-$ 1.0% Netherlands 0.4% 2.2% 0.5% 1.1% 0.4% 0.4% $-$ $-$ 1.1% 1.6% 2.9% 0.2% 0.7% Other M.S. 5.6% 7.9% 6.1% 7.6% 4.0% 10.4% 9.6% 3.5% 9.3% 9.0% 9.0% 10.9% 8.1% Russia 3.9% 1.5% 5.7% 3.9% 2.0% 8.6% 5.7% 7.4% 4.9% 2.3% 0.8% 12.6% 15.2% Israel 1.2% 1.3% 0.7% 1.4% 0.3% 1.1% $-$ 1.2% 1.1% 1.5% 0.7% 0.2% 0.1% United States 30.8% 24.3% 19.2% 21.0% 48.1% 6.9% 10.8% 16.8% 23.0% 24.4% 16.3% 31.7% 10.8% Canada 3.0% 3.0% 2.0% 3.6% 2.8% 0.7% 3.9% 3.4% 0.3% 7.1% 2.6% 1.4% 1.0% Brazil 2.8% 1.7% 3.3% 0.2% 0.7% 5.3% 6.4% 5.4% 5.1% 1.1% 1.8% 1.0% 3.0% Japan 8.3% 5.8% 7.9% 7.2% 4.9% 2.4% 3.1% 4.3% 9.4% 4.2% 11.3% 1.6% 13.6% China 5.6% 1.9% 5.8% 1.8% 2.2% 10.7% 7.5% 4.6% 3.0% 2.3% 4.1% $-$ 6.8% India 2.4% 2.4% 3.6% 1.5% 1.1% 2.8% 5.2% 7.0% 6.3% 2.7% 5.4% $-$ 1.7% Taiwan 1.8% 0.5% 1.5% 0.6% 1.4% 1.6% $-$ 1.7% $-$ $-$ $-$ $-$ 0.8% Korea 1.8% 2.6% 2.6% 0.9% 1.3% 0.5% $-$ 3.6% $-$ 1.1% $-$ 0.7% 0.8% Other Countries 6.5% 5.8% 7.8% 4.1% 2.3% 7.9% 15.7% 24.6% 17.0% 8.6% 1.0% 3.8% 8.9% Total 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% CERN & Member States 31.9% 49.2% 40.1% 53.8% 33.0% 51.5% 41.7% 19.9% 29.7% 44.5% 56.0% 47.0% 37.3% United States 30.8% 24.3% 19.2% 21.0% 48.1% 6.9% 10.8% 16.8% 23.0% 24.4% 16.3% 31.7% 10.8% Other Countries 37.3% 26.4% 40.7% 25.2% 18.9% 41.6% 47.5% 63.3% 47.3% 31.0% 27.6% 21.3% 51.9% Total 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% ----------------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- \[tab:5\] ----------------- ------- ------- ------- ------- ------ ------- ------ ------ ------ ------ ------ ------ ------- ------ CERN 15.5% 29.3% 16.4% 15.6% 2.4% 7.7% 2.2% 0.7% 0.4% 1.9% 1.5% $-$ 6.3% 100% Germany 26.1% 20.4% 15.4% 12.8% 4.9% 6.9% 1.6% 1.1% 1.1% 1.3% 0.9% $-$ 7.5% 100% UK 26.2% 27.0% 12.8% 8.8% 3.3% 5.7% 3.3% 1.1% 1.2% 3.9% 1.7% 0.5% 4.5% 100% INFN 24.4% 22.7% 14.6% 14.7% 4.4% 4.0% 1.8% 1.6% 1.6% 1.3% 0.8% 0.5% 7.5% 100% France 24.7% 13.4% 18.2% 13.5% 4.1% 7.2% 2.4% 1.2% 1.2% 0.5% 1.5% 0.2% 11.8% 100% Spain 29.1% 33.3% 12.2% 6.5% 5.0% 2.8% 1.9% 0.7% 0.7% 1.1% 1.9% $-$ 
4.8% 100% Switzerland 18.3% 24.0% 26.4% 14.4% 4.1% 2.1% 4.7% $-$ $-$ $-$ 2.2% $-$ 3.8% 100% Sweden 19.8% 33.5% 15.7% 10.9% 3.6% 4.9% $-$ 1.1% $-$ $-$ 5.5% $-$ 5.0% 100% Portugal 39.0% 10.7% 16.8% 5.7% 6.5% 1.1% 4.4% 3.1% $-$ 4.4% $-$ $-$ 8.3% 100% Netherlands 14.8% 47.0% 8.7% 10.6% 2.2% 1.9% $-$ $-$ 2.2% 3.1% 3.4% 0.1% 6.0% 100% Other M.S. 26.2% 22.2% 13.0% 9.3% 2.8% 6.6% 3.2% 1.2% 2.4% 2.3% 1.3% 0.4% 9.1% 100% Russia 25.8% 6.1% 17.3% 6.8% 2.0% 7.8% 2.8% 3.6% 1.8% 0.8% 0.2% 0.6% 24.4% 100% Israel 38.5% 25.2% 10.2% 11.4% 1.4% 4.8% $-$ 2.6% 1.9% 2.6% 0.6% 0.1% 0.7% 100% United States 40.5% 19.4% 11.6% 7.3% 9.6% 1.2% 1.0% 1.6% 1.7% 1.8% 0.7% 0.3% 3.4% 100% Canada 35.2% 21.3% 10.9% 11.2% 5.1% 1.1% 3.4% 2.9% 0.2% 4.6% 1.0% 0.1% 3.0% 100% Brazil 34.4% 12.4% 18.3% 0.7% 1.3% 8.8% 5.7% 4.8% 3.4% 0.8% 0.7% 0.1% 8.8% 100% Japan 35.6% 15.0% 15.5% 8.1% 3.2% 1.4% 1.0% 1.3% 2.2% 1.0% 1.6% $-$ 14.1% 100% China 38.4% 7.9% 18.1% 3.3% 2.3% 10.0% 3.7% 2.3% 1.1% 0.9% 0.9% $-$ 11.2% 100% India 28.3% 17.2% 19.8% 4.6% 1.9% 4.6% 4.5% 6.0% 4.2% 1.8% 2.1% $-$ 4.9% 100% Taiwan 48.0% 8.8% 18.3% 4.6% 5.6% 6.1% $-$ 3.4% $-$ $-$ $-$ $-$ 5.1% 100% Korea 32.7% 27.8% 21.0% 4.4% 3.6% 1.2% $-$ 4.7% $-$ 1.1% $-$ 0.1% 3.5% 100% Other Countries 28.7% 15.5% 15.9% 4.7% 1.5% 4.7% 5.1% 7.9% 4.2% 2.1% 0.1% 0.1% 9.6% 100% CERN & Member States 25.3% 23.7% 14.6% 11.2% 4.0% 5.6% 2.4% 1.1% 1.3% 1.9% 1.4% 0.3% 7.2% 100% United States 40.5% 19.4% 11.6% 7.3% 9.6% 1.2% 1.0% 1.6% 1.7% 1.8% 0.7% 0.3% 3.4% 100% Other Countries 33.0% 14.2% 16.5% 5.9% 2.5% 5.0% 3.1% 4.0% 2.3% 1.5% 0.8% 0.1% 11.1% 100% ----------------- ------- ------- ------- ------- ------ ------- ------ ------ ------ ------ ------ ------ ------- ------ \[tab:6\] -- -- -- -- \[fig:6\] -- -- -- -- \[fig:7\] [c]{}\ \ \ \[fig:8\] -- -- -- -- \[fig:9\] -- -- -- -- \[fig:10\] -- -- -- -- \[fig:11\] [^1]: Corresponding author: Salvatore.Mele@cern.ch [^2]: On leave of absence from INFN, Napoli, Italy [^3]: CERN Member States not already listed in the first column of Table 2 are: Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, Greece, Hungary, Norway, Poland and the Slovak Republic.
{ "pile_set_name": "ArXiv" }
---
abstract: 'Satellite galaxies in groups and clusters are more likely to have low star formation rates (SFR) and lie on the ‘red-sequence’ than central (‘field’) galaxies. Using galaxy group/cluster catalogs from the Sloan Digital Sky Survey Data Release 7, together with a high-resolution, cosmological $N$-body simulation to track satellite orbits, we examine the star formation histories and quenching timescales of satellites of $\mstar > 5 \times 10 ^ {9} \msun$ at $z \approx 0$. We first explore satellite infall histories: group preprocessing and ejected orbits are critical aspects of satellite evolution, and properly accounting for these, satellite infall typically occurred at $z \sim 0.5$, or $\sim 5 \gyr$ ago. To obtain accurate initial conditions for the SFRs of satellites at their time of first infall, we construct an empirical parametrization for the evolution of central galaxy SFRs and quiescent fractions. With this, we constrain the importance and efficiency of satellite quenching as a function of satellite and host halo mass, finding that satellite quenching is the dominant process for building up all quiescent galaxies at $\mstar < 10 ^ {10} \msun$. We then constrain satellite star formation histories, finding a ‘delayed-then-rapid’ quenching scenario: satellite SFRs evolve unaffected for $2 - 4 \gyr$ after infall, after which star formation quenches rapidly, with an e-folding time of $< 0.8 \gyr$. These quenching timescales are shorter for more massive satellites but do not depend on host halo mass: the observed increase in satellite quiescent fraction with halo mass arises simply because of satellites quenching in a lower mass group prior to infall (group preprocessing), which is responsible for up to half of quenched satellites in massive clusters. Because of the long time delay before quenching starts, satellites experience significant stellar mass growth after infall, nearly identical to central galaxies. This fact provides key physical insight into the subhalo abundance matching method.'
author:
- |
    Andrew R. Wetzel${}^1$, Jeremy L. Tinker${}^2$, Charlie Conroy${}^3$, and Frank C. van den Bosch${}^1$\
    $^{1}$Department of Astronomy, Yale University, New Haven, CT 06520, USA\
    $^{2}$Center for Cosmology and Particle Physics, Department of Physics, New York University, New York, NY 10013, USA\
    $^{3}$Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138, USA
bibliography:
- 'biblio.bib'
date: June 2012
title: 'Galaxy evolution in groups and clusters: satellite star formation histories and quenching timescales in a hierarchical Universe'
---

\[firstpage\]

methods: numerical – galaxies: clusters: general – galaxies: evolution – galaxies: groups: general – galaxies: haloes – galaxies: star formation.

Introduction
============

Observations have long shown that galaxies in denser regions are more likely to have low star formation rates (SFR), lie on the red sequence, and exhibit more evolved (elliptical) morphologies than similar mass galaxies in less dense regions, from massive galaxies in clusters [@Oem74; @DavGel76; @Dre80; @DreGun83; @PosGel84; @BalMorYee97; @PogSmaDre99] to the lowest mass satellites in the Local Group [@Mat98]. Large-scale galaxy surveys, such as the Sloan Digital Sky Survey [SDSS; @SDSS], have enabled detailed examinations of the correlations between these galaxy properties and their environment at $z \approx 0$ [see @BlaMou09 for a recent review].
Several such early works showed that galaxy SFR/color depends on small-scale ($\lesssim 1 \mpc$) environment, with little-to-no additional dependence on larger scale environment [@HogBlaBri04; @KauWhiHec04; @BlaEisHog05]. More physically, this environmental dependence has been shown to result from satellite galaxies and the properties of their host dark matter halo [@WeivdBYan06a; @BlaBer07; @WilZibBud10; @TinWetCon11], where ‘satellite’ galaxies are all those that are not the massive ‘central’ galaxy at the core of the host halo. These results are physically meaningful, given that the virial radius corresponds to a physical transition from the low-density ‘field’ environment to a high-density, virialized region. After a satellite falls into a host halo, the strong gravitational tidal forces prevent the satellite’s (sub)halo from accreting dark matter and also strip mass from the subhalo from the outside-in [e.g., @DekDevHet03; @DieKuhMad07]. Additionally, if the host halo is massive enough to host a stable, virial accretion shock [@DekBir06], then its thermalized gas also can heat and strip any extended gas in the subhalo [@BalNavMor00; @KawMul07; @McCFreFon08]. Therefore, satellites (eventually) experience reduced gas cooling/accretion rates onto their disc after infall, a phenomenon known as ‘strangulation’ or ‘starvation’ [@LarTinCal80]. More drastically, in the extreme case of both high gas density and satellite velocity, ram-pressure can strip cold gas directly from the disc [@GunGot72; @AbaMooBow99; @ChuvGoKen09]. The dense collection of galaxies in a host halo also allows for the possibility of strong gravitational interactions with neighboring galaxies, known as ‘harassment’ [@FarSha81; @MooLakKat98], and satellites can merge with one another [@MakHut97; @AngLacBau09; @WetCohWhi09a; @WetCohWhi09b; @WhiCohSmi10; @Coh12]. All of these mechanisms are expected to play some role in quenching satellite star formation, though their importance, particularly as a function of host halo mass, remains in debate. To constrain satellite quenching processes, many works have examined the SFRs/colors of satellites in groups/clusters in detail at $z \approx 0$ [e.g., @BalNavMor00; @EllLinYee01; @DePColPea04; @WeivdBYan06a; @BlaBer07; @KimSomYi09; @vdBAquYan08; @HanSheWec09; @KimSomYi09; @PasvdBMo09; @vdLWilKau10; @PreBalJam11; @PenLilRen11; @WetTinCon12a; @WooDekFab13]. In general, these works found that the fraction of satellites that are quiescent/red, $\fsatq$, depends primarily, and independently, on three quantities: $\fsatq$ (1) increases with satellite mass, (2) increases with the mass of the host halo, and (3) increases toward halo center. Trend (1) is caused, at least partially, by the underlying dependence on stellar mass set by central galaxies prior to infall. Trend (2) is sometimes interpreted as satellites being quenched more rapidly in more massive host halos, but the hierarchical nature of halo growth, namely, the possibility of quenching as a satellite in a lower mass halo prior to falling into a more massive halo (‘group preprocessing’), complicates this interpretation [e.g., @ZabMul98; @McGBalBow09]. Finally, trend (3) implies an evolutionary trend, because a satellite’s halo-centric radius negatively correlates with its time since infall [e.g., @GaoWhiJen04; @DeLWeiPog12]. Similar satellite trends persist out to at least $z \sim 1$ [e.g., @CucIvoMar06; @CooNewCoi07; @GerNewFab07; @TraSaiMou09; @PenLilKov10; @McGBalWil11; @GeoLeaBun11; @MuzWilYee12].
Several works have gone beyond simple satellite SFR/color cuts to examine observationally the nature of the full SFR/color distribution. These works have shown that the color [@BalBalNic04; @Ski09] and SFR [@BalEkeMil04; @McGBalWil11; @PenLilRen11; @WetTinCon12a; @WooDekFab13; @WijHopBro12] distribution of galaxies is strongly bimodal across all environmental/halo regimes, and the SFR/color of actively star-forming/blue galaxies does not vary with any environmental measure. As noted in many of the above works, these results imply that the environmental process(es) takes considerable time (several Gyrs) to affect satellite SFR. In this paper, we seek to use the aforementioned observational trends, which we presented in detail in @WetTinCon12a, to quantify—in a robust, statistical manner—the star formation histories and quenching timescales of satellite galaxies at $z = 0$ across a wide range of both satellite and host halo masses. Understanding satellite quenching mechanisms and the timescales over which they operate is important for elucidating the physical processes that occur in groups and clusters, but also for a comprehensive understanding of galaxy evolution overall, given that satellites constitute $\sim 1 / 3$ of all low-mass galaxies [e.g., @YanMovdB07]. Satellite galaxies also provide unique laboratories for examining gas depletion and its relation to star formation because, unlike central galaxies, satellites are thought not to accrete gas from the field after infall. Furthermore, because satellites are significantly more likely to lie on the red sequence, many methods for identifying galaxy groups/clusters rely on selecting red-sequence galaxies [e.g., @GlaYee00; @KoeMcKAnn07a], so a detailed understanding of the systematics of these methods requires characterizing the timescale over which satellites migrate onto the red sequence after infall and how this timescale depends on host halo mass and redshift. Many works have investigated satellite SFR evolution and quenching through the use of semi-analytic models (SAMs) applied to cosmological $N$-body simulations. In one early example, @BalNavMor00 modeled satellite SFR as declining exponentially after infall on a cold gas consumption timescale of a few Gyrs to account for radial gradients of average SFR and color in clusters. Many SAMs assumed that a satellite subhalo’s hot gas is stripped instantaneously as it passes within the host halo’s virial shock, but this scenario quenches star formation too rapidly; only models that remove/strip gas more gradually produce realistic quiescent fractions [@WeivdBYan06b; @FonBowMcC08; @KanvdB08; @BooBen10], but, in general, they have difficulty in correctly reproducing the full SFR distribution. That said, @WeiKauvdL10 recently implemented a modification of the SAM of @DeLBla07 in which the diffuse gas around a satellite galaxy is stripped at the same rate as its host dark matter subhalo (10 - 20% loss per Gyr), showing that this modification produces a satellite SSFR distribution that is broadly in agreement with observations. However, understanding the results of SAMs is complicated by the fact that they do not completely accurately model the evolution of central galaxies, so they do not provide fully accurate initial conditions at infall for satellites.
Relatedly, a few works have examined satellite SFR evolution in cosmological hydrodynamic simulations of galaxy groups [e.g., @FelCarMay11], arguing that quenching occurs largely through the lack of gas accretion and (to a lesser degree) gas stripping after infall. Instead of attempting fully to forward-model all of the relevant physical processes for satellites, our approach is to parametrize satellite star formation histories and constrain their quenching timescales in as much of an empirical manner as possible. We start with detailed measurements of the SFRs of satellites at $z = 0$ from our SDSS group catalog that we presented in @WetTinCon12a and review in §\[sec:method\]. We also describe our cosmological $N$-body simulation, which we use to create a mock group catalog to compare our models to observations robustly. With this simulation, we explore the infall times of satellites in §\[sec:infall-time\]. We then develop an accurate, empirical parametrization for the initial SFRs of satellites at their time of infall in §\[sec:sfr\_at\_infall\]. Having accurate initial SFRs of satellites and measurements of their final SFRs at $z = 0$, we examine the importance and efficiency of satellite quenching in §\[sec:quench\_importance\_efficiency\] and their star formation histories and quenching timescales in §\[sec:sfr-evol\_sat\]. With this, we then examine where satellites were when they quenched in §\[sec:where\_quench\] and their stellar mass growth in §\[sec:m-star\_growth\]. Finally, we discuss the implications of our results for subhalo abundance matching in Appendix \[sec:mass\_growth\_sham\]. This paper represents the third in a series of four. In @TinWetCon11, hereafter , we described our SDSS galaxy sample, presented our method for identifying galaxy groups/clusters, and showed that central and satellite galaxy quiescent fractions are essentially independent of the large-scale environment beyond their host halo. In @WetTinCon12a, hereafter , we used our SDSS group catalog to examine in detail the SFR distribution of satellite galaxies and its dependence on stellar mass, host halo mass, and halo-centric radius, finding that the SFR distribution is strongly bimodal in all regimes. Based on this, we argued that satellite star formation must evolve in the same manner as central galaxies for several Gyrs after infall, but that once satellite quenching starts, it occurs rapidly. In @WetTinCon13a, hereafter , we will examine quenching in galaxy groups and clusters, focusing on ‘ejected’ satellites that passed within a more massive host halo but have orbited beyond the virial radius. We will show that these ejected satellites can explain essentially all trends for star formation quenching in galaxies beyond the virial radius of groups/clusters. Finally, in @WetTinCon13b, hereafter , we will use the detailed orbital histories from our simulation to constrain the physical mechanisms responsible for satellite quenching. For clarity, we outline some nomenclature. We refer to galaxies as ‘quiescent’ in an observational sense: having low SFR but without regard to how or when SFR faded. By contrast, we refer to satellite ‘quenching’ in our models as the physical process of SFR fading rapidly below the quiescence threshold, under the ansatz that once a satellite is quenched it remains so indefinitely.
Our galaxy group catalog refers to ‘group’ in a general sense, as a set of galaxies that occupy a single host halo, regardless of its mass, and we will use ‘(host) halo’ as a more general term for group or cluster. Finally, we cite all masses using $h = 0.7$ for the Hubble parameter. Methods {#sec:method} ======= In this section, we first briefly describe our galaxy sample and group-finding algorithm. (For full details, see for our galaxy sample and group-finding algorithm, and for our SFR measurements.) We then describe our simulation, subhalo finding and tracking, and methodology for making galaxy and group catalogs in the simulation. SDSS Galaxy Catalog ------------------- Our galaxy sample is based on the NYU Value-Added Galaxy Catalog [@BlaSchStr05] from SDSS Data Release 7 [@AbaAdeAgu09]. Galaxy stellar masses are from the [kcorrect]{} code of @BlaRow07, assuming a @Cha03 initial mass function (IMF). We construct two volume-limited samples of all galaxies with $M_r - 5 \log(h) < -18$ and $-19$[^1] , which go out to $z = 0.04$ and $0.06$, from which we identify stellar mass completeness limits of $5 \times 10 ^ {9}$ and $1.3 \times 10 ^ {10} \msun$, respectively. Combining these samples leads to an overall median redshift of $z = 0.045$, though we will indicate this as $z = 0$ for brevity. For our galaxy star formation metric we use specific star formation rate, $\ssfr = \sfr / \mstar$, based on the current release[^2] of the spectral reductions of @BriChaWhi04, with updated prescriptions for active galactic nuclei (AGN) contamination and fiber aperture bias corrections following @SalRicCha07. These SSFRs are derived primarily from emission lines (mostly $\halpha$), but in cases of strong AGN contamination or no measurable emission lines, the SSFRs are inferred from $\dnfk$. Roughly, $\ssfr \gtrsim 10 ^ {-11} \yrinv$ are based almost entirely on $\halpha$, $10 ^ {-12} \lesssim \ssfr \lesssim 10 ^ {-11} \yrinv$ are based on a combination of emission lines, and $\ssfr \lesssim 10 ^ {-12} \yrinv$ are based almost entirely on $\dnfk$ and should be considered upper limits to the true value [@SalRicCha07]. The use of spectroscopically derived SSFRs is critical for our analysis because dust reddening causes simple red/blue color cuts to overestimate the quiescent fraction by up to 50%, particularly at lower mass (see Fig. 1 in ). SDSS Group Catalog {#sec:group_catalog_sdss} ------------------ Motivated by the paradigm that all galaxies reside in host dark matter halos, we identify groups of galaxies that occupy the same host halo and their halo properties through a modified implementation of the group-finding algorithm of @YanMovdB05a [@YanMovdB07]. For our group catalog, we define dark matter host halos such that the mean matter density interior to the virial radius is 200 times the mean background matter density: $\mthm = 200 \bar{\rho}_{\rm m} \frac{4}{3} \pi \rthm^3$. We place galaxies into groups through an iterative procedure outlined in , using the $\mstar > 10 ^ {9.7} \msun$ sample at $z < 0.04$ and the $\mstar > 10 ^ {10.1} \msun$ sample at $0.04 < z < 0.06$. We assign dark matter halo masses to groups by matching the abundance of halos above a given dark matter mass to the abundance of groups above a given total stellar mass: $n(> M_{\rm vir, halo}) = n(> M_{\rm star, group})$. 
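A minimal sketch of this abundance-matching step is given below; the power-law cumulative mass function is an illustrative stand-in (the actual mass function used is specified next), and the function and variable names are ours, not those of the group-finding code.

```python
import numpy as np

def assign_halo_masses(group_mstar, volume, n_of_m):
    """Abundance matching: solve n_halo(>M) = n_group(>M*).

    group_mstar : total stellar mass of each group [Msun]
    volume      : survey/sample volume [(Mpc/h)^3]
    n_of_m      : cumulative halo mass function n(>M), monotonically decreasing
    """
    order = np.argsort(group_mstar)[::-1]                   # rank groups by total M*
    n_cum = (np.arange(group_mstar.size) + 1) / volume      # cumulative number density
    m_grid = np.logspace(11, 16, 512)
    n_grid = n_of_m(m_grid)
    m_halo = np.interp(n_cum, n_grid[::-1], m_grid[::-1])   # invert n(>M) on a grid
    out = np.empty_like(group_mstar)
    out[order] = m_halo
    return out

# illustrative power-law stand-in (NOT the real halo mass function), [h^3 Mpc^-3]
toy_n_of_m = lambda m: 1e-2 * (m / 1e12) ** -0.9

rng = np.random.default_rng(0)
mstar_group = 10 ** rng.uniform(9.7, 12.0, size=1000)       # toy group stellar masses
m_halo = assign_halo_masses(mstar_group, volume=1.0e5, n_of_m=toy_n_of_m)
```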
Here, we use the host halo mass function from @TinKraKly08, based on a flat, $\Lambda$CDM cosmology of $\Omega_{\rm m} = 0.27$, $\Omega_{\rm b} = 0.045$, $h = 0.7$, $n_{\rm s} = 0.95$ and $\sigma_8 = 0.82$, consistent with a wide array of observations [e.g., @KomSmiDun11 and references therein]. Every group contains one ‘central’ galaxy, which by definition is the most massive, and can contain any number (including zero) of less massive ‘satellite’ galaxies. We define a group’s center by the location of its most massive galaxy. However, in reality the most massive galaxy is not always the one closest to the minimum of a halo’s potential well, particularly at high halo mass [@SkivdBYan11]. This effect arises largely because of the shallowness of the $\mstar - \msubhalo$ relation at high mass, such that any non-negligible scatter in this relation leads to a significant probability that if a less massive halo falls into a more massive halo, the central galaxy of the less massive halo has higher stellar mass than that of the more massive halo. However, these are typically cases in which two galaxies in a halo have similarly high mass, and in this regime almost all galaxies exhibit quiescent SSFRs regardless of central versus satellite demarcation. Also, because we will assume realistic scatter (0.15 dex) in the $\mstar - \msubhalo$ relation in making our simulation group catalog (see §\[sec:galaxy\_catalog\_sim\]), this effect propagates into our model results as well. Because of projection effects and redshift-space distortions, our groups inevitably contain interloping galaxies: some central galaxies are mis-assigned as satellites in higher mass halos (reducing group purity), and conversely some satellites are mis-assigned as central galaxies of lower mass halos (reducing group completeness). As detailed in , an average of $\sim 10\%$ of galaxies are mis-assigned in this way [see also @YanMovdB07]. In this work, we apply the same group-finding algorithm to our simulation, as described below, allowing us to examine our theoretical results in ‘observational’ space and correct for these effects. Simulation and Subhalo Tracking {#sec:simulation} ------------------------------- Our goal is to understand the SFR evolution of galaxies in groups/clusters and how this evolution connects with satellite infall times and lifetimes, which are governed by complex dynamical processes. To this end, we require a cosmological simulation that both provides significant statistics across a broad range of host halo masses and can robustly track satellite evolution from first infall to final merging/disruption. We employ a dissipationless, $N$-body simulation using the TreePM code of @Whi02 with flat, $\Lambda$CDM cosmology of $\Omega_{\rm m} = 0.274$, $\Omega_{\rm b} = 0.0457$, $h = 0.7$, $n = 0.95$ and $\sigma_8 = 0.8$, nearly identical to the cosmology used in making our group catalog. To achieve both high resolution and significant volume, the simulation evolves $2048 ^ 3$ particles in a $250 \hmpc$ box, with a particle mass of $1.98 \times 10^8 \msun$ and a Plummer equivalent smoothing of $2.5 \hkpc$. Initial conditions are generated at $z = 150$ using second-order Lagrangian Perturbation Theory, with a displacement RMS of 38% of the mean inter-particle spacing. 45 outputs are stored from $z = 10$ to 0, spaced evenly in $\ln(a)$, with output time spacings of 400 and $650 \myr$ at $z = 1$ and 0, respectively. This same simulation was used in @WhiCohSmi10. 
We identify ‘host halos’ using the Friends-of-Friends (FoF) algorithm [@DavEfsFre85] with a linking length of $b = 0.168$ times the mean inter-particle spacing, which groups particles bounded by an isodensity contour of $\sim 100 \times$ the mean matter density. ($b = 0.2$ is often used, but it is more susceptible to joining together distinct, unbound structures.) Note that this halo definition is different from the spherical overdensity definition used in making the SDSS group catalog, but we address this issue in §\[sec:group\_catalog\_sim\]. Within host halos, we identify ‘subhalos’ as overdensities in phase space through a 6-dimensional FoF algorithm (FoF6D), also described in @WhiCohSmi10. Based on extensive experimentation, we use a configuration space linking length of 0.078 of the simulation’s mean interparticle spacing and a velocity linking length of 0.368 of each host halo’s 1D velocity dispersion.[^3] Our tests show that our FoF6D implementation leads to good agreement with *SUBFIND* [@SprWhiTor01] for massive, well-resolved subhalos, but FoF6D is significantly more robust in tracking low-mass subhalos and those that pass close to halo center. For both host halos and subhalos, we keep all objects with at least 50 particles, and we define each object’s center and velocity by the position and velocity of its most bound particle. We track host halos and subhalos across simulation outputs and build merger trees as described in @WetCohWhi09b and @WetWhi10, with slight modifications as given below. We assign to each (sub)halo a unique ‘child’ (sub)halo at the next simulation output, based on its 20 most bound particles. We track subhalo histories across four consecutive outputs at a time because a subhalo can briefly disappear while passing through the center of its host halo or another subhalo. In these cases, we ensure that the subhalo is identified at each output by creating ‘virtual’ subhalos, interpolating the properties of the temporarily disappearing subhalos between outputs. If a (sub)halo has multiple ‘parent’ (sub)halos at the previous output, we identify the main parent as the most massive one (using $\mmax$ for subhalos, see below), and we use this main parent in tracking back a (sub)halo’s history and identifying its main progenitor at an earlier time. We define a ‘central’ subhalo as being the most massive subhalo in a newly-formed (has no parent) host halo at a given simulation output. The central subhalo almost always corresponds to the object at the minimum of a halo’s potential well. A subhalo retains its ‘central’ definition until falling into (specifically, becoming linked via FoF to) a more massive host halo, at which point it becomes a ‘satellite’ subhalo. Every sufficiently bound halo hosts one central subhalo at its core and can host zero, one, or multiple satellite subhalos, so these ‘central’ and ‘satellite’ definitions for subhalos closely reflect those of galaxies in the group catalog. We assign to each subhalo a maximum mass, $\mmax$, motivated by the strong correlation of this quantity with galaxy stellar mass (see §\[sec:group\_catalog\_sim\]). A subhalo’s $\mmax$ is based on the maximum host halo mass that it ever had as a central subhalo (so it corresponds to FoF halo mass and not FoF6D subhalo mass). For a central subhalo, $\mmax$ almost always corresponds to its current halo mass, the primary exception being those that have passed through a more massive halo and been ejected (see §\[sec:infall\_eject\]). 
For a satellite subhalo, $\mmax$ almost always corresponds to its halo mass sometime prior to infall, though not necessarily immediately prior to infall, because a satellite typically undergoes some mass stripping just prior to infall, arising from the strong tidal forces near a massive halo. (The two effects above motivate our use of $\mmax$ instead of mass at infall, which was used in @WetCohWhi09b [@WetCohWhi09a] and @WetWhi10.) Thus, a satellite’s $\mmax$ remains fixed after infall, unless it merges with another satellite, in which case the resultant satellite is given the sum of its parents’ $\mmax$ values. As demonstrated in @WetWhi10, simulations at our high resolution can resolve and track massive satellite subhalos past the point at which the galactic stellar component that they host would (start to) be stripped, merge with the central galaxy, or otherwise be disrupted. Not accounting for this galactic merging/disruption in $N$-body simulations can lead to stronger small-scale clustering than seen in observations. This effect is even more significant for our phase-space FoF6D subhalo finder, which tracks subhalos more robustly down to small halo radii. To account for this effect, we use the subhalo merging/disruption scheme in @WetWhi10 and remove subhalos with $M_{\rm bound} / \mmax < 0.007$, which provides good agreement with mass/luminosity-dependent clustering, satellite fractions, and luminosity functions in clusters for this simulation and FoF6D subhalo finder. Thus, we properly resolve the orbital histories and infall times of all satellites in our sample ($\mstar > 5 \times 10 ^ {9} \msun$ corresponds to subhalo $\mmax > 3 \times 10 ^ {11} \msun$, see below). Simulation galaxy catalog {#sec:galaxy_catalog_sim} ------------------------- Under the assumption that a galaxy resides at the center of each surviving dark matter subhalo, we assign stellar mass using subhalo abundance matching [SHAM; @ValOst06; @ConWecKra06]. This method assumes a one-to-one mapping that preserves rank ordering between subhalo $\mmax$ (or maximum circular velocity) and galaxy $\mstar$, such that $n(> \mmax) = n(>\mstar)$, allowing one to assign $\mstar$ to subhalos empirically using an observed stellar mass function (SMF) that is recovered, by design. SHAM has succeeded in reproducing many observed galaxy statistics, including spatial clustering, satellite fractions, cluster luminosity functions, and luminosity-velocity relations [@ConWecKra06; @ValOst06; @BerBulBar06; @WanLiKau06; @YanMovdB09; @WetWhi10; @TruKlyPri11]. We note that, despite these successes, SHAM in its simplest incarnation, using $\mmax$ for both satellite and central subhalos, may not be fully accurate in assigning stellar mass to both satellite and central subhalos simultaneously, because there is some freedom in allowing satellites to follow a different relation [@NeiLiKho11; @YanMovdB12; @MosNaaWhi13]. We discuss this issue further in Appendix \[sec:mass\_growth\_sham\]. For this work, we use the SMF from @LiWhi09, based on the same SDSS NYU-VAGC sample as our galaxy catalog, including the same $K$-correction and IMF. We apply SHAM at a simulation output of $z = 0.05$, close to the median redshift of our SDSS catalog. While SHAM in its simplest implementation assumes a one-to-one correspondence between $\mmax$ and $\mstar$, a scatter of $0.15 - 0.2$ dex in this relation is suggested by observations [e.g., @ZheCoiZeh07; @YanMovdB08; @MorvdBCac09; @WetWhi10; @LeaTinBun12]. 
Thus, in our implementation we assume 0.15 dex log-normal scatter in $\mstar$ at fixed $\mmax$, achieved by deconvolving the observed SMF with a log-normal filter such that we recover the observed SMF after adding this scatter. Simulation group catalog {#sec:group_catalog_sim} ------------------------ To make robust comparisons with our SDSS group catalog, we produce a ‘simulation group catalog’ by applying the same group-finding algorithm that we use in SDSS to our simulation galaxy catalog. (We use the distant observer approximation and do not produce a light-cone.) While we base our models of SFR evolution on true satellite versus central demarcation in the simulation, we effectively ‘observe’ the results at $z = 0.05$ through the simulation group catalog, which includes the effects of interloping galaxies caused by redshift-space distortions and any other systematics of the group-finding algorithm. In , we will show that the galaxy distributions in the simulation group catalog closely match those of the SDSS group catalogs. Our simulation group catalog also allows us to correct for the effects from interloping galaxies in measuring satellite and central galaxy quiescent fractions in the SDSS group catalog. Interlopers have little effect on central galaxy quiescent fractions because central galaxies strongly outnumber satellites, but interlopers do cause the observed satellite quiescent fractions to be $\sim 10\%$ too low (see Appendix C of ). To correct for these effects, we create a ‘mock’ SDSS group catalog by empirically assigning SSFRs to galaxies in the simulation group catalog, matching the observed SSFR distribution separately for satellite and central galaxies in narrow bins of galaxy and host halo mass. We then measure quiescent fractions in the mock catalog according to each galaxy’s true (real-space) satellite/central designation, which provides values unbiased by redshift-space distortions. Finally, our use of a simulation group catalog mitigates any inconsistency between the simulation’s FoF halo definition, which allows arbitrary morphology, and the spherical overdensity halo definition applied to SDSS, because we always compare the two galaxy catalogs using the same halo definition. For reference, $\mthm \approx 1.2\,M_{\rm FoF}(b=0.168)$. Satellite infall types and times {#sec:infall-time} ================================ To inform our models for the evolution of satellite SFR and interpret our results, we first use the simulation to explore the pathways and times of infall for satellites at $z = 0.05$ (the median redshift of our SDSS catalog). To highlight physical trends free from the ambiguities of redshift-space distortions, in this section we do not use the simulation group catalog, but we examine satellites based directly on the simulation halo catalog. Satellite infall and ejection {#sec:infall_eject} ----------------------------- In examining the infall times of satellites, we consider two ways to define infall: the time of most recent infall into the current host halo, or the time of first infall into any host halo. The latter includes any time spent in a lower mass halo before falling into the current host halo. The latter also naturally incorporates satellites that temporarily orbit beyond their host halo’s virial radius, $\rvir$, which we call ‘ejected satellites’, and then fall in again [@GilKneGib05; @LudNavSpr09; @WanMoJin09]. 
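The sketch below illustrates how first and recent infall times can be extracted from a single subhalo's stored history, including the two-consecutive-output requirement introduced below; the input arrays are hypothetical stand-ins for the merger-tree quantities we track, not the actual code used for our catalogs.

```python
import numpy as np

def infall_times(t_outputs, is_satellite, host_id, main_prog_id):
    """Return (t_first_infall, t_recent_infall) for one subhalo's history.

    t_outputs    : output times [Gyr], increasing
    is_satellite : boolean array, True when the subhalo is FoF-linked to a
                   more massive host halo at that output
    host_id      : ID of the host halo occupied at each output
    main_prog_id : ID of the main progenitor of the z = 0 host halo at each output
    Returns np.nan for an infall that never occurred.
    """
    n = len(t_outputs)
    t_first = np.nan
    # first infall: earliest output at which the subhalo is a satellite at two
    # consecutive outputs (avoids temporary FoF bridging in a halo's outskirts)
    for i in range(n - 1):
        if is_satellite[i] and is_satellite[i + 1]:
            t_first = t_outputs[i]
            break
    # recent infall: latest transition from outside to inside the main
    # progenitor of the current (z = 0) host halo
    in_current = np.asarray(is_satellite) & (np.asarray(host_id) == np.asarray(main_prog_id))
    t_recent = np.nan
    for i in range(n):
        if in_current[i] and (i == 0 or not in_current[i - 1]):
            t_recent = t_outputs[i]
    return t_first, t_recent
```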
If one considers only their most recent infall, these ejected satellites might appear to be central galaxies falling in directly from the field for the first time, but as we will show, it is more appropriate to consider them as satellites since their time of first infall. We define the ‘first’ infall of a subhalo as the first time that it becomes a satellite (is linked by the FoF algorithm) in another host halo, and ‘recent’ infall as the time that it falls into the main progenitor of its current host halo. In defining first infall, we additionally require that a subhalo remains a satellite for at least two consecutive simulation outputs to avoid cases of temporary bridging as a subhalo briefly passes through the outskirts of a host halo. Fig. \[fig:infall-type-frac\_v\_m-halo\] shows the fraction of infall types for satellites in the simulation at $z = 0.05$, as a function of their current host halo mass. Shaded widths show the 68% confidence interval as given by a beta distribution [@Cam11]. In low-mass halos, 90% of satellites in our mass range fell in as central galaxies directly from the field for the first time, but this fraction drops below 50% in halos $> 10 ^ {14} \msun$, where a significant fraction of satellites instead fell in as members of another host halo (group preprocessing). So, group preprocessing of star formation is potentially important in this regime, as we will explore in §\[sec:where\_quench\]. We also examine the dependence on satellite stellar mass (not shown), though it is much weaker than for host halo mass, with the fraction that fell in as a satellite in another halo or for the second (or more) time after being ejected both dropping from 20% at $\mstar = 10 ^ {9.7} \msun$ to 10% at $\mstar = 10 ^ {11.3} \msun$. These infall fractions agree broadly with those of previous works [@BerSteBul09; @McGBalBow09; @DeLWeiPog12], though each of those works finds quantitative differences arising largely from different satellite and group mass limits [see discussion in @DeLWeiPog12]. Fig. \[fig:infall-type-frac\_v\_m-halo\] also shows that satellites that have been ejected beyond their host halo’s $\rvir$ play an important role in satellite evolution. We find that these satellites typically spend $1 - 2 \gyr$ inside a host halo when they first fall in, experiencing a single pericentric passage before being ejected. They then spend a longer time ($2 - 3 \gyr$) orbiting as a central galaxy beyond $\rvir$ until they fall in again. Both timescales have no significant dependence on satellite mass, but both do have broad distributions extending out to $4 - 5 \gyr$, implying that some satellites are kicked out via multi-body encounters after several orbital passages [@LudNavSpr09]. After being ejected, $\gtrsim 90\%$ of these (now central) galaxies continue to lose halo mass, with a typical ejected satellite currently having half of the halo mass that it had at the time of ejection.[^4] This halo mass stripping occurs as ejected satellites orbit in the hot, dynamic environment surrounding a massive host halo. This continued halo mass loss strongly suggests that ejected satellite galaxies evolve in a similar way as those within $\rvir$ and would also exhibit truncated SFRs. In , we will examine the trends of ejected satellites as a function of halo-centric distance, showing that the ejected fraction around massive halos can account for the enhanced quiescent fraction of central galaxies out to $\sim 2.5\,\rthm$ around massive clusters that we noted in [see also @WanYanMo09]. 
Thus, we conclude that SFRs in ejected satellites continue to evolve in a similar manner to those within $\rvir$, and we will treat the two populations identically in our models for SFR evolution. Satellite infall times {#sec:infall-time_v_m} ---------------------- With the above definitions of first and recent infall, Fig. \[fig:t-inf\_v\_m-halo\] shows how surviving satellites’ time since infall (and redshift of infall) depends on their current host halo mass. Time since recent infall does not depend strongly on either host halo mass or satellite stellar mass (the latter not shown). There is a slight decrease at high halo mass, because more massive halos formed more recently, gaining many satellites via the recent infall of groups (Fig. \[fig:infall-type-frac\_v\_m-halo\]). By contrast, time since first infall exhibits a strong increase with halo mass, a result of the increased fraction of infalling groups and ejected satellites from Fig. \[fig:infall-type-frac\_v\_m-halo\] [see also @WanLiKau07; @DeLWeiPog12]. We also find that more massive satellites experienced their first infall slightly more recently, with median time since first infall falling from $5 \gyr$ at $\mstar = 10 ^ {9.7} \msun$ to $3.5 \gyr$ at $\mstar = 10 ^ {11.3} \msun$. This stellar mass dependence is caused by hierarchical structure growth, such that later infalling satellites had more time to grow in mass before infall, coupled with shorter lifetimes for more massive satellites, such that satellites that fell in early but have survived to the present are preferentially of lower mass. Half of satellites in our mass range first fell in at $z \gtrsim 0.5$, with a broad tail out to $z \ge 1$ (shaded region), so they typically have experienced $\gtrsim 4 \gyr$ evolving as a satellite. Furthermore, while a satellite typically has spent $\sim 3 \gyr$ in its current host halo, the timescale is twice as long if in a $> 10 ^ {14} \msun$ halo. Thus, on average, satellites have spent 1/3 - 1/2 of their galactic lifetime evolving as a satellite, highlighting the importance of satellite evolution as a component of galaxy evolution. These infall times also highlight the importance of obtaining accurate satellite SFR initial conditions prior to infall to understand their subsequent SFR evolution. Finally, we emphasize that we limit our analysis to subhalos that our simulation resolves well (at least 1500 particles at infall), and that we have accounted for satellite merging/disruption in a way that matches various observed galaxy statistics, providing accurate satellite lifetimes. Thus, we expect our results to be robust against resolution effects. Defining satellite infall ------------------------- We have shown that more than half of satellites in host halos $> 10 ^ {14} \msun$ fell in as a satellite in another halo or as an ejected satellite, and that, as a result, time since infall is higher in more massive halos if considering first infall. Combined with three key observational results from , this provides strong evidence that the satellite-specific quenching process(es) begins at infall, regardless of the host halo’s mass. First, satellites exhibit an enhanced quiescent fraction even in the lowest mass host halos that we probed ($3 \times 10 ^ {11} \msun$). To the extent that this remains true at higher redshift, the satellite-specific process(es) begins upon infall into any halo, regardless of its mass. 
Second, central galaxies out to $\sim 2\,\rthm$ around massive host halos exhibit an enhanced quiescent fraction, implying that ejected satellites can become/remain quenched. Third, the satellite quiescent fraction increases with host halo mass even at fixed projected distance from halo center, $\dproj / \rvir$, including satellites at $\dproj \approx \rvir$, which tend to have fallen into their halo recently. More specifically, at fixed $\dproj / \rvir$, time since recent infall does not increase with halo mass, but time since first infall does (see ). Taken together, these constitute strong evidence that the satellite-specific quenching process(es) starts to set in at first infall, and we will use only this definition of infall henceforth. For a satellite that experienced first infall at time $\tinf$, we refer to its time since first infall, $\tsinf = t - \tinf$. The quenching of star formation in satellites ============================================= In this section, we present our main results on satellite quenching. First, in §\[sec:sfr\_at\_infall\], we construct an empirical parametrization for the evolution of SFRs of central galaxies to provide accurate initial conditions for the SFRs of satellites at their time of first infall. Then, in §\[sec:quench\_importance\_efficiency\] we explore the importance and efficiency of satellite quenching in an empirical manner by combining satellite infall times with our parametrization for initial quiescent fractions from §\[sec:qu.frac-evol\_cen\]. Finally, in §\[sec:sfr-evol\_sat\] we develop models for satellite SFR evolution after infall to constrain the timescales over which satellites are quenched. SFR in satellites at the time of first infall {#sec:sfr_at_infall} --------------------------------------------- To understand the evolution of SFR in satellites after infall, we first need accurate initial conditions for their SFRs just prior to infall, as given by the SFRs of central galaxies of the appropriate stellar mass at a satellite’s redshift of infall. As we showed in §\[sec:infall-time\_v\_m\], satellites in our mass range first fell in typically at $z \approx 0.5$, with a broad distribution out to $z > 1$, so using $z \approx 0$ central galaxy properties as the initial conditions for satellites, which has been a common approach [e.g., @vdBAquYan08; @TinWet10; @PenLilRen11; @DeLWeiPog12], is a poor approximation. Thus, we require a statistical parametrization of the full SFR distribution of central galaxies as a function of both stellar mass and redshift. In order to describe this quantity both accurately and in a manner that is independent of (and minimally degenerate with) our models of satellite SFR evolution after infall, we proceed in an empirical manner. While some previous approaches have modeled the evolution of galaxy stellar mass and SFR empirically [e.g., @ConWec09], these studies examined only the average SFRs of galaxies, not taking into account the bimodal nature of the SFR distribution and, in particular, quiescent galaxies. Our approach is more comprehensive, as our evolution parametrization contains two key components: the fraction of central galaxies that are quiescent and the normalization of galaxy SFRs. We describe these in turn. 
### Evolution of the quiescent fraction for central galaxies {#sec:qu.frac-evol_cen} Our goal is to start with observations of the evolution of the quiescent fraction for galaxies out to $z = 1$ and decompose this into the evolution for satellite and central galaxies separately, thus allowing us to use the values for central galaxies to provide the initial quiescent fractions for satellites at infall. In order to quantify reasonable systematic modeling uncertainty, we implement two contrasting parametrizations for this satellite-central decomposition at $z > 0$. Our ‘fiducial’ parametrization separates satellite and central galaxies based on analysis of spatial clustering measurements [@TinWet10]: here, the satellite quiescent fraction does not evolve at fixed stellar mass, and the central galaxy quiescent fraction declines rapidly with redshift. As an ‘alternative’ parametrization, we allow both satellite and central galaxy quiescent fractions to evolve at the same rate, which leads to the central galaxy quiescent fraction declining more gradually with redshift. We include both of these contrasting parametrizations with the goal of bracketing reasonable modeling uncertainty in our approach. We begin by describing our fiducial parametrization. We can obtain the fraction of central galaxies that are quiescent, $\fcenq = \ncenq / \ncen$, by knowing the fraction of all galaxies that are quiescent, $\fallq = \nallq / \nall$, the fraction of all galaxies that are satellites, $\fsat = \nsat / \nall$, and the fraction of satellites that are quiescent, $\fsatq = \nsatq / \nsat$, via $$\label{eq:qu.frac_v_z_cen} \fcenq = \frac{\fallq - \fsatq \fsat}{1 - \fsat} \, ,$$ in which each fraction depends on stellar mass and redshift.

Table \[tab:qu.frac\_v\_z\_cen\]:

| Fit | Parameter | $\log(\mstar / \msun)$: $9.5-10.0$ | $10.0-10.5$ | $10.5-11.0$ | $11.0-11.5$ |
|:---|:---|:---:|:---:|:---:|:---:|
| $\fallq$ | $A$ | 0.227 | 0.471 | 0.775 | 0.957 |
| | $\alpha$ | -2.1 | -2.2 | -2.0 | -1.3 |
| $\fsat$ | $B_0$ | 0.33 | 0.30 | 0.25 | 0.17 |
| | $B_1$ | -0.055 | -0.073 | -0.11 | -0.10 |
| $\fsatq$ | $C_0$ | | | | |
| | $C_1$ | | | | |
| $\fcenq$ (alternate) | $D_0$ | | | | |
| | $D_1$ | | | | |

To determine the quiescent fraction for all galaxies, $\fallq(\mstar, z)$, we combine our SDSS results for all galaxies at $z = 0$ with quiescent fractions from the COSMOS survey at $z < 1$ [@DroBunLea09]. While the results of @DroBunLea09 are based on photometric spectral energy distributions (SEDs), the significant number of photometric bands (30) in COSMOS helps to ensure accurate redshifts, stellar masses, and active versus quiescent demarcations. In particular, @DroBunLea09 identified active versus quiescent galaxies using full SED fitting, which minimizes the effects of dust contamination as compared with simpler color cuts.[^5] Fig. \[fig:qu.frac\_v\_z\]a shows the evolution of the quiescent fraction in bins of stellar mass, along with the best-fit relation in each mass bin $$\label{eq:qu.frac_v_z} \fallq(\mstar, z) = A(\mstar) \times (1 + z) ^ {\alpha(\mstar)} \,.$$ We split our SDSS sample into narrow redshift bins, and we anchor the fit to the quiescent fraction in the lowest ($z < 0.04$) bin. Note that the evolution within SDSS broadly agrees with the fits to much higher redshift. In all mass bins, the quiescent fraction has at least doubled since $z = 1$. To determine the satellite fraction, $\fsat(\mstar, z)$, we use the simulation directly, motivated by the agreement (within observational uncertainty) of this quantity between the simulation and our SDSS group catalog at $z = 0$ [see also @WetWhi10]. 
We use SHAM to assign stellar mass to subhalos at each redshift, using the SMF from SDSS [@LiWhi09] at $z = 0.1$ and from COSMOS [@DroBunLea09] at $z = 0.3, 0.5, 0.7, 0.9$, assuming 0.15 dex $\mstar - \mmax$ scatter in all cases. Fig. \[fig:qu.frac\_v\_z\]b shows the evolution of the satellite fraction in stellar mass bins. We find that linear growth with redshift at fixed mass, as given by $$\label{eq:sat.frac_v_z} \fsat(\mstar, z) = B_0(\mstar) + B_1(\mstar) z \, ,$$ provides a reasonable fit, as Fig. \[fig:qu.frac\_v\_z\]b shows. Note that the fluctuations with redshift are driven by the evolution of the SMF in @DroBunLea09 and not by subhalo statistics. Finally, to determine the quiescent fraction for satellites, $\fsatq(\mstar, z)$, we first fit to the stellar mass dependence at $z = 0$ from our SDSS group catalog, as Fig. \[fig:qu.frac\_v\_m-star\] shows. Recall that we use the simulation group catalog to remove the effects of galaxy interlopers caused by redshift-space distortions. We find that $$\label{qu.frac_v_m_sat} \fsatq(\mstar, z = 0) = C_0 + C_1 \log(\mstar)$$ provides a reasonable fit in our mass range, as shown by the dotted curve. We then impose that there is no evolution of this quantity, such that $\fsatq(\mstar, z) = \fsatq(\mstar, z = 0)$. This choice is motivated by the results of @TinWet10, who found no evolution in the quiescent fraction for satellites at fixed magnitude at $z \le 1$, based on halo occupation modeling of the spatial clustering and number densities of galaxy samples from the Classifying Objects by Medium-Band Observations (COMBO-17) [@PhePeaMei06] and Deep Extragalactic Evolutionary Probe (DEEP2) [@CoiNewCro08] surveys. Note that find similar results from the spatial clustering of COSMOS galaxies. Putting these ingredients into equation (\[eq:qu.frac\_v\_z\_cen\]), we obtain the quiescent fraction for central galaxies as a function of stellar mass and redshift, as Fig. \[fig:qu.frac\_v\_z\]c shows (solid curves). The difference from the overall quiescent fraction (panel a) is modest at high mass but is more significant at low mass, where central versus satellite quiescent fractions differ more strongly and the satellite fraction is higher. This fiducial parametrization is based on the satellite quiescent fraction not evolving at fixed stellar mass since $z = 1$. This behavior is motivated by spatial clustering measurements and also is supported by @GeoLeaBun11, who examined galaxy groups of mass $10 ^ {13 - 14} \msun$ in COSMOS out to $z = 1$ and found no significant evolution in the quiescent fraction of group members, at least for sufficiently massive galaxies ($\mstar > 3 \times 10 ^ {10} \msun$), for which their sample is complete. However, several other works observe that the quiescent/red fraction of galaxies in clusters decreases with increasing redshift [e.g., @ButOem84; @PogvdLDeL06; @McGBalWil11]. In order to investigate this possible systematic uncertainty, we develop an alternate parametrization to quantify the impact of non-zero evolution of the satellite quiescent fraction at fixed stellar mass. 
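Before describing that alternate parametrization, the fiducial decomposition can be summarized in the short sketch below, which evaluates equation (\[eq:qu.frac\_v\_z\_cen\]) with the fits above. It uses linear rather than spline interpolation between mass bins, assumes the bin centres for the tabulated coefficients, and leaves $C_0$, $C_1$ as placeholders, so it is illustrative only.

```python
import numpy as np

# Bin centres in log10(Mstar/Msun) and coefficients from Table [tab:qu.frac_v_z_cen];
# C0, C1 below are placeholders for the satellite quiescent-fraction fit.
LOGM_BINS = np.array([9.75, 10.25, 10.75, 11.25])
A     = np.array([0.227, 0.471, 0.775, 0.957])
ALPHA = np.array([-2.1, -2.2, -2.0, -1.3])
B0    = np.array([0.33, 0.30, 0.25, 0.17])
B1    = np.array([-0.055, -0.073, -0.11, -0.10])
C0, C1 = 0.0, 0.0  # placeholder values

def interp_bins(values, log_mstar):
    """Interpolate binned coefficients as a function of log10(Mstar)."""
    return np.interp(log_mstar, LOGM_BINS, values)

def f_all_q(log_mstar, z):
    """Quiescent fraction of all galaxies: f = A (1 + z)^alpha."""
    return interp_bins(A, log_mstar) * (1.0 + z) ** interp_bins(ALPHA, log_mstar)

def f_sat(log_mstar, z):
    """Satellite fraction: f = B0 + B1 z."""
    return interp_bins(B0, log_mstar) + interp_bins(B1, log_mstar) * z

def f_sat_q(log_mstar, z):
    """Satellite quiescent fraction; fiducial case: no redshift evolution."""
    return C0 + C1 * log_mstar

def f_cen_q(log_mstar, z):
    """Central quiescent fraction, (f_all_q - f_sat_q f_sat) / (1 - f_sat)."""
    fs = f_sat(log_mstar, z)
    return (f_all_q(log_mstar, z) - f_sat_q(log_mstar, z) * fs) / (1.0 - fs)
```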
Motivated by the idea that, at least out to $z \sim 1$, the quiescent fraction for satellites is always higher than for central galaxies of the same mass, as supported by observations [e.g., @CooNewCoi07; @McGBalWil11; @GeoLeaBun11], we parametrize the quiescent fractions for both satellite and central galaxies as decreasing with redshift at the same rate as for all galaxies, such that $\fcenq / \fsatq$ remains fixed.[^6] That is, at fixed mass, central galaxies evolve as $$\label{eq:qu.frac_v_z_cen_alt} \fcenq(\mstar, z) = \fcenq(\mstar, z = 0) \times (1 + z) ^ {\alpha(\mstar)} \, ,$$ using the same value of $\alpha$ as for all galaxies in the same stellar mass bin. We fit for $\fcenq(\mstar, z = 0)$ directly from the group catalog, as Fig. \[fig:qu.frac\_v\_m-star\] shows, finding that $$\label{eq:qu.frac_v_m_cen_alt} \fcenq(\mstar, z = 0) = D_0 + D_1 \log(\mstar / \msun)$$ provides a reasonable fit. The dashed curves in Fig. \[fig:qu.frac\_v\_z\]c show the resultant evolution of the central galaxy quiescent fraction in this alternate parametrization, which exhibits a more gradual decline with redshift. However, for both parametrizations, the central galaxy quiescent fraction declines significantly with increasing redshift. Again, we emphasize that our two parametrizations provide significant contrast, and we consider their difference as representative of the reasonable systematic uncertainty in satellite initial conditions.[^7] Table \[tab:qu.frac\_v\_z\_cen\] lists the fits and parameters for each term in equations (\[eq:qu.frac\_v\_z\_cen\]) - (\[eq:qu.frac\_v\_m\_cen\_alt\]). For binned parameters, we will use spline interpolation as a function of $\log\left(\mstar\right)$ to obtain smooth stellar mass dependence. ### Evolution of SFR for central galaxies {#sec:sfr-evol_cen} Having parametrized the evolution of the quiescent fraction for central galaxies, we now develop a prescription for the evolution of their SFRs, which are observed to increase with redshift [e.g., @NoeWeiFab07]. For central galaxies that remain active at $z = 0$, we parametrize their star formation history through a modified exponential $\tau$ model, $$\begin{aligned} \label{eq:sfr-evol_cen} \sfrcen(t) & \propto & (t - t_f) \exp \left\{ -\frac{(t - t_f)}{\taucen} \right\} \\ \mstar(t) & = & f_{\rm retain} \int_{t_f} ^ t \sfr(t) \, \rm{d}t \notag\end{aligned}$$ with $t_f$ being the time of initial formation, which we take to be $t(z = 3)$ for all galaxies in our mass range, and $f_{\rm retain}$ being the fraction of stellar mass that is not lost through supernovae and stellar winds, which we take to be $f_{\rm retain} = 0.6$.[^8] To obtain $\taucen$, we place all active central galaxies in our SDSS group catalog into narrow (0.2 dex) bins of stellar mass and compute the median $\taucen$ in each bin using equation (\[eq:sfr-evol\_cen\]) with the stellar mass and median SSFR of the bin. This yields $\taucen$ values that range from $3.8$ to $1.9 \gyr$ from $\mstar = 5 \times 10 ^ {9}$ to $2 \times 10 ^ {11} \msun$. We use the median $\taucen$ of each bin to evolve back the SFRs of all active galaxies in that bin, which agree with measured SFRs at $0.2 < z < 1.1$ from @NoeWeiFab07 to within 0.15 dex across our mass range, well within their measurement errors. This prescription keeps the width of the active galaxy SFR distribution fixed, also in agreement with @NoeWeiFab07. Thus, by design, our $\tau$ model agrees well with the full active galaxy SFR distributions from @NoeFabWei07 out to $z = 1$. 
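As an illustration of how equation (\[eq:sfr-evol\_cen\]) can be inverted, the sketch below solves for $\taucen$ from a present-day SSFR and evolves the SFR back in time; the formation time and present epoch are illustrative numerical values, SSFRs are in $\rm Gyr^{-1}$, and the function names are placeholders rather than our production code.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

T_F = 2.2        # Gyr; cosmic time at z = 3 (illustrative)
T_NOW = 13.1     # Gyr; cosmic time near z = 0.05 (illustrative)
F_RETAIN = 0.6   # fraction of formed stellar mass retained

def sfr_shape(t, tau):
    """Un-normalized SFR(t) ~ (t - t_f) exp(-(t - t_f)/tau), t in Gyr."""
    dt = t - T_F
    return dt * np.exp(-dt / tau) if dt > 0 else 0.0

def model_ssfr(tau, t_now=T_NOW):
    """SSFR [1/Gyr] at t_now implied by the tau model (normalization cancels)."""
    mass_integral, _ = quad(sfr_shape, T_F, t_now, args=(tau,))
    return sfr_shape(t_now, tau) / (F_RETAIN * mass_integral)

def fit_tau(ssfr_obs, t_now=T_NOW):
    """Solve for tau [Gyr] reproducing an observed SSFR [1/Gyr].
    Valid for SSFRs below the large-tau limit (~0.3 / Gyr with these times)."""
    return brentq(lambda tau: model_ssfr(tau, t_now) - ssfr_obs, 0.1, 50.0)

def sfr_boost(tau, t_then, t_now=T_NOW):
    """Factor by which the SFR was higher at an earlier cosmic time t_then."""
    return sfr_shape(t_then, tau) / sfr_shape(t_now, tau)
```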
While our $\tau$ model is highly simplified compared with the actual star formation histories of galaxies, we note that similar $\tau$ models have been shown to agree with observed average SFRs out to $z = 1$ (e.g., @NoeFabWei07b). Thus, for our specific purpose of assigning statistically accurate SFRs to active satellites at their time of infall, we consider this simplified but constrained model a reasonable empirical approach. Parametrizing the possible change in the SFRs of quiescent central galaxies is more difficult, given both noisier measurements at $z = 0$ and a lack of detailed SFR measurements at higher redshift. However, these uncertainties are largely irrelevant for us in practice, because satellites that fall in already quiescent are at, or quickly evolve below, $\ssfr \approx 10 ^ {-12} \yrinv$, where our SDSS measurements are largely upper limits. For completeness, in our parametrization we assume that quiescent galaxies at all redshifts have the same SFR normalization as quiescent galaxies at $z = 0$. We also explored letting the SFR normalization for quiescent galaxies evolve by the same amount as for active galaxies, such that the separation between the peaks in the SFR distribution remains unchanged, though doing this does not significantly alter our model results at $z = 0$ in §\[sec:quench\_times\]. Note that equation (\[eq:sfr-evol\_cen\]) ignores the contribution from galaxy mergers when computing stellar mass growth, which means that it underestimates the amount of total stellar mass growth since $z = 1$ somewhat. We examined the importance of this effect by using the stellar masses of galaxies from SHAM out to $z = 1$ and following galaxy-galaxy merger histories in our simulation to $z = 0$. The typical amount of stellar mass growth via mergers since $z = 1$ for galaxies in our stellar mass range is always $< 10\%$ (usually much less); mergers become important only for galaxies of significantly higher mass. Thus, this introduces only a small bias in our parametrization as compared with the modeling uncertainty of the initial quiescent fractions in §\[sec:qu.frac-evol\_cen\]. In summary, to obtain the SFR distribution of central galaxies as a function of mass and redshift, we first compute the median $\taucen$ for active central galaxies at a given stellar mass, we use this $\taucen$ to compute the increase in SFR to a given redshift, and we apply this increase to all active central galaxies at that mass, such that the active galaxy SFR distribution width does not evolve at fixed mass. We also assume that the SFR normalization for quiescent central galaxies does not evolve. We then evolve the relative fraction of active and quiescent central galaxies according to equation (\[eq:qu.frac\_v\_z\_cen\]) or (\[eq:qu.frac\_v\_z\_cen\_alt\]) by moving randomly chosen SFR values from the quiescent to the active side of the distribution. Fig. \[fig:ssfr-distr\_v\_z\] shows an example of the resultant evolution of the SSFR distribution, for central galaxies with $\mstar = 10 ^ {10 - 10.5} \msun$. We emphasize that our methodology reproduces the observed evolution of the quiescent fraction as well as the active galaxy SFR normalization and distribution width, providing accurate (statistical) initial conditions for the SFRs of satellites prior to first infall. We also note that, while our parametrization clearly also has interesting implications for the physics of how central galaxies evolve, we defer further investigation in this area to future work. 
For this paper, our parametrization is important only in providing accurate initial conditions for satellite SFRs. ### Assigning SFR to satellites at infall {#sec:assign_sfr_at_infall} We now describe how we assign initial SFRs to satellites at their time of first infall. For each satellite in the simulation at $z = 0$, we have its time of first infall, but in order to use our above parametrization, we must also know what stellar mass each satellite at $z = 0$ had at that time, at least in a statistically average sense. To estimate this, we posit that satellites have grown in stellar mass by the same amount, on average, as active central galaxies of the same stellar mass. This ansatz allows us to use equation (\[eq:sfr-evol\_cen\]), with the same median value of $\taucen$ as for central galaxies of the same stellar mass at $z = 0$, to calculate the average factor by which satellites were less massive at their time of first infall. As we will demonstrate in §\[sec:m-star\_growth\], this ansatz is not only self-consistent, but moreover it must be satisfied in our parametrization, at least in the absence of significant, systematic stellar mass loss from tidal stripping.[^9] Note that this approach, based on statistically averaged star formation histories, ignores scatter in stellar mass growth. As a check on the accuracy and self-consistency of our approach, we also tried computing stellar mass growth for central galaxies directly from SHAM, by differencing each central galaxy’s stellar mass at $z = 0$ from what it had at a given redshift according to SHAM. While this alternate approach leads to higher scatter in the amount of stellar mass growth over a given time interval (for example, $20 - 30\%$ since $z = 1$), it leads to average stellar mass growths that are consistent with the above method to within 10%. Thus, our approach is self-consistent, at least in parametrizing the average stellar mass growth of active galaxies. Thus, for each satellite (including ejected satellites) in the simulation at $z = 0$, we assign its SFR at its redshift of first infall by drawing randomly from the central galaxy SFR distribution at the appropriate stellar mass at that redshift, using equation (\[eq:sfr-evol\_cen\]). For satellites that fell in prior to $z = 1$, we extrapolate the quiescent fraction and $\tau$ model fits to higher redshift, though most satellites that fell in at $z > 1$ are low-mass and had minimal likelihoods of being quiescent at infall, so changing the extrapolation changes our results by only a few percent. To highlight the importance of our parametrization for assigning accurate initial quiescent fractions to satellites, the green region in Fig. \[fig:qu.frac\_v\_m-star\] shows the fraction of satellites at $z = 0$ that were quiescent prior to first infall. The region boundaries are determined by our two parametrizations, with solid and dashed curves corresponding to those in Fig. \[fig:qu.frac\_v\_z\]c. If convolved with satellite infall times, the resultant quiescent-prior-to-infall fractions differ by less than 10%, small compared with the marked difference from the quiescent fraction for central galaxies at $z = 0$ (red curve), which has been assumed for satellite initial conditions in previous works, as mentioned above. Furthermore, the significant difference between satellite quiescent fractions at infall and at $z = 0$ (blue curve) indicates the importance of the satellite quenching process, which we will explore next. 
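As a concrete illustration of the initial-condition assignment just described, the sketch below draws an SSFR for a single satellite at its first infall; the quiescent probability and median active SSFR are inputs that would come from the parametrizations above, while the scatter values shown are illustrative placeholders rather than fitted quantities.

```python
import numpy as np

rng = np.random.default_rng(42)

def assign_ssfr_at_infall(p_quiescent_at_infall, median_active_ssfr,
                          active_scatter_dex=0.3, quiescent_ssfr=1e-12,
                          quiescent_scatter_dex=0.2):
    """Draw an initial SSFR [1/yr] for one satellite at first infall.

    p_quiescent_at_infall : central-galaxy quiescent fraction evaluated at the
                            satellite's stellar mass and redshift of infall
    median_active_ssfr    : median SSFR of active central galaxies at that
                            mass and redshift (e.g., from the tau model)
    The scatter values are illustrative, not fits from this work.
    """
    if rng.random() < p_quiescent_at_infall:
        # quiescent prior to infall: place near the SSFR measurement floor
        log_ssfr = np.log10(quiescent_ssfr) + quiescent_scatter_dex * rng.normal()
    else:
        # active prior to infall: draw from a log-normal about the active peak
        log_ssfr = np.log10(median_active_ssfr) + active_scatter_dex * rng.normal()
    return 10.0 ** log_ssfr
```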
Importance & efficiency of satellite quenching {#sec:quench_importance_efficiency} ---------------------------------------------- We now explore the importance of satellite star formation quenching in building up the full population of quiescent (red-sequence) galaxies at $z = 0$ as well as the efficiency by which satellites are quenched. The results in this subsection are essentially empirical, relying only on our parametrization for satellite initial quiescent fractions from §\[sec:qu.frac-evol\_cen\]. These results are independent of any particular model for the mechanism(s) or timescale of satellite quenching but provide insight into the efficiency of the mechanism(s) as a function of stellar mass. The basic quantity that we use in this subsection is the number density of satellites (including ejected satellites) at $z = 0$ that quenched as satellites, given by $$\begin{aligned} \label{eq:quench_as_sat} \nsatqassat(\mstar(z = 0)) & = & \nsatqnow(\mstar(z = 0)) - \\ & & \nsatqinf(\mstar(z = 0)) \, . \nonumber\end{aligned}$$ Here, $\nsatqnow$ is the number density of satellites that are quiescent at $z = 0$ from the SDSS group catalog (again, using the simulation group catalog to correct for interloping galaxies), and $\nsatqinf$ is the number density of satellites at $z = 0$ that were quiescent prior to infall from our initial condition parametrization. We express the latter quantity as a function of stellar mass at $z = 0$, based on our assumption that satellites have grown in stellar mass by the same amount as central galaxies (which we will justify in §\[sec:m-star\_growth\]). ### Importance of satellite quenching {#sec:quench_importance} We first examine the contribution of satellite quenching to building up the quiescent population at $z = 0$, as Fig. \[fig:quench-as-sat.frac\_v\_m-star\] shows. First, to understand the importance of satellite quenching on just the satellite population, the red region shows what fraction of currently quiescent satellites were quenched as satellites, $\nsatqassat / \nsatqnow$. The region width indicates the uncertainty from our two initial condition parametrizations in §\[sec:qu.frac-evol\_cen\]. At $\mstar < 10 ^ {10} \msun$, essentially all quiescent satellites quenched as satellites. At $\mstar \sim 10 ^ {11} \msun$, this fraction is half, because half of satellites were already quiescent as central galaxies prior to infall (Fig. \[fig:qu.frac\_v\_m-star\]). Thus, in our mass regime, satellite quenching produced at least half of all currently quiescent satellites. To demonstrate the importance of satellite quenching on the entire galaxy population, the blue region shows what fraction of all currently quiescent galaxies quenched as satellites, $\nsatqassat / \nallqnow$. This fraction decreases significantly with stellar mass, such that the vast majority of galaxy quenching occurs via central galaxies at $\mstar \gtrsim 10 ^ {11} \msun$. In this regime, central galaxies are as likely as satellites to be quiescent at $z = 0$ (Fig. \[fig:qu.frac\_v\_m-star\]), and they outnumber satellites by a factor of $\gtrsim 6$ (Fig. \[fig:qu.frac\_v\_z\]b), so central galaxies dominate the production of the quiescent population. By contrast, at the low mass end, even though central galaxies still outnumber satellites by a factor of $\sim 3$, satellite quenching is so much stronger that satellites dominate the production of the quiescent population. Thus, satellite quenching dominates the production of quiescent galaxies at low stellar mass. 
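The bookkeeping behind these fractions is simple; a minimal sketch is given below, with the per-mass-bin number densities as hypothetical input arrays (names are illustrative).

```python
import numpy as np

def quenched_as_satellite_fractions(n_sat_q_now, n_sat_q_inf, n_all_q_now):
    """Fractions built from n_sat_q_as-sat = n_sat_q_now - n_sat_q_inf.

    n_sat_q_now : number density of satellites quiescent at z = 0
    n_sat_q_inf : number density of z = 0 satellites already quiescent at infall
    n_all_q_now : number density of all quiescent galaxies at z = 0
    All inputs are arrays over stellar-mass bins.
    """
    n_quenched_as_sat = np.asarray(n_sat_q_now) - np.asarray(n_sat_q_inf)
    # fraction of currently quiescent satellites that quenched as satellites
    frac_of_quiescent_satellites = n_quenched_as_sat / np.asarray(n_sat_q_now)
    # fraction of all currently quiescent galaxies that quenched as satellites
    frac_of_all_quiescent = n_quenched_as_sat / np.asarray(n_all_q_now)
    return frac_of_quiescent_satellites, frac_of_all_quiescent
```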
Furthermore, no central (‘isolated’) galaxies are observed to be quiescent at $\mstar \lesssim 10 ^ {9} \msun$ [@GehBlaYan12], so satellite quenching is the only process for quenching galaxies at such low mass. We emphasize the importance of accurate satellite initial conditions for these results. Several previous works have attempted to infer the impact of satellite quenching by assuming that satellite initial conditions can be approximated via central galaxies at $z = 0$ [e.g., @vdBAquYan08; @TinWet10; @PenLilRen11; @DeLWeiPog12]. However, this assumption necessarily underestimates the importance of satellite quenching, because central galaxies were less likely to be quenched at higher redshift (Fig. \[fig:qu.frac\_v\_z\]c). To highlight the impact of this assumption, the dotted blue curve in Fig. \[fig:quench-as-sat.frac\_v\_m-star\] shows the resultant $\nsatqassat / \nallqnow$ if one assumes (incorrectly) that central galaxy SFRs at $z = 0$ represent satellite initial conditions, that is, using $\nsatqinf = \fcenqnow \nsat$ in equation (\[eq:quench\_as\_sat\]). This assumption underestimates the true importance of satellite quenching by at least 50% in our mass range. ### Efficiency of satellite quenching {#sec:quench_efficiency} We next examine the efficiency by which satellites are quenched, as given by the fraction of satellites that were active at infall (able to be quenched) that then quenched as satellites after infall, $\nsatqassat / \nsatainf$. Fig. \[fig:quench-after-infall-frac\_v\_m-star\] (green region) shows this fraction as a function of stellar mass, with the width again indicating the uncertainty in satellite initial quiescent fractions. At low mass ($\mstar < 10 ^ {10} \msun$), half of satellites that were active at the time of infall have been quenched by now, while the other half still actively form stars. By contrast, at high mass ($\mstar > 10 ^ {11} \msun$), essentially all initially active satellites have been quenched. Thus, the efficiency of satellite quenching increases with satellite stellar mass. Physically, this implies that more massive satellites are quenched more rapidly, as we will show in §\[sec:quench\_time\_binary\]. To elucidate the differing quenching efficiencies for satellite versus central galaxies, the orange region in Fig. \[fig:quench-after-infall-frac\_v\_m-star\] shows what fraction of active-at-infall satellites would have quenched had they instead remained central galaxies. This fraction is given by $\left( \fcenqnow - \fsatqinf \right) / \fcenanow$, with $\fsatqnow$ and $\fcenqnow$ being the fractions of satellite and central galaxies, respectively, that are quiescent at $z = 0$, and $\fcenanow = 1 - \fcenqnow$. Like satellites, central galaxies also quench more efficiently at higher mass, though with a lower overall efficiency. From Fig. \[fig:quench-after-infall-frac\_v\_m-star\], it might naively appear that the differing quenching efficiency for satellite versus central galaxies is stronger at lower mass, though one must interpret these fractions carefully. In , we argued that a robust comparison is given by the satellite quiescent fraction excess $$\label{eq:qu.frac_excess} \fsatqexcess = \frac{\fsatqnow - \fcenqnow}{\fcenanow} \,,$$ which represents the excess fraction of satellites that were quenched after infall that would not have been quenched had they remained central galaxies. As shown by the dotted curve in Fig. 
\[fig:quench-after-infall-frac\_v\_m-star\], $\fsatqexcess$ is independent of stellar mass. Furthermore, in , we showed that the stellar mass independence of $\fsatqexcess$ persists across all host halo masses and halo-centric radii. If the same physical mechanism(s) that quenches central galaxies operates on satellites, then $\fsatqexcess$ indicates how much more of an effect the satellite-specific quenching mechanism(s) has. In this scenario, the invariance of $\fsatqexcess$ suggests that the efficiency of the satellite quenching process(es) is independent of stellar mass. However, the physical processes that are thought to quench central galaxies (such as virial shock heating, mergers, and active galactic nuclei) and their relative importance as a function of stellar mass remain topics of active investigation. Thus, it is unclear if such central galaxy quenching processes are important for satellites, and if they are, whether they occur before or after the onset of any satellite-specific processes. We will investigate the physical mechanisms of satellite quenching in more detail in , and we note that the results in this paper do not depend on the exact mechanism(s) at play. To summarize this empirically motivated subsection: (1) satellite quenching dominates the production of quiescent satellites at all masses we probe, and it dominates the production of quiescent galaxies at $\mstar < 10 ^ {10} \msun$, and (2) more massive satellites are quenched more efficiently. SFR evolution of satellites {#sec:sfr-evol_sat} --------------------------- We now use the satellite infall times from our simulation to extend the results of the previous subsection and constrain satellite SFR evolution and quenching timescales. This subsection presents the primary results of this paper. The relative importance of various mechanisms for quenching satellites (such as strangulation, ram-pressure stripping, and harassment) and the details of how their effects propagate to influencing satellite star formation remain topics of active investigation. Nonetheless, for most plausible physical processes, the likelihood that a satellite has been quenched increases with its time since infall. Motivated by this idea, we proceed under the following ansatz: if a satellite was active at the time of first infall, its likelihood of having been quenched by $z = 0$ increases with its time since first infall. As we will show in , this simple ansatz yields satellite quiescent fractions that have the correct dependencies on both halo-centric distance and halo mass, as compared with our observational results in , because both the satellite quiescent fraction and $\tsinf$ increase with decreasing halo-centric distance. This agreement implies that the scatter between quenching likelihood and $\tsinf$ must be small, because a scenario in which quenching likelihood and $\tsinf$ have large scatter would lead to satellite quiescent fraction radial gradients that are too shallow (see for more).[^10] Thus, our ansatz provides a well-motivated means to constrain statistically the timescales over which satellites are quenched. In this subsection, we examine satellite quenching timescales in two ways, the first being simpler and more empirical, the second being more physical. First, in §\[sec:quench\_time\_binary\], we consider quenching simply in the binary sense, that is, when SSFR falls below the $10 ^ {-11} \yrinv$ bimodality threshold. We constrain the $\tsinf$ at which satellites are quenched in this binary sense, which we refer to as the satellite quenching time, $\tq$. 
Then, in §\[sec:quench\_times\], we decompose this quenching time into two more physically informative timescales: the time delay after infall at which star formation starts to be quenched, $\tqdelay$, and the characteristic e-folding time over which SFR fades once quenching has started, $\tauqfade$. ### Quenching timescale of satellites {#sec:quench_time_binary} We first constrain the time since infall at which satellites are quenched (fall below $\ssfr = 10 ^ {-11} \yrinv$), $\tq$. To do this, we select all satellites in the simulation at $z = 0$, and we use our satellite initial quiescent fraction parametrization from §\[sec:qu.frac-evol\_cen\] to compute whether each was active or quiescent prior to infall. Those that were quiescent remain so thereafter. For those that were active at the time of infall, we designate the ones with $\tsinf > \tq$ as having been quenched. Using narrow bins of both satellite and host halo mass, we adjust $\tq$ until the satellite quiescent fraction in the simulation group catalog matches that of the SDSS group catalog. Repeating this procedure in each mass bin yields $\tq$ as a function of both satellite mass and host halo mass. Fig. \[fig:quench-time\_v\_m-star\] (top) shows how $\tq$ depends on satellites’ current stellar mass, in bins of current host halo mass. Region widths indicate the uncertainty from satellite initial quiescent fractions (solid and dashed curves correspond to those in Fig. \[fig:qu.frac\_v\_z\]c). As the quenching efficiency results of §\[sec:quench\_efficiency\] suggested, more massive satellites are quenched more rapidly. The most massive satellites are quenched $\sim 2 \gyr$ after infall, while those at $\mstar < 10 ^ {10} \msun$ form stars actively for $\sim 5 \gyr$ after infall before being quenched. Note that $5 \gyr$ is over half the age of the Universe at the typical redshift at which they fell in, so low-mass satellites have spent as much as half of their entire star-forming lifetimes as satellites. In , we showed that satellites at $z = 0$ are more likely to be quiescent if they reside in more massive host halos. But interestingly, in Fig. \[fig:quench-time\_v\_m-star\] we find no significant, systematic dependence of satellites’ $\tq$ on their current host halo mass (there are fluctuations at low stellar mass, but they are not monotonic). This halo mass independence naturally arises from our tying satellites’ quenching to their time since infall, which incorporates the increased influence of group infall and ejection/re-infall in more massive halos, leading to a natural increase in $\tsinf$ with halo mass (Fig. \[fig:t-inf\_v\_m-halo\]). Recall that this choice was motivated by the absence of a minimum host halo mass at $z = 0$ for affecting satellite SFR. To the extent that this fact holds true out to $z \sim 0.5$ ($\sim 5 \gyr$ ago, the timescale to which $\tq$ is sensitive), Fig. \[fig:quench-time\_v\_m-star\] indicates that there is little-to-no freedom for $\tq$ to depend on host halo mass, because we obtain $\tq$ by empirically matching the observed quiescent fraction at $z = 0$ in each host halo mass bin. In other words, given that a $\sim 10 ^ {12} \msun$ host halo quenches a low-mass satellite $\sim 4 \gyr$ after infall, as demanded by Fig. \[fig:quench-time\_v\_m-star\], a similar halo that then falls into a massive cluster will bring in satellites that are already quenched. 
Thus, the overall $\tq$ in massive clusters is set by a combination of $\tq$ from this group preprocessing and from satellites that fell directly into the cluster from the field, but as Fig. \[fig:quench-time\_v\_m-star\] shows, this overall $\tq$ is effectively the same for massive clusters as for a Milky Way-mass halo.[^11] Thus, the satellite quenching timescale does not depend on host halo mass. This lack of dependence on host halo mass may be surprising, given that more massive host halos represent more severe environments, having higher gas densities, temperatures, and satellite orbital velocities at a given scaled distance, $d / \rvir$. But it is not clear that all possible satellite quenching processes should depend on host halo mass. For example, if satellite quenching is driven simply by the inability to accrete gas after infall, then quenching occurs when a satellite exhausts its gas reservoir, independent of its host halo’s mass. Alternately, while the dominant satellite quenching process(es) may be more rapid at a given $d / \rvir$ in a more massive host halo, this is mitigated by the fact that dynamical friction causes a satellite of a given mass to orbit to smaller $d / \rvir$ more quickly in a lower mass host halo [@BoyMaQua08; @JiaJinFal08]. We will examine the dependence of satellite quenching on host halo mass with physically motivated models applied to orbital histories in . ### ‘Delayed-then-rapid’ quenching of satellites {#sec:quench_times} While the satellite binary quenching timescale that we measured above, $\tq$, is advantageous in its simplicity, it is insensitive to the details of how satellite SFR evolves. We now seek to understand satellite star formation histories more fully, as constrained by the full SSFR distribution at $z = 0$. In , we showed that the satellite SSFR distribution is bimodal, similar to central galaxies, across our stellar mass range. The SSFR values of the active galaxy peak and bimodality break, as well as the fraction of galaxies near the bimodality break (‘green valley’), do not vary with central versus satellite demarcation, host halo mass or halo-centric radius. As we argued, these observations imply that (1) satellite SFRs evolve in the same manner as central galaxies for several Gyrs after infall, (2) the time since infall at which satellite SFR starts to be affected is long compared with the time over which SFR fades, and (3) the latter timescale does not depend on host halo mass or halo-centric radius. To quantify these timescales, we build on these trends and construct a physically motivated, two-stage model for satellite SFR evolution. The initial SFR for a satellite at its time of first infall, $\tinf$, is given by our parametrization in §\[sec:sfr\_at\_infall\]. If a satellite was quiescent prior to infall, we do not evolve its SFR. If a satellite was active at infall, we allow its SFR after infall to fade gradually in the same manner as central galaxies of the same stellar mass, using equation (\[eq:sfr-evol\_cen\]). This central-type, gradual fading continues across a ‘delay’ time, $\tqdelay$. If $\tsinf > \tqdelay$, only then does a satellite start to be quenched, at which point we parametrize its SFR evolution via exponential fading, with $\tauqfade$ being the characteristic e-folding time over which SFR fades. 
Defining $\tqstart = \tinf + \tqdelay$, satellite SFR evolves as $$\label{eq:sfr-evol_sat} \sfrsat(t) = \begin{cases} \sfrcen(t) & t < \tqstart \\ \sfrcen(\tqstart) e ^ {\left\{ -\frac{(t - \tqstart)}{\tauqfade} \right\}} & t > \tqstart \end{cases}$$ to the redshift of our group catalog. Note that $\tqdelay$ and $\tauqfade$ relate to $\tq$ from the previous subsection via $\tq = \tqdelay + N \tauqfade$, with $N = \ln \left[ \ssfr(\tqstart) / 10 ^ {-11} \yrinv \right]$. Because the initial SFRs that we assign to satellites are based on observed distributions, any possible measurement uncertainty propagates into our resultant model SFRs at $z = 0$ as well, allowing for robust comparison with SDSS. Also, for any satellite whose SSFR evolves below $\approx 10 ^ {-12} \yrinv$, we assign it as having $\ssfr = 10 ^ {-12} \yrinv$ plus log-normal scatter of 0.2 dex, which effectively mocks the measurement limits and scatter in the @BriChaWhi04 method. Our physical, two-stage model for satellite SFR evolution has two free parameters (timescales) to constrain: $\tqdelay$ and $\tauqfade$. We allow these timescales to vary, independently, with both satellite and host halo mass, again constrained to yield the observed satellite quiescent fraction at $z = 0$ in the joint mass bins. The mere existence of a bimodal distribution with non-zero population at intermediate SSFRs requires that both timescales are non-zero. However, to explore the impact of the two timescales, we consider three scenarios:

1. $\tqdelay = 0$, $\tauqfade$ fit to the quiescent fraction

2. $\tauqfade = 0$, $\tqdelay$ fit to the quiescent fraction

3. $\tqdelay$, $\tauqfade$ jointly fit to full SSFR distribution

Fig. \[fig:ssfr-distr\_times\] shows each resultant SSFR distribution at $z = 0$ in bins of stellar mass, for all satellites in host halos with $\mthm > 10 ^ {12} \msun$, using our fiducial parametrization for satellite initial quiescent fractions. (Using our alternate parametrization shifts $\tqdelay$ to slightly longer values, as Fig. \[fig:quench-time\_v\_m-star\] (bottom) shows, but does not affect $\tauqfade$ or the quality of our fits.) We discuss each scenario in turn. First, we examine scenario (a), in which satellite quenching begins immediately at infall ($\tqdelay = 0$), and SFR fades slowly over a long $\tauqfade$. Fig. \[fig:ssfr-distr\_times\]a shows the resultant SSFR distributions that yield the correct satellite quiescent fractions. This slow-fade scenario puts far too many satellites at intermediate SSFRs, violating the bimodality and leading to a qualitatively incorrect distribution. Thus, satellite quenching cannot begin immediately upon infall. Next, we consider the opposite scenario (b), in which satellite quenching is delayed after infall, but once it starts, it occurs instantly ($\tauqfade = 0$). Fig. \[fig:ssfr-distr\_times\]b shows the resultant SSFR distributions, which have a qualitatively correct bimodality, including correct SSFR values of the active peak and bimodality break. In particular, the agreement of the SSFR distribution for active galaxies confirms that they have evolved in the same manner as active central galaxies. However, the bimodality break is clearly too strong, with a deficit of galaxies at intermediate SSFRs. Thus, once quenching starts, it cannot be instantaneous. Finally, in scenario (c) we allow both $\tqdelay$ and $\tauqfade$ to vary, as fit to the full SSFR distribution, providing a unique solution, as Fig. \[fig:ssfr-distr\_times\]c shows.
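Before discussing the best-fitting case, it may help to make the two-stage model concrete. The following Python sketch implements equation (\[eq:sfr-evol\_sat\]) and the relation between $\tq$, $\tqdelay$ and $\tauqfade$ quoted above; the central-galaxy fading history is passed in as a placeholder callable, since its detailed form (equation \[eq:sfr-evol\_cen\]) is not reproduced here.

```python
import numpy as np

def sfr_satellite(t, t_infall, t_q_delay, tau_q_fade, sfr_central):
    """'Delayed-then-rapid' satellite SFR, following equation (eq:sfr-evol_sat).

    sfr_central(t) is a stand-in for the SFR the galaxy would have had as a
    central galaxy of the same stellar mass.
    """
    t_q_start = t_infall + t_q_delay
    if t < t_q_start:
        return sfr_central(t)  # stage 1: fade in the same manner as a central galaxy
    # stage 2: exponential fading once quenching has started
    return sfr_central(t_q_start) * np.exp(-(t - t_q_start) / tau_q_fade)

def t_q_from_delay_and_fade(t_q_delay, tau_q_fade, ssfr_at_q_start, ssfr_threshold=1e-11):
    """t_q = t_q_delay + N * tau_q_fade, with N = ln[SSFR(t_q_start) / 10^-11 yr^-1]."""
    return t_q_delay + np.log(ssfr_at_q_start / ssfr_threshold) * tau_q_fade
```

Fitting scenario (c) then amounts to adjusting `t_q_delay` and `tau_q_fade` until the SSFR distribution built from such histories matches the observed one.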
This simple, two-stage quenching scenario produces a SSFR distribution in excellent agreement with observations at each stellar mass bin, with $\tauqfade$ being $10 - 20\%$ of $\tqdelay$. In Fig. \[fig:quench-time\_v\_m-star\] (bottom), we show more explicitly how the best-fit $\tqdelay$ plus $\tauqfade$ times from Fig. \[fig:ssfr-distr\_times\]c depend on current satellite mass, in bins of current host halo mass, including uncertainty in $\tqdelay$ from satellite initial quiescent fractions. As with $\tq$ in the previous subsection, both $\tqdelay$ and $\tauqfade$ do not depend on the mass of the host halo. We do not plot separate curves for $\tauqfade$ in bins of host halo mass, because the dominant uncertainty in $\tauqfade$, shown in Fig. \[fig:quench-time\_v\_m-star\], comes from fitting the full SSFR distribution, which is larger than any systematic changes with host halo mass or satellite initial conditions. Furthermore, as mentioned in §\[sec:sfr-evol\_cen\], we find that these $\tauqfade$ values do not depend on whether or not we evolve the SFR normalization for quiescent central galaxies. To summarize this subsection, satellite SFR evolves, at least on average, via a ‘delayed-then-rapid’ quenching scenario: satellite SFR remained unaffected for $2 - 4 \gyr$ after first infall (depending on stellar mass), after which SFR fades rapidly, with an e-folding time of $< 0.8 \gyr$. Both timescales are shorter for more massive satellites and have no significant dependence on host halo mass.

Implications of satellite quenching timescales {#sec:implication}
==============================================

Using the quenching timescales that we constrained in §\[sec:quench\_times\], we now explore two implications for satellite evolution. First, in §\[sec:where\_quench\] we explore where satellites were when they quenched, focusing on the importance of group preprocessing. Second, in §\[sec:m-star\_growth\] we use our constrained model for SFR evolution to examine how much satellites have grown in stellar mass since infall.

Where were satellites when they quenched? {#sec:where_quench}
-----------------------------------------

We have argued that the physical process(es) responsible for quenching satellites sets in at the time of first infall, but that its effects take considerable time to propagate before quenching starts. We now ask: where were satellites at the moment when they started quenching? In §\[sec:infall\_eject\], we examined what fraction of satellites fell in directly from the field versus as a satellite in a group. We now extend those results, using the quenching times from §\[sec:sfr-evol\_sat\], to examine what fraction of currently quiescent satellites quenched (a) prior to first infall, (b) in a different host halo prior to falling into their current host halo, or (c) in their current host halo. For each surviving satellite in the simulation, we compute if it was active at the time of first infall, as before. If so, we use the $\tqdelay$ values from §\[sec:quench\_times\] for the time at which it started to quench. We then compute whether the satellite was in the main progenitor of its current host halo or was in a different host halo at the time that quenching started. Fig. \[fig:where-quench\_v\_m-star\] shows what fraction of currently quiescent satellites quenched in the three different regimes, as a function of current stellar mass, in bins of current host halo mass. Fig.
\[fig:where-quench\_v\_m-star\]a shows what fraction of currently quiescent satellites already were quenched as a central galaxy prior to first infall. As discussed in §\[sec:quench\_importance\_efficiency\], this fraction increases with stellar mass, both because the central galaxy quiescent fraction is higher at higher stellar mass and because higher mass satellites fell in more recently, when central galaxies were more likely to be quiescent. Fig. \[fig:where-quench\_v\_m-star\]a also shows that quenching prior to first infall is more important in lower mass host halos, again because satellites in lower mass host halos fell in more recently. Fig. \[fig:where-quench\_v\_m-star\]b shows what fraction of currently quiescent satellites started quenching in a different host halo prior to falling into their current host halo, indicating complete group preprocessing. Opposite to the trends for quenching prior to first infall, this fraction is higher for lower mass satellites and in higher mass host halos, both trends being a result of the hierarchical nature of halo growth (§\[sec:infall-time\]). In particular, half of all low-mass ($\mstar < 10 ^ {10} \msun$) quiescent satellites in massive clusters ($\mthm > 10 ^ {14} \msun$) started quenching as a satellite in a group. Finally, Fig. \[fig:where-quench\_v\_m-star\]c shows what fraction of currently quiescent satellites quenched while in their current host halo. This mode of quenching dominates at most masses, though the fraction is always $\lesssim 70\%$; its importance wanes both at high stellar mass, where quenching prior to first infall dominates, and at low stellar mass, where the importance of group preprocessing increases. Only about half of quiescent satellites within massive clusters ($> 10 ^ {14} \msun$) quenched there. In summary, group preprocessing has a critical impact on satellite star formation histories. We have argued that time spent as a satellite in another host halo is important for starting the quenching process, but these results demonstrate the impact of complete group preprocessing. This is particularly important for satellites in clusters, in which $15 - 50\%$ of all quiescent satellites started quenching as a satellite in another host halo. Given the hierarchical nature of halo growth, group preprocessing should be only more important for quenching satellites below our $5 \times 10^9 \msun$ stellar mass limit.

Stellar mass growth after infall {#sec:m-star_growth}
--------------------------------

So far, we examined satellite star formation histories with a focus on star formation evolution, but in this last subsection we examine the implications for stellar mass growth. In §\[sec:sfr-evol\_sat\], we showed that satellites continue to form stars actively, in the same manner as central galaxies, for $2 - 4 \gyr$ after infall, which represents as much as half of their total star-forming lifetimes. Thus, satellites have the capacity to grow significantly in stellar mass via star formation after infall. For now, we ignore any other processes that might affect satellite stellar mass evolution, such as tidal stripping or merging, though we discuss these in Appendix \[sec:mass\_growth\_sham\].
To quantify the amount of stellar mass growth via star formation that satellites at $z = 0$ have experienced since first infall, we use our model for satellite SFR evolution given by equation (\[eq:sfr-evol\_sat\]), with the appropriate values of $\taucen$ and the quenching timescales $\tqdelay$ and $\tauqfade$ from §\[sec:quench\_times\] given a satellite’s stellar mass at $z = 0$. We integrate $\sfr(t)$ to obtain the stellar mass formed since first infall, again assuming that 40% of this stellar mass is lost through supernovae and stellar winds. For a satellite that was quiescent prior to infall, its SFR has been sufficiently low that we can neglect any stellar mass growth since that time. We then examine statistical trends by computing the median fractional stellar mass growth since first infall in bins of stellar mass at $z = 0$. Fig. \[fig:m-star-growth\_v\_m-star\] (top) shows the median ratio of a satellite’s stellar mass at $z = 0$ to the mass that it had at the time of its first infall, as a function of its current stellar mass. As before, region widths show uncertainty in satellite initial quiescent fractions from §\[sec:qu.frac-evol\_cen\]. Considering all surviving satellites (grey region), their median stellar mass growth since infall is negligible at high mass but is 50% at $\mstar < 10 ^ {10} \msun$. This mass dependence arises because lower mass satellites are more likely both to have fallen in earlier when SFRs were higher and to have been active at the time of infall. The blue and red regions in Fig. \[fig:m-star-growth\_v\_m-star\] show median values for currently active and quiescent satellites, respectively. Overall, currently active satellites have experienced significantly less stellar mass growth since infall than currently quiescent galaxies. While perhaps counter-intuitive, this trend is readily understandable. Even though active satellites are still growing in stellar mass, to remain active they necessarily fell in more recently, meaning both lower SFRs at the time of infall and less time for mass growth after infall. To understand currently quiescent satellites, note that they are composed of two populations: those that were quiescent prior to infall and those that quenched after infall. While those that were quiescent prior to infall did not grow in stellar mass at all, those that quenched after infall necessarily fell in early, when SFRs were much higher, and they then spent several Gyrs actively forming stars before being quenched. While the former population dominates at high mass, the latter dominates at low mass, leading to stellar mass typically having more than doubled since infall at $\mstar < 10 ^ {10} \msun$. Thus, currently quiescent low-mass satellites are the ones that have grown the most in stellar mass since infall. To put this result in context, we compare the stellar mass growth experienced by satellite versus central galaxies. Fig. \[fig:m-star-growth\_v\_m-star\] (bottom) shows the median ratio of satellite mass at $z = 0$ to what it would be if all satellites that were active at infall remained active to $z = 0$, that is, if satellites never quench. This approximately indicates the ratio of stellar mass that satellites have compared to what they would have if they had remained a central galaxy. (In fact, the masses of satellite versus central galaxies are closer than in Fig. \[fig:m-star-growth\_v\_m-star\], because some central galaxies have quenched since the time that a satellite fell in, but we do not attempt to fully model central galaxy SFR evolution here.)
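The mass-growth calculation described at the start of this subsection reduces to a single integral per satellite. A hedged sketch in Python, with placeholder arrays standing in for the model SFR history of equation (\[eq:sfr-evol\_sat\]), is:

```python
import numpy as np

def stellar_mass_formed(times_yr, sfr_msun_per_yr, recycled_fraction=0.4):
    """Stellar mass retained from star formation between first infall and z = 0.

    Integrates SFR(t) over the satellite's post-infall history and removes the
    fraction (40% assumed in the text) returned by supernovae and stellar winds.
    """
    return (1.0 - recycled_fraction) * np.trapz(sfr_msun_per_yr, times_yr)

# illustrative use (all quantities hypothetical):
# growth_ratio = (m_star_infall + stellar_mass_formed(t_grid, sfr_grid)) / m_star_infall
```

Medians of such growth ratios, taken in bins of stellar mass at $z = 0$, are what Fig. \[fig:m-star-growth\_v\_m-star\] (top) shows.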
While this mass ratio is unity for currently active satellites (by definition), considering the entire satellite population, the median ratio remains consistent with unity (grey region). Considering just currently quiescent satellites, which have experienced the most truncated mass growth, the median reduction in stellar mass at $z = 0$ is never more than 10% (red region). Furthermore, considering just those satellites that quenched after infall, the reduction is still only 10%, though it remains at that level across all stellar mass (not shown). Thus, despite the clear importance that satellite quenching has on instantaneous SFR, it has little impact on the stellar mass that satellites have at $z = 0$. This behavior arises for two reasons: satellites evolve for considerable time ($2 - 4 \gyr$) after infall until they start to be quenched, and galaxies in our mass range form the vast majority of their stars at high redshift ($z \gtrsim 0.5$) when their SFRs were much higher, so quenching at low redshift has little impact on their final stellar mass. As outlined in §\[sec:sfr\_at\_infall\], in order to assign accurate initial SFRs to satellites at their time of first infall, we estimated their stellar mass at that time via the ansatz that they grew by the same amount as central galaxies of the same stellar mass. This would not obviously be a good approximation if, for instance, observations had constrained both $\tqdelay$ and $\tauqfade$ in §\[sec:quench\_times\] to be quite short. However, the results of this subsection reassuringly show that our approach is statistically self-consistent to good approximation. Moreover, we investigated alternative scenarios in which satellites have grown significantly less in stellar mass since infall than central galaxies, but this generically leads to even longer quenching times, and thus more implied stellar mass growth, so these scenarios are not internally self-consistent. Thus, based on Fig. \[fig:m-star-growth\_v\_m-star\], the only self-consistent scenario is that stellar mass growth in satellites is the same as central galaxies to within 10% (at least in the absence of significant, systematic stellar mass loss from tidal stripping). Finally, the results of this section provide physical insight into, and a possible improvement of, the implementation of the SHAM method for assigning galaxy stellar mass to central and satellite subhalos. The basic idea of SHAM is that one can assign instantaneous stellar mass to all subhalos, central and satellite, based simply on some measure of their subhalo mass (or circular velocity). Our results show that, because satellites and central galaxies grow in stellar mass by essentially the same amount, on average, it is justifiable to assign stellar mass to all subhalos under a single, simple prescription. However, our results do imply possible tension with the way that SHAM typically is implemented, specifically, through the use of the maximum/infall subhalo mass, which does not evolve after infall for satellites. We discuss this issue in Appendix \[sec:mass\_growth\_sham\].

Summary and Discussion {#sec:summary}
======================

Summary
-------

Using a galaxy group/cluster catalog from SDSS Data Release 7, together with a cosmological $N$-body simulation to track satellite orbits, we examined in detail the star formation histories of satellite galaxies at $z \approx 0$, focusing on their times since infall, quenching timescales, and stellar mass growth after infall.
Applying the same group-finding algorithm to our simulation as we used in SDSS allows us to make robust comparisons of model results to observations. To obtain accurate initial conditions for the SFRs of satellites at their time of first infall, we constructed an empirically based, statistical parametrization for the evolution of central galaxy SFRs out to $z = 1$; this is critical for the accuracy of our results because, at fixed stellar mass, the quiescent fraction for central galaxies more than doubles from $z = 1$ to 0. Our primary result is that, at least on average, satellite SFR evolves via a ‘delayed-then-rapid’ quenching scenario: satellite SFR remained unaffected for several Gyrs after first infall, after which quenching occurs rapidly, as Fig. \[fig:sfr-evol\_diagram\] summarizes. In more detail, our main results are:

1. Fewer than half of satellites in massive clusters fell in directly from the field; the rest fell in as a satellite in another host halo or experienced secondary infall after becoming ejected.

2. Satellites at $z = 0$ experienced their first infall typically at $z \sim 0.5$, or $\sim 5 \gyr$ ago, with a broad tail out to $z \ge 1$. Less massive satellites and those in more massive host halos fell in earlier.

3. Satellite quenching always dominates the production of quiescent satellites. Moreover, satellite quenching is responsible for producing the majority of quiescent (red-sequence) galaxies at $\mstar < 10 ^ {10} \msun$ by $z = 0$.

4. Based on observations, we argued that the process(es) responsible for quenching satellites begins after infall into any other host halo. As constrained by the satellite SSFR distribution at $z = 0$, satellites, at least on average, then remain actively star-forming for $2 - 4 \gyr$ (depending on stellar mass) after first infall, unaffected by their host halo, before quenching starts. Once quenching has started, the e-folding time over which SFR fades is $< 0.8 \gyr$.

5. More massive satellites start to be quenched more rapidly after infall, and once quenching has started, their SFRs fade more quickly. However, these quenching timescales do not depend on host halo mass. The observed increase in the satellite quiescent fraction with host halo mass arises because of the increased importance of group preprocessing and ejection/re-infall in more massive host halos.

6. Half of low-mass ($\mstar < 10 ^ {10} \msun$) quiescent satellites in clusters started quenching in another host halo before falling into the cluster. Across all satellite masses, the fraction of quiescent satellites that were quenched within their current host halo is never more than $\sim 70\%$.

7. Because satellite quenching is so delayed, low-mass satellites have experienced considerable stellar mass growth via star formation since infall: satellites at $\mstar < 10 ^ {10} \msun$ are, on average, 50% more massive than at infall. Moreover, the average amount of mass growth via star formation in satellite and central galaxies is identical to within 10%. This provides key physical insight into the abundance matching technique for assigning stellar mass to subhalos, as outlined in Appendix \[sec:mass\_growth\_sham\].

Relation to satellite gas content {#sec:quench_time_gas}
---------------------------------

We first discuss the relation of our results to satellite gas content.
Satellites provide unique laboratories for examining gas depletion and its relation to star formation because, unlike central galaxies, satellites’ subhalos are thought not to accrete matter after infall: the strong gravitational tidal forces in the host halo both prevent a satellite’s subhalo from accreting new matter and strip any existing subhalo matter, including gas, from the outside-in. Additionally, any thermalized gas in the host halo can heat and ram-pressure strip any extended subhalo gas, and in the extreme case of both high gas density and satellite velocity, ram-pressure can strip cold gas directly from the disc. We first discuss the implications of our $\tqdelay$ results, that is, that SFR in satellites evolves for $2 - 4 \gyr$ after infall, depending on stellar mass, in the same manner as central galaxies. While this timescale may represent, to some degree, the statistical average of those of individual satellites, it is informative to consider it in the context of gas depletion times. Given that star formation is fueled by cold, molecular gas [@WonBli02; @BigLerWal08], this means that a significant quantity of such gas must persist in a satellite’s disc for that amount of time. One possibility is that a sufficient reservoir of cold gas was present in the disc at the time of infall. As a constraint, we compare our $\tqdelay$ times to observed cold gas depletion times, defined as $M_{\rm gas} / \sfr$. At $z = 0$, observed atomic gas depletion times in $\mstar > 10 ^ {10} \msun$ galaxies are $\sim 3 \gyr$, with large scatter but no systematic dependence on stellar mass or SFR [@SchCatKau10]. Incorporating the additional $\sim 30\%$ of the gas that is molecular [@SaiKauKra11], the total gas depletion time would extend to $\sim 4 \gyr$. This timescale can be even longer to the extent that gas recycled from stellar mass loss fuels star formation [e.g., @LeiKra11]. If valid at higher redshift, this depletion time would be sufficient to accommodate our $\tqdelay$ values if all of the atomic gas converts to stars. Observed gas ratios, $M_{\rm gas} / \mstar$, provide another constraint. In §\[sec:m-star\_growth\], we showed that satellites experience significant stellar mass growth via star formation. In particular, currently quiescent, low-mass satellites have more than doubled their stellar mass since infall, which requires a gas reservoir comparable in mass to their stars at the time of infall. Observations at $z = 0$ show that actively star-forming galaxies at $\mstar \sim 10 ^ {10} \msun$ have total cold gas masses that are $\sim 40\%$ of their stellar mass [@CatSchKau10; @SaiKauKra11], and that this gas ratio increases with decreasing stellar mass, being near unity for galaxies just below our mass threshold [e.g., @GehBlaMas06]. However, currently quiescent, low-mass satellites fell in at higher redshift (typically, $z = 0.5 - 1$), and if the total cold gas fraction increases with redshift at a rate suggested by observations of molecular gas in actively star-forming, massive ($\mstar > 10 ^ {10} \msun$) galaxies at $z = 0.4 - 1.4$ [@DadBouWal10; @TacGenNer10; @GeaSmaMor11], then these satellites would have had enough cold gas in their disc to accommodate the significant stellar mass growth in Fig. \[fig:m-star-growth\_v\_m-star\]. Furthermore, the cold gas in the disc could be replenished for some time after infall if the most concentrated and tightly-bound component of the extended subhalo gas continues to cool/accrete onto the disc for several Gyrs before being stripped.
X-ray observations show that roughly half of massive satellites in groups and clusters retain extended, hot gas halos, though truncated as compared with central (‘field’) galaxies [@SunDonVoi07; @JelBinMul08]. Simulations also show retention of extended subhalo gas: @McCFreFon08 found that satellite subhalos can retain a significant fraction ($\sim 30\%$) of their hot gas for several Gyrs after infall, while @SimWeiDav09 and @KerKatFar09 found that satellites continue to accrete significant gas onto their disc, though at a reduced rate compared with central galaxies. Adding this replenishment to what cold gas was already in the disc at infall, the total gas reservoir in/around satellites appears fully sufficient to fuel their extended SFR and stellar mass growth as demanded by $\tqdelay$. Finally, regarding $\tqdelay$, we note that the halo radius crossing time, given our virial definition, is $t_{\rm cross} = \rvir / \vvir = 2.7 (1 + z) ^ {-3 / 2} \gyr$, independent of host halo mass. More precisely, numerically integrating satellite orbits in an NFW potential (assuming energy and angular momentum conservation) using typical initial orbital parameters from @Wet11 for satellites in host halos in our mass range, the average time from infall to first pericentric passage is somewhat shorter at $2 (1 + z) ^ {-3 / 2} \gyr$, independent of satellite mass. Thus, the onset of satellite quenching occurs near the time of pericentric passage for more massive satellites and $1 - 2 \gyr$ after pericentric passage for lower mass satellites, as we will explore in more detail in . We next discuss the implications of our $\tauqfade$ results, that is, that once quenching has started, satellite SFR fades rapidly, with $\tauqfade$ being $0.8$ to $0.2 \gyr$ from $\mstar = 5 \times 10 ^ 9$ to $2 \times 10 ^ {11} \msun$. A lack of star formation implies a lack of dense, molecular gas, so it is interesting to compare $\tauqfade$ to observed molecular gas depletion times, defined as $M_{\rm H_2} / \sfr$, for galaxies near the quenching threshold. @SaiKauWan11 examined the molecular gas depletion times in a large sample of SDSS galaxies, finding typically $\sim 1 \gyr$ at $\mstar = 10 ^ {10 - 11} \msun$. However, they found that the depletion time increases with decreasing SSFR, being $\sim 2 \gyr$ at $\ssfr \sim 10 ^ {-11} \yrinv$. Such a long depletion time in galaxies near the quenching threshold is difficult to understand in a scenario in which satellites simply use up their molecular gas, which may suggest that additional processes are at play in reducing the molecular gas density in satellites as they are quenched, possibly via tidal or ram-pressure stripping or internal feedback processes. However, note that the sample in @SaiKauWan11 is composed primarily of central (‘field’) galaxies, and it is unclear whether the significant gas reservoirs in these nearly quiescent galaxies result from them not fully depleting their cold gas while quenching, or from them accreting gas after they have quenched. If the latter holds, the cold gas properties of satellites as they are being quenched could be quite different. Several works have examined the gas content of satellites in the Virgo cluster, finding that while the atomic gas masses of Virgo satellites are significantly lower than for galaxies of the same mass in the field [@HuaHayGio12; @SerOosMor12], the molecular gas masses are quite similar [@KenYou86; @YouBurDav11].
This result suggests that, while processes like tidal or ram-pressure stripping may play a role in removing atomic gas from the outer regions of satellites, they have little impact on the molecular gas that fuels star formation. Indeed, observations of ram-pressure stripping [@SunDonVoi07; @ChuvGoKen09; @AbrKenCrow11] typically show diffuse, atomic gas being stripped from the outer regions of the disc, while the dense, molecular gas towards the core survives intact, a phenomenon also seen in simulations [@TonBry09]. Overall, we conclude that the gas reservoir in/around satellites at their time of infall is sufficient to fuel their necessary star formation histories and stellar mass growth. Our result that satellite quenching can be parametrized simply by time since first infall, with no significant dependence on host halo mass, suggests that simple gas depletion (‘strangulation’) most naturally explains satellite quenching, though more work is needed to understand if gas self-depletion alone can account for our ‘delayed-then-rapid’ quenching scenario with sufficiently short $\tauqfade$. Our $\tauqfade$ values are marginally-to-significantly shorter than observed molecular gas depletion times, particularly for galaxies near the quenching threshold, possibly suggesting that some process other than simple molecular gas depletion is at play, though it is not clear if external stripping processes can explain this. In , we will examine these issues in more detail by developing physical models for satellite SFR evolution.

Comparison with other work
--------------------------

Our satellite quenching timescales are broadly consistent with previous works that parametrized the evolution of SFR in satellites and argued that it is affected over long ($2 - 3 \gyr$) timescales [e.g., @BalNavMor00; @WanLiKau07; @McGBalBow09; @MahMamRay11]. Recently, @DeLWeiPog12 examined the infall times of satellites in a SAM applied to the Millennium simulation, accounting for hierarchical halo growth and group preprocessing; comparing with observed satellite quiescent fractions as a function of host halo mass and halo-centric radius, they argued that satellites take $\sim 5 - 7 \gyr$ to quench after falling into halos $> 10 ^ {13} \msun$. However, these previous works generally only used observed quiescent/red fractions to constrain a single, overall quenching timescale, as in our Fig. \[fig:quench-time\_v\_m-star\]. A significant aspect of our approach is using the overall SSFR distribution to constrain satellite SFR evolution more completely, through which we have shown that satellites experience a ‘delayed-then-rapid’ quenching scenario, which is not possible by measuring just quiescent fractions. Furthermore, our use of empirically based satellite initial SFRs and a mock simulation group catalog to compare robustly with observations allows us to constrain these timescales empirically and robustly. Our results place strong constraints on semi-analytic approaches to modeling the physics of satellite SFR evolution [e.g., @FonBowMcC08; @KanvdB08; @WeiKauvdL10; @KimYiKho11]. Our results suggest that a successful physical model would allow satellite SFR to evolve, environmentally unaffected, for $2 - 4 \gyr$ (depending on stellar mass) after infall, possibly through continued accretion/cooling of extended subhalo gas. Our results also suggest that one does not need to impose any explicit dependence on host halo mass on this process.
We emphasize that our quenching timescales are valid for satellites at $z \sim 0$, given that our approach is sensitive to satellite SFR evolution within the last $\sim 4 \gyr$, and it is not clear that these timescales remain fixed at higher redshift. We have checked this in our framework by applying our quenching timescales from $z = 0$ to satellites at higher redshift and comparing with the observed evolution in the quiescent fraction for all galaxies from Fig. \[fig:qu.frac\_v\_z\]. While our model does agree within observational uncertainties at $z \lesssim 0.3$, we find that this approach quenches too few galaxies at higher $z$. On one hand, this discrepancy could be interpreted as evidence against the accuracy of our ‘delayed-then-rapid’ quenching scenario. Alternately, satellite quenching times simply may be shorter at higher redshift. Using halo occupation modeling of galaxy spatial clustering measurements, @TinWet10 showed that the satellite quiescent fraction does not evolve with redshift at fixed magnitude. This lack of satellite evolution is supported broadly by observations of massive galaxies ($\mstar > 3 \times 10 ^ {10} \msun$) in X-ray-selected groups of mass $10 ^ {13 - 14} \msun$ out to $z \sim 1$ in COSMOS [@GeoLeaBun11]. If the satellite quiescent fraction does not evolve at fixed satellite and host halo mass, then the ratio of a satellite’s quenching time to its dynamical friction lifetime must remain roughly fixed, implying that the satellite quenching time is shorter at higher redshift: $\tq \propto (1 + z) ^ {-3 / 2}$ [@TinWet10]. Furthermore, because the SSFR distribution of galaxies in groups is strongly bimodal out to at least $z \sim 0.4$ [@McGBalWil11], this suggests that the delayed-then-rapid quenching scenario remains true at higher redshift. Also, if $\tqdelay$ decreases more quickly than $\tauqfade$, then there would be a higher fraction of satellites at intermediate SSFRs at higher redshift, a trend that is suggested by the significant fraction of ‘green valley’ galaxies in groups at $z \sim 1$ [@BalMcGWil11]. In all, these results suggest that satellite quenching times are shorter at higher redshift, as we will investigate and quantify further in . We also emphasize that our results are predicated on the accuracy of the SHAM method for assigning stellar mass to both satellite and central subhalos across $z = 0 - 1$. While numerous works support the (statistical) accuracy of this approach, as outlined in §\[sec:galaxy\_catalog\_sim\], to any extent that SHAM assigns biased stellar masses to satellite subhalos at these redshifts, this would bias our derived quenching timescales. We have argued that group preprocessing is the primary reason that satellites in more massive host halos are more likely to be quiescent. Group preprocessing should manifest itself via an increased quiescent fraction for satellites that remain in a sub-group after falling into a cluster [@WhiCohSmi10], to the extent that such sub-groups are observationally identifiable [@Coh12]. Promisingly, examining the red fraction of galaxies in clusters at $z = 0.2 - 0.5$, @LiYeeEll09 saw that, within and at the outskirts of these clusters, galaxies that appear associated with groups exhibit a higher red fraction than those that are not, providing direct evidence for the importance of group preprocessing. Our results also connect with satellites of much lower mass in the Local Group. Naively extending our quenching timescales in Fig.
\[fig:quench-time\_v\_m-star\] to lower mass implies that dwarf satellites in the Milky Way take $5 \gyr$ to quench after infall. Given that the vast majority of satellites in the Local Group are quiescent, this implies that they fell in $> 5 \gyr$ ago, as supported by detailed comparisons of their positions and velocities to those of similar satellites in simulation [@RocPetBul12]. Conversely, our quenching timescales also reinforce recent claims that the Large and Small Magellanic Clouds, which are both actively star-forming, fell into the Milky Way halo within the last few Gyrs [@BesKalHer07]: had they fallen in much earlier, our results indicate that they would no longer be actively star-forming. Examining the stellar age distributions of satellites in the Local Group, @OrbGneWei08 found that, even for those that are currently quiescent, a large fraction have experienced a significant amount of recent star formation: half of satellites with $\mstar > 10 ^ 7 \msun$ have formed more than 10% of their stellar mass in the last $2 \gyr$. This trend, observed in lower mass satellites than we examined, supports our general result that satellites continue to form stars over extended timescales and thus grow in stellar mass considerably after infall. Examining more massive ($\mstar > 10 ^ 9 \msun$), quiescent elliptical/lenticular galaxies in the Coma cluster, @TraFabDre08 found that their mean ages are identical to those of similar galaxies that are in the field, and that they have experienced star formation as recently as $z \sim 0.2$, trends that again suggest significant star formation after infall. Finally, @SmiLucPri12 examined the stellar ages of quiescent satellites in the Coma cluster, finding a significant decrease in age with cluster-centric radius at low mass ($\mstar < 10 ^ {10} \msun$) but a much weaker trend at higher mass. As they argued, this trend implies that satellite-specific quenching plays a stronger role in quenching lower mass satellites, consistent with our results that more satellites were quenched as central galaxies prior to infall at higher mass. Adding a large sample of stellar ages [e.g., @GalChaBri05] directly to our group catalog would provide additional constraints on our derived star formation histories, as we will examine in future work.

Acknowledgments {#acknowledgments .unnumbered}
===============

We thank Michael Blanton, David Hogg, and collaborators for publicly releasing the NYU VAGC, Jarle Brinchmann and the MPA-JHU collaboration for publicly releasing their spectral reductions, and Martin White for simulation data. We thank Marla Geha, Eyal Neistein, and Gary Mamon for enlightening discussions. The simulation was analyzed at the National Energy Research Scientific Computing Center.

Implications of satellite stellar mass growth for subhalo abundance matching {#sec:mass_growth_sham}
============================================================================

In §\[sec:m-star\_growth\], we showed that stellar mass growth via star formation is nearly identical in satellite and central galaxies. We now explore the implications of this result for subhalo abundance matching (SHAM), and we discuss other processes that influence the evolution of satellite stellar mass. As described in §\[sec:galaxy\_catalog\_sim\], the SHAM technique for assigning stellar mass to subhalos has been successful in matching many observed galaxy statistics.
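In its simplest, scatter-free form, the SHAM assignment just mentioned is a rank-ordering exercise: the $n$-th most massive subhalo (by $\mmax$) receives the stellar mass of the $n$-th most massive galaxy from the observed stellar mass function. A schematic sketch, with hypothetical input arrays and no scatter in the $\mstar - \mmax$ relation (not a description of any particular published implementation), is:

```python
import numpy as np

def abundance_match(m_max_subhalos, m_star_observed):
    """Zeroth-order SHAM: rank-order subhalos by M_max and galaxies by stellar
    mass, then pair them one-to-one so cumulative number densities match.
    Assumes both samples come from the same comoving volume; no scatter."""
    m_star_assigned = np.full(len(m_max_subhalos), np.nan)
    order_sub = np.argsort(m_max_subhalos)[::-1]       # most massive subhalo first
    m_star_sorted = np.sort(m_star_observed)[::-1]     # most massive galaxy first
    n = min(len(order_sub), len(m_star_sorted))
    m_star_assigned[order_sub[:n]] = m_star_sorted[:n]
    return m_star_assigned
```

The subtlety discussed below is which subhalo mass one ranks on: for satellites, $\mmax$ freezes at infall, whereas a central subhalo's $\mmax$ keeps growing.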
At some level, this success is surprising: despite the differing physics of satellite and central galaxy evolution, with SHAM one assigns stellar mass to both a central and satellite subhalo based simply on the maximum subhalo mass (or circular velocity) that it experienced, $\mmax$, regardless of whether it is a central or satellite subhalo or its time since infall. Our result, that satellite and central galaxies grow in stellar mass by essentially the same amount, on average, justifies assigning stellar mass to both satellite and central subhalos under a single, simple prescription. However, our results suggest tension with the way that SHAM typically is implemented, through the use of $\mmax$ (or maximum circular velocity). Because satellite subhalos are stripped of their mass after falling into a host halo, their $\mmax$ almost always occurs prior to infall and remains constant thereafter. (In our tracking scheme, a satellite can grow in $\mmax$ if it merges with another satellite.) By contrast, a central subhalo’s $\mmax$ continues to grow as its halo grows. Thus, if one uses the same instantaneous $\mstar - \mmax$ relation to assign stellar mass to both satellite and central subhalos, which is how SHAM typically is applied, this necessarily implies that satellites have grown less in stellar mass than central subhalos. Instead, the results of §\[sec:m-star\_growth\] suggest that satellite subhalos should have a higher instantaneous $\mstar$ for their given $\mmax$ than central subhalos. Under the assumption (for now) that satellite and central galaxies grow in stellar mass by the same amount, on average, we quantify this possible tension by examining the differential $\mmax$ growth of satellite versus central subhalos. For each satellite at $z = 0$, we record the time that it reached $\mmax$ as a central subhalo, which typically was prior to first infall. We then identify all central subhalos at $z = 0$ (discounting ejected satellites) that experienced the same $\mmax$ at the same time, and we compute the median difference in $\mmax$ at $z = 0$ between the satellite and these central subhalos, which we call $\Delta \mmax$. We can then treat satellite and central subhalo stellar mass growths as self-consistent and identical by increasing each satellite subhalo’s $\mmax$ at $z = 0$ by its $\Delta \mmax$. Using these increased $\mmax$ values for satellites, we reassign stellar mass via SHAM to the whole subhalo population. The difference between these stellar masses and the stellar masses from standard $\mmax$, which we call $\Delta \mstar$, indicates the amount by which the standard SHAM implementation underestimates satellites’ stellar mass. Fig. \[fig:m-grow\_v\_m\_sham\] shows $\Delta \mmax / \mmax$ and $\Delta \mstar / \mstar$ as a function of satellites’ mass at $z = 0$. (For this exercise, we extend to masses below our SDSS group catalog limit.) The top panel shows the fraction by which a central subhalo’s $\mmax$ at $z = 0$ is higher than that of a satellite, for subhalos that had the same $\mmax$ prior to the satellite’s infall. At low mass, a satellite’s $\mmax$ at $z = 0$ is typically 20% lower than its central subhalo counterparts, though with scatter to much higher values. This fraction declines with increasing $\mmax$, but only weakly because of the competing effects that more massive central subhalos grow in mass more rapidly while more massive satellites fell in more recently. Fig.
\[fig:m-grow\_v\_m\_sham\] (bottom) shows the same, but for stellar mass, across a range that corresponds to the top panel. The mass dependence is now stronger, because of the changing steepness in the $\mstar - \mmax$ relation with mass: at $\mmax \gtrsim 10 ^ {12} \msun$, $\mstar$ increases slowly with $\mmax$, so a given increase in $\mmax$ leads to a much smaller increase in $\mstar$, while at lower mass the opposite trend occurs. The underestimation of satellite stellar mass is negligible at high mass, but at $\mstar < 10 ^ 9 \msun$ it is as high as 40%, with scatter to much higher values. Thus, to the extent that satellite and central galaxy mass growths are in fact identical, the standard implementation of SHAM significantly underestimates the stellar mass of low-mass satellites. @PasGalFon10 also examined the differential stellar mass growths of central and satellite galaxies since infall, though in the context of the semi-analytic model of @WanDeLKit08. They also saw a negligible difference at high mass and an increasing difference at lower mass. However, their difference was driven primarily by the rapid quenching of satellites after infall in their model, which we have argued cannot match the observed SSFR distribution [see also @WeivdBYan06b; @FonBowMcC08]. Our empirically motivated approach indicates that the stellar mass growths via star formation should be nearly identical, on average, implying that satellite and central subhalos lie on separate instantaneous $\mstar - \mmax$ relations, a scenario that can, in principle, still match spatial clustering measurements [@NeiLiKho11]. Promisingly, @YanMovdB12 recently implemented a self-consistent approach to SHAM, in which they found evidence that satellite and central subhalo stellar mass growths are similar, in support of our results. However, an important caveat to Fig. \[fig:m-grow\_v\_m\_sham\] is that star formation is not the only process that affects satellite stellar mass evolution. Satellites can merge with one another [@AngLacBau09; @KimBauCol09; @WetCohWhi09a; @WetCohWhi09b], though we expect that the impact of satellite mergers on stellar mass growth is subdominant at these masses (). More importantly, satellites can lose stellar mass via tidal stripping. As outlined in §\[sec:simulation\], we use a simple binary procedure for merging/disrupting highly stripped satellite subhalos, but we do not account for partial stripping of stellar mass. There are many reasons to expect that surviving satellites have been at least partially stripped, based on both the properties of satellites [e.g., @YanMovdB09; @KimBauCol09; @PasGalFon10; @SimWeiDav10] and the presence of diffuse intracluster light (ICL) from stripped satellites [e.g., @WilGovWad04; @ConWecKra07; @RudMihFre09; @PucSprSij10]. It is possible that the canceling effects of stellar mass growth via star formation and stellar mass loss via tidal stripping cause satellite and central subhalos to lie on essentially the same instantaneous $\mstar - \mmax$ relation [see also @SimWeiDav10]. If true, then SHAM ascribes the correct stellar masses to satellites through a fortuitous coincidence: the retarded growth of a satellite’s $\mmax$, and thus its $\mstar$, accurately captures the physical process of tidal stripping. If so, then Fig. \[fig:m-grow\_v\_m\_sham\] must also indicate the average amount of stellar mass stripping that surviving satellites at $z = 0$ have suffered.
Direct observations of the stellar mass fraction in the ICL give $\sim 10 - 25\%$ [@ZibWhiSch05; @GonZarZab07; @KriBer07], broadly consistent with Fig. \[fig:m-grow\_v\_m\_sham\]. Finally, we note the implications of these results for SHAM as applied to dwarf satellites in the Local Group. While Fig. \[fig:m-grow\_v\_m\_sham\] extends only down to about the mass of the Small Magellanic Cloud, it also indicates the increasing importance of satellite stellar mass growth at lower mass. Thus, understanding the relative importance of stellar mass growth and tidal stripping becomes even more important for dwarf satellites in the Local Group. If local dwarf satellites have not been stripped of stellar mass significantly, as might be implied by their tight luminosity-metallicity relation [@KirCohSmi11], then Fig. \[fig:m-grow\_v\_m\_sham\] implies that the usual SHAM approach may underestimate their stellar masses. \[lastpage\]

[^1]: In , we wrote this as $M_r < -18$ and -19, but we did not assume a value for $h$ in calculating magnitudes.

[^2]: http://www.mpa-garching.mpg.de/SDSS/DR7/

[^3]: In rare cases, using these parameters leads to zero subhalos in a low-mass halo. To avoid having halos with no subhalos, mostly for tracking purposes, we slowly increase the linking lengths in those halos, and we stop if we identify at least two subhalos. This procedure does not affect halo mass ranges used in this work.

[^4]: If a subhalo grows in $\mmax$ by $> 50\%$ since ejection, we define it to be a ‘newly’ formed halo and discard it from the ejected population, though this affects only a few percent of ejected satellites.

[^5]: Despite using the same IMF, the SED-based stellar masses in @DroBunLea09 are estimated to be $\sim 0.2$ dex higher on average than those in our SDSS catalog. However, their stellar masses also have larger scatter because of their photometric redshifts. In terms of quiescent fractions, these effects largely cancel out, so we do not attempt to renormalize stellar masses. See .

[^6]: In detail, $\fcenq / \fsatq$ must evolve somewhat because $\fsat$ evolves, but $\fsat$ evolution causes $\fcenq / \fsatq$ to decrease by $< 10\%$ to $z = 1$.

[^7]: We also tried an extreme scenario in which satellites are responsible for the evolution of the quiescent fraction for all galaxies, to see if it is possible that the quiescent fraction for central galaxies does not evolve at fixed mass. However, even in the extreme scenario of no quiescent satellites by $z = 1$, the central galaxy quiescent fraction at fixed mass still decreases by a factor of at least 2 from $z = 0$ to 1 to account for the overall galaxy quiescent fraction — there are simply not enough satellites.

[^8]: In our model, this mass loss occurs instantaneously, though in reality, it is an extended process. As @LeiKra11 showed, the vast majority ($\sim 90\%$) of stellar mass loss occurs within the first $2 \gyr$, so our approximation is good given that 90% of satellites fell in earlier than $2 \gyr$ ago.

[^9]: If surviving satellites have experienced significant, systematic stellar mass loss from tidal stripping, this would mean that they were more massive at infall, so their initial quiescent fractions were higher than in our model. However, for feasible amounts of average stripping of $\lesssim 30\%$ (see Appendix \[sec:mass\_growth\_sham\]), Fig. \[fig:qu.frac\_v\_m-star\] (green region) shows that the initial quiescent fractions would not increase by more than $\sim 5\%$.
[^10]: More rigorously, because we use a step-function threshold in $\tsinf$, selecting the maximally oldest surviving satellites to quench, any scatter in the relation between quenching and $\tsinf$ would lead to a characteristic quenching time (at which 50% of active-at-infall satellites have quenched) that is necessarily shorter. Thus, our quenching timescales are strictly upper limits, but, as we have argued, the scatter should be small.

[^11]: Given that our SDSS group catalog constrains $\tq$ as a function of host halo mass at $z = 0$ and not as a function of host halo mass at the time of infall directly, if group preprocessing were the primary mode of quenching satellites, this could mitigate the inferred dependence of $\tq$ on host halo mass. However, as we will show in §\[sec:where\_quench\], most satellites quenched when they were in their current host halo, so any possible mitigation would be modest.
{ "pile_set_name": "ArXiv" }
--- abstract: 'Charmed mesons in dense matter are studied within a unitary coupled-channel approach which takes into account Pauli-blocking effects and meson self-energies in a self-consistent manner. We obtain the open-charm meson spectral functions in this dense medium, and discuss their implications on hidden charm and charm scalar resonances and on the formation of $D$-mesic nuclei.' author: - | Laura Tolos$^{1,2}$, Daniel Gamermann$^3$, Carmen Garcia-Recio$^4$, Raquel Molina$^5$,\ Juan Nieves$^5$, Eulogio Oset$^5$, Angels Ramos$^3$ title: Heavy mesons in dense matter --- [ address=[$^1$Theory Group. KVI. University of Groningen, Zernikelaan 25, 9747 AA Groningen, The Netherlands\ $^2$ Instituto de Ciencias del Espacio (IEEC/CSIC), Campus Universitat Autónoma de Barcelona,\ Facultat de Ciències, Torre C5, E-08193 Bellaterra (Barcelona), Spain\ $^3$Departament d’Estructura i Constituents de la Matèria, Universitat de Barcelona,\ Diagonal 647, 08028 Barcelona, Spain\ $^4$Departamento de F[í]{}sica Atómica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada, Spain\ $^5$Instituto de F[í]{}sica Corpuscular (centro mixto CSIC-UV), Institutos de Investigación de Paterna,\ Aptdo. 22085, 46071, Valencia, Spain]{} ]{} Introduction ============ The properties of open and hidden charm mesons in a hot dense environment are being the focus of recent analysis. Indeed, part of the physics program of the PANDA and CBM experiments at the future FAIR facility at GSI [@gsi00] will search, among others, for in-medium modification of hadrons in the charm sector, providing a first insight into the charm-nucleus interaction. The in-medium modification of the properties of open-charm mesons may lead to formation of $D$-mesic nuclei [@tsushima99], and will also affect the renormalization of charm and hidden-charm scalar hadron resonances in nuclear matter, providing information not only about their nature but also about the interaction of the open-charm mesons with nuclei. In the present paper we obtain the open-charm spectral functions in dense matter within a self-consistent approach in coupled channels, and analyze the effect on the properties of dynamically-generated charm and hidden charm scalar resonances and provide some insight into the formation of $D$-nucleus bound states. Charmed mesons in matter ======================== The self-energy in symmetric nuclear matter for open-charm pseudoscalar ($D$) and vector ($D^*$) mesons is obtained following a self-consistent coupled-channel procedure. The transition potential of the Bethe-Salpeter equation for different isospin, total angular momentum, and finite density and temperature is derived from effective lagrangians, which will be discussed in the next subsection. The $D$ and $D^*$ self-energies are then obtained summing the transition amplitude for the different isospins over the nucleon Fermi distribution at a given temperature (see details in Refs. [@TOL07; @tolos09]). Then, the meson spectral function reads $$\begin{aligned} &&S_{D(D^*)}(q_0,{\vec q}, \rho, T)= \nonumber \\ && -\frac{1}{\pi}\frac{{\rm Im}\, \Pi_{D(D^*)}(q_0,\vec{q},\rho, T)}{\mid q_0^2-\vec{q}\,^2-m_{D(D^*)}^2- \Pi_{D(D^*)}(q_0,\vec{q},\rho, T) \mid^2 } \ , \ \ \ \ \ \ \ \label{eq:spec}\end{aligned}$$ with $\Pi_{D(D^*)}$ being the self-energy at given energy $q_0$, momentum $\vec{q}$, density $\rho$ and temperature $T$. SU(4) and SU(8) schemes {#su4} ----------------------- ![Left: $D$ meson spectral function for the SU(4) model. 
Right: $D$ and $D^*$ spectral functions in the SU(8) scheme. We show the $D$ and $D^*$ meson free masses for reference (dotted lines). \[fig1\]](paper_spectral_tot "fig:"){width="50.00000%" height="6cm"} ![Left: $D$ meson spectral function for the SU(4) model. Right: $D$ and $D^*$ spectral functions in the SU(8) scheme. We show the $D$ and $D^*$ meson free masses for reference (dotted lines). \[fig1\]](art_spec "fig:"){width="50.00000%" height="5cm"} The open-charm meson spectral functions are obtained from the Bethe-Salpeter equation in coupled-channels taking, as bare interaction, two kinds of bare potential. First, we consider a type of broken $SU(4)$ $s$-wave Weinberg-Tomozawa (WT) interaction supplemented by an attractive isoscalar-scalar term and using a cutoff regularization scheme. We fix this cutoff by generating dynamically the $I=0$ $\Lambda_c(2595)$ resonance. A new resonance in $I=1$ channel, $\Sigma_c(2800)$, is generated [@LUT06; @mizutani06]. The in-medium solution incorporates Pauli blocking, baryon mean-field bindings and meson self-energies [@TOL07]. In l.h.s. of Fig. \[fig1\] we display the $D$ meson spectral function for different momenta, temperatures and densities. At $T=0$ the spectral function shows two peaks: the $\Lambda_c(2595) N^{-1}$ and the quasi(D)-particle peak mixed with the $\Sigma_c(2880) N^{-1}$. Those states dilute with increasing temperature while the quasiparticle peak gets closer to its free value. Finite density results in a broadening of the spectral function because of the increased phase space. Secondly, heavy-quark symmetry (HQS) is implemented by treating on equal footing heavy pseudoscalar and vector mesons, such as the $D$ and $D^*$ mesons. The $SU(8)$ WT includes pseudoscalars and vector mesons together with $J=1/2^+$ and $J=3/2^+$ baryons [@magas09; @gamermann10]. This symmetry is, however, strongly broken in nature and we adopt the physical hadron masses and different weak non-charmed and charmed pseudoscalar and vector meson decay constants. We also improve on the regularization scheme in matter beyond the cutoff method [@tolos09]. In this scheme, all resonances in the $SU(4)$ model are reproduced and new resonant states are generated [@magas09] due to the enlarged Fock space. However, the nature of some of those resonances is different regarding the model. While the $\Lambda_c(2595)$ emerges as a $DN$ quasibound state in the $SU(4)$ model, it becomes predominantly a $D^*N$ quasibound state in the $SU(8)$ scheme. The modifications of these resonances in the nuclear medium strongly depend on the coupling to $D$, $D^*$ and $N$ and are reflected in the spectral functions. On the r.h.s of Fig. \[fig1\] we display the $D$ and $D^*$ spectral functions, which show then a rich spectrum of resonance ($Y_c$)-hole($N^{-1}$) states. As density increases, these $Y_cN^{-1}$ modes tend to smear out and the spectral functions broaden with increasing phase space, as seen for the $SU(4)$ model [@mizutani06]. ![$D_{s0}(2317)$ (left) and $X(3700)$ (middle) resonances in nuclear matter. $D^0$ nucleus bound states (right) \[fig2\]](ds02317){height="5.5cm"} ![$D_{s0}(2317)$ (left) and $X(3700)$ (middle) resonances in nuclear matter. 
$D^0$ nucleus bound states (right) \[fig2\]](x37){height="5.5cm"}

Scalar resonance in matter
==========================

The analysis of the properties of scalar resonances in nuclear matter is crucial in order to understand their nature, whether they are $q \bar q$, molecules, mixtures of $q \bar q$ with meson-meson components, or dynamically generated resonances from meson-meson scattering. In the following we study the charmed resonance $D_{s0}(2317)$ [@Kolomeitsev:2003ac; @guo06; @Gamermann:2006nm] together with a hidden charm scalar meson, $X(3700)$, predicted in Ref. [@Gamermann:2006nm], which might have been observed [@Gamermann:2007mu] by the Belle collaboration [@Abe:2007sy]. Those resonances are generated dynamically by solving the coupled-channel Bethe-Salpeter equation for two pseudoscalar mesons [@Molina:2008nh]. The $D_{s0}(2317)$ mainly couples to the $DK$, while the hidden charm state $X(3700)$ couples most strongly to $D\bar{D}$. Thus, any change in the $D$ meson properties in nuclear matter will have an important effect on these resonances. The $D$ meson self-energy is given in the $SU(4)$ model without the phenomenological isoscalar-scalar term, but supplemented by the $p$-wave self-energy [@Molina:2008nh]. In Fig. \[fig2\] the $D_{s0}(2317)$ (left) and $X(3700)$ (middle) resonances are displayed via the squared transition amplitude for the corresponding dominant channel at different densities. The $D_{s0}(2317)$ and $X(3700)$ resonances, which have zero and small widths in free space, develop widths of the order of 100 and 200 MeV at normal nuclear matter density, respectively. This is due to the opening of new many-body decay channels as the $D$ meson gets absorbed in the nuclear medium via $DN$ and $DNN$ inelastic reactions. We do not extract any clear conclusion for the mass shift. We suggest looking at the transparency ratio to investigate those in-medium widths, since it is very sensitive to the in-medium width of the resonance.

D-mesic nuclei
==============

$D$-meson bound states in $^{208}$Pb were predicted [@tsushima99] relying upon an attractive $D$ meson potential. The observation of those bound states might be, though, problematic due to their widths, as in the case of the SU(4) model [@TOL07]. However, for the scheme with HQS [@tolos09] the $D$ meson in nuclear matter has a sufficiently small width with respect to the mass shift to form bound states in nuclei. In order to compute the $D$-nucleus bound states, we solve the Schrödinger equation. We concentrate on $D^0$-nucleus bound states [@carmen10]. The potential that enters the equation is an energy-dependent one that results from the zero-momentum $D$-meson self-energy within the SU(8) model [@tolos09]. In Fig. \[fig2\] (right) we show $D^0$ meson bound states in different nuclei. We observe that the $D^0$-nucleus states are weakly bound, in contrast to previous results [@tsushima99]. Their experimental detection is, though, difficult.

Conclusions
===========

Open-charm mesons ($D$ and $D^*$) in dense matter have been studied within a self-consistent coupled-channel approach taking, as bare interaction, different effective Lagrangians. The in-medium solution accounts for Pauli blocking effects and meson self-energies. We have analyzed the evolution in matter of the open-charm meson spectral functions and discussed their effects on the $D_{s0}(2317)$ and the predicted $X(3700)$ in nuclear matter, and suggested looking at transparency ratios to investigate the in-medium width of those resonances.
We have finally analyzed the possible formation of $D$-mesic nuclei. Only weakly bound $D^0$-nucleus states seem to be feasible within the SU(8) scheme that incorporates heavy-quark symmetry. However, their experimental detection is most likely a challenging task.

L.T. acknowledges support from the RFF program of the University of Groningen. This work is partly supported by the EU contract No. MRTN-CT-2006-035482 (FLAVIAnet), by the contracts FIS2008-01661 and FIS2008-01143 from MICINN (Spain), by the Spanish Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042), by the Generalitat de Catalunya contract 2009SGR-1289 and by Junta de Andalucía under contract FQM225. We acknowledge the support of the European Community-Research Infrastructure Integrating Activity “Study of Strongly Interacting Matter” (HadronPhysics2, Grant Agreement n. 227431) under the 7th Framework Programme of the EU.

[9]{} http://www.gsi.de/fair K. Tsushima, D. H. Lu, A. W. Thomas, K. Saito, and R. H. Landau, *Phys. Rev. C* [**59**]{}, 2824 (1999). L. Tolos, A. Ramos, and T. Mizutani, *Phys. Rev. C* [**77**]{}, 015207 (2008). L. Tolos, C. Garcia-Recio, and J. Nieves, *Phys. Rev. C* [**80**]{}, 065202 (2009). J. Hofmann and M. F. M. Lutz, *Nucl. Phys. A* **763**, 90 (2005); M. F. M. Lutz and C. L. Korpa, *Phys. Lett. B* **633**, 43 (2006). T. Mizutani and A. Ramos, *Phys. Rev. C* [**74**]{}, 065201 (2006). C. Garcia-Recio, V. K. Magas, T. Mizutani, J. Nieves, A. Ramos, L. L. Salcedo, and L. Tolos, *Phys. Rev. D* [**79**]{}, 054004 (2009). D. Gamermann, C. Garcia-Recio, J. Nieves, L. L. Salcedo and L. Tolos, *Phys. Rev. D* [**81**]{}, 094016 (2010). E. E. Kolomeitsev and M. F. M. Lutz, *Phys. Lett. B* [**582**]{}, 39 (2004). F. K. Guo, P. N. Shen, H. C. Chiang, and R. G. Ping, *Phys. Lett. B* [**641**]{}, 278 (2006). D. Gamermann, E. Oset, D. Strottman, and M. J. Vicente Vacas, *Phys. Rev. D* [**76**]{}, 074016 (2007). D. Gamermann and E. Oset, *Eur. Phys. J. A* [**36**]{}, 189 (2008). K. Abe [*et al.*]{} \[Belle Collaboration\], *Phys. Rev. Lett.* [**100**]{}, 202001 (2008). R. Molina, D. Gamermann, E. Oset, and L. Tolos, *Eur. Phys. J. A* [**42**]{}, 31 (2009). C. Garcia-Recio, J. Nieves and L. Tolos, *Phys. Lett. B* [**690**]{}, 369 (2010).
--- abstract: 'The magnetic phase diagram of the ferromagnetic Kondo lattice model is determined at T=0 in 1D, 2D, and 3D for various magnitudes of the quantum mechanical localized spins ranging from $S=\frac{1}{2}$ to classical spins ($S\rightarrow \infty$). We consider the ferromagnetic phase, the paramagnetic phase, and the ferromagnetic/antiferromagnetic phase separated regime. There is no significant influence of the spin quantum number on the phase boundaries except for the case $S=\frac{1}{2}$, where the model exhibits an instability of the ferromagnetic phase with respect to spin disorder. Our results give support, at least as far as the low temperature magnetic properties are concerned, to the classical treatment of the $S=3/2$ spins in the intensively investigated manganites, for which the ferromagnetic Kondo-lattice model is generally employed to account for magnetism.' author: - 'J. Kienert and W. Nolting' title: Magnetic phase diagram of the Kondo lattice model with quantum localized spins ---

introduction
============

The ferromagnetic Kondo lattice model (FKLM), also known as the $s$-$d$ model or double exchange model, has drawn a lot of attention over the past years in the field of magnetism and electronic correlations. The model consists of Bloch electrons coupled to localized spins sitting on the sites of a crystal lattice. For the case of strong (Hund) coupling and an energetically favored parallel orientation of a localized moment and an electron, Zener proposed the double exchange mechanism to explain ferromagnetism (FM) in manganites.[@Zen51] The gain in kinetic energy of the conduction electrons favors a parallel configuration of the localized spins. In the framework of a two-site model, Anderson and Hasegawa showed that the hopping amplitude of the electrons between sites $i$ and $j$ is proportional to $\cos(\theta_{ij}/2)$, where $\theta_{ij}$ is the angle between the localized spins.[@AH55] A major field of application for the FKLM is linked to the phenomenon of colossal magnetoresistance[@Ram97] in the manganese compounds already considered by Zener. Here, the five Mn $d$-orbitals are split by the crystal field, and the three degenerate $t_{2g}$-orbitals form localized spins of $S=\frac{3}{2}$. They interact via Hund’s rule with itinerant electrons stemming from the remaining two degenerate $e_{g}$-orbitals. With a Hund exchange interaction mostly estimated to be several times the hopping amplitude[@SPV96; @MCS98; @Dag03] the manganites belong to the rather strongly coupled materials. Although there are other important aspects to take into account when modelling the rich physics of the manganites, like electron-phonon coupling and the second conduction band, the FKLM already in its simplest single band version is crucial for understanding at least the magnetic properties of this class of substances.[@Dag03] A hot topic where the FKLM is used as a basic model is that of the diluted magnetic semiconductors (DMS), with promising technical applications for microelectronics.[@Ohno99; @ZFS04] These materials consist of a semiconducting host (often III/V, e.g., GaAs) and substituted transition metal impurities (e.g. Mn) occupying cation sites, the latter exhibiting ferromagnetism due to a coupling of the localized cation spins mediated by a spin exchange interaction with valence and impurity band holes.
In the case of DMS, this interaction is considered intermediate.[@HS05] There have been various types of approaches to solving the many-body problem of the FKLM in order to get a phase diagram. On the theoretical side, several treatments are based on Dynamical Mean Field Theory (DMFT).[@Yun98; @Dag98; @NMK00; @CMS01] In Ref. a Schwinger-boson method is used and applied to 2D and 3D. Bosonization in 1D is extensively discussed in Ref. . More recently an analytical continuum field theory in 2D was employed.[@PMTG05] Valuable information to compare theoretical results with can be gathered from Monte Carlo simulations.[@Yun98; @Dag98; @Ali01] The main feature of the magnetic phase diagram is the same in all these works: with increasing coupling strength ferromagnetism (FM) prevails for all charge carrier densities except for a more or less small region around half-filling where antiferromagnetism (AFM) or FM/AFM phase separation exists. Most approaches rely on the assumption that the local spins can be treated classically. This simplification has been justified by checking the classical spin against quantum spin approaches, both giving similar results.[@Yun98; @Dag98] More recently, a phase diagram in 1D was obtained by means of the Density-Matrix Renormalization Group (DMRG) yielding numerically exact results for a quantum spin $S=1/2$.[@GHAA04] However, we do not know of any systematic, quantitative analysis of the influence of the spin magnitude on the magnetic properties of the FKLM. In this work we present phase diagrams of the FKLM at T=0 for several spin quantum numbers in 1D, 2D, and 3D. We use an equation of motion approach exploiting exactly solvable limiting cases and exact relations among Green functions and among their spectral moments. By evaluating the (free) energy at T=0 we can distinguish three different phases: paramagnetic (PM), ferromagnetic, and ferromagnetic-antiferromagnetic phase separated (PS). The latter two are the typical phases in the strong coupling region of the phase diagram[@Yun98; @Dag98; @GHAA04; @Yin03; @PMTG05] which is the regime relevant for the manganites and on which we want to focus in our work. It should be mentioned that in the weak to intermediate coupling regime several other phases like canted, spiral, or island have been found.[@GHAA04; @Yin03; @PMTG05] One of the main results will be that there are no major differences in the FM-PS phase boundaries for spin quantum numbers $S>1/2$. The case $S=1/2$ is special: here we obtain an instability of the ferromagnetic against the paramagnetic phase. To understand this behavior we discuss the spectral weight distribution of the excitation spectrum and its modifications by a variation of the quantum character of the localized spins. The paper is organized as follows. After presenting our theoretical approach in section II we discuss the phase diagrams of the FKLM in 1D, 2D, and 3D for different quantum numbers of the localized spins in section III. A summary and an outlook on possible improvements of our theory are given in section IV. Theory ====== The Hamiltonian of the FKLM reads $$\label{Hklm} H= -t\sum_{\langle ij \rangle\sigma}c_{i\sigma}^{\dag}c_{j\sigma}-J\sum_{i}{\mathbf s}_{i}\cdot {\bf S}_{i}\;.$$ The first term describes Bloch electrons of spin $\sigma$ with a nearest neighbor hopping integral $t$. $c_{i\sigma}^{({\dag})}$ annihilates (creates) an electron of spin $\sigma$ at lattice site $i$. The lattice is chosen to be simple cubic in our case. 
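For orientation, a minimal numerical sketch of the free part of the model (assuming only the standard nearest-neighbor dispersion $\epsilon({\bf k})=-2t\sum_{i}\cos k_{i}$ that follows from the hopping term on a simple (hyper)cubic lattice, with the lattice constant set to one; this is not part of the MCDA code):

```python
import numpy as np

def bloch_dos(dim, t=1.0, nk=60, nbins=200):
    """Histogram estimate of the free density of states for a simple
    (hyper)cubic lattice with eps(k) = -2t * sum_i cos(k_i)."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    grids = np.meshgrid(*([k] * dim), indexing="ij")
    eps = -2.0 * t * sum(np.cos(g) for g in grids)
    hist, edges = np.histogram(eps.ravel(), bins=nbins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist   # energies and normalized free DOS

for dim in (1, 2, 3):
    e, dos = bloch_dos(dim)
    print(f"D={dim}: bandwidth ~ {e.max() - e.min():.2f} t")
```

The resulting bandwidths, $W=4Dt$, set the scale against which the coupling $JS$ is quoted below.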
$\mathbf s$ is the electron spin and $\bf S$ the localized spin operator, and both are coupled through a Hund exchange $J$. Using standard second quantization notation the interaction term can be rewritten as $$\label{Hklmquant} H_{int}= -\frac{1}{2}J\sum_{i\sigma}\left(z_{\sigma}S_{i}^{z}n_{i\sigma}+S_{i}^{-\sigma}c_{i\sigma}^{\dag}c_{i-\sigma}\right)$$ with $z_{\uparrow,\downarrow}=\pm 1$; $S_{i}^{z,+,-}$ are the $z$-component, raising, and lowering operators for a localized spin at site $i$, and $n_{i\sigma}=c_{i\sigma}^{\dag}c_{i\sigma}$ is the occupation number operator at site $i$. The many-body problem of the above Hamiltonian is solved with the knowledge of the one-electron Green function $G_{{\bf k}\sigma}(E)$, or, equivalently, the electronic self-energy $\Sigma_{{\bf k}\sigma}(E)$: $$\begin{aligned} \label{GF} \nonumber G_{ij\sigma}(E)&=&\langle\langle c_{i\sigma};c^{\dag}_{j\sigma}\rangle\rangle_{E}=\frac{1}{N}\sum_{\bf k}G_{{\bf k}\sigma}(E)e^{i{\bf k}({\bf R}_i-{\bf R}_j)}\;,\\ G_{{\bf k}\sigma}(E)&=&\frac{\hbar}{E-\epsilon({\bf k})-\Sigma_{{\bf k}\sigma}(E)}\;,\end{aligned}$$ with the Bloch dispersion $\epsilon({\bf k})$. One can then calculate the internal energy $U$ of the FKLM, which is equivalent to the free energy at T=0, for the ferromagnetic and the paramagnetic state. There is a simple relation between the energy $U$ of the FKLM and the imaginary part of the corresponding one-particle Green function, $$\label{Utot} U=\langle H \rangle=\frac{1}{N\hbar}\sum_{i\sigma}\int_{-\infty}^{+\infty}dEf_{-}(E)ES_{ii\sigma}(E)\;,$$ where $S_{ii\sigma}(E)=-\frac{1}{\pi}\Im G_{ii\sigma}(E)$ is the local spectral density and $f_{-}$ denotes the Fermi function.[@remark1] Note that the existence of the ferromagnetic state is supposed and not the result of a self-consistent calculation, i.e. the magnetization is a parameter in our scheme. The method we chose to solve the Hamiltonian (\[Hklm\]) for the Green function (\[GF\]) is a moment conserving decoupling approach (MCDA), which does [*not*]{} require the localized spins to be classical. This theory has been applied before in model studies[@SN02] and to real substances[@ScN01; @SEN04]. For a detailed account of the decoupling procedure we refer the reader to Ref. [@NRM97]. Here we summarize the main points of the method and emphasize features which are important for the following discussion of our results. The starting point is the equation of motion for the Green function (\[GF\]). The generated higher Green functions read $$\begin{aligned} \label{I} I_{ik,j\sigma}(E) &=&\langle\langle{S_{i}^{z}c_{k\sigma};c^{\dag}_{j\sigma}}\rangle\rangle_{E}\;,\\ \label{F} F_{ik,j\sigma}(E) &=&\langle\langle{S_{i}^{-\sigma}c_{k-\sigma};c^{\dag}_{j\sigma}}\rangle\rangle_{E}\;.\end{aligned}$$\ $I_{ik,j\sigma}(E)$ is an Ising-like GF solely comprising the $z$-components of the spins, whereas $F_{ik,j\sigma}(E)$ describes spin flip processes which are neglected when using classical localized spins. After writing down the equations of motion of $I_{ik,j\sigma}(E)$ and $F_{ik,j\sigma}(E)$ the decoupling is performed.
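As an aside, once the self-consistency cycle has produced the local spectral density $S_{ii\sigma}(E)$, the evaluation of Eq. (\[Utot\]) at T=0 reduces to a simple quadrature; a minimal sketch (the two-Lorentzian spectral density used here is a made-up stand-in for the MCDA output, chosen only for illustration):

```python
import numpy as np

def internal_energy(E, S_up, S_dn, n):
    """T=0 evaluation of Eq. (Utot): fix the chemical potential mu from the
    band occupation n, then integrate E * S(E) over the occupied states."""
    dE = E[1] - E[0]
    occ = np.cumsum(S_up + S_dn) * dE          # integrated spectral weight
    mu = E[np.searchsorted(occ, n)]            # chemical potential
    occupied = E <= mu                         # Fermi function at T=0
    U = np.sum(E[occupied] * (S_up + S_dn)[occupied]) * dE
    return U, mu

# made-up spectral density: two quasiparticle peaks per spin direction
E = np.linspace(-12.0, 12.0, 4001)
lor = lambda e0, g: (g / np.pi) / ((E - e0) ** 2 + g ** 2)
S_up = 0.75 * lor(-3.0, 0.4) + 0.25 * lor(4.0, 0.8)
S_dn = S_up.copy()
print(internal_energy(E, S_up, S_dn, n=0.3))
```

The same quadrature, applied to the FM, PM, and AFM spectral densities, yields the energies compared below.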
Of special importance for correlation effects is the treatment of the local higher Green functions, namely $$\begin{aligned} \label{eq:F1} F^{(1)}_{ii,j\sigma}(E)&=&\langle\langle S_{i}^{-\sigma}S_{i}^{z}c_{i-\sigma};c^{\dag}_{j\sigma}\rangle\rangle_{E}\;,\\ \label{eq:F2} F^{(2)}_{ii,j\sigma}(E)&=&\langle\langle S_{i}^{-\sigma}S_{i}^{\sigma}c_{i\sigma};c^{\dag}_{j\sigma}\rangle\rangle_{E}\;,\\ \label{eq:F3} F^{(3)}_{ii,j\sigma}(E)&=&\langle\langle S_{i}^{-\sigma}n_{i\sigma}c_{i-\sigma};c^{\dag}_{j\sigma}\rangle\rangle_{E}\;,\\ \label{eq:F4} F^{(4)}_{ii,j\sigma}(E)&=&\langle\langle S_{i}^{z}n_{i-\sigma}c_{i\sigma};c^{\dag}_{j\sigma}\rangle\rangle_{E}\;.\end{aligned}$$ These functions are expressed in terms of the lower order GF (\[GF\]), (\[I\]), and (\[F\]) with coefficients fitted by the first two spectral moments of these GFs, respectively, representing a non-perturbative approximation for the whole temperature range.[@NRM97] The choice of the “correct” lower order GF is guided by some non-trivial limiting cases which we summarize next. For an assumed complete ferromagnetic polarization ($\langle S^{z}\rangle=S$) of the FKLM one obtains from the spectral representation of the Green functions: $$\begin{aligned} \label{F1Sz_S} F^{(1)}_{ii,j\sigma}(E)&\stackrel{\langle S^z\rangle =S}{=}&\left(\left(S-\frac{1}{2}\right)+\frac{1}{2}z_{\sigma}\right)F_{ii,j\sigma}(E)\;,\\ \label{F2Sz_S} F^{(2)}_{ii,j\sigma}(E)&\stackrel{\langle S^z\rangle =S}{=}&SG_{ij\sigma}(E)-z_{\sigma}I_{ii,j\sigma}(E)\;.\end{aligned}$$ For $S=\frac{1}{2}$, a case which is of particular importance to our investigation due to its maximum amount of quantum fluctuations, the following relations hold at any temperature (i.e. at any $\langle S_{z} \rangle$): $$\begin{aligned} \label{F1S0_5} F^{(1)}_{ii,j\sigma}(E)&\stackrel{S=\frac{1}{2}}{=}&\frac{1}{2}z_{\sigma}F_{ii,j\sigma}(E)\;,\\ \label{F2S0_5} F^{(2)}_{ii,j\sigma}(E)&\stackrel{S=\frac{1}{2}}{=}&\frac{1}{2}G_{ij\sigma}(E)-z_{\sigma}I_{ii,j\sigma}(E)\;.\end{aligned}$$ Furthermore in the case of a fully occupied conduction band: $$\begin{aligned} \label{F3_n2} F^{(3)}_{ii,j\sigma}(E)&\stackrel{n=2}{=}&F_{ii,j\sigma}(E)\;,\\ \label{F4_n2} F^{(4)}_{ii,j\sigma}(E)&\stackrel{n=2}{=}&I_{ii,j\sigma}(E)\;.\end{aligned}$$ The above exact relations are used to motivate the following ansatz for the higher local GF: $$\begin{aligned} \label{F1Interpolation} F^{(1)}_{ii,j\sigma}(E)&=&\alpha_{1\sigma}G_{ij\sigma}(E)+\beta_{1\sigma}F_{ii,j\sigma}(E)\;,\\ \label{F2Interpolation} F^{(2)}_{ii,j\sigma}(E)&=&\alpha_{2\sigma}G_{ij\sigma}(E)+\beta_{2\sigma}I_{ii,j\sigma}(E)\;,\\ \label{F3Interpolation} F^{(3)}_{ii,j\sigma}(E)&=&\alpha_{3\sigma}G_{ij\sigma}(E)+\beta_{3\sigma}F_{ii,j\sigma}(E)\;,\\ \label{F4Interpolation} F^{(4)}_{ii,j\sigma}(E)&=&\alpha_{4\sigma}G_{ij\sigma}(E)+\beta_{4\sigma}I_{ii,j\sigma}(E)\;. \end{aligned}$$ The temperature dependent interpolation coefficients $\alpha_{i\sigma},~\beta_{i\sigma}~(i=1,..,4)$ depend on various correlation functions and are listed in Appendix A. It is easily verified that the approximations (\[F1Interpolation\])-(\[F4Interpolation\]) fulfill the exact limiting cases (\[F1Sz\_S\])-(\[F4\_n2\]). 
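For $S=\frac{1}{2}$ the relations (\[F1S0\_5\]) and (\[F2S0\_5\]) rest on the elementary operator identities $S^{-\sigma}S^{z}=\frac{1}{2}z_{\sigma}S^{-\sigma}$ and $S^{-\sigma}S^{\sigma}=\frac{1}{2}-z_{\sigma}S^{z}$, which hold only for spin $\frac{1}{2}$. A minimal check with explicit $2\times 2$ spin matrices (a side calculation, not part of the MCDA implementation):

```python
import numpy as np

# spin-1/2 operators
Sz = np.array([[0.5, 0.0], [0.0, -0.5]])
Sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S^+
Sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # S^-
I2 = np.eye(2)

# sigma = up:   S^{-sigma} = S^-, z_sigma = +1
assert np.allclose(Sm @ Sz, +0.5 * Sm)
assert np.allclose(Sm @ Sp, 0.5 * I2 - Sz)
# sigma = down: S^{-sigma} = S^+, z_sigma = -1
assert np.allclose(Sp @ Sz, -0.5 * Sp)
assert np.allclose(Sp @ Sm, 0.5 * I2 + Sz)
print("spin-1/2 operator identities verified")
```

Inserted into the definitions (\[eq:F1\]) and (\[eq:F2\]), these identities immediately reproduce Eqs. (\[F1S0\_5\]) and (\[F2S0\_5\]).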
In addition our approach reproduces the limit of the ferromagnetically saturated semiconductor (T=0, band occupation n=0).[@Nol7] The resulting self-energy $\Sigma_{\sigma}(E)$ is local and depends on various expectation values of pure fermionic, mixed fermionic-spin, and pure localized spin character: $$\begin{aligned} {{\Sigma}_{\sigma}}=F(\langle n_{\sigma}\rangle, \langle S_{}^{-\sigma}c_{\sigma}^{\dag}c_{-\sigma}\rangle,~\langle S^{z}n_{\sigma}\rangle,\\ \langle{S^{z}}\rangle,~\langle{(S^{z})^2}\rangle,~\langle{(S^{z})^3}\rangle,~\langle{S^{+}S^{-}}\rangle)\;. \nonumber\end{aligned}$$ The site indices have been dropped due to translational invariance. Whereas the first two types can be calculated within the MCDA, the localized spin correlation functions are known for ferromagnetic saturation and in the paramagnetic state (see Appendix B). The many-body problem represented by (\[Hklm\]) can thus be solved approximately for the Green function (\[GF\]) in a self-consistent manner.[@NRM97] We emphasize that the quantum mechanical character of the localized spins is fully retained in our approach. Furthermore there is no restriction to the parameter range within which our method can be applied. In order to determine the phase boundary between the ferromagnetic and the ferromagnetic-antiferromagnetic phase separated region we first have to solve the Hamiltonian (\[Hklm\]) for an antiferromagnetic configuration. Using the standard sublattice decomposition for bipartite lattices and neglecting the off-diagonal elements of the self-energy matrix[@NMR96] one obtains the following Green function for sublattice A: $$\begin{aligned} G^{A}_{{\bf k}\sigma}(E)&=&\frac{\hbar}{E-\epsilon'({\bf k})-\Sigma^{A}_{{\bf k}\sigma}(E)-\frac{|t({\bf k})|^2}{E-\epsilon'({\bf k})-\Sigma^{B}_{{\bf k}\sigma}(E)}}\;, \end{aligned}$$ with the diagonal elements of the self-energy matrix $\Sigma^{A}_{{\bf k}\sigma}=\Sigma^{B}_{{\bf k}-\sigma}$, the sublattice dispersion $\epsilon'({\bf k})$ and the inter-sublattice dispersion $t({\bf k})$. The approximate solution for the self-energy presented above for the translationally invariant case can be obtained analogously for the antiferromagnetic case. The energy of the antiferromagnetic phase can be evaluated using (\[Utot\]) by simply replacing $S_{ii\sigma}(E)$ by $S^{A}_{ii\sigma}(E)$. Averaging over the sublattices is not necessary for symmetry reasons, i.e. the summation over the sublattices is absorbed into the spin summation. In this work we restrict our considerations concerning AFM to $G$-type antiferromagnetism, i.e. the spins of all nearest neighbors of a given lattice site belong to the other sublattice. This kind of antiferromagnetism is typical in the strong coupling regime at and near half-filling because it allows for a maximum kinetic energy gain by virtual hopping processes, unlike a FM configuration which forbids these by Pauli’s principle. We assume the ground state has Néel structure, i.e. we set the two magnetic sublattices to be saturated, $\langle S_{z}^{A}\rangle = S = -\langle S_{z}^{B}\rangle$. Furthermore we do not take into account possible canted AFM configurations. We used the method proposed in Ref. to determine the FM/PS phase boundary.[@remark2] This criterion for electronic phase separation is based on the separation into AFM regions with one electron at each site and FM domains with an occupation $n_c$, a picture suggested by numerical results in Ref. .
On increasing $n$ the AFM part grows until it occupies the whole system at half-filling. The total energy can be written as $$\begin{aligned} \label{U_tot_PS} U_{tot}(n,v) = (1-v)U_{AFM} + vU_{FM}\left(1-\frac{1-n}{v}\right)\end{aligned}$$ where $U_{tot}$ is the total energy per site, $U_{FM}$ is the FM energy per site and its argument is the particle density in the FM regions ($n$ is the total particle density), $U_{AFM}$ is the AFM energy per site, and $v=V_{FM}/V_{tot}$ is the FM volume fraction of the total system size. Minimizing the total energy with respect to $v$ yields the following condition for the critical electron density $n_c$ at which electronic phase separation sets in (i.e. $v=1$): $$\label{E_minimum} U_{AFM}=U_{FM}(n_c)+(1-n_c)U'_{FM}(n_c)\;.$$ $U'_{FM}$ is the derivative of $U_{FM}$ with respect to the particle density. Note that we consider the electron density and not the hole density in the FKLM so that the corresponding formula in Ref. is modified accordingly. If one varies the chemical potential $\mu$ continuously, the electron densities at which phase separation is present correspond to band occupations that cannot be stabilized as, e.g., demonstrated in the MC simulations in Ref. . Given the jump in the electron density on varying $\mu$, this kind of transition appears to be first-order. However, [*enforcing*]{} any value of $n$, as in our case, and having in mind the picture of AFM regions gradually taking over the whole system, the transition from FM to AFM rather appears to be continuous.

Results and Discussion
======================

We computed phase diagrams for different spin quantum numbers $S$ for a simple cubic lattice, a square lattice, and a 1D chain at zero temperature. To simulate a classical spin, i.e. a spin that can be oriented in any direction in space, we used a spin quantum number $S=10$. In order to obtain a FM-PM-PS phase diagram we evaluated the total energy at T=0 for the saturated ferromagnetic, for the paramagnetic, and for the antiferromagnetic ($n=1$) spin configuration of the core spins, respectively, as a function of the occupation number $n$ and for several values of the Hund coupling $J$. To compare the results for different $S$ we take the proper scaling $\propto JS$ of the interaction into account and normalize the localized spins. Thus in the following we consider localized spins ${\bf S}/S$ coupled to itinerant electrons by $JS$. Before starting the discussion of our results we should add two remarks. First, it is well known that 1D systems exhibit some peculiarities; for example, an integer-spin nearest-neighbor Heisenberg chain has a gap in its excitation spectrum ([*Haldane gap*]{}). Moreover, non-local correlations are important, whereas our approach is based on a local self-energy. However, our approximate theory is applicable for any finite dimension and thus we consider it worthwhile to present results for $D=1$, too. Secondly, we have to address the issue of anisotropy. The Mermin-Wagner theorem[@MW66] forbids spontaneous symmetry breaking in $D<3$ for an isotropic Hamiltonian like (\[Hklm\]) at finite temperatures. In order for our results to be relevant at small temperatures, too, we have to add an (infinitesimally) small anisotropy term, e.g. a single-ion anisotropy taking spin-orbit coupling into account. Being orders of magnitude smaller than the leading energy scale in our system, the Hund coupling $J$, it will not alter the phase boundaries visibly. There is another benefit of adding an anisotropy to (\[Hklm\]).
On decreasing $S$ the assumption of a Néel state for the antiferromagnetic phase becomes questionable due to quantum fluctuations. In 1D this approximation even breaks down completely. These fluctuations are suppressed by anisotropy.[@Faz99] Fig. \[U1\] shows the total energy of the FM and PM phases as a function of the band occupation $n$ and of the AFM phase at $n=1$. The result of the right hand side of Eq. (\[E\_minimum\]) is also plotted. The spin is $S=1/2$, corresponding to a maximum of quantum fluctuations. The paramagnetic ground state at $n_{c,1}=0.45$ emerges well before the criterion (\[E\_minimum\]) for PS is fulfilled at $n_{c,2}=0.72$, indicating an instability of ferromagnetism against spin disorder. In our calculations we did not find a second transition from PM to PS for $S=1/2$ at $n>n_{c,1}$. On the other hand, for $S\ge 1$ and the values of $J$ we considered ($JS/t\ge6$) we find that the critical value of $n$ for the onset of phase separation is always lower than the electron density where FM becomes unstable against PM, i.e. $n_{c,2}<n_{c,1}$. To further analyse the FM-PM transition for $S=1/2$ we show in Fig. \[U2\] more results for the total energy and compare them to calculations based on classical localized spins. Whereas the PM energy is lower for the quantum spin over the whole range of electron densities, the FM energies are practically the same for both spins. This can be related to the quasiparticle excitation spectrum. As can be seen from (\[Utot\]), the total energy of the FKLM is determined in complete formal analogy to the free electron case, i.e. by the (quasiparticle) density of states. Our findings suggest the following picture: In the FM phase there is a parallel alignment of the conduction electrons with respect to the localized spins. Given a saturated FM spin background it is not possible for an itinerant electron to flip its spin by spin exchange. There is no significant occupation in the $\downarrow$-band of the spectrum for any $S$. This is, at least for high Hund coupling, consistent with the excitation spectra we obtained (see FM QDOS in the inset of Fig. \[qdos\_PM\]). For the $\uparrow$-electrons the localized moments act as an effective field and their quasiparticle density of states (QDOS) is merely the Bloch density of states shifted by $-\frac{1}{2}JS$. Within this picture the scaling by $JS$ is expected to transfer directly to the total energy, which is indeed the case as can be seen by the practically identical curves in Fig. \[U2\] (right). In other words, the scaling of the FM energy with $JS$ expresses the fact that spin waves are frozen. The situation is different in the paramagnetic case. Now there are (energetically unfavored) states for $\uparrow$- and $\downarrow$-electrons with an antiparallel orientation to the localized spins. Whereas the lower energy band in Fig. \[qdos\_PM\] has a “parallel character” of localized and itinerant magnetic moment, the upper band corresponds to an antiparallel orientation. As there are more eigenstates with a parallel spin-spin alignment, one expects larger spectral weight of the corresponding peaks in the excitation spectrum. The lower the magnitude of the localized spin, the larger this difference becomes: the parallel case “outweighs” the antiparallel case most dominantly for $S=1/2$ (“triplet” vs. “singlet”). This can immediately be verified in the zero bandwidth limit.[@Nol7] There is indeed a higher spectral weight for low spin quantum numbers, as can be seen in Fig. \[qdos\_PM\].
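A simple counting argument makes this plausible (this is only the multiplet-counting estimate for the zero-bandwidth, nearly empty band limit quoted above, not the full MCDA weight): coupling the electron spin to a localized spin $S$ gives $2S+2$ “parallel” ($S_{tot}=S+\frac{1}{2}$) and $2S$ “antiparallel” ($S_{tot}=S-\frac{1}{2}$) states, i.e. relative weights $(S+1)/(2S+1)$ and $S/(2S+1)$:

```python
from fractions import Fraction

def pm_weights(S):
    """Multiplet-counting estimate of the spectral weights of the
    'parallel' and 'antiparallel' bands in the paramagnetic phase
    (zero-bandwidth, nearly empty band)."""
    S = Fraction(S)
    w_par = (S + 1) / (2 * S + 1)
    w_anti = S / (2 * S + 1)
    return w_par, w_anti

for S in (Fraction(1, 2), 1, Fraction(3, 2), 10):
    w_par, w_anti = pm_weights(S)
    print(f"S={S}:  parallel {float(w_par):.3f}  antiparallel {float(w_anti):.3f}")
```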
On the other hand, increasing $S$ results in an equal distribution of spectral weight for both bands in the classical limit.[@remark3] Thus, the paramagnetic state for lower magnitudes of the localized spins has lower total energy, as is observed in Fig. \[U2\] (left). Let us now proceed to the discussion of the magnetic phase diagrams we obtained in the spatial dimensions $D=1$, 2, and 3. Fig. \[pd\_1D\] shows the phase diagram in $D=1$. The phase boundaries indicate the transition from FM to the phase separated FM-AFM regime except for the case $S=1/2$, which gives a FM-PM transition. For comparison we added results by other authors obtained by numerical methods. Our findings give the same overall picture as earlier works, with an increasing FM region as the Hund coupling becomes stronger. However we find that the $S=1/2$-FKLM has a significantly reduced ferromagnetic stability due to its “early” transition to a PM state compared to higher localized spin quantum numbers. A second important feature is that we do not see any major differences in our phase boundaries for $S>1/2$. Our results compare well with the MC numerical phase diagram (classical spins) and with the DMRG results ($S=3/2$). There is no considerable variation of the phase boundary for $S=3/2$ and $S=\infty$ either.[@Yun98] The deviations are largest for $S=1/2$: here our theory appears to underestimate the FM region. However, the authors of Ref. attribute the fact that their FM region is reduced as compared to Ref. to the higher number of lattice sites they include in their computation. Hence it would be desirable to have more DMRG results with larger system sizes to see how the phase boundaries change. We did not find a second transition from PM to PS for $S=1/2$, nor did we take spiral or incommensurate correlations into account as was done in Ref. and Ref. . However we can state that the reduction of the [*maximal*]{} region of FM for $S=1/2$ with respect to higher spin quantum numbers is consistent with what we learn from the results obtained by other approaches. Figs. \[pd\_2D\] and \[pd\_3D\] show the phase diagrams for 2D and 3D, respectively.[^1] They give essentially the same picture as in 1D. Again we observe an instability of the FM phase against PM for $S=1/2$ only, reducing the region of FM stability as compared with higher $S$. There is no significant change of the phase boundary for $S>1/2$ apart from some enhancement of FM for classical core spins and larger Hund couplings in all dimensions. The $S=\infty$ results in 2D are in accordance with MC simulations in Ref. which yield FM for the full range of band filling at large $J$. We note a slight enlargement of the PS region in 3D for all $S$. However we do not want to emphasize the quantitative differences too much. As was already pointed out in Ref. the small differences in the energies of the different phases lead to uncertainties in the exact location of the phase boundaries. In our case we estimate these error bars to be about 10% with respect to the corresponding electron density $n$. For the same reason we are careful not to put too much significance into the $D$-dependence of the crossing points of our $S=3/2$ and classical $S$ phase boundaries. However it is interesting to note that there is a crossing in all dimensions. As before, our findings agree well with those published by other authors who used different methods. There is one exception: compared to the other results the DMFT phase boundaries (however, for a $D=\infty$-Bethe lattice) from Ref.
seem to overestimate FM considerably. We conclude that the magnetic properties of the FKLM at strong coupling and as far as the magnetic phases we investigated are concerned are rather insensitive to variations of the spatial dimension, at least at T=0. This falls in line with other results obtained using classical localized spins.[@Dag98] Summary and Outlook =================== We have presented magnetic phase diagrams for the ferromagnetic Kondo lattice model in $D=1,2,$ and $3$. Our method is based on an equation of motion decoupling procedure fulfilling exact relations among Green functions and among their spectral moments. It does not require the assumption of classical spins. To determine the phase boundaries we computed and compared the total energy of the different phases at zero temperature. There are three main results. First the case $S=1/2$ appears to be special exhibiting a reduced region of ferromagnetism in the $J$-$n$-plane due to an instability of FM against spin disorder. Increasing the electron density this transition always takes place before FM/AFM phase separation can occur. Secondly the phase boundaries for $S>1/2$ appear to be quite robust with respect to changes of the magnitude of the localized spin. This supports the widespread usage of classical localized spins in the treatment of the FKLM. Finally, these two features are recovered and quantitatively similar in the phase diagrams in all dimensions we investigated, namely 1D, 2D, and 3D. To our knowledge there are no numerical phase diagram results in 2D and 3D with quantum localized spins as this is numerically a quite demanding task. It would be interesting to explore if the same trends as in 1D hold for $S=1/2$ using numerically exact methods like DMRG. It should also be a worthwhile task to examine the influence of the spin magnitude at higher temperatures up to the Curie temperature (in the vicinity of which colossal magnetoresistance occurs in the manganites). Phase boundaries may of course change to a certain extent with the crystal lattice structure, i.e. Bloch density of states. More changes can be expected when including a finite next-nearest neighbor hopping integral. Finally we focussed on the phases thought to be relevant for the intermediate to strong-coupling regime and left out other phases that come into play in the weak-coupling case. These issues are left for further investigation. 
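For completeness, a minimal sketch of how the phase separation criterion (\[E\_minimum\]) is evaluated in practice; the energy curves below are made-up placeholders standing in for the computed $U_{FM}(n)$ and $U_{AFM}$, and only the root-finding step is meant to be illustrative:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.interpolate import CubicSpline

# made-up placeholder data for U_FM(n) and U_AFM (per site, units of t);
# in the actual calculation both come from Eq. (Utot) for each phase
n_grid = np.linspace(0.0, 1.0, 11)
U_FM = CubicSpline(n_grid, -2.0 * n_grid * (1.0 - 0.45 * n_grid))
U_AFM = -1.35

def g(n):
    # Eq. (E_minimum): U_AFM - U_FM(n) - (1 - n) U'_FM(n) = 0 at n = n_c
    return U_AFM - U_FM(n) - (1.0 - n) * U_FM(n, 1)

n_c = brentq(g, 0.05, 0.95)
print(f"phase separation sets in at n_c ~ {n_c:.2f}")
```

In the actual calculation the input curves are obtained from Eq. (\[Utot\]) for each phase, as described in Sec. II.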
Appendix A: Interpolation coefficients {#appendix-a-interpolation-coefficients .unnumbered} ====================================== Exploiting spectral moment relations leads to the following coefficients in the approximations (\[F1Interpolation\])-(\[F4Interpolation\]) for the higher order local Green functions: $$\begin{aligned} \label{eq:interpolkoeff1} \alpha_{1\sigma}&=&0\\ \label{eq:interpolkoeff2} \beta_{1\sigma}&=&\frac{K_{1\sigma}+4\Delta_{-\sigma}-3z_{\sigma}\mu_{-\sigma}-\eta_{\sigma}}{\langle S^{-\sigma}S^{\sigma}\rangle+2z_{\sigma}\Delta_{-\sigma}-\gamma_{\sigma}}\\\nonumber\\ \label{eq:interpolkoeff3} \alpha_{2\sigma}&=&\langle S^{-\sigma}S^{\sigma}\rangle-\beta_{2\sigma}\langle S^{z}\rangle\\ \label{eq:interpolkoeff4} \beta_{2\sigma}&=&\frac{K_{2\sigma}+2\eta_{\sigma}}{\langle (S^{z})^{2}\rangle-\langle S^{z}\rangle^{2}-\gamma_{\sigma}}\\\nonumber\\ \label{eq:interpolkoeff5} \alpha_{3\sigma}&=&-\gamma_{\sigma}\\ \label{eq:interpolkoeff6} \beta_{3\sigma}&=&\frac{\mu_{\sigma}-z_{\sigma}\eta_{\sigma}+2z_{\sigma}\vartheta+z_{\sigma}\gamma_{\sigma}\langle S^{z}\rangle}{\langle S^{-\sigma}S^{\sigma}\rangle+2z_{\sigma}\Delta_{-\sigma}-\gamma_{\sigma}}\\\nonumber\\ \label{eq:interpolkoeff7} \alpha_{4\sigma}&=&\Delta_{-\sigma}+\beta_{4\sigma}\langle S^{z}\rangle\\ \label{eq:interpolkoeff8} \beta_{4\sigma}&=&\frac{z_{\sigma}K_{3\sigma}-\mu_{-\sigma}-z_{\sigma}\eta_{\sigma}}{\langle (S^{z})^{2}\rangle-\langle S^{z}\rangle^{2}-\gamma_{\sigma}} \end{aligned}$$ \ with the abbreviations: $$\begin{aligned} K_{1\sigma}&=&3z_{\sigma}\langle S^{\sigma}S^{-\sigma}\rangle+(S(S+1)-4)\langle S^{z}\rangle +z_{\sigma}\langle (S^{z})^{2}\rangle \nonumber\\ &&-2z_{\sigma}S(S+1)(1-\langle n_{-\sigma}\rangle) -\langle(S^{z})^{3}\rangle\\ K_{2\sigma}&=&\left(S(S+1)-\langle S^{-\sigma}S^{\sigma}\rangle\right)\langle S^{z}\rangle \nonumber\\ &&-z_{\sigma}\langle (S^{z})^{2}\rangle-\langle(S^{z})^{3}\rangle\\ K_{3\sigma}&=&z_{\sigma}S(S+1)\langle n_{-\sigma}\rangle+\Delta_{-\sigma}(1-z_{\sigma}\langle S^{z}\rangle)\end{aligned}$$ The mixed expectation values $$\begin{aligned} \label{gamma} \gamma_{\sigma}&=&\langle S^{-\sigma}c^{\dag}_{\sigma}c_{-\sigma}\rangle\\ \label{Delta} \Delta_{\sigma}&=&\langle S^{z}n_{\sigma}\rangle\\ \mu_{\sigma}&=&\langle S^{-\sigma}S^{\sigma}n_{\sigma}\rangle\\ \eta_{\sigma}&=&\langle S^{-\sigma}S^{z}c_{\sigma}^{\dag}c_{-\sigma}\rangle\\ \vartheta&=&\langle S^{z}n_{\sigma}n_{-\sigma}\rangle\end{aligned}$$ can all be evaluated with the corresponding Green functions using the spectral theorem. Appendix B: Localized spin expectation values {#appendix-b-localized-spin-expectation-values .unnumbered} ============================================= It holds for ferromagnetic saturation: $$\begin{aligned} \label{eq:spinerwartungswerteferro} \langle{(S^{z})^2}\rangle &=&S^2\\ \langle{(S^z)^3}\rangle &=&S^3\\ \langle{S^{-\sigma}S^{\sigma}}\rangle &=&S(1-z_{\sigma})\end{aligned}$$ and for the paramagnetic phase: $$\begin{aligned} \label{eq:spinerwartungswertepara} \langle{(S^{z})^2}\rangle &=&\frac{1}{3}S(S+1)\\ \langle{(S^z)^3}\rangle &=&0\\ \langle{S^{-\sigma}S^{\sigma}}\rangle &=&\frac{2}{3}S(S+1)\end{aligned}$$ [14]{} C. Zener, Phys. Rev. [**81**]{}, 440 (1951); Phys. Rev. [**82**]{}, 403 (1951) P. W. Anderson and H. Hasegawa, Phys. Rev. [**100**]{}, 675 (1955) A. P. Ramirez, J. Phys.: Condens. Matter [**9**]{}, 8171 (1997) S. Satpathy, Z. S. Popovic, and F. R. Vukajlovic, Phys. Rev. Lett. [**76**]{}, 960 (1996) M. Quijada, J. Cerne, J. R. Simpson, H. D. Drew, K. H. Ahn, A. J. Millis, R. 
Shreekala, R. Ramesh, M. Rajeswari, and T. Venkatesan, Phys. Rev. B [**58**]{}, 16093 (1998) E. Dagotto, Nanoscale Phase Separation and Colossal Magnetoresistance, The Physics of Manganites and Related Compounds, Springer Series in Solid-State Sciences [**136**]{} (2003) H. Ohno, J. Magn. Magn. Mat. [**200**]{}, 110 (1999) I. $\breve{Z}$uti$\acute{c}$, J. Fabian, and S. Das Sarma, Rev. Mod. Phys. [**76**]{}, 323 (2004) E. H. Hwang and S. Das Sarma, Phys. Rev. B [**72**]{}, 035210 (2005) K. Nagai, T. Momoi, and K. Kubo, J. Phys. Soc. Jpn. [**69**]{}, 1837 (2000) A. Chattopadhyay, A. J. Millis, and S. Das Sarma, Phys. Rev. B [**64**]{}, 012416 (2001) S. Yunoki, J. Hu, A. L. Malvezzi, A. Moreo, N. Furukawa, and E. Dagotto, Phys. Rev. Lett. [**80**]{}, 845 (1998) E. Dagotto, S. Yunoki, A. L. Malvezzi, A. Moreo, J. Hu, S. Capponi, D. Poilblanc, and N. Furukawa, Phys. Rev. B [**58**]{}, 6414 (1998) L. Yin, Phys. Rev. B [**68**]{}, 104433 (2003) M. Gul$\rm \acute{a}$csi, Adv. Phys. [**53**]{}, 769 (2004) D. Pekker, S. Mukhopadhyay, N. Trivedi, and P. M. Goldbart, Phys. Rev. B [**72**]{}, 075118 (2005) H. Aliaga, B. Normand, K. Hallberg, M. Avignon, and B. Alascio, Phys. Rev. B [**64**]{}, 024422 (2001) D. J. Garcia, K. Hallberg, B. Alascio, and M. Avignon, Phys. Rev. Lett. [**93**]{}, 177204 (2004) In the FKLM the total energy can be related to the local one-electron Green function. This is due to the bilinearity of the Hamiltonian in the fermionic operators. In other models, e.g. the Hubbard model, the corresponding formula is slightly more complicated and contains non-local terms. C. Santos and W. Nolting, Phys. Rev. B [**65**]{}, 144419 (2002) R. Schiller and W. Nolting, Phys. Rev. Lett. [**86**]{}, 3847 (2001) C. Santos, W. Nolting, and V. Eyert, Phys. Rev. B [**69**]{}, 214412 (2004) W. Nolting, S. Rex, and S. Mathi Jaya, J. Phys.: Condens. Matter [ **9**]{}, 1301 (1997) W. Nolting, Grundkurs Theoretische Physik 7, Viel-Teilchen-Theorie, Springer Berlin (2005) W. Nolting, S. M. Jaya, and S. Rex, Phys. Rev. B [**54**]{}, 14455 (1996) By PS we always denote the coexistence of FM and AFM phases in the system. N. D. Mermin and H. Wagner, Phys. Rev. Lett. [**17**]{}, 1133 (1966) P. Fazekas, Lecture Notes on Electron Correlation and Magnetism, Series in Modern Condensed Matter Physics Vol. 5, World Scientific (1999) The different centers of gravity of the upper band are due to a different scaling behavior. In the zero bandwidth limit[@Nol7] the “antiparallel” excitation level scales as $J(S+1)$ rather than $JS$. In the framework of our theory the upper energy band can be interpreted in fact as a mixture of two bands, where one can be identified with the already mentioned parallel alignment of an electron and a localized magnetic moment and the other is linked to the break-up of a parallel coupling by a second conduction electron.[@JK1] J. Kienert, “Das korrelierte Kondo-Gitter-Modell”, Diplomarbeit, Humboldt-Universität zu Berlin (2001) [^1]: We point out that the 2D results taken from Ref. into Ref. for comparison are larger than the originally published data by a factor 2.
--- abstract: 'Vector modes are spatial modes that have spatially inhomogeneous states of polarization, such as, radial and azimuthal polarization. They can produce smaller spot sizes and stronger longitudinal polarization components upon focusing. As a result, they are used for many applications, including optical trapping and nanoscale imaging. In this work, vector modes are used to increase the information capacity of free space optical communication via the method of optical communication referred to as mode division multiplexing. A mode (de)multiplexer for vector modes based on a liquid crystal technology referred to as a $q$-plate is introduced. As a proof of principle, using the mode (de)multiplexer four vector modes each carrying a 20 Gbit/s quadrature phase shift keying signal on a single wavelength channel ($\lambda \sim $1550nm), comprising an aggregate 80 Gbit/s, were transmitted $\sim$1m over the lab table with $<$-16.4 dB ($\sim 2 \%$) mode crosstalk. Bit error rates for all vector modes were measured at the forward error correction threshold with power penalties $<$ 3.41dB.' author: - 'Giovanni Milione,$^{1,2,3,4*}$' - 'Martin P. J. Lavery$^5$' - Hao Huang$^6$ - Yongxiong Ren$^6$ - Guodong Xie$^6$ - 'Thien An Nguyen$^{1,2}$' - 'Ebrahim Karimi$^{7}$' - 'Lorenzo Marrucci$^{4,8}$' - 'Daniel A. Nolan$^{4,9}$' - 'Robert R. Alfano$^{1,2,3,4}$' - 'Alan E. Willner$^6$' title: '4 $\times$ 20 Gbit/s mode division multiplexing over free space using vector modes and a $q$-plate mode (de)multiplexer' --- Introduction ============ ![image](Fig1){width="\linewidth"} Mode division multiplexing (MDM) is the method of optical communication where spatial modes are used as information channels carrying independent data streams. In general, MDM can potentially increase the information capacity of optical communication in an amount proportional to the number of modes used. MDM has been used in optical fiber communication; for comprehensive reviews see [@Richardson2013; @Ryf2012]. Potentially, MDM can also be used in free space optical communication (FSO) [@Acampora2002; @Willebrand2001]. Polarization division multiplexing and wavelength division multiplexing have been used to increase the information capacity of FSO [@Cvijetic2010; @Ciaramella2009]. By definition, spatial modes are solutions to Maxwell’s wave equation and can be represented by many bases [@Black2009]. In principle, any basis can be used for MDM. For example, what is sometimes referred to as the basis of orbital angular momentum (OAM) modes has been used to increase the information capacity of FSO and free space communication with millimeter waves [@Wang2012; @Huang2014a; @Yan]. There is also a basis sometimes referred to as the basis of vector modes or vector beams. Vector modes are spatial modes that have spatially inhomogeneous states of polarization, such as, radial and azimuthal polarization. They can produce smaller spot sizes and stronger longitudinal polarization components upon focusing. As a result, they are used for many applications, including optical trapping and nanoscale imaging [@Brown2; @Zhan]. In this work, vector modes are used to increase the information capacity of FSO via MDM. A mode (de)multiplexer for vector modes based on a liquid crystal technology referred to as a $q$-plate is introduced. 
As a proof of principle, using the mode (de)multiplexer four vector modes, each carrying a 20 Gbit/s quadrature phase shift keying signal on a single wavelength channel ($\lambda \sim $1550nm), comprising an aggregate 80 Gbit/s, were transmitted $\sim$1m over the lab table, with $<$-16.4 dB ($\sim 2 \%$) mode crosstalk. Bit error rates for all vector modes were measured at the forward error correction threshold with power penalties $<$ 3.41dB.

Vector modes {#vector-modes .unnumbered}
============

A vector mode is defined here as a solution to the wave equation in free space whose electric field is given by the equation [@Brown2; @Zhan]: $$\begin{aligned} \mathrm{ \bf E }(r,\phi) &=& f(r) \mathrm{ \bf V}_{\ell, \gamma}(\phi),\end{aligned}$$ where $(r, \phi )$ are cylindrical coordinates and $f(r)$ is a solution to the radial part of the wave equation, e.g., Bessel-Gaussian [@Bessel] or Laguerre-Gaussian functions. $\mathrm{ \bf V}_{\ell, \gamma}(\phi)$ is a Jones vector describing the spatially inhomogeneous states of polarization of a vector mode given by the equation [@Stalder]: $$\begin{aligned} \mathrm{ \bf V}_{\ell, \gamma}(\phi) &=& \left(\begin{array}{c} \cos( \ell \phi + \gamma ) \\ \sin( \ell \phi + \gamma ) \end{array}\right),\end{aligned}$$ where $\ell = 0, \pm 1, \pm 2, ...$, and $\gamma = 0, \pi/2$. $\mathrm{ \bf V}_{+1,0}$ and $\mathrm{ \bf V}_{+1,\pi/2} $ are referred to as radial and azimuthal polarization and are shown in Fig. 1(a1) and Fig. 1(a2), respectively. $\mathrm{ \bf V}_{-1,0}$ and $\mathrm{ \bf V}_{-1,\pi/2}$ are shown in Fig. 1(b1) and Fig. 1(b2), respectively. The basis of vector modes comprises light’s space and polarization degrees of freedom; the number of channels available for MDM when using vector modes is the same as when using, for example, OAM modes together with polarization division multiplexing.

$q$-plate mode (de)multiplexer
==============================

To use vector modes for MDM, a mode (de)multiplexer for vector modes is required. In general, a mode (de)multiplexer (separates) combines $N$ spatial modes at the (receiver) transmitter of an optical communication system. For example, a mode (de)multiplexer for OAM modes that has been demonstrated is based on passive beam (separation) combining [@Wang2012; @Huang2014a]. In passive beam (separation) combining, data streams from $N$ single mode optical fibers (SMFs) are transformed into $N$ OAM modes, or vice versa, via matching their wavefronts with $N$ spatial light modulators (SLMs). They are then (separated) combined via $N-1$ beam splitters. SLMs can also be used to generate vector modes [@Monika; @Kimane]. However, there are many methods to generate vector modes; for example see [@Hasman1; @Milionea; @Milioneb; @Milionec]. Here, a liquid crystal technology referred to as a $q$-plate is used [@Marruccia]. Analogous to how an SLM matches an OAM mode’s wavefront, a $q$-plate matches a vector mode’s spatially inhomogeneous state of polarization [@Marruccib]. A $q$-plate comprises a thin layer of patterned liquid crystal molecules between two thin glass plates. The molecules’ orientations are described by $q \phi$, where $\phi$ is the azimuthal coordinate and $q$ is a half-integer. $q=+1/2$ and $q=-1/2$ plates are schematically shown in Fig. 1(a) and Fig. 1(b), respectively.
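As an illustration of these two patterns (a minimal sketch based only on Eq. 2 and the definition of $q$; the ring discretization and segment length are arbitrary plotting choices, not experimental parameters), the local optic-axis orientation $q\phi$ of the $q=\pm1/2$ plates and the polarization orientation $\ell\phi+\gamma$ of a vector mode can be drawn as short line segments:

```python
import numpy as np
import matplotlib.pyplot as plt

def orientation_field(angle_of_phi, n=12, r0=1.0):
    """Short line segments whose local orientation is angle_of_phi(phi),
    drawn on a ring of radius r0 (discretization is arbitrary)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = r0 * np.cos(phi), r0 * np.sin(phi)
    a = angle_of_phi(phi)
    dx, dy = 0.15 * np.cos(a), 0.15 * np.sin(a)
    return x, y, dx, dy

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
cases = [
    ("q = +1/2 director", lambda p: 0.5 * p),    # liquid crystal optic axis, q*phi
    ("q = -1/2 director", lambda p: -0.5 * p),
    ("V_{+1,0} (radial)", lambda p: p),          # Eq. 2: ell*phi + gamma
]
for ax, (title, fun) in zip(axes, cases):
    x, y, dx, dy = orientation_field(fun)
    for xi, yi, dxi, dyi in zip(x, y, dx, dy):
        ax.plot([xi - dxi, xi + dxi], [yi - dyi, yi + dyi], "k-")
    ax.set_title(title)
    ax.set_aspect("equal")
    ax.axis("off")
plt.tight_layout()
plt.show()
```

The azimuthally polarized mode $\mathrm{ \bf V}_{+1,\pi/2}$ follows from the same construction with $\gamma=\pi/2$ added to the orientation angle.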
Effectively, a $q$-plate is a half-wave plate with an azimuthally varying fast axis and can be mathematically represented by a Jones matrix [@Marruccia]: $$\hat{Q} = \left(\begin{array}{cc} \cos( 2 q \phi ) & \sin( 2 q \phi) \\ \sin( 2 q \phi ) & -\cos( 2 q \phi ) \end{array}\right).$$ It can be shown using Jones calculus that, upon propagation through a $q$-plate, the state of polarization of the fundamental mode of an SMF, given by the Jones vector $(\alpha \hspace{2mm} \beta)^T$ ($|\alpha|^2 + |\beta|^2 = 1; \alpha,\beta \in \mathbb{C}$, i.e., a linear combination of horizontal and vertical polarization), is transformed into a linear combination of vector modes, or vice versa, given by the equation: $$\begin{aligned} \alpha \mathrm{ \bf V}_{2q, 0}(\phi) + \beta \mathrm{ \bf V}_{2q, \pi/2}(\phi).\end{aligned}$$ Using a $q=+1/2$ or $q=-1/2$ plate, horizontal/vertical polarization can be transformed into $\mathrm{ \bf V}_{+1, 0}(\phi) / \mathrm{ \bf V}_{+1, \pi/2}(\phi)$ or $\mathrm{ \bf V}_{-1, 0}(\phi) / \mathrm{ \bf V}_{-1, \pi/2}(\phi)$ vector modes, or vice versa, as shown in Fig. 1(a) and Fig. 1(b), respectively. Two $q$-plates, one with $q=+1/2$ and one with $q=-1/2$, were fabricated. The fabrication details can be found in [@Marruccic; @Marruccid]. Using the $q$-plates, a mode (de)multiplexer for vector modes based on passive beam (separation) combining was constructed as shown in Fig. 2. As a mode multiplexer, the fundamental modes of two SMFs at $\lambda \sim$1550nm were collimated to beam waists of $\sim$3mm using appropriate lenses (L) and then made to propagate through the $q$-plates. Then, the light beams from each SMF were aligned co-linearly via a non-polarizing beam splitter (BS). The $q$-plates were connected to an electronic signal generator generating a 1kHz square wave with a tunable voltage. The $q$-plates could be “turned on" and “turned off" by controlling the voltage. Each $q$-plate was “turned on" ($\sim$1.2 Volts). Note that there is a -1.16dB loss when the $q$-plate is “turned on." $\mathrm{ \bf V}_{+1, 0}(\phi) / \mathrm{ \bf V}_{+1, \pi/2}(\phi)$ and $\mathrm{ \bf V}_{-1, 0}(\phi) / \mathrm{ \bf V}_{-1, \pi/2}(\phi)$ vector modes as generated using the mode (de)multiplexer are shown in Fig. 1(a1) / Fig. 1(a2) and Fig. 1(b1) / Fig. 1(b2), respectively, along with their intensities as analyzed by a linear polarizer whose transmission axis is oriented at 0, 45, 90, and 135 degrees with respect to the lab.

![Mode (de)multiplexer for vector modes based on passive beam (separation) combining as described in the text. Green arrows indicate the light beam’s direction of propagation. ](Fig2){width="\linewidth"}

![image](Fig3){width="\linewidth"}

Mode division multiplexing using vector modes {#mode-division-multiplexing-using-vector-modes .unnumbered}
=============================================

Using the mode (de)multiplexer, four vector modes, each carrying a 20 Gbit/s quadrature phase shift keyed (QPSK) signal on a single wavelength channel ($\lambda \sim$1550nm), comprising an aggregate 80 Gbit/s, were transmitted as one light beam $\sim$1m over the lab table. Note that over this distance there is negligible atmospheric turbulence and beam divergence. The experimental setup is shown in Fig. 3. The transmitter was as follows. First, the output of an external cavity laser (ECL) tuned to $\lambda\sim1550$nm was modulated by an I-Q modulator to generate a 20 Gbit/s QPSK signal.
The I-Q modulator was driven by a two-channel pattern generator generating two 10 Gbaud signals, which were decorrelated pseudo-random binary bit sequences of length $2^{15} - 1$ and comprised the in-phase (I) and quadrature (Q) components of the QPSK signal. The signal was then amplified by a low-noise erbium doped fiber amplifier (EDFA) and polarization division multiplexed (PDM) by splitting and decorrelating it in two SMFs, making the states of polarization in each SMF mutually orthogonal (horizontal/vertical polarization), and then recombining them into one SMF. Finally, the resultant PDM-QPSK signal was mode multiplexed by again splitting and decorrelating it in two SMFs which were then connected to the SMFs of the mode (de)multiplexer, resulting in four vector modes each carrying a 20 Gbit/s QPSK signal. At the mode (de)multiplexer, due to twists and bends in the SMFs, the channels of each PDM-QPSK signal are not necessarily horizontal and vertical polarization. However, they can be aligned using a polarization paddle controller (Pol-Con). Nonetheless, each channel is a linear combination of horizontal and vertical polarization as represented by two arbitrary antipodal points on the Poincaré sphere. Therefore, as given by Eq. 4, after propagation through a $q$-plate each channel of a PDM-QPSK signal is transformed into a linear combination of vector modes as represented by two arbitrary antipodal points on a higher-order Poincaré sphere [@Milione2011; @Milione2; @Aiello]. The channels originating from the $q=+1/2$ and $q=-1/2$ plates are labelled Ch1/Ch2 and Ch3/Ch4 as shown in Fig. 1(a4) and Fig. 1(b4), respectively. The intensities of Ch1/Ch2 and Ch3/Ch4, as recorded using an InGaAs camera, are shown in Fig. 3(a) and Fig. 3(b), respectively. After transmission over the lab table, the receiver was as follows. First, the channels were mode demultiplexed into the two SMFs of another mode (de)multiplexer. Note that Ch1/Ch2 and Ch3/Ch4 were received and discriminated via coupling to both SMFs. In accordance with Eq. 3, upon propagation through a $q=+1/2$ plate, $\mathrm{ \bf V}_{+1, 0}(\phi) / \mathrm{ \bf V}_{+1, \pi/2}(\phi)$ vector modes are transformed into horizontal/vertical polarization while $\mathrm{ \bf V}_{-1, 0}(\phi) / \mathrm{ \bf V}_{-1, \pi/2}(\phi)$ vector modes are transformed into higher-order vector modes. The intensities of Ch1/Ch2 and Ch3/Ch4 at the SMF corresponding to the $q=+1/2$ $q$-plate, as recorded using an InGaAs camera, are shown in Fig. 3(c) and Fig. 3(d), respectively. As can be seen, Ch1/Ch2 were transformed back into the fundamental mode of an SMF, i.e., a PDM-QPSK signal, and were coupled to the SMF. However, Ch3/Ch4 were transformed into higher-order vector modes and were not coupled to the SMF. The opposite is true for the $q=-1/2$ $q$-plate. The intensities of Ch1/Ch2 and Ch3/Ch4 at the SMF corresponding to the $q=-1/2$ $q$-plate, as recorded using an InGaAs camera, are shown in Fig. 3(e) and Fig. 3(f), respectively. Next, each resulting PDM-QPSK signal was amplified by an EDFA and aligned via a Pol-Con to a polarization diversity coherent receiver where it was polarization demultiplexed and coherently detected. The polarization diversity coherent receiver comprised a polarizing splitter, followed by two optical hybrids, connected to another ECL (“local oscillator”). Via intradyne detection, the analog QPSK signals were converted to digital signals by four balanced detectors.
Using four digital oscilloscopes, the resultant digital signals were captured at 40 GSample/s ($\Delta$20 GHz) for offline digital signal processing (DSP). The captured constellations of the QPSK signals for all channels are shown in Fig. 4(a).

Mode crosstalk, bit error rates, and power penalties {#mode-crosstalk-bit-error-rates-and-power-penalties .unnumbered}
====================================================

Mode crosstalk (MC) is defined as the transfer of power (signals) between channels. MC for each channel was measured at the mode demultiplexer by transmitting one channel at a time and measuring the power received by each of all channels. A normalized MC “matrix", comprising the MC measurements, is shown in Fig. 4(b). There is $<$-16.4 dB ($\sim2\%$) MC for any channel. As can be seen, via the off-diagonal blocks, the greatest contribution to MC for any channel came from channels originating from different $q$-plates. As there is negligible atmospheric turbulence and beam divergence over the transmitted distance, this is attributed to mismatch of the numerical apertures and misalignment of the lenses with the SMFs of the mode demultiplexer. MC may also originate from “mode impurity," i.e., power generated in vector modes other than those intended. A preliminary analysis indicates that the vector modes generated via the $q$-plates of the mode multiplexer have $< - 21.0$dB $(<1\%)$ mode impurity. A detailed analysis of the measurement of mode impurity will be reported elsewhere. For each channel and a “back to back" (B2B) channel, bit error rates (BER) were measured as a function of optical signal to noise ratio (OSNR) when all channels were simultaneously transmitted. BER vs. OSNR curves are shown in Fig. 4(c). The B2B channel is the original QPSK signal, transmitted and received, when the $q$-plates are “turned off." The performance of each channel can be assessed via its power penalty (PP), i.e., the difference in OSNR between each channel and the B2B channel at the forward error correction (FEC) threshold (BER = $3.8\times10^{-3}$). The PPs for all channels and their cumulative MCs are shown in Table 1. All channels reach a BER at the FEC threshold with a PP $<$ 3.41dB. Differences in PPs are attributed to polarization dependent loss throughout the various SMFs of the system and misalignment of the Pol-Cons.

Conclusion {#conclusion .unnumbered}
==========

In conclusion, vector modes were used to increase the information capacity of FSO via MDM. A mode (de)multiplexer for vector modes based on a liquid crystal technology referred to as a $q$-plate was introduced. As a proof of principle, using the mode (de)multiplexer four vector modes, each carrying a 20 Gbit/s quadrature phase shift keying signal on a single wavelength channel ($\lambda \sim $1550nm), comprising an aggregate 80 Gbit/s, were transmitted $\sim$1m over the lab table, with $<$-16.4 dB ($\sim 2 \%$) mode crosstalk. BERs for all vector modes were measured at the forward error correction threshold with power penalties $<$ 3.41dB. It has been conjectured that, upon propagation through atmospheric turbulence, a vector mode will experience less scintillation as compared to an OAM mode [@Haus; @Gbur]. The propagation of vector modes through atmospheric turbulence is the subject of future work. When using OAM modes for MDM, the deleterious effects of atmospheric turbulence can be compensated via wavefront measurements [@Ren].
When using vector modes for MDM, comparable compensation of atmospheric turbulence may be possible via the measurement of spatially inhomogeneous polarization [@Angela]. It has been shown that the transmission of an OAM mode through scattering media depends on its polarization and OAM [@Milione4]. As vector modes are non-separable superpositions of circularly polarized OAM modes [@Milione2011; @Milione2; @Aiello; @spinorbit], a comparable study of vector modes may be of interest, especially with regard to atmospheric light scattering. Vector modes are referred to as the “true modes" of an optical fiber. Use of the $q$-plate mode (de)multiplexer for optical fiber communication, as well as its compatibility with wavelength division multiplexing, especially with regard to “ring core" optical fibers, is the subject of current work [@Milione1; @Milione5; @Milione6; @Nenad; @Leslie]. It is noted that there are other light beams that have spatially inhomogeneous states of polarization, such as Full Poincaré beams [@Beckley]. However, as compared to vector modes, Full Poincaré beams experience non-trivial dynamics upon propagation [@Milione3; @Hasman2].

![(a) Quadrature phase shift keying (QPSK) constellations for Ch1, Ch2, Ch3, and Ch4. (b) Mode crosstalk matrix for Ch1, Ch2, Ch3, and Ch4. (c) Measurements of bit error rate (BER) as a function of optical signal to noise ratio (OSNR) for Ch1, Ch2, Ch3, Ch4, and a back to back (B2B) channel. Dashed line delineates the forward error correction (FEC) threshold.](Fig4){width="\linewidth"}

  Vector mode   Power Penalty \[dB\]   Mode Crosstalk \[dB\]
  ------------- ---------------------- -----------------------
  Ch1           0.44                   -16.42
  Ch2           3.40                   -16.64
  Ch3           1.46                   -17.01
  Ch4           0.41                   -17.21

  : **Power Penalties**

Funding Information {#funding-information .unnumbered}
===================

GM and RRA acknowledge support from AFOSR Grant No. 47221-00-01, ARO Grant No. 52759-PH-H, NSF GRFP Grant No. 40017-00-04, and Corning, Inc. EK acknowledges support from the Canada Excellence Research Chairs (CERC) Program. MPJL acknowledges support from EPSRC and the Royal Academy of Engineering. USC acknowledges support from the DARPA InPho program.

Acknowledgments {#acknowledgments .unnumbered}
===============

GM thanks S. Slussarenko for fabricating the $q$-plates. Authors from USC thank Dr. Prem Kumar and Dr. Tommy Willis for helpful discussions.

[1]{} D. J. Richardson, J. M. Fini, and L. E. Nelson, “Space division multiplexing in optical fibers," Nat. Photonics **7**(5), 354-362 (2013). T. Morioka, Y. Awaji, R. Ryf, P. J. Winzer, D. Richardson, and F. Poletti, “Enhancing optical communications with brand new fibers," IEEE Commun. Magazine **50**(2), 31-42 (2012). A. Acampora, “Last mile by laser," Sci. Amer. **287**, 48-53 (2002). H. Willebrand and B. Ghuman, “Fiber optics without the fiber," IEEE Spectrum **38**(8), 40-45 (2001). E. Ciaramella, Y. Arimoto, G. Contestabile, M. Presi, A. D’Errico, V. Guarino, and M. Matsumoto, “1.28-Tb/s (32$\times$40 Gb/s) free-space optical WDM transmission system," IEEE Photon. Technol. Lett. **21**(16), 1121-1123 (2009). N. Cvijetic, D. Qian, J. Yu, Y.-K. Huang, and T. Wang, “Polarization-multiplexed optical wireless transmission with coherent detection," J. Lightw. Technol. **28**(8), 1218-1227 (2010). R. J. Black and L. Gagnon, “Optical Waveguide Modes: Polarization, Coupling and Symmetry," New York, NY, USA: McGraw-Hill, 2009. J. Wang, J. Y. Yang, I. M. Fazal, N. Ahmed, Y. Yan, H. Huang, Y. Ren, Y. Yue, S. Dolinar, M. Tur, and A. E.
Willner, “Terabit free-space data transmission employing orbital angular momentum multiplexing," Nat. Phot. **6**, 488-496 (2012). H. Huang, G. Xie, Y. Yan, N. Ahmed, Y. Ren, Y. Yue, D. Rogawski, M. J. Willner, B. I. Erkmen, K. M. Birnbaum, S. J. Dolinar, M. P. J. Lavery, M. J. Padgett, M. Tur, and A. E. Willner, “100 Tbit/s free-space data link enabled by three-dimensional multiplexing of orbital angular momentum, polarization, and wavelength," Opt. Lett., **39**(2), 197-200 (2014). Y. Yan, G. Xie, M. P. J. Lavery, H. Huang, N. Ahmed, C. Bao, Y. Ren, Y. Cao, L. Li, Z. Zhao, A. F. Molisch, M. Tur, M. J. Padgett, and A. E. Willner, “High-capacity millimetre-wave communications with orbital angular momentum multiplexing," Nature Commun. 5, 1-9 (2014). A. Tovar, “Production and propagation of cylindrically polarized Laguerre-Gaussian laser beams," J. Opt. Soc. Am. A **15**, 2705-2711 (1998) T. G. Brown, *Unconventional Polarization States: Beam Propagation, Focusing and Imaging* (Elsevier, 2011), **56**, 81-129. Q. Zhan, “Cylindrical vector beams: from mathematical concepts to applications," Adv. Opt. Photon. **1**, 1-57 (2009). G. Milione, A. Dudley, T. A. Nguyen, K. Chakraborty, E. Karimi, A. Forbes, and R. R. Alfano, “Experimental measurement of the self-healing of radially and azimuthally polarized vector Bessel beams," to be published in J. Opt. M. Stalder and M. Schadt, “Linearly polarized light with axial symmetry generated by liquid-crystal polarization converters," Opt. Lett. **21**, 1948-1950 (1996). C. Maurer, A. Jesacher, S. Furhapter, S. Bernet, and M. Ritsch-Marte, ‘Tailoring of arbitrary optical vector beams," New J. Phys. **9**(78), (2007). S. Tripathi and K. Toussaint, “Versatile generation of optical vector fields and vector beams using a non-interferometric approach,” Opt. Express 20, 10788-10795 (2012). Z. Bomzon, G. Biener, V. Kleiner, and E. Hasman, “Radially and azimuthally polarized beams generated by space-variant dielectric subwavelength gratings,” Opt. Lett. 27, 285-287 (2002). H. I. Sztul, D. A. Nolan, G. Milione, X. Chen, J. Koh, and R. R. Alfano, “Cylindrical vector beam generation from spun fiber," Proc. SPIE 7227, 722704 (2009). G. Milione, H. I. Sztul, D. A. Nolan, J. Kim, M. Etienne, J. McCarthy, J. Wang, and R. R. Alfano, “Cylindrical vector beam generation from a multi elliptical core optical fiber,” Proc. SPIE 7950, 79500K (2011). G. Milione, H. Sztul, D. Nolan, J. Kim, M. Etienne, J. McCarthy, J. Wang, and R. Alfano, “Cylindrical vector beam generation from a multi elliptical core optical fiber,” in CLEO:2011 - Laser Applications to Photonic Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), paper CTuB2. L. Marrucci, C. Manzo, and D. Paparo, “Optical spin-to-orbital angular momentum conversion in inhomogeneous anisotropic media," Phys. Rev. Lett. **96**, 163905 (2006). F. Cardano, E. Karimi, S. Slussarenko, L. Marrucci, C. de Lisio, and E. Santamato, “Polarization pattern of vector vortex beams generated by $q$-plates with different topological charges," Appl. Opt. **51**, C1-C6 (2012). S. Slussarenko, A. Murauski, T. Du, V. Chigrinov, L. Marrucci, and E. Santamato, “Tunable liquid crystal $q$-plates with arbitrary topological charge,” Opt. Express **19**, 4085-4090 (2011). S. Slussarenko, B. Piccirillo, V. Chigrinov, L. Marrucci, and E. Santamato, “Liquid crystal spatial-mode converters for the orbital angular momentum of light," J. Opt. **15**(2), 025406 (2013). G. Milione, H. I. Sztul, D. A. Nolan, and R. R. 
Alfano, “Higher-order Poincare sphere, Stokes parameters, and the angular momentum of light," Phys. Rev. Lett. **107**, 053601 (2011). G. Milione, S. E. Evans, D. A. Nolan, and R. R. Alfano, “Higher-order Pancharatnam-Berry phase and the angular momentum of light," Phys. Rev. Lett. **108**, 190401 (2012). A. Holleczek, A. Aiello, C. Gabriel, C. Marquardt, and G. Leuchs, “Classical and quantum properties of cylindrically polarized states of light,” Opt. Express 19, 9714-9736 (2011). W. Cheng, J. Haus, and Q. Zhan, “Propagation of vector vortex beams through a turbulent atmosphere," Opt. Express **17**, 17829-17836 (2009). Y. Gu, O. Korotkova, and G. Gbur, “Scintillation of nonuniformly polarized beams in atmospheric turbulence," Opt. Lett. **34**, 2261-2263 (2009). Y. Ren, G. Xie, H. Huang, N. Ahmed, Y. Yan, L. Li, C. Bao, M. Lavery, M. Tur, M. Neifeld, R. Boyd, J. Shapiro, and A. Willner, “Adaptive-optics-based simultaneous pre- and post-turbulence compensation of multiple orbital-angular-momentum beams in a bidirectional free-space optical link,” Optica 1, 376-382 (2014). A. Dudley, G. Milione, R. R. Alfano, A. Forbes, “All-digital wavefront sensing for structured light beams," Opt. Express **22**(11), 14031-14040 (2014). P. Shumyatsky, G. Milione, R. R. Alfano, “Optical memory effect from polarized Laguerre-Gaussian light beam in light-scattering turbid media," Opt. Comm. **321**, 116-123 (2014). E. Karimi, J. Leach, S. Slussarenko, B. Piccirillo, L. Marrucci, L. Chen, W. She, S. Franke-Arnold, M.J. Padgett, and E. Santamato, Phys. Rev. A 82, 022115 (2010). C. N. Alexeyev, A. N. Alexeyev, B. P. Lapin, G. Milione, and M. A. Yavorsky, “Spin-orbit-interaction-induced generation of optical vortices in multihelicoidal fibers," Phys. Rev. A **88**(6), 063814 (2013). G. Milione, D. A. Nolan, and R. R. Alfano, “Determining principal modes in a multimode optical fiber using the mode dependent signal delay method," to be published in J. Opt. Soc. Am. B. Y. Rumala, G. Milione, T. Nguyen, S. Pratavieira, Z. Hossain, D. Nolan, S. Slussarenko, E. Karimi, L. Marrucci, and R. Alfano, “Tunable supercontinuum light vector vortex beam generator using a $q$-plate,” Opt. Lett. **38**, 5083-5086 (2013). N. Bozinovic, Y. Yue, Y. Ren, M. Tur, P. Kristensen, H. Huang, A. E. Willner, S. Ramachandran, “Terabit-Scale Orbital Angular Momentum Mode Division Multiplexing in Fibers," Science 340(6140), 1545-1548 (2013) C. Brunet, P. Vaity, Y. Messaddeq, S. LaRochelle, and L. Rusch, “Design, fabrication and validation of an OAM fiber supporting 36 states," Opt. Express 22, 26117-26127 (2014). A. Beckley, T. Brown, and M. Alonso, “Full Poincare beams,” Opt. Express **18**, 10777-10785 (2010). G. M. Philip, V. Kumar, G. Milione, and N. K. Viswanathan, “Manifestation of the Gouy phase in vector-vortex beams," Opt. Lett. **37**, 2667 (2012). A. Niv, G. Biener, V. Kleiner, and E. Hasman, “Manipulation of the Pancharatnam phase in vectorial vortices," Opt. Express 14, 4208-4220 (2006).
--- abstract: 'Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) are popular risk measures from academic, industrial and regulatory perspectives. The problem of minimizing CVaR is known theoretically to admit a binary solution of Neyman-Pearson type. We add a constraint on expected return to investigate the Mean-CVaR portfolio selection problem in a dynamic setting: the investor is faced with a Markowitz [@Markowitz] type of risk-reward problem at the final horizon, where variance as a measure of risk is replaced by CVaR. Based on the complete market assumption, we give an analytical solution in general. The novelty of our solution is that it is no longer of Neyman-Pearson type, in which the final optimal portfolio takes only two values. Instead, in the case where the portfolio value is required to be bounded from above, the optimal solution takes three values; while in the case where there is no upper bound, the optimal investment portfolio does not exist, though a three-level portfolio still provides a sub-optimal solution.' --- **Optimal Dynamic Portfolio with Mean-CVaR Criterion** [**Jing Li**]{}, Federal Reserve Bank of New York, New York, NY 10045, USA. Email: jing.li@ny.frb.org [**Mingxin Xu**]{}, University of North Carolina at Charlotte, Department of Mathematics and Statistics, Charlotte, NC 28223, USA. Email: mxu2@uncc.edu **Keywords:** Conditional Value-at-Risk, Mean-CVaR Portfolio Optimization, Risk Minimization, Neyman-Pearson Problem\ **JEL Classification:** G11, G32, C61\ **Mathematics Subject Classification (2010):** 91G10, 91B30, 90C46\ Introduction {#Section: Introduction} ============ The portfolio selection problem published by Markowitz [@Markowitz] in 1952 is formulated as an optimization problem in a one-period static setting with the objective of maximizing expected return, subject to the constraint of variance being bounded from above. In 2005, Bielecki et al. [@BieleckiJinPliskaZhou] published the solution to this problem in a dynamic complete market setting. In both cases, the measure of risk of the portfolio is chosen as variance, and the risk-reward problem is understood as the “Mean-Variance” problem. Much research has been done in developing risk measures that focus on extreme events in the tail distribution where the portfolio loss occurs (variance does not differentiate loss from gain), and quantile-based models have thus far become the most popular choice. Among those, Conditional Value-at-Risk (CVaR), developed by Rockafellar and Uryasev [@RockafellarUryasevA] and [@RockafellarUryasevB] and also known as Expected Shortfall by Acerbi and Tasche [@AcerbiTasche], has become a prominent candidate to replace variance in the portfolio selection problem. On the theoretical side, CVaR is a “coherent risk measure”, a term coined by Artzner et al. [@ArtznerDelbaenEberHeath1] and [@ArtznerDelbaenEberHeath2] in pursuit of an axiomatic approach for defining properties that a ‘good’ risk measure should possess. On the practical side, the convex representation of CVaR from Rockafellar and Uryasev [@RockafellarUryasevA] opened the door to convex optimization for the Mean-CVaR problem and gave it a vast advantage in implementation. In a one-period static setting, Rockafellar and Uryasev [@RockafellarUryasevA] demonstrated how linear programming can be used to solve the Mean-CVaR problem, making it a convincing alternative to the Markowitz [@Markowitz] Mean-Variance concept. The work of Rockafellar and Uryasev [@RockafellarUryasevA] has generated substantial interest in extending this approach. 
Acerbi and Simonetti [@AcerbiSimonetti], and Adam et al. [@AdamHoukariLaurent] generalized CVaR to spectral risk measures in a static setting. Spectral risk measures are also known as Weighted Value-at-Risk (WVaR) in Cherny [@Cherny], who in turn studied their optimization problem. Ruszczynski and Shapiro [@RuszczynskiShapiro] revised CVaR into a multi-step dynamic risk measure, namely the “conditional risk mapping for CVaR", and solved the corresponding Mean-CVaR problem using the Rockafellar and Uryasev [@RockafellarUryasevA] technique for each time step. When expected return is replaced by expected utility, the Utility-CVaR portfolio optimization problem is often studied in a continuous-time dynamic setting; see Gandy [@Gandy] and Zheng [@Zheng]. More recently, the issue of robust implementation has been dealt with in Quaranta and Zaffaroni [@QuarantaZaffaroni], Gotoh et al. [@GotohShinozakiTakeda], Huang et al. [@HuangZhuFabozziFukushima], and El Karoui et al. [@ElKarouiLimVahn]. Research on systemic risk that involves CVaR can be found in Acharya et al. [@AcharyaPedersenPhilipponRichardson], Chen et al. [@ChenIyengarMoallemi], and Adrian and Brunnermeier [@Adrian; @Brunnermeier]. To the best of our knowledge, no complete characterization of the solution has been given for the Mean-CVaR problem in a continuous-time dynamic setting. Similar to Bielecki et al. [@BieleckiJinPliskaZhou], we reduce the problem to a combination of a static optimization problem and a hedging problem under the complete market assumption. Our main contribution is that, in solving the static optimization problem, we find a complete characterization whose nature is different from what is known in the literature. For the pure CVaR minimization problem without an expected return constraint, Sekine [@Sekine], Li and Xu [@LiXuA], and Melnikov and Smirnov [@MelnikovSmirnov] found the optimal solution to be binary. This is confirmed to be true for the minimization of more general law-invariant risk (preference) measures by Schied [@Schied], and He and Zhou [@HeZhou]. The key to finding the solution to be binary is the association of the Mean-CVaR problem with the Neyman-Pearson problem. We observe in Section \[Subsection: Outline\] that the stochastic part of CVaR minimization can be transformed into Shortfall Risk minimization using the representation (CVaR is the Fenchel-Legendre dual of the Expected Shortfall) given by Rockafellar and Uryasev [@RockafellarUryasevA]. Föllmer and Leukert [@FollmerLeukert] characterized the solution to the latter problem in a general semimartingale complete market model to be binary, where they demonstrated its close relationship to the Neyman-Pearson problem of hypothesis testing between the risk neutral probability measure $\tilde{P}$ and the physical probability measure $P$. Adding the expected return constraint to WVaR minimization (CVaR is a particular case of WVaR), Cherny [@Cherny] found conditions under which the solution to the Mean-WVaR problem was still binary or nonexistent. In this paper, we discuss all cases for solving the Mean-CVaR problem depending on a combination of two criteria: the level of the Radon-Nikodým derivative $\frac{d\tilde{P}}{dP}$ relative to the confidence level of the risk measure; and the level of the return requirement. More specifically, when the portfolio is uniformly bounded from above and below, we find the optimal solution to be nonexistent or binary in some cases, and, more interestingly, to take three values in the most important case (see Case 4 of Theorem \[T: Solution to Step 2\]). 
When the portfolio is unbounded from above, in most cases (see Cases 2 and 4 in Theorem \[T: unbounded above\]), the solution is nonexistent, while portfolios of three levels still give sub-optimal solutions. Since the new solution we find can take not only the upper or the lower bound, but also a level in between, it can be viewed in part as a generalization of the binary solution for the Neyman-Pearson problem with an additional constraint on expectation. This paper is organized as follows. Section \[Section: Portfolio Selection\] formulates the dynamic portfolio selection problem, and compares the structure of the binary solution and the ‘three-level’ solution, with an application of exact calculation in the Black-Scholes model. Section \[Section: Solution\] details the analytic solution in general, with the proofs deferred to Appendix \[Section: Appendix\]; Section \[Section: Future Work\] lists possible future work. The Structure of the Optimal Portfolio {#Section: Portfolio Selection} ====================================== Main Problem {#Subsection: Outline} ------------ Let $(\Omega, \sF, (\sF_{t})_{0\le t\le T}, P)$ be a filtered probability space that satisfies the usual conditions, where $\sF_{0}$ is trivial and $\sF_{T}=\sF$. The market model consists of $d+1$ tradable assets: one riskless asset (money market account) and $d$ risky assets (stocks). Suppose the risk-free interest rate $r$ is a constant and the stock $S_{t}$ is a $d$-dimensional real-valued locally bounded semimartingale process. Let the number of shares invested in the risky asset $\xi_{t}$ be a $d$-dimensional predictable process such that the stochastic integral with respect to $S_{t}$ is well-defined. Then the value of a self-financing portfolio $X_{t}$ evolves according to the dynamics $$dX_{t}=\xi_{t}dS_{t}+r(X_{t}-\xi_{t}S_{t})dt, \quad X_{0}=x_{0}.$$ Here $\xi_{t}dS_{t}$ and $\xi_{t}S_{t}$ are interpreted as inner products if the risky asset is multidimensional ($d>1$). The portfolio selection problem is to find the best strategy $(\xi_{t})_{0\le t\le T}$ to minimize the Conditional Value-at-Risk (CVaR) of the final portfolio value $X_{T}$ at confidence level $0<\lambda<1$, while requiring the expected value to remain above a constant $z$.[^1] In addition, we require a uniform lower bound $x_{d}$ and upper bound $x_{u}$ on the value of the portfolio over time such that $-\infty<x_{d}<x_{0}<x_{u}\le\infty$. Therefore, our **Main Dynamic Problem** is $$\begin{aligned} \label{E: Main Problem Dynamic} &\inf_{\xi_{t}}CVaR_{\lambda}(X_{T})\\ \text{subject to }\quad &E[X_{T}]\ge z,\quad x_{d}\le X_{t}\le x_{u}\,\,a.s. \quad\forall t\in[0,T].\notag\end{aligned}$$ Note that the no-bankruptcy condition can be imposed by setting the lower bound to be $x_{d}=0$, and the portfolio value can be unbounded from above by taking the upper bound as $x_{u}=\infty$. Our solution will be based on the following complete market assumption. \[A: Complete Market and Continuous RND\] There is No Free Lunch with Vanishing Risk (as defined in Delbaen and Schachermayer [@DelbaenSchachermayer1]) and the market model is complete with a unique equivalent local martingale measure $\tilde{P}$ such that the Radon-Nikodým derivative $\frac{d\tilde{P}}{dP}$ has a continuous distribution. Under the above assumption, any $\sF$-measurable random variable can be replicated by a dynamic portfolio. 
Thus the dynamic optimization problem (\[E: Main Problem Dynamic\]) can be reduced to: first find the optimal solution $X^{**}$ to the **Main Static Problem**, $$\begin{aligned} \label{E: Main Problem Static} &\inf_{X\in\sF}CVaR_{\lambda}(X)\\ \text{subject to }\quad &E[X]\ge z, \quad \tilde{E}[X]=x_{r}, \quad x_{d}\le X\le x_{u}\,\,a.s.\notag\end{aligned}$$ if it exists, and then find the dynamic strategy that replicates the $\sF$-measurable random variable $X^{**}$. Here the expectations $E$ and $\tilde{E}$ are taken under the physical probability measure $P$ and the risk neutral probability measure $\tilde{P}$ respectively. Constant $x_{r}=x_{0}e^{rT}$ is assumed to satisfy $-\infty<x_{d}<x_{0}\le x_{r}<x_{u}\le\infty$ and the additional capital constraint $\tilde{E}[X]=x_{r}$ is the key to make sure that the optimal solution can be replicated by a dynamic self-financing strategy with initial capital $x_{0}$. Using the equivalence between Conditional Value-at-Risk and the Fenchel-Legendre dual of the Expected Shortfall derived in Rockafellar and Uryasev [@RockafellarUryasevA], $$\label{D: CVaR} CVaR_{\lambda}(X)=\frac{1}{\lambda}\inf_{x\in\R}\left(E[(x-X)^{+}]-\lambda x\right), \quad \forall \lambda\in(0,1),$$ the CVaR optimization problem (\[E: Main Problem Static\]) can be reduced to an Expected Shortfall optimization problem which we name as the **Two-Constraint Problem:** **Step 1:** Minimization of Expected Shortfall $$\begin{aligned} \label{E: Step 1} &v(x)=\inf_{X\in\sF}E[(x-X)^{+}]\\ \text{subject to }\quad &E[X]\ge z, \quad(\textit{return constraint})\notag\\ &\tilde{E}[X]=x_{r}, \quad(\textit{capital constraint})\notag\\ &x_{d}\le X\le x_{u}\,\,a.s.\notag\end{aligned}$$ **Step 2:** Minimization of Conditional Value-at-Risk $$\label{E: Step 2} \inf_{X\in\sF}CVaR_{\lambda}(X)=\frac{1}{\lambda}\inf_{x\in\R}\left(v(x)-\lambda x\right).$$ To compare our solution to existing ones in literature, we also name an auxiliary problem which simply minimizes Conditional Value-at-Risk without the return constraint as the **One-Constraint Problem**: Step 1 in (\[E: Step 1\]) is replaced by **Step 1:** Minimization of Expected Shortfall $$\begin{aligned} \label{E: Step 1 One Constraint} &v(x)=\inf_{X\in\sF}E[(x-X)^{+}]\\ \text{subject to }\quad &\tilde{E}[X]=x_{r}, \quad(\textit{capital constraint})\notag\\ &x_{d}\le X\le x_{u}\,\,a.s.\notag\end{aligned}$$ Step 2 in (\[E: Step 2\]) remains the same. Main Result {#Subsection: Result} ----------- This subsection is devoted to a conceptual comparison between the solutions to the [*One-Constraint Problem*]{} and the [*Two-Constraint Problem*]{}. The solution to the Expected Shortfall Minimization problem in **Step 1** of the **One-Constraint Problem** is found by Föllmer and Leukert [@FollmerLeukert] under Assumption \[A: Complete Market and Continuous RND\] to be binary in nature: $$\label{E: optimal X without expectation constraint} X(x)=x_{d}\I_{A}+x\I_{A^{c}}, \quad\text{for } x_{d}<x<x_{u},$$ where $\I_{\cdot}(\omega)$ is the indicator function and set $A$ is defined as the collection of states where the Radon-Nikodým derivative is above a threshold $\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)>a\right\}$. 
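The dual representation (\[D: CVaR\]) is easy to check numerically. The following is a minimal Monte Carlo sketch, not part of the paper: for a simulated terminal value $X$ with a continuous distribution, the infimum over $x$ in (\[D: CVaR\]), approximated on a grid, agrees with the usual tail-average form $-E[X\,|\,X\le q_{\lambda}]$, where $q_{\lambda}$ is the lower $\lambda$-quantile of $X$. The normal distribution, the sample size, and the grid are illustrative choices made here.

```python
import numpy as np

def cvar_dual(X, lam, grid_size=1001):
    """CVaR via the Rockafellar-Uryasev representation
    (1/lam) * inf_x ( E[(x - X)^+] - lam * x ),
    with the infimum approximated on a grid of candidate x values."""
    grid = np.linspace(X.min(), X.max(), grid_size)
    vals = [np.mean(np.maximum(x - X, 0.0)) - lam * x for x in grid]
    return min(vals) / lam

def cvar_tail(X, lam):
    """CVaR as the negative average of the worst lam-fraction of outcomes,
    valid when X has a continuous distribution."""
    q = np.quantile(X, lam)            # lower lam-quantile of X
    return -np.mean(X[X <= q])

rng = np.random.default_rng(0)
X = rng.normal(loc=11.05, scale=5.0, size=100_000)   # simulated terminal value
lam = 0.05
print(cvar_dual(X, lam), cvar_tail(X, lam))          # the two values agree closely
```

The agreement reflects the fact that the infimum in (\[D: CVaR\]) is attained at the $\lambda$-quantile of $X$, which is exactly the reduction exploited in Step 2 below, where the minimization over the random variable $X$ and over the real number $x$ are separated.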
This particular structure, where the optimal solution $X(x)$ takes only two values, namely the lower bound $x_{d}$ and $x$, is intuitively clear once the problems of minimizing Expected Shortfall and hypothesis testing between $P$ and $\tilde{P}$ are connected in Föllmer and Leukert [@FollmerLeukert], the latter being well known to possess a binary solution by the Neyman-Pearson Lemma. There are various ways to prove the optimality. Other than the Neyman-Pearson approach, it can be viewed as the solution from a convex duality perspective; see Theorem 1.19 in Xu [@Xu]. In addition, a simplified version of the proof of Proposition \[P: Three-Line Optimal for Step 2 in Two-Constraint Case\] gives a direct method using Lagrange multipliers for convex optimization. The solution to [*Step 2*]{} of the [*One-Constraint Problem*]{}, and thus to the [*Main Problems*]{} in (\[E: Main Problem Dynamic\]) and (\[E: Main Problem Static\]) viewed as pure risk minimization problems without the return constraint, is given in Schied [@Schied], Sekine [@Sekine], and Li and Xu [@LiXuA]. Since Step 2 only involves minimization over a real-valued number $x$, the binary structure is preserved through this step. Under some technical conditions, the solution to **Step 2** of the **One-Constraint Problem** is shown by Li and Xu [@LiXuA] (Theorem 2.10 and Remark 2.11) to be $$\begin{aligned} X^{*} &=x_{d}\I_{A^{*}}+x^{*}\I_{A^{*c}}, \quad\textbf{(Two-Line Configuration)}\label{E: optimal CVaR without expectation constraint}\\ CVaR_{\lambda}(X^{*}) &=-x_{r}+\frac{1}{\lambda}(x^{*}-x_{d})\left(P(A^{*})-\lambda \tilde{P}(A^{*})\right),\label{E: min CVaR two line}\end{aligned}$$ where $(a^{*}, x^{*})$ is the solution to the *capital constraint* ($\tilde{E}[X(x)]=x_{r}$) in [*Step 1*]{} and the *first order Euler condition* ($v'(x)=0$) in [*Step 2*]{}: $$\begin{aligned} x_{d}\tilde{P}(A)+x\tilde{P}(A^{c}) &=x_{r}, \quad(\textit{capital constraint})\label{E: capital constraint two line}\\ P(A)+\frac{\tilde{P}(A^{c})}{a}-\lambda &=0. \quad(\textit{first order Euler condition})\label{E: euler condition two line}\end{aligned}$$ A static portfolio holding only the riskless asset will yield a constant portfolio value $X\equiv x_{r}$ with $CVaR(X)=-x_{r}$. Diversifying by dynamically managing the exposure to risky assets decreases the risk of the overall portfolio by the amount shown in (\[E: min CVaR two line\]). One interesting observation is that the optimal portfolio exists regardless of whether the upper bound on the portfolio is finite ($x_{u}<\infty$) or not ($x_{u}=\infty$). This conclusion will change drastically as we add the return constraint to the optimization problem. The main result of this paper is to show that the optimal solution to the [*Two-Constraint Problem*]{}, and thus the **Main Problem** (\[E: Main Problem Dynamic\]) and (\[E: Main Problem Static\]), does not have a Neyman-Pearson type of binary solution, which we call the [*Two-Line Configuration*]{} in (\[E: optimal CVaR without expectation constraint\]); instead, it has a [*Three-Line Configuration*]{}. 
Proposition \[P: Three-Line Optimal for Step 2 in Two-Constraint Case\] and Theorem \[T: Solution to Step 2\] prove that, when the upper bound is finite $x_{u}<\infty$ and under some technical conditions, the solution to **Step 2** of the **Two-Constraint Problem** turns out to be $$\begin{aligned} \label{E: optimal CVaR with expectation constraint} X^{**} &=x_{d}\I_{A^{**}}+x^{**}\I_{B^{**}}+x_{u}\I_{D^{**}}, \quad\textbf{(Three-Line Configuration)}\\ CVaR_{\lambda}(X^{**}_{T}) &=\frac{1}{\lambda}\left((x^{**}-x_{d})P(A^{**})-\lambda x^{**}\right),\end{aligned}$$ where $(a^{**}, b^{**}, x^{**})$ is the solution to the *capital constraint* and the *first order Euler condition*, plus the additional *return constraint* ($E[X(x)]=z$): $$\begin{aligned} x_{d}P(A)+xP(B)+x_{u}P(D) &=z, \quad(\textit{return constraint})\label{E: return constraint three line}\\ x_{d}\tilde{P}(A)+x\tilde{P}(B)+x_{u}\tilde{P}(D) &=x_{r}, \quad(\textit{capital constraint})\label{E: capital constraint three line}\\ P(A)+\frac{\tilde{P}(B)-bP(B)}{a-b}-\lambda &=0. \quad(\textit{first order Euler condition})\label{E: euler condition three line}\end{aligned}$$ The sets in equations (\[E: return constraint three line\])-(\[E: euler condition three line\]) are defined by different levels of the Radon-Nikodým derivative: $$A=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)>a\right\},\quad B=\left\{\omega\in\Omega\,:\,b\le\tfrac{d\tilde{P}}{dP}(\omega)\le a\right\}, \quad D=\left\{\omega\in\Omega\,:\, \tfrac{d\tilde{P}}{dP}(\omega)<b\right\}.$$ When the upper bound is infinite $x_{u}=\infty$, Theorem \[T: unbounded above\] shows that the optimal portfolio is no longer a [*Three-Line Configuration*]{}. It can be a pure money market account investment ([*One-Line*]{}), binary ([*Two-Line*]{}), or, most likely, nonexistent. In the last case, the infimum of the CVaR can still be computed, and a sequence of [*Three-Line Configuration*]{} portfolios can be found with their CVaR converging to the infimum. Example: Exact Calculation in the Black-Scholes Model {#Subsection: Example} ----------------------------------------------------- We show the closed-form calculation of the [*Three-Line Configuration*]{} (\[E: optimal CVaR with expectation constraint\])-(\[E: euler condition three line\]), as well as the corresponding optimal dynamic strategy, in the benchmark Black-Scholes Model. Suppose an agent is trading between a money market account with interest rate $r$ and one stock[^2] that follows geometric Brownian motion $dS_{t}=\mu S_{t}dt+\sigma S_{t}dW_{t}$ with instantaneous rate of return $\mu$, volatility $\sigma$, and initial stock price $S_{0}$. The endowment starts at $x_{0}$, and bankruptcy ($x_{d}=0$) is prohibited at any time before the final horizon $T$. The expected terminal value $E[X_{T}]$ is required to be above a fixed level ‘$z$’ to satisfy the return constraint. When ‘$z$’ is low, namely $z\le z^{*}\define E[X^{*}]$, where $X^{*}$ is the optimal portfolio (\[E: optimal CVaR without expectation constraint\]) for the [*One-Constraint Problem*]{}, the return constraint is non-binding and obviously the [*Two-Line Configuration*]{} $X^{*}$ is optimal. Let $\bar{z}$ be the highest expected value achievable by any self-financing portfolio starting with initial capital $x_{0}$ (see Definition \[D: zbar\] and Lemma \[L: zbar is the highest achievable expected return\]). 
When the return requirement becomes meaningful, i.e., $z\in(z^{*},\bar{z}]$, the [*Three-Line Configuration*]{} $X^{**}$ in (\[E: optimal CVaR with expectation constraint\]) becomes optimal. Since the Radon-Nikodým derivative $\frac{d\tilde{P}}{dP}$ is a scaled power function of the final stock price which has a log-normal distribution, the probabilities in equations (\[E: capital constraint three line\])-(\[E: euler condition three line\]) can be computed in closed-form: $$\begin{aligned} P(A) &= N(-\tfrac{\theta\sqrt{T}}{2} - \tfrac{\ln a}{\theta\sqrt{T}}),\quad P(D) = 1-N(-\tfrac{\theta\sqrt{T}}{2} - \tfrac{\ln b}{\theta\sqrt{T}}), \quad P(B) = 1-P(A)-P(D),\notag\\ \tilde{P}(A) &= N(\tfrac{\theta\sqrt{T}}{2} - \tfrac{\ln a}{\theta\sqrt{T}}),\quad \tilde{P}(D) = 1-N(\tfrac{\theta\sqrt{T}}{2} - \tfrac{\ln b}{\theta\sqrt{T}}),\quad \tilde{P}(B) = 1-\tilde{P}(A)-\tilde{P}(D),\notag\end{aligned}$$ where $\theta = \frac{\mu-r}{\sigma}$ and $N(\cdot)$ is the cumulative distribution function of a standard normal random variable. From these, the solution $(a^{**}, b^{**}, x^{**})$ to equations (\[E: capital constraint three line\])-(\[E: euler condition three line\]) can be found numerically. The formulae for the dynamic value of the optimal portfolio $X_{t}^{**}$, the corresponding dynamic hedging strategy $ \xi_{t}^{**}$, and the associated final minimal risk $CVaR_{\lambda}(X^{**}_{T})$ are: $$\begin{aligned} X_{t}^{**} &= e^{-r(T-t)}[x^{**}N(d_{+}(a^{**},S_{t},t)) + x_{d}N(d_{-}(a^{**},S_{t},t))]\\ &\qquad+e^{-r(T-t)}[x^{**}N(d_{-}(b^{**},S_{t},t)) + x_{u}N(d_{+}(b^{**},S_{t},t))]-e^{r(T-t)}x^{**},\\ \xi_{t}^{**} &= \frac{x^{**}-x_{d}}{\sigma S_{t}\sqrt{2\pi(T-t)}} e^{-r(T-t)-\frac{d^{2}_{-}(a^{**},S_{t},t)}{2}}+\frac{x^{**}-x_{u}}{\sigma S_{t}\sqrt{2\pi(T-t)}} e^{-r(T-t)-\frac{d^{2}_{+}(b^{**},S_{t},t)}{2}},\\ CVaR_{\lambda}(X^{**}_{T}) &=\frac{1}{\lambda}\left((x^{**}-x_{d})P(A^{**})-\lambda x^{**}\right),\end{aligned}$$ where we define $ d_{-}(a,s,t) =\tfrac{1}{\theta\sqrt{T-t}}[-\ln a + \tfrac{\theta}{\sigma}(\tfrac{\mu+r-\sigma^{2}}{2}t-\ln\tfrac{s}{S_{0}}) + \tfrac{\theta^{2}}{2}(T-t)], \quad d_{+}(a,s,t) =-d_{-}(a,s,t). $ Numerical results comparing the minimal risk for various levels of upper-bound $x_{u}$ and return constraint $z$ are summarized in Table \[table1\]. As expected the upper bound on the portfolio value $x_{u}$ has no impact on the [*One-Constraint Problem*]{}, as $(x^{*}, a^{*})$ and $CVaR_{\lambda}(X^{*}_{T})$ are optimal whenever $x_{u}\ge x^{*}$. On contrary in the [*Two-Constraint Problem*]{}, the stricter the return requirement $z$, the more the [*Three-Line Configuration*]{} $X^{**}$ deviates from the [*Two-Line Configuration*]{} $X^{*}$. Stricter return requirement (higher $z$) implies higher minimal risk $CVaR_{\lambda}(X^{**}_{T})$; while less strict upper bound (higher $x_{u}$) decreases minimal risk $CVaR_{\lambda}(X^{**}_{T})$. Notably, under certain conditions in Theorem \[T: unbounded above\], for all levels of return $z\in(z^{*},\bar{z}]$, when $x_{u}\to\infty$, $CVaR_{\lambda}(X^{**}_{T})$ approaches $CVaR_{\lambda}(X^{*}_{T})$, as the optimal solution cease to exist in the limiting case. 
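The numbers reported in Table 1 can be reproduced with a short numerical sketch. The code below is not part of the paper: it assumes the parameter values listed in the caption of Table 1 (with $x_{u}=30$ and $z=20$), uses the closed-form probabilities above, and solves the system (\[E: return constraint three line\])-(\[E: euler condition three line\]) with a generic root-finder. The choice of `scipy.optimize.fsolve`, the change of variables to $(\log a,\log b, x)$, and the initial guess are illustrative; convergence depends on starting reasonably close to the reported solution.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

# Parameters from the caption of Table 1, with x_u = 30 and z = 20.
r, mu, sigma, T = 0.05, 0.2, 0.1, 2.0
x0, x_d, lam = 10.0, 0.0, 0.05
x_u, z = 30.0, 20.0

theta = (mu - r) / sigma
s = theta * np.sqrt(T)                 # theta * sqrt(T)
x_r = x0 * np.exp(r * T)

def probs(a, b):
    """P and P-tilde of the sets A, B, D for thresholds a > b on dP~/dP."""
    PA = norm.cdf(-s / 2 - np.log(a) / s)
    PD = 1 - norm.cdf(-s / 2 - np.log(b) / s)
    QA = norm.cdf(s / 2 - np.log(a) / s)
    QD = 1 - norm.cdf(s / 2 - np.log(b) / s)
    return PA, 1 - PA - PD, PD, QA, 1 - QA - QD, QD

def equations(u):
    la, lb, x = u                      # unknowns: (log a, log b, x)
    a, b = np.exp(la), np.exp(lb)
    PA, PB, PD, QA, QB, QD = probs(a, b)
    return [x_d * PA + x * PB + x_u * PD - z,        # return constraint
            x_d * QA + x * QB + x_u * QD - x_r,      # capital constraint
            PA + (QB - b * PB) / (a - b) - lam]      # first-order Euler condition

la, lb, x = fsolve(equations, [np.log(14.0), np.log(0.01), 19.0])
a, b = np.exp(la), np.exp(lb)
PA = norm.cdf(-s / 2 - np.log(a) / s)
cvar = ((x - x_d) * PA - lam * x) / lam
print(f"a** = {a:.4f}, b** = {b:.4f}, x** = {x:.4f}, CVaR = {cvar:.4f}")
# Values close to Table 1 are expected:
# a** ~ 14.38, b** ~ 0.0068, x** ~ 19.13, CVaR ~ -15.21.
```

Solving in $(\log a, \log b)$ keeps both thresholds positive, which matters here because $b^{**}$ is close to zero when the upper bound $x_{u}$ is large.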
------------------------- ---------- ---------- -------------------------- ---------- ---------- ---------- $x_{u}$ 30 50 $x_{u}$ 30 30 50 $z$ 20 25 25 $x^{*}$ 19.0670 19.0670 $x^{**}$ 19.1258 19.5734 19.1434 $a^{*}$ 14.5304 14.5304 $a^{**}$ 14.3765 12.5785 14.1677 $b^{**}$ 0.0068 0.1326 0.0172 $CVaR_{5\%}(X^{*}_{T})$ -15.2118 -15.2118 $CVaR_{5\%}(X_{T}^{**})$ -15.2067 -14.8405 -15.1483 ------------------------- ---------- ---------- -------------------------- ---------- ---------- ---------- : Black-Scholes example for One-Constraint (pure CVaR minimization) and Two-Constraint (Mean-CVaR optimization) problems with parameters: $r=5\%$, $\mu =0.2,\,\sigma = 0.1$, $S_{0} = 10$, $T=2$, $x_{0} = 10$, $x_{d} = 0$, $\lambda=5\%$. Consequently, $z^{*}=18.8742$ and $\bar{z}=28.8866$.[]{data-label="table1"} Figure \[graph1\] plots the efficient frontier of the above Mean-CVaR portfolio selection problem with fixed upper bound $x_{u} = 30$. The curve between return level $z^{*}$ and $\bar{z}$ are the Mean-CVaR efficient portfolio from various [*Three-Line Configurations*]{}, while the straight line is the same Mean-CVaR efficient [*Two-Line Configuration*]{} when return constraint is non-binding. The star positioned at $(-x_{r},x_{r}) = (-11.0517,11.0517)$, where $x_{r} = x_{0}e^{rT}$, corresponds to the portfolio that invests purely in the money market account. As a contrast to its position on the traditional Capital Market Line (the efficient frontier for a Mean-Variance portfolio selection problem), the pure money market account portfolio is no longer efficient in the Mean-CVaR portfolio selection problem. $$\includegraphics[scale=.6]{zCVaR.pdf}$$ Analytical Solution to the Portfolio Selection Problem {#Section: Solution} ====================================================== Under Assumption \[A: Complete Market and Continuous RND\], the solution to the main Mean-CVaR optimization problem (\[E: Main Problem Static\]), i.e., the [*Two-Constraint Problem*]{} (\[E: Step 1\]) and (\[E: Step 2\]), will be discussed in two separate cases where the upper bound for the portfolio value is finite or infinite. The main results are stated in Theorem \[T: Solution to Step 2\] and Theorem \[T: unbounded above\] respectively. To create a flow of showing clearly how the optimal solutions are related to the [*Two-Line*]{} and [*Three-Line Configurations*]{}, all proofs will be delayed to Appendix \[Section: Appendix\]. Case $x_{u}<\infty$: Finite Upper Bound ---------------------------------------- We first define the general Three-Line Configuration and its degenerate Two-Line Configurations. Recall from Section \[Subsection: Result\] the definitions of the sets $A, B, D$ are $$\label{D: A, B, D definitions} A=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)>a\right\},\quad B=\left\{\omega\in\Omega\,:\,b\le\tfrac{d\tilde{P}}{dP}(\omega)\le a\right\},\quad D=\left\{\omega\in\Omega\,:\, \tfrac{d\tilde{P}}{dP}(\omega)<b\right\}.$$ \[D: Two definition A, B, D\]Suppose $x\in[x_{d}, x_{u}]$. 1. Any **Three-Line Configuration** has the structure $X=x_{d}\I_{A}+x\I_{B}+x_{u}\I_{D}$. 2. 
The **Two-Line Configuration** $X=x\I_{B}+x_{u}\I_{D}$ is associated to the above definition in the case $a=\infty$, $B=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)\ge b\right\}$ and $D=\left\{\omega\in\Omega\,:\, \tfrac{d\tilde{P}}{dP}(\omega)<b\right\}$.\ The **Two-Line Configuration** $X=x_{d}\I_{A}+x\I_{B}$ is associated to the above definition in the case\ $b=0$, $A=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)>a\right\}$, and $B=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)\le a\right\}$.\ The **Two-Line Configuration** $X=x_{d}\I_{A}+x_{u}\I_{D}$ is associated to the above definition in the case $a=b$, $A=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)>a\right\}$, and $D=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)< a\right\}$. Moreover, 1. **General Constraints** are the [*capital constraint*]{} and the equality part of the [*expected return constraint*]{} for a **Three-Line Configuration** $X=x_{d}\I_{A}+x\I_{B}+x_{u}\I_{D}$: $$\begin{aligned} &E[X]=x_{d}P(A)+xP(B)+x_{u}P(D)=z, \\ &\tilde{E}[X]=x_{d}\tilde{P}(A)+x\tilde{P}(B)+x_{u}\tilde{P}(D)=x_{r}.\end{aligned}$$ 2. **Degenerated Constraints 1** are the [*capital constraint*]{} and the equality part of the [*expected return constraint*]{} for a **Two-Line Configuration** $X=x\I_{B}+x_{u}\I_{D}$: $$\begin{aligned} &E[X]=xP(B)+x_{u}P(D)=z, \\ &\tilde{E}[X]=x\tilde{P}(B)+x_{u}\tilde{P}(D)=x_{r}.\end{aligned}$$ **Degenerated Constraints 2** are the [*capital constraint*]{} and the equality part of the [*expected return constraint*]{} for a **Two-Line Configuration** $X=x_{d}\I_{A}+x\I_{B}$: $$\begin{aligned} &E[X]=x_{d}P(A)+xP(B)=z, \\ &\tilde{E}[X]=x_{d}\tilde{P}(A)+x\tilde{P}(B)=x_{r}.\end{aligned}$$ **Degenerated Constraints 3** are the [*capital constraint*]{} and the equality part of the [*expected return constraint*]{} for a **Two-Line Configuration** $X=x_{d}\I_{A}+x_{u}\I_{D}$: $$\begin{aligned} &E[X]=x_{d}P(A)+x_{u}P(D)=z, \\ &\tilde{E}[X]=x_{d}\tilde{P}(A)+x_{u}\tilde{P}(D)=x_{r}.\end{aligned}$$ Note that **Degenerated Constraints 1** correspond to the **General Constraints** when $a=\infty$; **Degenerated Constraints 2** correspond to the **General Constraints** when $b=0$; and **Degenerated Constraints 3** correspond to the **General Constraints** when $a=b$. We use the [*Two-Line Configuration*]{} $X=x_{d}\I_{A}+x_{u}\I_{D}$, where the value of the random variable $X$ takes either the upper or the lower bound, as well as its capital constraint to define the ‘Bar-System’ from which we calculate the highest achievable return. \[D: zbar\] For fixed $-\infty<x_{d}<x_{r}<x_{u}<\infty$, let $\bar{a}$ be a solution to the capital constraint $\tilde{E}[X]=x_{d}\tilde{P}(A)+x_{u}\tilde{P}(D)=x_{r}$ in **Degenerated Constraints 3** for the **Two-Line Configuration** $X=x_{d}\I_{A}+x_{u}\I_{D}$. In the ‘Bar-System’, $\bar{A}$, $\bar{D}$ and $\bar{X}$ are associated to the constant $\bar{a}$ in the sense $\bar{X}=x_{d}\I_{\bar{A}}+x_{u}\I_{\bar{D}}$ where $\bar{A}=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)>\bar{a}\right\}$, and $\bar{D}=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)< \bar{a}\right\}$. Define the expected return of the ‘Bar-System’ as $\zbar=E[\bar{X}]=x_{d}P(\bar{A})+x_{u}P(\bar{D})$. 
\[L: zbar is the highest achievable expected return\] $\zbar$ is the highest expected return that can be obtained by a self-financing portfolio with initial capital $x_{0}$ whose value is bounded between $x_{d}$ and $x_{u}$: $$\zbar=\max_{X\in\sF}E[X]\quad s.t.\quad \tilde{E}[X]=x_{r}=x_{0}e^{rT}, \quad x_{d}\le X\le x_{u}\,\, a.s.$$ In the following lemma, we vary the ‘$x$’ value in the [*Two-Line Configurations*]{} $X=x\I_{B}+x_{u}\I_{D}$ and $X=x_{d}\I_{A}+x\I_{B}$, while maintaining the capital constraints respectively. We observe their expected returns to vary between values $x_{r}$ and $\bar{z}$ in a monotone and continuous fashion. \[L: z decreases when x is between xd and xr and increases when x is between xr and xu\] For fixed $-\infty<x_{d}<x_{r}<x_{u}<\infty$. 1. Given any $x\in[x_{d}, x_{r}]$, let ‘$b$’ be a solution to the capital constraint $\tilde{E}[X]=x\tilde{P}(B)+x_{u}\tilde{P}(D)=x_{r}$ in **Degenerated Constraints 1** for the **Two-Line Configuration** $X=x\I_{B}+x_{u}\I_{D}$. Define the expected return of the resulting Two-Line Configuration as $z(x)=E[X]=xP(B)+x_{u}P(D)$.[^3] Then $z(x)$ is a continuous function of $x$ and decreases from $\zbar$ to $x_{r}$ as $x$ increases from $x_{d}$ to $x_{r}$. 2. Given any $x\in[x_{r}, x_{u}]$, let ‘$a$’ be a solution to the capital constraint $\tilde{E}[X]=x_{d}\tilde{P}(A)+x\tilde{P}(B)=x_{r}$ in **Degenerated Constraints 2** for the **Two-Line Configuration** $X=x_{d}\I_{A}+x\I_{B}$. Define the expected return of the resulting Two-Line Configuration as $z(x)=E[X]=x_{d}P(A)+xP(B)$. Then $z(x)$ is a continuous function of $x$ and increases from $x_{r}$ to $\zbar$ as $x$ increases from $x_{r}$ to $x_{u}$. From now on, we will concern ourselves with requirements on the expected return in the interval $z\in[x_{r}, \zbar]$ because on one side Lemma \[L: zbar is the highest achievable expected return\] ensures that there are no feasible solutions to the Main Problem (\[E: Main Problem Static\]) if we require an expected return higher than $\zbar$. On the other side, Lemma \[L: zbar is the highest achievable expected return\], Lemma \[L: z decreases when x is between xd and xr and increases when x is between xr and xu\] and Theorem \[T: Two Line Optimal in One Constraint Sase\] lead to the conclusion that a return constraint where $z\in(-\infty, x_{r})$ is too weak to differentiate the **Two-Constraint Problem** from the **One-Constraint Problem** as their optimal solutions concur. \[D: xz1 and xz2\] For fixed $-\infty<x_{d}<x_{r}<x_{u}<\infty$, and a fixed level $z\in[x_{r}, \zbar]$, define $x_{z1}$ and $x_{z2}$ to be the corresponding $x$ value for **Two-Line Configurations** $X=x\I_{B}+x_{u}\I_{D}$ and $X=x_{d}\I_{A}+x\I_{B}$ that satisfy **Degenerated Constraints 1** and **Degenerated Constraints 2** respectively. Definition \[D: xz1 and xz2\] implies when we fix the level of expected return $z$, we can find two particular feasible solutions: $X=x_{z1}\I_{B}+x_{u}\I_{D}$ satisfying $\tilde{E}[X]=x_{z1}\tilde{P}(B)+x_{u}\tilde{P}(D)=x_{r}$ and $E[X]=x_{z1}P(B)+x_{u}P(D)=z$; $X=x_{d}\I_{A}+x_{z2}\I_{B}$ satisfying $\tilde{E}[X]=x_{d}\tilde{P}(A)+x_{z2}\tilde{P}(B)=x_{r}$ and $E[X]=x_{d}P(A)+x_{z2}P(B)=z$. The values $x_{z1}$ and $x_{z2}$ are well-defined because Lemma \[L: z decreases when x is between xd and xr and increases when x is between xr and xu\] guarantees $z(x)$ to be an invertible function in both cases. 
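For the Black-Scholes example of Section \[Subsection: Example\], the thresholds $x_{z1}$ and $x_{z2}$ can be computed numerically by inverting $z(x)$. The sketch below is illustrative only: it reuses the closed-form probabilities of that example with the parameters of Table 1 (and $x_{u}=30$, $z=20$), and the helper names and the use of `scipy.optimize.brentq` are choices made here, not part of the paper.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Black-Scholes example parameters (caption of Table 1), with x_u = 30 and z = 20.
r, mu, sigma, T = 0.05, 0.2, 0.1, 2.0
x0, x_d, x_u, z = 10.0, 0.0, 30.0, 20.0
s = (mu - r) / sigma * np.sqrt(T)      # theta * sqrt(T)
x_r = x0 * np.exp(r * T)

def z_low(x):
    """Expected return of X = x*1_B + x_u*1_D when b is chosen so that the
    capital constraint in Degenerated Constraints 1 holds (x in [x_d, x_r))."""
    log_b = s * (s / 2 - norm.ppf((x_u - x_r) / (x_u - x)))
    PD = 1 - norm.cdf(-s / 2 - log_b / s)
    return x * (1 - PD) + x_u * PD

def z_high(x):
    """Expected return of X = x_d*1_A + x*1_B when a is chosen so that the
    capital constraint in Degenerated Constraints 2 holds (x in (x_r, x_u])."""
    log_a = s * (s / 2 - norm.ppf((x - x_r) / (x - x_d)))
    PA = norm.cdf(-s / 2 - log_a / s)
    return x_d * PA + x * (1 - PA)

eps = 1e-6                             # stay off the singular endpoint x = x_r
x_z1 = brentq(lambda x: z_low(x) - z, x_d, x_r - eps)
x_z2 = brentq(lambda x: z_high(x) - z, x_r + eps, x_u)
print(f"x_z1 = {x_z1:.4f}, x_z2 = {x_z2:.4f}, zbar = {z_low(x_d):.4f}")
```

The printed value of `zbar`, obtained at $x=x_{d}$ where both degenerate configurations coincide with the ‘Bar-System’, should match the level $\bar{z}=28.8866$ quoted in the caption of Table 1, which provides a consistency check on the monotonicity described in Lemma \[L: z decreases when x is between xd and xr and increases when x is between xr and xu\].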
We summarize in the following lemma whether the [*Two-Line Configurations*]{} of Definition \[D: Two definition A, B, D\] that satisfy the capital constraints meet or fail the return constraint as $x$ ranges over its domain $[x_{d}, x_{u}]$. \[L: Two-Line and their returns above or below z\] For fixed $-\infty<x_{d}<x_{r}<x_{u}<\infty$, and a fixed level $z\in[x_{r}, \zbar]$. 1. If we fix $x\in[x_{d}, x_{z1}]$, the Two-Line Configuration $X=x\I_{B}+x_{u}\I_{D}$ which satisfies the capital constraint $\tilde{E}[X]=x\tilde{P}(B)+x_{u}\tilde{P}(D)=x_{r}$ in Degenerated Constraints 1 **satisfies** the expected return constraint: $E[X]=xP(B)+x_{u}P(D)\ge z$; 2. If we fix $x\in(x_{z1}, x_{r}]$, the Two-Line Configuration $X=x\I_{B}+x_{u}\I_{D}$ which satisfies the capital constraint $\tilde{E}[X]=x\tilde{P}(B)+x_{u}\tilde{P}(D)=x_{r}$ in Degenerated Constraints 1 **fails** the expected return constraint: $E[X]=xP(B)+x_{u}P(D)<z$; 3. If we fix $x\in[x_{r}, x_{z2})$, the Two-Line Configuration $X=x_{d}\I_{A}+x\I_{B}$ which satisfies the capital constraint $\tilde{E}[X]=x_{d}\tilde{P}(A)+x\tilde{P}(B)=x_{r}$ in Degenerated Constraints 2 **fails** the expected return constraint: $E[X]=x_{d}P(A)+xP(B)<z$; 4. If we fix $x\in[x_{z2}, x_{u}]$, the Two-Line Configuration $X=x_{d}\I_{A}+x\I_{B}$ which satisfies the capital constraint $\tilde{E}[X]=x_{d}\tilde{P}(A)+x\tilde{P}(B)=x_{r}$ in Degenerated Constraints 2 **satisfies** the expected return constraint: $E[X]=x_{d}P(A)+xP(B)\ge z$. We turn our attention to solving **Step 1** of the **Two-Constraint Problem** (\[E: Step 1\]):\ **Step 1:** Minimization of Expected Shortfall $$\begin{aligned} &v(x)=\inf_{X\in\sF}E[(x-X)^{+}]\\ \text{subject to }\quad &E[X]\ge z, \quad(\textit{return constraint})\notag\\ &\tilde{E}[X]=x_{r}, \quad(\textit{capital constraint})\notag\\ &x_{d}\le X\le x_{u}\,\,a.s.\notag\end{aligned}$$ Notice that a solution is sought for any given real number $x$, independent of the return level $z$ or capital level $x_{r}$. From Lemma \[L: Two-Line and their returns above or below z\] and the fact that the [*Two-Line Configurations*]{} are optimal solutions to **Step 1** of the **One-Constraint Problem** (see Theorem \[T: Two Line Optimal in One Constraint Sase\]), we can immediately draw the following conclusion. \[P: Two Line Optimal for Step 1 when x is between xd and xz1; xz2 and xu\] For fixed $-\infty<x_{d}<x_{r}<x_{u}<\infty$, and a fixed level $z\in[x_{r}, \zbar]$. 1. If we fix $x\in[x_{d}, x_{z1}]$, then there exists a **Two-Line Configuration** $X=x\I_{B}+x_{u}\I_{D}$ which is the optimal solution to **Step 1** of the **Two-Constraint Problem**; 2. If we fix $x\in[x_{z2}, x_{u}]$, then there exists a **Two-Line Configuration** $X=x_{d}\I_{A}+x\I_{B}$ which is the optimal solution to **Step 1** of the **Two-Constraint Problem**. When $x\in(x_{z1}, x_{z2})$, Lemma \[L: Two-Line and their returns above or below z\] shows that the Two-Line Configurations which satisfy the capital constraints ($\tilde{E}[X]=x_{r}$) do not generate a high enough expected return ($E[X]<z$) to be feasible anymore. It turns out that a novel [*Three-Line Configuration*]{} is the answer: it can be shown to be both feasible and optimal. \[L: z decreases for Three-Line Configuration\] For fixed $-\infty<x_{d}<x_{r}<x_{u}<\infty$, and a fixed level $z\in[x_{r}, \zbar]$. 
Given any $x\in(x_{z1}, x_{z2})$, let the pair of numbers $(a, b)\in\R^{2}$ ($b\le a$) be a solution to the capital constraint $\tilde{E}[X]=x_{d}\tilde{P}(A)+x\tilde{P}(B)+x_{u}\tilde{P}(D)=x_{r}$ in **General Constraints** for the **Three-Line Configuration** $X=x_{d}\I_{A}+x\I_{B}+x_{u}\I_{D}$. Define the expected return of the resulting Three-Line Configuration as $z(a,b)=E[X]=x_{d}P(A)+xP(B)+x_{u}P(D)$. Then $z(a, b)$ is a continuous function which decreases from $\zbar$ to a number below $z$: 1. When $a=b=\bar{a}$ from Definition \[D: zbar\] of ‘Bar-System’, the Three-Line Configuration degenerates to $X=\bar{X}$ and $z(\bar{a},\bar{a})=E[\bar{X}]=\zbar$. 2. When $b<\bar{a}$ and $a>\bar{a}$, $z(a, b)$ decreases continuously as $b$ decreases and $a$ increases. 3. In the extreme case when $a=\infty$, the Three-Line configuration becomes the Two-Line Configuration $X=x\I_{B}+x_{u}\I_{D}$; in the extreme case when $b=0$, the Three-Line configuration becomes the Two-Line Configuration $X=x_{d}\I_{A}+x\I_{B}$. In either case, the expected value is below $z$ by Lemma \[L: Two-Line and their returns above or below z\]. \[P: Three Line Optimal for Step 1 when x is between xz1 and xz2\] For fixed $-\infty<x_{d}<x_{r}<x_{u}<\infty$, and a fixed level $z\in[x_{r}, \zbar]$. If we fix $x\in(x_{z1}, x_{z2})$, then there exists a **Three-Line Configuration** $X=x_{d}\I_{A}+x\I_{B}+x_{u}\I_{D}$ that satisfies the **General Constraints** which is the optimal solution to **Step 1** of the **Two-Constraint Problem**. Combining Proposition \[P: Two Line Optimal for Step 1 when x is between xd and xz1; xz2 and xu\] and Proposition \[P: Three Line Optimal for Step 1 when x is between xz1 and xz2\], we arrive to the following result on the optimality of the Three-Line Configuration. \[T: Solution to Step 1\]For fixed $-\infty<x_{d}<x_{r}<x_{u}<\infty$, and a fixed level $z\in[x_{r}, \zbar]$. $X(x)$ and the corresponding value function $v(x)$ described below are optimal solutions to **Step 1: Minimization of Expected Shortfall** of the **Two-Constraint Problem**: - $x\in(-\infty, x_{d}]$: $X(x) =$ any random variable with values in $[x_{d}, x_{u}]$ satisfying both the capital constraint $\tilde{E}[X(x)]=x_{r}$ and the return constraint $E[X(x)]\ge z$. $v(x) = 0.$ - $x\in[x_{d}, x_{z1}]$: $X(x) =$ any random variable with values in $[x, x_{u}]$ satisfying both the capital constraint $\tilde{E}[X(x)]=x_{r}$ and the return constraint $E[X(x)]\ge z$. $v(x) = 0.$ - $x\in(x_{z1}, x_{z2})$: $X(x) =x_{d}\I_{A_{x}}+x\I_{B_{x}}+x_{u}\I_{D_{x}}$ where $A_{x}, B_{x}, D_{x}$ are determined by $a_{x}$ and $b_{x}$ as in (\[D: A, B, D definitions\]) satisfying the General Constraints: $\tilde{E}[X(x)]=x_{r}$ and $E[X(x)]=z$. $v(x) = (x-x_{d})P(A_{x}).$ - $x\in[x_{z2}, x_{u}]$: $X(x) =x_{d}\I_{A_{x}}+x\I_{B_{x}}$ where $A_{x}, B_{x}$ are determined by $a_{x}$ as in Definition \[D: Two definition A, B, D\] satisfying both the capital constraint $\tilde{E}[X(x)]=x_{r}$ and the return constraint $E[X(x)]\ge z$. 
$v(x) = (x-x_{d})P(A_{x}).$ - $x\in[x_{u}, \infty)$: $X(x) =x_{d}\I_{\bar{A}}+x_{u}\I_{\bar{B}}=\bar{X}$ where $\bar{A}, \bar{B}$ are associated to $\bar{a}$ as in Definition \[D: zbar\] satisfying both the capital constraint $\tilde{E}[X(x)]=x_{r}$ and the return constraint $E[X(x)]=\zbar\ge z$.\ $v(x) = (x-x_{d})P(\bar{A})+(x-x_{u})P(\bar{B}).$ To solve **Step 2** of the **Two-Constraint Problem** (\[E: Step 2\]), and thus the Main Problem (\[E: Main Problem Static\]), we need to minimize $$\frac{1}{\lambda}\inf_{x\in\R}(v(x)-\lambda x),$$ where $v(x)$ has been computed in Theorem \[T: Solution to Step 1\]. Depending on the $z$ level in the return constraint being lenient or strict, the solution is sometimes obtained by the Two-Line Configuration which is optimal to the One-Constraint Problem, and other times obtained by a true Three-Line configuration. To proceed in this direction, we recall the solution to the **One-Constraint Problem** from Li and Xu [@LiXuA]. \[T: Two Line Optimal in One Constraint Sase\] 1. Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}\le\frac{1}{\lambda}$. $X=x_{r}$ is the optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **One-Constraint Problem** and the associated minimal risk is $$CVaR(X)=-x_{r}.$$ 2. Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}>\frac{1}{\lambda}$. - If $\frac{1}{\bar{a}}\le\frac{\lambda-P(\bar{A})}{1-\tilde{P}(\bar{A})}$ (see Definition \[D: zbar\] for the ‘Bar-System’), then $\bar{X}=x_{d}\I_{\bar{A}}+x_{u}\I_{\bar{D}}$ is the optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **One-Constraint Problem** and the associated minimal risk is $$CVaR(\bar{X})=-x_{r}+\frac{1}{\lambda}(x_{u}-x_{d})(P(\bar{A})-\lambda\tilde{P}(\bar{A})).$$ - Otherwise, let $a^{*}$ be the solution to the equation $\frac{1}{a}=\frac{\lambda-P(A)}{1-\tilde{P}(A)}$. Associate sets $A^{*}=\left\{\omega\in\Omega\,:\,\frac{d\tilde{P}}{dP}(\omega)>a^{*}\right\}$ and $B^{*}=\left\{\omega\in\Omega\,:\,\frac{d\tilde{P}}{dP}(\omega)\le a^{*}\right\}$ to level $a^{*}$. Define $x^{*}=\frac{x_{r}-x_{d}\tilde{P}(A^{*})}{1-\tilde{P}(A^{*})}$ so that configuration $$X^{*}=x_{d}\I_{A^{*}}+x^{*}\I_{B^{*}}$$ satisfies the capital constraint $\tilde{E}[X^{*}]=x_{d}\tilde{P}(A^{*})+x^{*}\tilde{P}(B^{*})=x_{r}$.[^4] Then $X^{*}$ (we call the [**‘Star-System’**]{}) is the optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **One-Constraint Problem** and the associated minimal risk is $$CVaR(X^{*})=-x_{r}+\frac{1}{\lambda}(x^{*}-x_{d})(P(A^{*})-\lambda\tilde{P}(A^{*})).$$ \[D: z\*\] In part 2 of Theorem \[T: Two Line Optimal in One Constraint Sase\], define $z^{*}=\zbar$ in the first case when $\frac{1}{\bar{a}}\le\frac{\lambda-P(\bar{A})}{1-\tilde{P}(\bar{A})}$; define $z^{*}=E[X^{*}]$ in the second case when $\frac{1}{\bar{a}}>\frac{\lambda-P(\bar{A})}{1-\tilde{P}(\bar{A})}$. We see that when $z$ is smaller than $z^{*}$, the binary solutions $X^{*}$ and $\bar{X}$ provided in Theorem \[T: Two Line Optimal in One Constraint Sase\] are indeed the optimal solutions to Step 2 of the Two-Constraint Problem. However, when $z$ is greater than $z^{*}$, these Two-Line Configurations are no longer feasible in the Two-Constraint Problem. We now show that the Three-Line Configuration is not only feasible but also optimal. First we establish the convexity of the objective function and its continuity in a Lemma. \[L: convexity of v(x)\] $v(x)$ is a convex function for $x\in\R$, and thus continuous. 
\[P: Three-Line Optimal for Step 2 in Two-Constraint Case\] For fixed $-\infty<x_{d}<x_{r}<x_{u}<\infty$, and a fixed level $z\in(z^{*}, \zbar]$.\ Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}>\frac{1}{\lambda}$. The solution $(a^{**}, b^{**},x^{**})$ (and consequently, $A^{**}, B^{**}$ and $D^{**}$) to the equations $$\begin{aligned} x_{d}P(A)+xP(B)+x_{u}P(D) &=z, \quad (\text{return constraint}) \\ x_{d}\tilde{P}(A)+x\tilde{P}(B)+x_{u}\tilde{P}(D) &=x_{r}, \quad (\text{capital constraint})\\ P(A)+\frac{\tilde{P}(B)-bP(B)}{a-b}-\lambda &=0, \quad (\text{first order Euler condition})\end{aligned}$$ exists. $X^{**}=x_{d}\I_{A^{**}}+x^{**}\I_{B^{**}}+x_{u}\I_{D^{**}} $ (we call the [**‘Double-Star System’**]{}) is the optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **Two-Constraint Problem** and the associated minimal risk is $$CVaR(X^{**})=\frac{1}{\lambda}\left((x^{**}-x_{d})P(A^{**})-\lambda x^{**}\right).$$ Putting together Proposition \[P: Three-Line Optimal for Step 2 in Two-Constraint Case\] with Theorem \[T: Two Line Optimal in One Constraint Sase\], we arrive to the [**Main Theorem**]{} of this paper. \[T: Solution to Step 2\]For fixed $-\infty<x_{d}<x_{r}<x_{u}<\infty$. 1. Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}\le\frac{1}{\lambda}$ and $z=x_{r}$. The pure money market account investment $X=x_{r}$ is the optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **Two-Constraint Problem** and the associated minimal risk is $$CVaR(X)=-x_{r}.$$ 2. Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}\le\frac{1}{\lambda}$ and $z\in(x_{r}, \zbar]$. The optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **Two-Constraint Problem** does not exist and the minimal risk is $$CVaR(X)=-x_{r}.$$ 3. Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}>\frac{1}{\lambda}$ and $z\in[x_{r}, z^{*}]$ (see Definition \[D: z\*\] for $z^{*}$). - If $\frac{1}{\bar{a}}\le\frac{\lambda-P(\bar{A})}{1-\tilde{P}(\bar{A})}$ (see Definition \[D: zbar\]), then the [**‘Bar-System’**]{} $\bar{X}=x_{d}\I_{\bar{A}}+x_{u}\I_{\bar{D}}$ is the optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **Two-Constraint Problem** and the associated minimal risk is $$CVaR(\bar{X})=-x_{r}+\frac{1}{\lambda}(x_{u}-x_{d})(P(\bar{A})-\lambda\tilde{P}(\bar{A})).$$ - Otherwise, the [**‘Star-System’**]{} $X^{*}=x_{d}\I_{A^{*}}+x^{*}\I_{B^{*}}$ defined in Theorem \[T: Two Line Optimal in One Constraint Sase\] is the optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **Two-Constraint Problem** and the associated minimal risk is $$CVaR(X^{*})=-x_{r}+\frac{1}{\lambda}(x^{*}-x_{d})(P(A^{*})-\lambda\tilde{P}(A^{*})).$$ 4. Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}>\frac{1}{\lambda}$ and $z\in (z^{*}, \zbar]$. the [**‘Double-Star-Sytem’**]{} $X^{**}=x_{d}\I_{A^{**}}+x^{**}\I_{B^{**}}+x_{u}\I_{D^{**}}$ defined in Proposition \[P: Three-Line Optimal for Step 2 in Two-Constraint Case\] is the optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **Two-Constraint Problem** and the associated minimal risk is $$CVaR(X^{**})=\frac{1}{\lambda}\left((x^{**}-x_{d})P(A^{**})-\lambda x^{**}\right).$$ We observe that the pure money market account investment is rarely optimal. 
When the Radon-Nikodým derivative is bounded above by the reciprocal of the confidence level of the risk measure ($\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}\le\frac{1}{\lambda}$), a condition not satisfied in the Black-Scholes model, the solution does not exist unless the return requirement coincide with the risk-free rate. When the Radon-Nikodým derivative exceeds $\frac{1}{\lambda}$ with positive probability, and the return constraint is low $z\in[x_{r}, z^{*}]$, the Two-Line Configuration which is optimal to the $CVaR$ minimization problem without the return constraint is also the optimal to the Mean-CVaR problem. However, in the more interesting case where the return constraint is materially high $z\in (z^{*}, \zbar]$, the optimal Three-Line-Configuration sometimes takes the value of the upper bound $x_{u}$ to raise the expected return at the cost the minimal risk will be at a higher level. This analysis complies with the numerical example shown in Section \[Subsection: Example\]. Case $x_{u}=\infty$: No Upper Bound ----------------------------------- We first restate the solution to the **One-Constraint Problem** from Li and Xu [@LiXuA] in the current context: when $x_{u}=\infty$, where we interpret $\bar{A}=\Omega$ and $\zbar=\infty$. \[T: Two Line Optimal in One Constraint Case Infinite Upper Bound\] 1. Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}\le\frac{1}{\lambda}$. The pure money market account investment $X=x_{r}$ is the optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **One-Constraint Problem** and the associated minimal risk is $$CVaR(X)=-x_{r}.$$ 2. Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}>\frac{1}{\lambda}$. The [**‘Star-System’**]{} $X^{*}=x_{d}\I_{A^{*}}+x^{*}\I_{B^{*}}$ defined in Theorem \[T: Two Line Optimal in One Constraint Sase\] is the optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **One-Constraint Problem** and the associated minimal risk is $$CVaR(X^{*})=-x_{r}+\frac{1}{\lambda}(x^{*}-x_{d})(P(A^{*})-\lambda\tilde{P}(A^{*})).$$ We observe that although there is no upper bound for the portfolio value, the optimal solution remains bounded from above, and the minimal $CVaR$ is bounded from below. The problem of purely minimizing $CVaR$ risk of a self-financing portfolio (bounded below by $x_{d}$ to exclude arbitrage) from initial capital $x_{0}$ is feasible in the sense that the risk will not approach $-\infty$ and the minimal risk is achieved by an optimal portfolio. When we add substantial return constraint to the $CVaR$ minimization problem, although the minimal risk can still be calculated in the most important case ([*Case 4*]{} in Theorem \[T: unbounded above\]), it is truly an infimum and not a minimum, thus it can be approximated closely by a sub-optimal portfolio, but not achieved by an optimal portfolio. \[T: unbounded above\]For fixed $-\infty<x_{d}<x_{r}<x_{u}=\infty$. 1. Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}\le\frac{1}{\lambda}$ and $z=x_{r}$. The pure money market account investment $X=x_{r}$ is the optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **Two-Constraint Problem** and the associated minimal risk is $$CVaR(X)=-x_{r}.$$ 2. Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}\le\frac{1}{\lambda}$ and $z\in(x_{r}, \infty)$. The optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **Two-Constraint Problem** does not exist and the minimal risk is $$CVaR(X)=-x_{r}.$$ 3. 
Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}>\frac{1}{\lambda}$ and $z\in[x_{r}, z^{*}]$. The [**‘Star-System’**]{} $X^{*}=x_{d}\I_{A^{*}}+x^{*}\I_{B^{*}}$ defined in Theorem \[T: Two Line Optimal in One Constraint Sase\] is the optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **Two-Constraint Problem** and the associated minimal risk is $$CVaR(X^{*})=-x_{r}+\frac{1}{\lambda}(x^{*}-x_{d})(P(A^{*})-\lambda\tilde{P}(A^{*})).$$ 4. Suppose $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}>\frac{1}{\lambda}$ and $z\in(z^{*}, \infty)$. The optimal solution to **Step 2: Minimization of Conditional Value-at-Risk** of the **Two-Constraint Problem** does not exist and the minimal risk is $$CVaR(X^{*})=-x_{r}+\frac{1}{\lambda}(x^{*}-x_{d})(P(A^{*})-\lambda\tilde{P}(A^{*})).$$ From the proof of the above theorem in [**Appendix**]{} \[Section: Appendix\], we note that in case [*4*]{}, we can always find a Three-Line Configuration as a sub-optimal solution, i.e., there exists for every $\epsilon>0$, a corresponding portfolio $X_{\epsilon}=x_{d}\I_{A_{\epsilon}}+x_{\epsilon}\I_{B_{\epsilon}}+\alpha_{\epsilon}\I_{D_{\epsilon}}$ which satisfies the [*General Constraints*]{} and produces a $CVaR$ level close to the lower bound: $CVaR(X_{\epsilon})\le CVaR(X^{*})+\epsilon.$ Future Work {#Section: Future Work} =========== The second part of Assumption \[A: Complete Market and Continuous RND\], namely the Radon-Nikodým derivative $\frac{d\tilde{P}}{dP}$ having a continuous distribution, is imposed for the simplification it brings to the presentation in the main theorems. Further work can be done when this assumption is weakened. We expect that the main results should still hold, albeit in a more complicated form.[^5] It will also be interesting to extend the closed-form solution for Mean-CVaR minimization by replacing CVaR with Law-Invariant Convex Risk Measures in general. Another direction will be to employ dynamic risk measures into the current setting. Although in this paper we focus on the complete market solution, to solve the problem in an incomplete market setting, the exact hedging argument via Martingale Representation Theorem that translates the dynamic problem (\[E: Main Problem Dynamic\]) into the static problem (\[E: Main Problem Static\]) has to be replaced by a super-hedging argument via Optional Decomposition developed by Kramkov [@Kramkov], and Föllmer and Kabanov [@FollmerKabanov]. The detail is similar to the process carried out for Shortfall Risk Minimization in Föllmer and Leukert [@FollmerLeukert], Convex Risk Minimization in Rudloff [@Rudloff], and law-invariant risk preference in He and Zhou [@HeZhou]. The curious question is: Will the Third-Line Configuration remain optimal? Appendix {#Section: Appendix} ======== The problem of $$\zbar=\max_{X\in\sF}E[X]\quad s.t.\quad \tilde{E}[X]=x_{r}, \quad x_{d}\le X\le x_{u}\,a.s.$$ is equivalent to the Expected Shortfall Problem $$\zbar=-\min_{X\in\sF}E[(x_{u}-X)^{+}]\quad s.t.\quad \tilde{E}[X]=x_{r}, \quad X\ge x_{d}\,a.s.$$ Therefore, the answer is immediate. $\endproof$ Choose $x_{d}\le x_{1}<x_{2}\le x_{r}$. Let $X_{1}=x_{1}\I_{B_{1}}+x_{u}\I_{D_{1}}$ where $B_{1}=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)\ge b_{1}\right\}$ and $D_{1}=\left\{\omega\in\Omega\,:\, \tfrac{d\tilde{P}}{dP}(\omega)<b_{1}\right\}$. Choose $b_{1}$ such that $\tilde{E}[X_{1}]=x_{r}$. This capital constraint means $x_{1}\tilde{P}(B_{1})+x_{u}\tilde{P}(D_{1})=x_{r}$. 
Since $\tilde{P}(B_{1})+\tilde{P}(D_{1})=1$, $\tilde{P}(B_{1})=\frac{x_{u}-x_{r}}{x_{u}-x_{1}}$ and $\tilde{P}(D_{1})=\frac{x_{r}-x_{1}}{x_{u}-x_{1}}$. Define $z_{1}=E[X_{1}]$. Similarly, $z_{2}, X_{2}, B_{2}, D_{2}, b_{2}$ corresponds to $x_{2}$ where $b_{1}>b_{2}$ and $\tilde{P}(B_{2})=\frac{x_{u}-x_{r}}{x_{u}-x_{2}}$ and $\tilde{P}(D_{2})=\frac{x_{r}-x_{2}}{x_{u}-x_{2}}$. Note that $D_{2}\subset D_{1}$, $B_{1}\subset B_{2}$ and $D_{1}\backslash D_{2}=B_{2}\backslash B_{1}$. We have $$\begin{aligned} z_{1}-z_{2} &= x_{1}P(B_{1})+x_{u}P(D_{1})-x_{2}P(B_{2})-x_{u}P(D_{2})\\ &=(x_{u}-x_{2})P(B_{2}\backslash B_{1})-(x_{2}-x_{1})P(B_{1})\\ &=(x_{u}-x_{2})P\left(b_{2}<\tfrac{d\tilde{P}}{dP}(\omega)< b_{1}\right)-(x_{2}-x_{1})P\left(\tfrac{d\tilde{P}}{dP}(\omega)\ge b_{1}\right)\\ &=(x_{u}-x_{2})\int_{\left\{b_{2}<\tfrac{d\tilde{P}}{dP}(\omega)< b_{1}\right\}}\tfrac{dP}{d\tilde{P}}(\omega)d\tilde{P}(\omega) -(x_{2}-x_{1})\int_{\left\{\tfrac{d\tilde{P}}{dP}(\omega)\ge b_{1}\right\}}\tfrac{dP}{d\tilde{P}}(\omega)d\tilde{P}(\omega)\\ &>(x_{u}-x_{2})\frac{1}{b_{1}}\tilde{P}(B_{2}\backslash B_{1})-(x_{2}-x_{1})\frac{1}{b_{1}}\tilde{P}(B_{1})\\ &=(x_{u}-x_{2})\frac{1}{b_{1}}\left(\frac{x_{u}-x_{r}}{x_{u}-x_{2}}-\frac{x_{u}-x_{r}}{x_{u}-x_{1}}\right) -(x_{2}-x_{1})\frac{1}{b_{1}}\frac{x_{u}-x_{r}}{x_{u}-x_{1}}=0.\end{aligned}$$ For any given $\epsilon>0$, choose $x_{2}-x_{1}\le \epsilon$, then $$\begin{aligned} z_{1}-z_{2} &=(x_{u}-x_{1})P(B_{2}\backslash B_{1})-(x_{2}-x_{1})P(B_{2})\\ &\le (x_{u}-x_{1})P(B_{2}\backslash B_{1})\\ &\le (x_{u}-x_{1})\left(\frac{x_{u}-x_{r}}{x_{u}-x_{2}}-\frac{x_{u}-x_{r}}{x_{u}-x_{1}}\right)\\ &\le \frac{(x_{2}-x_{1})(x_{u}-x_{r})}{x_{u}-x_{2}}\le x_{2}-x_{1}\le \epsilon.\end{aligned}$$ Therefore, $z$ decreases continuously as $x$ increases when $x\in[x_{d}, x_{r}]$. When $x=x_{d}$, $z=\zbar$ from Definition \[D: zbar\]. When $x=x_{r}$, $X\equiv x_{r}$ and $z=E[X]=x_{r}$. Similarly, we can show that $z$ increases continuously from $x_{r}$ to $\zbar$ as $x$ increases from $x_{r}$ to $x_{u}$. $\endproof$ Lemma \[L: Two-Line and their returns above or below z\] is a logical consequence of Lemma \[L: z decreases when x is between xd and xr and increases when x is between xr and xu\] and Definition \[D: xz1 and xz2\]; Proposition \[P: Two Line Optimal for Step 1 when x is between xd and xz1; xz2 and xu\] follows from Lemma \[L: Two-Line and their returns above or below z\]; so their proofs will be skipped. Choose $-\infty<b_{1}<b_{2}\le \bar{b}=\bar{a}\le a_{2}<a_{1}<\infty$. Let configuration $X_{1}=x_{d}\I_{A_{1}}+x\I_{B_{1}}+x_{u}\I_{D_{1}}$ correspond to the pair $(a_{1}, b_{1})$ where $A_{1}=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)>a_{1}\right\}, B_{1}=\left\{\omega\in\Omega\,:\,b_{1}\le\tfrac{d\tilde{P}}{dP}(\omega)\le a_{1}\right\}, D_{1}=\left\{\omega\in\Omega\,:\, \tfrac{d\tilde{P}}{dP}(\omega)<b_{1}\right\}$. Similarly, let configuration $X_{2}=x_{d}\I_{A_{2}}+x\I_{B_{2}}+x_{u}\I_{D_{2}}$ correspond to the pair $(a_{2}, b_{2})$. Define $z_{1}=E[X_{1}]$ and $z_{2}=E[X_{2}]$. 
Since both $X_{1}$ and $X_{2}$ satisfy the capital constraint, we have $$x_{d}\tilde{P}(A_{1})+x\tilde{P}(B_{1})+x_{u}\tilde{P}(D_{1})=x_{r}=x_{d}\tilde{P}(A_{2})+x\tilde{P}(B_{2})+x_{u}\tilde{P}(D_{2}).$$ This simplifies to the equation $$\label{E: for a2} (x-x_{d})\tilde{P}(A_{2}\backslash A_{1})=(x_{u}-x)\tilde{P}(D_{2}\backslash D_{1}).$$ Then $$\begin{aligned} z_{2}-z_{1} &=x_{d}P(A_{2})+xP(B_{2})+x_{u}P(D_{2})-x_{d}P(A_{1})-xP(B_{1})-x_{u}P(D_{1})\\ &=(x_{u}-x)P(D_{2}\backslash D_{1})-(x-x_{d})P(A_{2}\backslash A_{1})\\ &=(x_{u}-x)P(D_{2}\backslash D_{1})-(x_{u}-x)\frac{\tilde{P}(D_{2}\backslash D_{1})}{\tilde{P}(A_{2}\backslash A_{1})}P(A_{2}\backslash A_{1})\\ &=(x_{u}-x)\tilde{P}(D_{2}\backslash D_{1})\left(\frac{P(D_{2}\backslash D_{1})}{\tilde{P}(D_{2}\backslash D_{1})} -\frac{P(A_{2}\backslash A_{1})}{\tilde{P}(A_{2}\backslash A_{1})}\right)\\ &=(x_{u}-x)\tilde{P}(D_{2}\backslash D_{1})\left( \frac{\int_{\left\{b_{1}\le\tfrac{d\tilde{P}}{dP}(\omega)< b_{2}\right\}}\tfrac{dP}{d\tilde{P}}(\omega)d\tilde{P}(\omega)}{\tilde{P}(D_{2}\backslash D_{1})} -\frac{\int_{\left\{a_{2}<\tfrac{d\tilde{P}}{dP}(\omega)\le a_{1}\right\}}\tfrac{dP}{d\tilde{P}}(\omega)d\tilde{P}(\omega)}{\tilde{P}(A_{2}\backslash A_{1})}\right)\\ &\ge (x_{u}-x)\tilde{P}(D_{2}\backslash D_{1})\left(\frac{1}{b_{2}}-\frac{1}{a_{2}}\right)>0.\end{aligned}$$ Suppose the pair $(a_{1}, b_{1})$ is chosen so that $X_{1}$ satisfies the budget constraint $\tilde{E}[X_{1}]=x_{r}$. For any given $\epsilon>0$, choose $b_{2}-b_{1}$ small enough such that $P(D_{2}\backslash D_{1})\le \frac{\epsilon}{x_{u}-x}$. Now choose $a_{2}$ such that $a_{2}<a_{1}$ and equation (\[E: for a2\]) is satisfied. Then $X_{2}$ also satisfies the budget constraint $\tilde{E}[X_{2}]=x_{r}$, and $$z_{2}-z_{1}=(x_{u}-x)P(D_{2}\backslash D_{1})-(x-x_{d})P(A_{2}\backslash A_{1})\le (x_{u}-x)P(D_{2}\backslash D_{1})\le\epsilon.$$ We conclude that the expected value of the Three-Line configuration decreases continuously as $b$ decreases and $a$ increases. $\endproof$ In the following we provide the main proof of the paper: the optimality of the Three-Line configuration. Denote $\rho=\frac{d\tilde{P}}{dP}$. 
According to Lemma \[L: z decreases for Three-Line Configuration\], there exists a Three-Line configuration $\hat{X}=x_{d}\I_{A}+x\I_{B}+x_{u}\I_{D}$ that satisfies the General Constraints: $$\begin{aligned} &E[X]=x_{d}P(A)+xP(B)+x_{u}P(D)=z, \\ &\tilde{E}[X]=x_{d}\tilde{P}(A)+x\tilde{P}(B)+x_{u}\tilde{P}(D)=x_{r}.\end{aligned}$$ where $$A=\left\{\omega\in\Omega\,:\,\rho(\omega)>\hat{a}\right\},\quad B=\left\{\omega\in\Omega\,:\,\hat{b}\le\rho(\omega)\le \hat{a}\right\},\quad D=\left\{\omega\in\Omega\,:\, \rho(\omega)<\hat{b}\right\}.$$ As standard for convex optimization problems, if we can find a pair of Lagrange multipliers $\lambda\ge0$ and $\mu\in\R$ such that $\hat{X}$ is the solution to the minimization problem $$\label{E: Langrange Multiplier} \inf_{X\in\sF, \,\,x_{d}\le X\le x_{u}}E[(x-X)^{+}-\lambda X-\mu \rho X]=E[(x-\hat{X})^{+}-\lambda \hat{X}-\mu \rho \hat{X}],$$ then $\hat{X}$ is the solution to the constrained problem $$\inf_{X\in\sF, \,\,x_{d}\le X\le x_{u}}E[(x-X)^{+}], \quad s.t.\quad E[X]\ge z, \quad\tilde{E}[X]=x_{r}.$$ Define $$\lambda=\frac{\hat{b}}{\hat{a}-\hat{b}}, \quad \mu=-\frac{1}{\hat{a}-\hat{b}}.$$ Then (\[E: Langrange Multiplier\]) becomes $$\inf_{X\in\sF, \,\,x_{d}\le X\le x_{u}}E\left[(x-X)^{+}+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} X\right].$$ Choose any $X\in\sF$ where $x_{d}\le X\le x_{u}$, and denote $G=\{\omega\in\Omega\,:\, X(\omega)\ge x\}$ and $L=\{\omega\in\Omega\,:\, X(\omega)< x\}$. Note that $\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}}>1$ on set $A$, $0\le\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}}\le1$ on set $B$, $\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}}<0$ on set $D$. Then the difference $$\begin{aligned} &\,\,E\left[(x-X)^{+}+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} X\right]-E\left[(x-\hat{X})^{+}+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \hat{X}\right]\\ &= E\left[(x-X)\I_{L}+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} X\left(\I_{A}+\I_{B}+\I_{D}\right)\right] -E\left[\left(x-x_{d}\right)\I_{A}+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} (x_{d}\I_{A}+x\I_{B}+x_{u}\I_{D})\right]\\ &=E\left[(x-X)\I_{L}+\left(\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} (X-x_{d})-(x-x_{d})\right)\I_{A} +\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \left(X-x\right)\I_{B}+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \left(X-x_{u}\right)\I_{D}\right]\\ &\ge E\left[(x-X)\I_{L}+\left(X-x\right)\I_{A} +\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \left(X-x\right)\I_{B}+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \left(X-x_{u}\right)\I_{D}\right] \\ &= E\left[(x-X)\left(\I_{L\cap A}+\I_{L\cap B}+\I_{L\cap D}\right)+\left(X-x\right)\left(\I_{A\cap G}+\I_{A\cap L}\right) +\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \left(X-x\right)\I_{B}+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \left(X-x_{u}\right)\I_{D}\right]\\ &= E\left[(x-X)\left(\I_{L\cap B}+\I_{L\cap D}\right)+\left(X-x\right)\I_{A\cap G} +\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \left(X-x\right)\I_{B}+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \left(X-x_{u}\right)\I_{D}\right]\\ &= E\left[(x-X)\left(\I_{L\cap B}+\I_{L\cap D}\right)+\left(X-x\right)\I_{A\cap G} +\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \left(X-x\right)\left(\I_{B\cap G}+\I_{B\cap L}\right)+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \left(X-x_{u}\right)\left(\I_{D\cap G}+\I_{D\cap L}\right)\right]\\ &= E\left[(x-X)\left(1-\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}}\right)\I_{B\cap L}+\left(x-X+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \left(X-x_{u}\right)\right)\I_{D\cap L}+\left(X-x\right)\I_{A\cap G}\right.\\ &\left.\qquad\qquad+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} \left(X-x\right)\I_{B\cap G}+\tfrac{\rho-\hat{b}}{\hat{a}-\hat{b}} 
\left(X-x_{u}\right)\I_{D\cap G}\right]\ge0.\end{aligned}$$ The last inequality holds because each term inside the expectation is greater than or equal to zero. $\endproof$ Theorem \[T: Solution to Step 1\] is a direct consequence of Lemma \[L: Two-Line and their returns above or below z\], Proposition \[P: Two Line Optimal for Step 1 when x is between xd and xz1; xz2 and xu\], and Proposition \[P: Three Line Optimal for Step 1 when x is between xz1 and xz2\]. The convexity of $v(x)$ is a simple consequence of its definition (\[E: Step 1\]). Real-valued convex functions on $\R$ are continuous on its interior of the domain, so $v(x)$ is continuous on $\R$. $\endproof$ For $z\in(z^{*},\zbar]$, **Step 2** of the **Two-Constraint Problem** $$\frac{1}{\lambda}\inf_{x\in\R}(v(x)-\lambda x)$$ is the minimum of the following five sub-problems after applying Theorem \[T: Solution to Step 1\]: Case 1 : $$\frac{1}{\lambda} \inf_{(-\infty, x_{d}]}(v(x)-\lambda x) = \frac{1}{\lambda} \inf_{(-\infty, x_{d}]}(-\lambda x)=-x_{d};$$ Case 2 : $$\frac{1}{\lambda} \inf_{[x_{d}, x_{z1}]}(v(x)-\lambda x) = \frac{1}{\lambda} \inf_{[x_{d}, x_{z1}]}(-\lambda x)=-x_{z1}\le -x_{d};$$ Case 3 : $$\frac{1}{\lambda} \inf_{(x_{z1}, x_{z2})}(v(x)-\lambda x) = \frac{1}{\lambda} \inf_{(x_{z1}, x_{z2})}\left((x-x_{d})P(A_{x})-\lambda x\right);$$ Case 4 : $$\frac{1}{\lambda} \inf_{[x_{z2}, x_{u}]}(v(x)-\lambda x) = \frac{1}{\lambda} \inf_{[x_{z2}, x_{u}]}\left((x-x_{d})P(A_{x})-\lambda x\right);$$ Case 5 : $$\frac{1}{\lambda} \inf_{[x_{u}, \infty)}(v(x)-\lambda x) = \frac{1}{\lambda} \inf_{[x_{u}, \infty)}\left((x-x_{d})P(\bar{A})+(x-x_{u})P(\bar{B})-\lambda x\right).$$ Obviously, **Case 2** dominates **Case 1** in the sense that its minimum is lower. In **Case 3**, by the continuity of $v(x)$, we have $$\frac{1}{\lambda} \inf_{(x_{z1}, x_{z2})}\left((x-x_{d})P(A_{x})-\lambda x\right) \le \frac{1}{\lambda} \left((x_{z1}-x_{d})P(A_{x_{z1}})-\lambda x_{z1}\right)=-x_{z1}.$$ The last equality comes from the fact $P(A_{x_{z1}})=0$: As in Lemma \[L: z decreases for Three-Line Configuration\], we know that when $x=x_{z1}$, the Three-Line configuration $X=x_{d}\I_{A}+x\I_{B}+x_{u}\I_{D}$ degenerates to the Two-Line configuration $X=x_{z1}\I_{B}+x_{u}\I_{D}$ where $a_{x_{z1}}=\infty$. Therefore, **Case 3** dominates **Case 2**. In **Case 5**, $$\begin{aligned} \frac{1}{\lambda} \inf_{[x_{u}, \infty)}(v(x)-\lambda x) &= \frac{1}{\lambda} \inf_{[x_{u}, \infty)}\left((x-x_{d})P(\bar{A})+(x-x_{u})P(\bar{B})-\lambda x\right)\\ &= \frac{1}{\lambda} \inf_{[x_{u}, \infty)}\left((1-\lambda)x-x_{d}P(\bar{A})-x_{u}P(\bar{B})\right)\\ &= \frac{1}{\lambda}\left((1-\lambda)x_{u}-x_{d}P(\bar{A})-x_{u}P(\bar{B})\right)\\ &= \frac{1}{\lambda}\left((x_{u}-x_{d})P(\bar{A})-\lambda x_{u}\right)\\ &\ge \frac{1}{\lambda} \inf_{[x_{z2}, x_{u}]}\left((x-x_{d})P(A_{x})-\lambda x\right).\end{aligned}$$ Therefore, **Case 4** dominates **Case 5**. When $x\in[x_{z2}, x_{u}]$ and $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}>\frac{1}{\lambda}$, Theorem \[T: Solution to Step 1\] and Theorem \[T: Two Line Optimal in One Constraint Sase\] imply that the infimum in **Case 4** is achieved either by $\bar{X}$ or $X^{*}$. Since we restrict $z\in(z^{*}, \zbar]$ where $z^{*}=\zbar$ by Definition \[D: z\*\] in the first case, we need not consider this case in the current proposition. In the second case, Lemma \[L: z decreases when x is between xd and xr and increases when x is between xr and xu\] implies that $x^{*}< x_{z2}$ (because $z>z^{*}$). 
By the convexity of $v(x)$, and then the continuity of $v(x)$, $$\begin{aligned} \frac{1}{\lambda} \inf_{[x_{z2}, x_{u}]}\left((x-x_{d})P(A_{x})-\lambda x\right) &=\frac{1}{\lambda} \left((x_{z2}-x_{d})P(A_{x_{z2}})-\lambda x_{z2}\right)\\ &\ge \frac{1}{\lambda} \inf_{(x_{z1}, x_{z2})}\left((x-x_{d})P(A_{x})-\lambda x\right).\end{aligned}$$ Therefore, **Case 3** dominates **Case 4**. We have shown that **Case 3** actually provides the globally infimum: $$\frac{1}{\lambda}\inf_{x\in\R}(v(x)-\lambda x)=\frac{1}{\lambda} \inf_{(x_{z1}, x_{z2})}(v(x)-\lambda x).$$ Now we focus on $x\in(x_{z1}, x_{z2})$, where $X(x)=x_{d}\I_{A_{x}}+x\I_{B_{x}}+x_{u}\I_{D_{x}}$ satisfies the general constraints: $$\begin{aligned} &E[X(x)]=x_{d}P(A_{x})+xP(B_{x})+x_{u}P(D_{x})=z, \\ &\tilde{E}[X(x)]=x_{d}\tilde{P}(A_{x})+x\tilde{P}(B_{x})+x_{u}\tilde{P}(D_{x})=x_{r},\end{aligned}$$ and the definition for sets $A_{x}$, $B_{x}$ and $D_{x}$ are $$A_{x}=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)>a_{x}\right\},\quad B_{x}=\left\{\omega\in\Omega\,:\,b_{x}\le\tfrac{d\tilde{P}}{dP}(\omega)\le a_{x}\right\},\quad D_{x}=\left\{\omega\in\Omega\,:\, \tfrac{d\tilde{P}}{dP}(\omega)<b_{x}\right\}.$$ Note that $v(x)=(x-x_{d})P(A_{x})$ (see Theorem \[T: Solution to Step 1\]). Since $P(A_{x})+P(B_{x})+P(D_{x})=1$ and $\tilde{P}(A_{x})+\tilde{P}(B_{x})+\tilde{P}(D_{x})=1$, we rewrite the capital and return constraints as $$\begin{aligned} x-z &=(x-x_{d})P(A_{x})+(x-x_{u})P(D_{x}), \\ x-x_{r} &=(x-x_{d})\tilde{P}(A_{x})+(x-x_{u})\tilde{P}(D_{x}).\end{aligned}$$ Differentiating both sides with respect to $x$, we get $$\begin{aligned} P(B_{x}) &=(x-x_{d})\frac{dP(A_{x})}{dx}+(x-x_{u})\frac{dP(D_{x})}{dx}, \\ \tilde{P}(B_{x}) &=(x-x_{d})\frac{d\tilde{P}(A_{x})}{dx}+(x-x_{u})\frac{d\tilde{P}(D_{x})}{dx}.\end{aligned}$$ Since $$\frac{d\tilde{P}(A_{x})}{dx}=a_{x}\frac{dP(A_{x})}{dx}, \quad \frac{d\tilde{P}(D_{x})}{dx}=b_{x}\frac{dP(D_{x})}{dx},$$ we get $$\frac{dP(A_{x})}{dx}=\frac{\tilde{P}(B_{x})-bP(B_{x})}{(x-x_{d})(a-b)}.$$ Therefore, $$\begin{aligned} (v(x)-\lambda x)' &=P(A_{x})+(x-x_{d})\frac{dP(A_{x})}{dx}-\lambda\\ &=P(A_{x})+\frac{\tilde{P}(B_{x})-bP(B_{x})}{a-b}-\lambda.\end{aligned}$$ When the above derivative is zero, we arrive to the first order Euler condition $$P(A_{x})+\frac{\tilde{P}(B_{x})-bP(B_{x})}{a-b}-\lambda=0.$$ To be precise, the above differentiation should be replaced by left-hand and right-hand derivatives as detailed in the Proof for Corollary 2.8 in Li and Xu [@LiXuA]. But the first order Euler condition will turn out to be the same because we have assumed that the Radon-Nikodým derivative $\frac{d\tilde{P}}{dP}$ has continuous distribution. To finish this proof, we need to show that there exists an $x\in (x_{z1}, x_{z2})$ where the first order Euler condition is satisfied. From Lemma \[L: z decreases for Three-Line Configuration\], we know that as $x\searrow x_{z1}$, $a_{x}\nearrow\infty$, and $P(A_{x})\searrow0$. Therefore, $$\lim_{x\searrow x_{z1}}(v(x)-\lambda x)' = -\lambda <0.$$ As $x\nearrow x_{z2}$, $b_{x}\searrow0$, and $P(D_{x})\searrow0$. Therefore, $$\lim_{x\nearrow x_{z2}}(v(x)-\lambda x)' = P(A_{x_{z2}})-\frac{\tilde{P}(A_{x_{z2}}^{c})}{a_{x_{z2}}}-\lambda.$$ This derivative coincides with the derivative of the value function of the Two-Line configuration that is optimal on the interval $x\in[x_{z2}, x_{u}]$ provided in Theorem \[T: Solution to Step 1\] (see Proof for Corollary 2.8 in Li and Xu [@LiXuA]). 
Again when $x\in[x_{z2}, x_{u}]$ and $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}>\frac{1}{\lambda}$, Theorem \[T: Solution to Step 1\] and Theorem \[T: Two Line Optimal in One Constraint Sase\] imply that the infimum of $v(x)-\lambda x$ is achieved either by $\bar{X}$ or $X^{*}$. Since we restrict $z\in(z^{*}, \zbar]$ where $z^{*}=\zbar$ by Definition \[D: z\*\] in the first case, we need not consider this case in the current proposition. In the second case, Lemma \[L: z decreases when x is between xd and xr and increases when x is between xr and xu\] implies that $x^{*}< x_{z2}$ (because $z>z^{*}$). This in turn implies $$P(A_{x_{z2}})-\frac{\tilde{P}(A_{x_{z2}}^{c})}{a_{x_{z2}}}-\lambda<0.$$ We have just shown that there exists some $x^{**}\in(x_{z1}, x_{z2})$ such that $(v(x)-\lambda x)'|_{x=x^{**}}=0$. By the convexity of $v(x)-\lambda x$, this is the point where it obtains the minimum value. Now $$\begin{aligned} CVaR (X^{**}) &=\frac{1}{\lambda}\left(v(x^{**})-\lambda x^{**}\right)\\ &= \frac{1}{\lambda}\left((x^{**}-x_{d})P(A^{**})-\lambda x^{**}\right).\end{aligned}$$ $\endproof$ Cases 3 and 4 are already proved in Theorem \[T: Two Line Optimal in One Constraint Sase\] and Proposition \[P: Three-Line Optimal for Step 2 in Two-Constraint Case\]. In Case 1 where $\operatorname{ess\, sup}\frac{d\tilde{P}}{dP}\le\frac{1}{\lambda}$ and $z=x_{r}$, $X=x_{r}$ is both feasible and optimal by Theorem \[T: Two Line Optimal in One Constraint Sase\]. In Case 2, fix an arbitrary $\epsilon>0$. We will look for a Two-Line solution $X_{\epsilon}=x_{\epsilon}\I_{A_{\epsilon}}+\alpha_{\epsilon}\I_{B_{\epsilon}}$ with the right parameters $a_{\epsilon}, x_{\epsilon}, \alpha_{\epsilon}$ which satisfies both the capital constraint and the return constraint: $$\begin{aligned} &E[X_{\epsilon}]=x_{\epsilon}P(A_{\epsilon})+\alpha_{\epsilon} P(B_{\epsilon})=z, \label{E: z for epsilon two line}\\ &\tilde{E}[X_{\epsilon}]=x_{\epsilon}\tilde{P}(A_{\epsilon})+\alpha_{\epsilon}\tilde{P}(B_{\epsilon})=x_{r}, \label{E: xr for epsilon two line}\end{aligned}$$ where $$A_{\epsilon}=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)>a_{\epsilon}\right\},\quad B_{\epsilon}=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)\le a_{\epsilon}\right\},$$ and produces a CVaR level close to the lower bound: $$CVaR(X_{\epsilon})\le CVaR(x_{r})+\epsilon=-x_{r}+\epsilon.$$ First, we choose $x_{\epsilon}=x_{r}-\epsilon$. To find the remaining two parameters $a_{\epsilon}$ and $\alpha_{\epsilon}$ so that equations (\[E: z for epsilon two line\]) and (\[E: xr for epsilon two line\]) are satisfied, we note $$\begin{aligned} &x_{r}P(A_{\epsilon})+x_{r} P(B_{\epsilon})=x_{r},\\ &x_{r}\tilde{P}(A_{\epsilon})+x_{r}\tilde{P}(B_{\epsilon})=x_{r},\end{aligned}$$ and conclude that it is equivalent to find a pair of $a_{\epsilon}$ and $\alpha_{\epsilon}$ such that the following two equalities are satisfied: $$\begin{aligned} -\epsilon P(A_{\epsilon})+(\alpha_{\epsilon}-x_{r}) P(B_{\epsilon})&=\gamma,\\ -\epsilon \tilde{P}(A_{\epsilon})+(\alpha_{\epsilon}-x_{r})\tilde{P}(B_{\epsilon}) &=0,\end{aligned}$$ where we denote $\gamma=z-x_{r}$. If we can find a solution $a_{\epsilon}$ to the equation $$\label{E: b_epsilon two line} \frac{\tilde{P}(B_{\epsilon})}{P(B_{\epsilon})}=\frac{\epsilon}{\gamma+\epsilon},$$ then $$\alpha_{\epsilon}=x_{r}+\frac{\tilde{P}(A_{\epsilon})}{\tilde{P}(B_{\epsilon})}\epsilon,$$ and we have the solutions for equations (\[E: z for epsilon two line\]) and (\[E: xr for epsilon two line\]).
It is not difficult to prove that the fraction $\frac{\tilde{P}(B)}{P(B)}$ increases continuously from $0$ to $1$ as $a$ increases from $0$ to $\frac{1}{\lambda}$. Therefore, we can find a solution $a_{\epsilon}\in(0, \frac{1}{\lambda})$ where (\[E: b\_epsilon two line\]) is satisfied. By definition (\[D: CVaR\]), $$CVaR_{\lambda}(X_{\epsilon}) =\frac{1}{\lambda}\inf_{x\in\R}\left(E[(x-X_{\epsilon})^{+}]-\lambda x\right) \le \frac{1}{\lambda}\left(E[(x_{\epsilon}-X_{\epsilon})^{+}]-\lambda x_{\epsilon}\right)=-x_{\epsilon}.$$ The difference $$CVaR_{\lambda}(X_{\epsilon})-CVaR(x_{r}) \le -x_{\epsilon}+x_{r}=\epsilon.$$ Under Assumption \[A: Complete Market and Continuous RND\], the solution in Case 2 is almost surely unique; the result is proved. $\endproof$ Cases 1 and 3 are obviously true in light of Theorem \[T: Two Line Optimal in One Constraint Case Infinite Upper Bound\]. The proof for Case 2 is similar to that in the Proof of Theorem \[T: Solution to Step 2\], so we will not repeat it here. Since $E[X^{*}]=z^{*}<z$ in Case 4, $CVaR(X^{*})$ is only a lower bound in this case. We first show that it is the true infimum obtained in Case 4. Fix an arbitrary $\epsilon>0$. We will look for a Three-Line solution $X_{\epsilon}=x_{d}\I_{A_{\epsilon}}+x_{\epsilon}\I_{B_{\epsilon}}+\alpha_{\epsilon}\I_{D_{\epsilon}}$ with the right parameters $a_{\epsilon}, b_{\epsilon}, x_{\epsilon}, \alpha_{\epsilon}$ which satisfies the general constraints: $$\begin{aligned} &E[X_{\epsilon}]=x_{d}P(A_{\epsilon})+x_{\epsilon}P(B_{\epsilon})+\alpha_{\epsilon} P(D_{\epsilon})=z, \label{E: z for epsilon}\\ &\tilde{E}[X_{\epsilon}]=x_{d}\tilde{P}(A_{\epsilon})+x_{\epsilon}\tilde{P}(B_{\epsilon})+\alpha_{\epsilon}\tilde{P}(D_{\epsilon})=x_{r}, \label{E: xr for epsilon}\end{aligned}$$ where $$A_{\epsilon}=\left\{\omega\in\Omega\,:\,\tfrac{d\tilde{P}}{dP}(\omega)>a_{\epsilon}\right\},\quad B_{\epsilon}=\left\{\omega\in\Omega\,:\,b_{\epsilon}\le\tfrac{d\tilde{P}}{dP}(\omega)\le a_{\epsilon}\right\},\quad D_{\epsilon}=\left\{\omega\in\Omega\,:\, \tfrac{d\tilde{P}}{dP}(\omega)<b_{\epsilon}\right\},$$ and produces a CVaR level close to the lower bound: $$CVaR(X_{\epsilon})\le CVaR(X^{*})+\epsilon.$$ First, we choose $a_{\epsilon}=a^{*}$, $A_{\epsilon}=A^{*}$, $x_{\epsilon}=x^{*}-\delta$, where we define $\delta=\frac{\lambda}{\lambda-P(A^{*})}\epsilon$. To find the remaining two parameters $b_{\epsilon}$ and $\alpha_{\epsilon}$ so that equations (\[E: z for epsilon\]) and (\[E: xr for epsilon\]) are satisfied, we note $$\begin{aligned} &E[X^{*}] =x_{d}P(A^{*})+x^{*}P(B^{*})=z^{*},\\ &\tilde{E}[X^{*}]=x_{d}\tilde{P}(A^{*})+x^{*}\tilde{P}(B^{*})=x_{r},\end{aligned}$$ and conclude that it is equivalent to find a pair of $b_{\epsilon}$ and $\alpha_{\epsilon}$ such that the following two equalities are satisfied: $$\begin{aligned} -\delta (P(B^{*})-P(D_{\epsilon}))+(\alpha_{\epsilon}-x^{*}) P(D_{\epsilon})&=\gamma,\\ -\delta(\tilde{P}(B^{*})-\tilde{P}(D_{\epsilon}))+(\alpha_{\epsilon}-x^{*})\tilde{P}(D_{\epsilon}) &=0,\end{aligned}$$ where we denote $\gamma=z-z^{*}$. If we can find a solution $b_{\epsilon}$ to the equation $$\label{E: b_epsilon} \frac{\tilde{P}(D_{\epsilon})}{P(D_{\epsilon})}=\frac{\tilde{P}(B^{*})}{\frac{\gamma}{\delta}+P(B^{*})},$$ then $$\alpha_{\epsilon}=x^{*}+\left(\frac{\tilde{P}(B^{*})}{\tilde{P}(D_{\epsilon})}-1\right)\delta,$$ and we have the solutions for equations (\[E: z for epsilon\]) and (\[E: xr for epsilon\]).
It is not difficult to prove that the fraction $\frac{\tilde{P}(D)}{P(D)}$ increases continuously from $0$ to $\frac{\tilde{P}(B^{*})}{P(B^{*})}$ as $b$ increases from $0$ to $a^{*}$. Therefore, we can find a solution $b_{\epsilon}\in(0, a^{*})$ where (\[E: b\_epsilon\]) is satisfied. By definition (\[D: CVaR\]), $$\begin{aligned} CVaR_{\lambda}(X_{\epsilon}) &=\frac{1}{\lambda}\inf_{x\in\R}\left(E[(x-X_{\epsilon})^{+}]-\lambda x\right)\\ &\le \frac{1}{\lambda}\left(E[(x_{\epsilon}-X_{\epsilon})^{+}]-\lambda x_{\epsilon}\right)\\ &=\frac{1}{\lambda}(x_{\epsilon}-x_{d})P(A_{\epsilon})-x_{\epsilon}.\end{aligned}$$ The difference $$\begin{aligned} CVaR_{\lambda}(X_{\epsilon})-CVaR(X^{*}) &\le \frac{1}{\lambda}(x_{\epsilon}-x_{d})P(A_{\epsilon})-x_{\epsilon}-\frac{1}{\lambda}(x^{*}-x_{d})P(A^{*})+x^{*}\\ &=\frac{1}{\lambda}(x^{*}-x_{d})(P(A_{\epsilon})-P(A^{*}))+\left(1-\frac{P(A_{\epsilon})}{\lambda}\right)(x^{*}-x_{\epsilon})=\epsilon.\end{aligned}$$ Under Assumption \[A: Complete Market and Continuous RND\], the solution in Case 4 is almost surely unique; the result is proved. $\endproof$ [19]{} [^1]: Krokhmal et al. [@KrokhmalPalmquistUryasev] showed conditions under which the problem of maximizing expected return with a CVaR constraint is equivalent to the problem of minimizing CVaR with an expected return constraint. In this paper, we use the term Mean-CVaR problem for both cases. [^2]: It is straightforward to generalize the calculation to a multi-dimensional Black-Scholes model. Since we provide in this paper an analytical solution to the static CVaR minimization problem, calculation in other complete market models can be carried out as long as the dynamic hedge can be expressed in a simple manner. [^3]: Threshold ‘$b$’ and consequently sets ‘$B$’ and ‘$D$’ are all dependent on ‘$x$’ through the capital constraint; therefore $z(x)$ is not a linear function of $x$. [^4]: Equivalently, $(a^{*}, x^{*})$ can be viewed as the solution to the capital constraint and the first order Euler condition in equations (\[E: capital constraint two line\]) and (\[E: euler condition two line\]). [^5]: The outcome in its form resembles techniques employed in Föllmer and Leukert [@FollmerLeukert] and Li and Xu [@LiXuA] where the point masses on the thresholds for the Radon-Nikodým derivative in (\[D: A, B, D definitions\]) have to be dealt with carefully.
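The following is an informal numerical companion to the constructions above and is not part of the formal development: a Monte Carlo sketch that builds a Two-Line payoff $X=x\I_{\{\rho\ge b\}}+x_{u}\I_{\{\rho< b\}}$ with $\rho=\frac{d\tilde{P}}{dP}$, checks the capital constraint $\tilde{E}[X]=x_{r}$ numerically, and evaluates $CVaR_{\lambda}(X)=\frac{1}{\lambda}\inf_{v\in\R}\left(E[(v-X)^{+}]-\lambda v\right)$ as in definition (\[D: CVaR\]). The lognormal specification of $\rho$, the parameter values and the helper names below are hypothetical and serve only as an illustration.

```python
import numpy as np

# Hypothetical, illustration-only parameters: lambda (confidence level),
# the risk-free-discounted capital x_r, and an upper bound x_u.
rng = np.random.default_rng(0)
n = 200_000
lam, x_r, x_u = 0.05, 1.0, 1.5

# A lognormal stand-in for rho = dP~/dP with E[rho] = 1, mimicking a
# one-period Black-Scholes-type change of measure.
sigma = 0.4
rho = np.exp(-0.5 * sigma**2 + sigma * rng.standard_normal(n))

def two_line(b, x):
    # Two-Line payoff: x on {rho >= b}, x_u on {rho < b}.
    return np.where(rho >= b, x, x_u)

def budget(b, x):
    # Monte Carlo estimate of E~[X] = E[rho * X].
    return np.mean(rho * two_line(b, x))

# Calibrate the threshold b so that the capital constraint E~[X] ~ x_r holds
# for a chosen lower level x < x_r.
x = 0.9
bs = np.linspace(0.01, 5.0, 400)
b = bs[np.argmin(np.abs(np.array([budget(v, x) for v in bs]) - x_r))]
X = two_line(b, x)

def cvar(X, lam):
    # CVaR_lambda(X) = (1/lambda) * inf_v ( E[(v - X)^+] - lambda * v );
    # the infimum is attained at a lambda-quantile of X, so a grid between
    # min(X) and max(X) is enough for this two-valued payoff.
    vs = np.linspace(X.min(), X.max(), 501)
    return min((np.mean(np.maximum(v - X, 0.0)) - lam * v) / lam for v in vs)

print("E~[X] ~", budget(b, x))   # should be close to x_r
print("E[X]  ~", X.mean())       # expected return of the configuration
print("CVaR  ~", cvar(X, lam))   # compare with -x_r for the pure bond position
```

In such a run one expects $\tilde{E}[X]\approx x_{r}$, an expected return above $x_{r}$, and a $CVaR$ level above $-x_{r}$, in line with the trade-off between the return requirement and the attainable risk discussed earlier.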
--- abstract: 'There exist many characterizations of Noetherian Cohen-Macaulay rings in the literature. These characterizations do not remain equivalent if we drop the Noetherian assumption. The aim of this paper is to provide some comparisons between some of these characterizations in the non Noetherian case. Toward solving a conjecture posed by Glaz, we give a generalization of the Hochster-Eagon result on Cohen-Macaulayness of invariant rings, in the context of non Noetherian rings.' address: - 'M. Asgharzadeh, Department of Mathematics, Shahid Beheshti University, Tehran, Iran-and-School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran.' - 'M. Tousi, Department of Mathematics, Shahid Beheshti University, Tehran, Iran-and-School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran.' author: - Mohsen Asgharzadeh and Massoud Tousi title: 'On the notion of Cohen-Macaulayness for non Noetherian rings' --- Introduction ============ Throughout this paper all rings are commutative, associative, with identity, and all modules are unital. The theory of Cohen-Macaulay rings is a keystone in commutative algebra. However, the study of such rings has mostly been restricted to the class of Noetherian rings. On the other hand, certain families of non Noetherian rings and modules have achieved a great deal of significance in commutative algebra. For example, a surprising result of Hochster indicates that the non-vanishing of a certain $\check{C}ech$ cohomology module of the ring of absolute integral closure of a Noetherian domain implies the Direct Summand Conjecture, see [@Ho2 Theorem 6.1]. While Noetherian Cohen-Macaulay modules are studied in several research papers, not so much is known about them in the non Noetherian case. To the best of our knowledge, until 1992, no attempt had been made to extend the concept of Cohen-Macaulayness to non Noetherian rings. At that time, Glaz [@G3] considered the notion of Cohen-Macaulayness for non Noetherian rings and conjectured that invariant subrings of certain types of rings would be Cohen-Macaulay. Two years later, she [@G4 Page 219] defined an $R$-module $M$ to be Cohen-Macaulay (in the sense of Glaz) if for each prime ideal ${\frak{p}}$ of $R$, ${\operatorname{ht}}_{M}({\frak{p}})={\operatorname{p.grade}}_{R_{{\frak{p}}}}({\frak{p}}R_{{\frak{p}}},M_{{\frak{p}}})$, where ${\operatorname{p.grade}}_{R_{{\frak{p}}}}({\frak{p}}R_{{\frak{p}}},M_{{\frak{p}}})$ is the polynomial grade of ${\frak{p}}R_{{\frak{p}}}$ on the $R_{{\frak{p}}}$-module $M_{{\frak{p}}}$. Unfortunately, coherent regular rings are not Cohen-Macaulay with this definition. Then, in the same paper, Glaz asked how one can define a non Noetherian notion of Cohen-Macaulayness such that the definition coincides with the original one in the Noetherian case, and that coherent regular rings are Cohen-Macaulay, see [@G4 Page 220]. In the following, we collect Glaz’s desired properties of the notion of Cohen-Macaulayness for non Noetherian rings. \[con\] Can one find a definition of the notion of non Noetherian Cohen-Macaulay rings such that it satisfies the following three conditions. 1. The definition coincides with the original definition in the Noetherian case. 2. Coherent regular rings are Cohen-Macaulay. 3.
For a coherent regular ring $R$ and a group $G$ of automorphisms of $R$, assume that there exists a module retraction map $\rho:R {\longrightarrow}R^G$ and that $R$ is a finitely generated $R^G$-module. Then $R^G$ is Cohen-Macaulay. Then, Hamilton [@H1], [@H2], [@H3] introduced the concept of weak Bourbaki (height) unmixed rings, as a first step towards non Noetherian Cohen-Macaulay rings. Hamilton [@H2] added the following two more properties that must be satisfied by non Noetherian Cohen-Macaulay rings. 1. $R$ is Cohen-Macaulay if and only if $R[X]$ is Cohen-Macaulay. 2. $R$ is Cohen-Macaulay if and only if $R_{{\frak{p}}}$ is Cohen-Macaulay for all prime ideals ${\frak{p}}$ of $R$. More recently, Hamilton and Marley [@HM] introduced a definition of non Noetherian Cohen-Macaulay rings. If a ring $R$ satisfies their definition, then we say that $R$ is Cohen-Macaulay in the sense of Hamilton-Marley. They used the theory of $\check{C}ech$ cohomology modules to show that Cohen-Macaulayness in the sense of Hamilton-Marley satisfies the assertions (i) and (ii) of Conjecture \[con\]. Under the assumptions of Conjecture \[con\] (iii), with the additional hypotheses that $\dim R\leq2$ and that $G$ is finite with order a unit in $R$, Hamilton and Marley proved assertion (iii) of Conjecture \[con\]. Also, they proved the ‘if’ part of (H1) and (H2) with their definition. Perhaps it is worth pointing out that there are many characterizations of Noetherian Cohen-Macaulay rings and modules. In the non Noetherian case, these are not necessarily equivalent. All of these characterizations have been chosen as candidates for the definition of non Noetherian Cohen-Macaulay rings, see Definition \[def1\]. The aim of the present paper is to provide some comparisons between these definitions in the not necessarily Noetherian case. Also, toward solving Conjecture \[con\], we will present a definition of the notion of Cohen-Macaulayness in the not necessarily Noetherian case. Let $R$ be a ring and ${\frak{a}}$ an ideal of $R$. The organization of this paper is as follows. In Section 2, we deal with the notion of grade of ideals on modules. There are many definitions for the notion of grade of an ideal of a non Noetherian ring. To make things easier, we recall these definitions and, for the convenience of the reader, collect some of their properties. For our purpose, it seems to be better to use the Koszul grade. This notion of grade is based on the work [@Ho1]. We denote the Koszul grade of an ideal ${\frak{a}}$ on an $R$-module $M$ by ${\operatorname{K.grade}}_R({\frak{a}},M)$. In Section 3, we explore the interrelations between different definitions of non Noetherian Cohen-Macaulay rings. These definitions include the Glaz and Hamilton-Marley definitions and the notion of weak Bourbaki unmixed rings. Assume that $\mathcal{A}$ is a non empty subclass of the class of all ideals of a ring $R$. We give some connections between the preceding definitions and modules that are Cohen-Macaulay in the sense of $\mathcal{A}$ (note that an $R$-module $M$ is said to be Cohen-Macaulay in the sense of $\mathcal{A}$, if the equality ${\operatorname{ht}}_{M}({\frak{a}})={\operatorname{K.grade}}_R({\frak{a}},M)$ holds for all ideals ${\frak{a}}$ in $\mathcal{A}$). These classes of ideals include the class of all finitely generated ideals, prime ideals, maximal ideals and the class of all ideals.
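To fix ideas, we record a standard Noetherian illustration of this notion (included only for orientation and not used in the sequel): if $(R,{\frak{m}})$ is a Noetherian local ring and $M$ a nonzero finitely generated $R$-module, then ${\operatorname{K.grade}}_R({\frak{m}},M)$ agrees with the classical depth of $M$ while ${\operatorname{ht}}_{M}({\frak{m}})=\dim M$, so Cohen-Macaulayness in the sense of the class of maximal ideals reduces to the familiar condition $${\operatorname{ht}}_{M}({\frak{m}})={\operatorname{K.grade}}_R({\frak{m}},M)\quad\Longleftrightarrow\quad\dim M=\operatorname{depth} M.$$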
Our work in this section is motivated by observing that the inequality ${\operatorname{K.grade}}_R({\frak{a}},M)\leq{\operatorname{ht}}_{M}({\frak{a}})$ holds for all ideals ${\frak{a}}$ of $R$. In Section 4, we construct three methods for introducing examples of non Noetherian rings which are Cohen-Macaulay in the sense of any definition of Cohen-Macaulayness that appears in the present paper. Our first example provides the Cohen-Macaulayness of the polynomial ring $R[X_1,X_2,\cdots]$, where $R$ is Noetherian and Cohen-Macaulay. Our second example implies the Cohen-Macaulayness of the absolute integral closure of Noetherian complete local domains of prime characteristic. Our third example establishes the Cohen-Macaulayness of the perfect closure of Noetherian regular local domains of prime characteristic. In Section 5, we give another definition of Cohen-Macaulayness. We call it Cohen-Macaulayness in the sense of generalized Hamilton-Marley, see Definition \[def2\]. Concerning Conjecture \[con\], we will present the following theorem. \[main\] The following assertions hold. 1. A Noetherian ring is Cohen-Macaulay with the original definition in the Noetherian case if and only if it is Cohen-Macaulay in the sense of generalized Hamilton-Marley. 2. Coherent regular rings are Cohen-Macaulay in the sense of generalized Hamilton-Marley. 3. Let $R$ be a Cohen-Macaulay ring in the sense of generalized Hamilton-Marley and $G$ a finite group of automorphisms of $R$ such that the order of $G$ is a unit in $R$. Assume that $R$ is finitely generated as an $R^G$-module. Then $R^G$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley. 4. Let $R$ be a Noetherian Cohen-Macaulay ring. Then the polynomial ring $R[X_1,X_2,\cdots]$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley. 5. If $R_{{\frak{p}}}$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley for all prime ideals ${\frak{p}}$ of $R$, then $R$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley. After proving Theorem 1.2, we continue our study of the behavior of rings of invariants of different types of non Noetherian Cohen-Macaulay rings. In view of Definition 3.1, our list of the different definitions of Cohen-Macaulayness includes Cohen-Macaulayness in the sense of (finitely generated) ideals and in the sense of weak Bourbaki (height) unmixedness. Different types of the notion of grade ====================================== In this section ${\frak{a}}$ is an ideal of a commutative ring $R$ and $M$ an $R$-module. We first give a general discussion on the notion of grade. There are many definitions for the notion of grade of ${\frak{a}}$ on $M$. Grade over not necessarily Noetherian rings was first defined by Barger [@B] and Hochster [@Ho1]. After them, Alfonsi [@A] combined their notions of grade into a more general notion of grade for non Noetherian rings and modules. In this section, for the convenience of the reader, we collect some of their properties. To make things easier, we first recall them. Let ${\frak{a}}$ be an ideal of a ring $R$ and $M$ an $R$-module. Let $\Sigma$ be the family of all finitely generated subideals ${\frak{b}}$ of ${\frak{a}}$. Here, $\inf$ and $\sup$ are formed in $\mathbb{Z} \cup \{\pm\infty\}$ with the convention that $\inf \emptyset=+ \infty$ and $\sup \emptyset=- \infty$.
\(i) In order to give the definition of Koszul grade when ${\frak{a}}$ is finitely generated by a generating set $\underline{x}:=x_{1},\cdots, x_{r}$, we first denote the Koszul complex related to $\underline{x}$ by $\mathbb{K}_{\bullet}(\underline{x})$. The Koszul grade of ${\frak{a}}$ on $M$ is defined by $${\operatorname{K.grade}}_R({\frak{a}},M):=\inf\{i \in\mathbb{N}\cup\{0\} | H^{i}({\operatorname{Hom}}_R( \mathbb{K}_{\bullet}(\underline{x}), M)) \neq0\}.$$ Note that by [@BH Corollary 1.6.22] and [@BH Proposition 1.6.10 (d)], this does not depend on the choice of generating sets of ${\frak{a}}$. For an ideal ${\frak{a}}$ (not necessarily finitely generated), the Koszul grade of ${\frak{a}}$ on $M$ can be defined by $${\operatorname{K.grade}}_R({\frak{a}},M):=\sup\{{\operatorname{K.grade}}_R({\frak{b}},M):{\frak{b}}\in\Sigma\}.$$ By using [@BH Proposition 9.1.2 (f)], this definition coincides with the original definition for finitely generated ideals. \(ii) A finite sequence $\underline{x}:=x_{1},\cdots,x_{r}$ of elements of $R$ is called a weak regular sequence on $M$ if $x_i$ is a nonzero-divisor on $M/(x_1,\cdots, x_{i-1})M$ for $i=1,\cdots,r$. If in addition $M\neq (\underline{x})M$, $\underline{x}$ is called a regular sequence on $M$. The classical grade of ${\frak{a}}$ on $M$, denoted by ${\operatorname{c.grade}}_R({\frak{a}},M)$, is defined to be the supremum of the lengths of all weak regular sequences on $M$ contained in ${\frak{a}}$. \(iii) (see [@N Page 149]) The polynomial grade of ${\frak{a}}$ on $M$ is defined by $${\operatorname{p.grade}}_R({\frak{a}},M):=\underset{m\rightarrow\infty}{\lim} {\operatorname{c.grade}}_{R[t_1, \cdots,t_m]}({\frak{a}}R[t_1, \cdots,t_m],R[t_1, \cdots,t_m]\otimes_R M).$$ \(iv) In the case that ${\frak{a}}$ is finitely generated by a generating set $\underline{x}:=x_{1}, \cdots, x_{r}$, the $\check{C}ech$ grade of ${\frak{a}}$ on $M$ is defined by ${\operatorname{\check{C}.grade}}_{R}({\frak{a}},M):=\inf\{i\in \mathbb{N}\cup\{0\}|H_{\underline{x}}^{i}(M)\neq0\}$, where $H_{\underline{x}}^{i}(M)$ denotes the $i$-th cohomology of the $\check{C}ech$ complex of $M$ related to $\underline{x}$. [@HM Proposition 2.7] implies that $\inf\{i\in \mathbb{N}\cup\{0\}|H_{\underline{x}}^{i}(M)\neq0\}={\operatorname{K.grade}}_{R}({\frak{a}},M)$. So ${\operatorname{\check{C}.grade}}_{R}({\frak{a}},M)$ does not depend on the choice of the generating sets of ${\frak{a}}$. For a not necessarily finitely generated ideal ${\frak{a}}$, the $\check{C}ech$ grade of ${\frak{a}}$ on $M$ is defined by $${\operatorname{\check{C}.grade}}_{R}({\frak{a}},M):=\sup\{{\operatorname{\check{C}.grade}}_R({\frak{b}},M):{\frak{b}}\in\Sigma\}.$$ By the same argument as (i), this is well-defined. \(v) (see [@B]) The ${\operatorname{Ext}}$ grade of ${\frak{a}}$ on $M$ is defined by $${\operatorname{E.grade}}_{R}({\frak{a}},M):=\inf\{i\in \mathbb{N}\cup\{0\}|{\operatorname{Ext}}^{i}_{R}(R/{\frak{a}}, M)\neq0\}.$$ \(vi) The local cohomology grade of ${\frak{a}}$ on $M$ is defined by $${\operatorname{H.grade}}_{R}({\frak{a}},M):=\inf\{i\in \mathbb{N}\cup\{0\}|H_{{\frak{a}}}^{i}(M):=\underset{n} {\varinjlim}{\operatorname{Ext}}^{i}_{R}(R/{\frak{a}}^{n}, M)\neq0\}.$$ \(vii) Let $M$ be a finitely presented $R$-module and $N$ an $R$-module.
Following the definition in [@A], ${\operatorname{grade}}_R(M,N)\geq n$ if and only if for every finite complex $$\textbf{P}_\bullet: P_n {\longrightarrow}P_{n-1} {\longrightarrow}\cdots {\longrightarrow}P_0 {\longrightarrow}M {\longrightarrow}0$$ of finitely generated projective $R$-modules $P_i$, there exists a finite complex $$\textbf{Q}_\bullet: Q_n {\longrightarrow}Q_{n-1} {\longrightarrow}\cdots {\longrightarrow}Q_0 {\longrightarrow}M {\longrightarrow}0$$ of finitely generated projective modules $Q_j$, and a chain map $\textbf{P}_\bullet {\longrightarrow}\textbf{Q}_\bullet$ over $M$ such that the induced maps $H^i({\operatorname{Hom}}_R (\textbf{Q}_\bullet,N)) {\longrightarrow}H^i({\operatorname{Hom}}_R (\textbf{P}_\bullet,N))$ are zero maps for $0 \leq i < n$. ${\operatorname{grade}}_R(M,N)$ is equal to the largest integer $n$ for which the above condition is satisfied. If no such largest integer $n$ exists, we put ${\operatorname{grade}}_R(M,N)=+\infty$. We now recall the definition of ${\operatorname{grade}}_R(L,.)$ for a general $R$-module $L$. By definition, ${\operatorname{grade}}_R(L,N)\geq n$ if for every $\ell\in L$, $(0:_R\ell)$ contains a finitely generated ideal $I_{\ell}$ satisfying ${\operatorname{grade}}_R(R/I_{\ell},N) \geq n$. [@G1 Theorem 7.1.10] implies that, if $L$ is finitely presented, then the two definitions of ${\operatorname{grade}}_R(L,N)$ coincide. We shall write ${\operatorname{A.grade}}_R({\frak{a}},N)$ instead of ${\operatorname{grade}}_R(R/ {\frak{a}},N)$. In the next two propositions, we recall some properties of, and relations between, the different types of the notion of grade that appeared in Definition 2.1. In what follows we will make use of them several times. \[pro1\]Let ${\frak{a}}$ be an ideal of a ring $R$ and $M$ an $R$-module. Then the following hold. 1. Let $\underline{y}:=y_1,\cdots,y_t$ be a regular sequence of elements of ${\frak{a}}$ on $M$. Then $${\operatorname{p.grade}}_{R}({\frak{a}},M)=t+{\operatorname{p.grade}}_{R}({\frak{a}},\frac{M} {\underline{y}M}).$$ 2. Let $f :R\longrightarrow S$ be a flat ring homomorphism. Then $${\operatorname{K.grade}}_{R}({\frak{a}},M)\leq {\operatorname{K.grade}}_{S}({\frak{a}}S,M\otimes_R S).$$ 3. Let ${\frak{a}}\subseteq{\frak{b}}$ be a pair of ideals of $R$. Then ${\operatorname{K.grade}}_{R}({\frak{a}},M)\leq {\operatorname{K.grade}}_{R}({\frak{b}},M)$. 4. (Change of rings) Let $f :R\longrightarrow S$ be a ring homomorphism and $N$ an $S$-module. Then ${\operatorname{K.grade}}_{R}({\frak{a}},N)={\operatorname{K.grade}}_{S}({\frak{a}}S,N)$. 5. Let $f :R\longrightarrow S$ be a faithfully flat ring homomorphism. Then $${\operatorname{K.grade}}_{R}({\frak{a}},M)={\operatorname{K.grade}}_{S}({\frak{a}}S,M\otimes_R S).$$ 6. ${\operatorname{p.grade}}_{R}({\frak{a}},M)={\operatorname{p.grade}}_{R}({\frak{p}},M)$ for some prime ideal ${\frak{p}}$ containing ${\frak{a}}$. 7. If ${\frak{a}}$ is finitely generated, then $${\operatorname{A.grade}}_R({\frak{a}},M)=\inf\{{\operatorname{A.grade}}_{R_{{\frak{p}}}}({\frak{p}}R_{{\frak{p}}},M_{{\frak{p}}})|{\frak{p}}\in {\operatorname{V}}({\frak{a}})\cap{\operatorname{Supp}}_R M\}.$$ [**Proof.**]{} (i) This is Theorem 15 of chapter 5 in [@N]. \(ii) First assume that ${\frak{a}}$ is finitely generated by a generating set $\underline{x}:=x_1,\cdots,x_n$. The symmetry of Koszul cohomology and Koszul homology says that $H_{i}(\mathbb{K}_{\bullet}(\underline{x})\otimes_RM)\cong H^{n-i}({\operatorname{Hom}}_R(\mathbb{K}_{\bullet}(\underline{x}),M))$, see [@BH Proposition 1.6.10 (d)].
Thus the claim in this case follows from [@BH Proposition 9.1.2 (c)]. The desired result for not necessarily finitely generated ideals follows from the first case. \(iii) In the case that ${\frak{a}}\subseteq{\frak{b}}$ is a pair of finitely generated ideals of $R$, the claim is in [@BH Proposition 9.1.2 (f)]. The claim in the general case follows from this. \(iv) First assume that ${\frak{a}}$ is finitely generated by a generating set $\underline{x}$. The claim follows from the isomorphism ${\operatorname{Hom}}_R( \mathbb{K}_{\bullet}(\underline{x}), N)\cong{\operatorname{Hom}}_S( \mathbb{K}_{\bullet}(\underline{x})\otimes_RS, N)$. Now, assume that ${\frak{a}}$ is a general ideal of $R$ (not necessarily finitely generated). Then, by the former case, we have ${\operatorname{K.grade}}_R({\frak{a}},N)\leq {\operatorname{K.grade}}_S({\frak{a}}S,N)$. Now, let $\underline{y}$ be a finite sequence of elements of ${\frak{a}}S$. Then there exists a finite sequence $\underline{x}$ of elements of ${\frak{a}}$ such that $\underline{y}S\subseteq \underline{x}S$. Again, by the former case, $${\operatorname{K.grade}}_S(\underline{y}S,N)\leq {\operatorname{K.grade}}_S(\underline{x}S,N)={\operatorname{K.grade}}_R (\underline{x}R,N)\leq {\operatorname{K.grade}}_R({\frak{a}},N).$$ This completes the proof. \(v) This is in [@G1 Lemma 7.1.7 (2)]. \(vi) This is Theorem 16 of chapter 5 in [@N]. \(vii) This is in [@G1 Theorem 7.1.11]. $\Box$ \[pro2\] Let ${\frak{a}}$ be an ideal of a ring $R$ and $M$ an $R$-module. Then the following hold. 1. ${\operatorname{c.grade}}_R({\frak{a}},M)\leq {\operatorname{p.grade}}_{R}({\frak{a}},M)={\operatorname{K.grade}}_{R} ({\frak{a}},M)={\operatorname{\check{C}.grade}}_{R}({\frak{a}},M)={\operatorname{A.grade}}_R({\frak{a}},M)$. 2. ${\operatorname{H.grade}}_{R}({\frak{a}},M)={\operatorname{E.grade}}_{R}({\frak{a}},M)$. 3. If ${\frak{a}}$ is finitely generated, then ${\operatorname{E.grade}}_{R}({\frak{a}},M)={\operatorname{K.grade}}_{R}({\frak{a}},M)$. [**Proof.**]{} (i) One can easily deduce from Proposition \[pro1\] (i) that $${\operatorname{c.grade}}_R({\frak{a}},M)\leq {\operatorname{p.grade}}_{R}({\frak{a}},M).$$ Assume that $\Sigma$ runs through all finitely generated subideals ${\frak{b}}$ of ${\frak{a}}$. In light of [@N Theorem 5.11] we see that $${\operatorname{p.grade}}_{R}({\frak{a}},M)=\sup\{{\operatorname{p.grade}}_R({\frak{b}},M):{\frak{b}}\in\Sigma\}.$$ In view of [@HM Proposition 2.7], one has $${\operatorname{p.grade}}_{R}({\frak{b}},M)={\operatorname{K.grade}}_{R}({\frak{b}},M)= {\operatorname{\check{C}.grade}}_{R}({\frak{b}},M)$$ for all finitely generated ideals ${\frak{b}}$ of $R$. This yields such equalities for all ideals ${\frak{a}}$ of $R$. On the other hand, the equivalence $(1)\Leftrightarrow (4)$ of [@G1 Theorem 7.1.8] says that the equality ${\operatorname{A.grade}}_{R}({\frak{b}},M)={\operatorname{K.grade}}_{R}({\frak{b}},M)$ holds for all finitely generated ideals ${\frak{b}}$ of $R$. By definition, such an equality holds for an arbitrary ideal if one can show that $${\operatorname{A.grade}}_{R}({\frak{a}},M)=\sup\{{\operatorname{A.grade}}_R({\frak{b}},M):{\frak{b}}\in\Sigma\}.$$ To see this, first assume that ${\operatorname{A.grade}}_{R}({\frak{a}},M)\geq n$. Then one can find a finitely generated subideal $J$ of $({\frak{a}}:_R1)={\frak{a}}$ satisfying ${\operatorname{A.grade}}_R(J,M) \geq n$.
So $$n\leq\sup\{{\operatorname{A.grade}}_R({\frak{b}},M):{\frak{b}}\in\Sigma\}.$$ Conversely, let $n$ be an integer such that $\sup\{{\operatorname{A.grade}}_R({\frak{b}},M):{\frak{b}}\in\Sigma\}\geq n$. Then there is a finitely generated subideal ${\frak{b}}_0$ of ${\frak{a}}$ such that ${\operatorname{A.grade}}_R({\frak{b}}_0,M)\geq n$. So, for any $r$ in $R$ we have ${\frak{b}}_0\subseteq ({\frak{a}}:_Rr)$ and ${\operatorname{A.grade}}_R({\frak{b}}_0,M)\geq n$. Hence ${\operatorname{A.grade}}_R({\frak{a}},M)\geq n$. \(ii) This follows from [@Str Proposition 5.3.15]. \(iii) This is in [@Str Proposition 6.1.6]. $\Box$ The assumptions and results of Proposition \[pro2\] are sharp. To see an example consider the following. \(i) In Proposition \[pro2\] (iii) the finitely generated assumption on ${\frak{a}}$ is really needed. To see this, let $R:=\mathbb{F}[x_1,\cdots,x_n,\cdots]/(x_1^1,\cdots,x_n^n,\cdots)$, where $\mathbb{F}$ is a field. Set ${\frak{a}}:=(x_1,\cdots,x_n,\cdots)$. Then by [@B Page 367], one has $${\operatorname{K.grade}}_R({\frak{a}},R)=0\neq {\operatorname{E.grade}}_R({\frak{a}},R).$$ \(ii) Adopt the notation of (i) and assume that $\Sigma$ runs over all finitely generated subideals ${\frak{b}}$ of ${\frak{a}}$. By Proposition \[pro2\] (i), one has $${\operatorname{E.grade}}_R({\frak{b}},R)={\operatorname{H.grade}}_R({\frak{b}},R)={\operatorname{K.grade}}_R({\frak{b}},R)=0.$$ Therefore $${\operatorname{E.grade}}_{R}({\frak{a}},M)\neq\sup\{{\operatorname{E.grade}}_R({\frak{b}},M):{\frak{b}}\in\Sigma\},$$ and $${\operatorname{H.grade}}_{R}({\frak{a}},M)\neq\sup\{{\operatorname{H.grade}}_R({\frak{b}},M):{\frak{b}}\in\Sigma\}.$$ \(iii) Let $R:=\mathbb{F}[[X,Y]]$, where $\mathbb{F}$ is a field and set $M:=\bigoplus_{0\neq r\in(X,Y)} R/rR$. By inspection of [@Str Page 91], we find that ${\operatorname{E.grade}}_R({\frak{m}},M)=1$ and ${\operatorname{c.grade}}_R({\frak{m}},M)=0$. This shows that the inequality of Proposition \[pro2\] (i) need not be an equality in general. However, if $M$ is a finitely generated module over a Noetherian ring $R$, then [@BH Theorem 1.2.5] shows that ${\operatorname{c.grade}}_R({\frak{a}},M)= {\operatorname{E.grade}}_{R}({\frak{a}},M)$ for all ideals ${\frak{a}}$ of $R$ such that $M\neq {\frak{a}}M$. As an easy application of Proposition \[pro2\] (ii), we give an elementary proof of a result of Foxby. He proved the following result as an immediate application of the New Intersection Theorem, and it plays an important role in [@Fo]. (see [@Fo Corollary 1.5]) Let $(A,{\frak{m}})$ be a Noetherian local ring and $C$ an $A$-module satisfying $C\neq{\frak{m}}C$. Then ${\operatorname{E.grade}}_A({\frak{m}},C)\leq(\dim C\leq)\dim A$. [**Proof.**]{} Note that ${\operatorname{K.grade}}_A({\frak{m}},C)<\infty$, since $C\neq{\frak{m}}C$. By Grothendieck’s Vanishing Theorem, $H^i_{{\frak{m}}}(C)=0$ for all $i>\dim C$. Now, the claim follows by Proposition \[pro2\] (ii) and (i). $\Box$ Relations between different definitions of Cohen-Macaulay rings =============================================================== There are many characterizations of Noetherian Cohen-Macaulay modules in the literature. If we apply these characterizations to non Noetherian modules, then they are not necessarily equivalent. The aim of this section is to provide some relations between these definitions, when we apply them to not necessarily Noetherian rings and modules.\ **3.A.
The basic definitions.** In this subsection we recall some candidates for the notion of Cohen-Macaulayness in the context of non Noetherian rings and modules. In what follows we need the notion of weakly associated prime ideals of an $R$-module $M$. Recall that a prime ideal ${\frak{p}}$ is weakly associated to $M$ if ${\frak{p}}$ is minimal over $(0 :_R m)$ for some $m\in M$. We denote the set of weakly associated primes of $M$ by ${\operatorname{wAss}}_RM$. Also, in order to give the Hamilton and Marley definition of Cohen-Macaulayness, we need to recall the following definitions (a) and (b). 1. ([@Sch Definition 2.3]) Let $\underline{x} = x_{1}, \cdots, x_{r}$ be a system of elements of $R$. For $m \geq n$ there exists a chain map $\varphi^{m} _{n} (\underline{x}) :\mathbb{K}_{\bullet}(\underline{x}^{m})\longrightarrow \mathbb{K}_{\bullet}(\underline{x}^{n})$, which is induced by multiplication by $(\prod x_{i})^{m-n}$. $\underline{x}$ is called weak proregular if for each $n>0$ there exists an $m \geq n$ such that the maps $H_{i}(\varphi^{m} _{n} (\underline{x})) :H_{i} (\mathbb{K}_{\bullet}(\underline{x}^{m}))\longrightarrow H_{i} (\mathbb{K}_{\bullet}(\underline{x}^{n}))$ are zero for all $i \geq 1$. 2. ([@HM Definition 3.1]) A sequence $\underline{x}:=x_1,\cdots,x_\ell$ is called a parameter sequence on $R$, if (1) $\underline{x}$ is a weak proregular sequence; (2) $(\underline{x})R \neq R$, and (3) $H^{\ell}_{\underline{x}} (R)_{{\frak{p}}} \neq 0$ for all prime ideals ${\frak{p}}\in {\operatorname{V}}(\underline{x}R)$. Also, $\underline{x}$ is called a strong parameter sequence on $R$ if $x_{1},\cdots, x_{i}$ is a parameter sequence on $R$ for all $1\leq i \leq \ell$. Now, we are ready to recall the following definitions of the different types of Cohen-Macaulay rings. \[def1\] Let $R$ be a ring and $M$ an $R$-module. 1. ([@HM Definition 4.1]) $R$ is called Cohen-Macaulay in the sense of Hamilton-Marley, if each strong parameter sequence on $R$ becomes a regular sequence on $R$. We denote this property by $\textmd{HM}$. 2. ([@G4 Page 219]) $M$ is called Cohen-Macaulay in the sense of Glaz, if for each prime ideal ${\frak{p}}$ of $R$, ${\operatorname{ht}}_{M}({\frak{p}})={\operatorname{K.grade}}_{R_{{\frak{p}}}}({\frak{p}}R_{{\frak{p}}},M_{{\frak{p}}})$ and denote this by $\textmd{Glaz}$. 3. ([@H2 Definition 1 and 2]) Let ${\frak{a}}$ be a finitely generated ideal of $R$ and let $\mu({\frak{a}})$ be the minimal number of elements of $R$ needed to generate ${\frak{a}}$. Assume that for each such ideal ${\frak{a}}$ with the property ${\operatorname{ht}}{\frak{a}}\geq \mu({\frak{a}})$, we have $\min ({\frak{a}})={\operatorname{wAss}}_R( R/{\frak{a}})$. A ring with this property is called weak Bourbaki unmixed. We denote this property by $\textmd{WB}$. 4. Let $\mathcal{A}$ be a non empty subclass of the class of all ideals of a ring $R$. We say that $M$ is Cohen-Macaulay in the sense of $\mathcal{A}$, if ${\operatorname{ht}}_{M}({\frak{a}})={\operatorname{K.grade}}_R({\frak{a}},M)$ for all ideals ${\frak{a}}$ in $\mathcal{A}$. We denote this property by $\mathcal{A}$. The classes we are interested in are ${\operatorname{Supp}}_R(M)$, ${\operatorname{Supp}}_R(M)\cap\max (R)$, the class of all ideals and the class of all finitely generated ideals. We denote them respectively by $\textmd{Spec}$, $\textmd{Max}$, $\textmd{ideals}$ and $\textmd{f.g. ideals}$. It is clear from the above definition that any zero dimensional ring is Cohen-Macaulay in the sense of each part of Definition \[def1\].
Also, any one dimensional integral domain is Cohen-Macaulay in the sense of each part of Definition \[def1\].\ **3.B. Relations.** The following diagram illustrates our work in this subsection:$$\begin{array}{lllllll} \textmd{Max}\Leftarrow\textmd{Spec} \Leftrightarrow\textmd{ideals}\Rightarrow\textmd{Glaz} \Rightarrow\textmd{f.g. ideals}\Rightarrow\textmd{HM}\Leftarrow\textmd{WB} \ \ (\ast). \end{array}$$ Also, when the base ring is coherent, we show that $\textmd{Spec}\Rightarrow\textmd{WB}$.\ The key to the work in this subsection is given by the following elementary lemma. \[key\] Let ${\frak{a}}$ be an ideal of a ring $R$ and $M$ a finitely generated $R$-module. Then ${\operatorname{K.grade}}_R ({\frak{a}},M)\leq {\operatorname{ht}}_{M}({\frak{a}})$. [**Proof.**]{} If $M/ {\frak{a}}M=0$, then ${\operatorname{ht}}_{M}({\frak{a}})=+\infty$. Therefore, we can assume that ${\operatorname{Supp}}_R(\frac{M}{{\frak{a}}M})=V({\frak{a}})\cap{\operatorname{Supp}}M \neq \emptyset$. Let ${\frak{q}}\in V({\frak{a}})\cap {\operatorname{Supp}}M$. By parts (ii) and (iii) of Proposition 2.2, one gets $${\operatorname{K.grade}}_R({\frak{a}},M)\leq {\operatorname{K.grade}}_{R_{{\frak{q}}}}({\frak{a}}R_{{\frak{q}}},M_{{\frak{q}}})\leq {\operatorname{K.grade}}_{R_{{\frak{q}}}}({\frak{q}}R_{{\frak{q}}},M_{{\frak{q}}}).$$ Thus, it is enough for us to show that if $(R,{\frak{m}})$ is a quasi local ring and $M$ a finitely generated non-zero $R$-module, then ${\operatorname{K.grade}}_R({\frak{m}},M)\leq \dim M$. Applying Proposition \[pro1\] (iv) for the change ring $R\longrightarrow R/ {\operatorname{Ann}}M$, we may assume that $M$ is a faithful $R$-module. So, $\dim M=\dim R$. If $\dim R=\infty$, we have nothing to prove. Hence we can assume that $\dim R<\infty$. [@HM Proposition 2.4] says that $H^i_{\underline{y}}(M)=0$ for all $i>\dim R =\dim M$ and all finite sequences $\underline{y}$ of elements of $R$. On the other hand for a finite sequence $\underline{x}$ of elements of ${\frak{m}}$, by Nakayama’s Lemma, $M/\underline{x}M\neq 0$, and so ${\operatorname{K.grade}}_R(\underline{x},M)<\infty$. Consequently, by using Proposition \[pro2\] (i), ${\operatorname{K.grade}}_R({\frak{m}},M)={\operatorname{\check{C}.grade}}_{R}({\frak{m}},M)\leq \dim M$. $\Box$\ The next result gives the proof of the following implications: $$\textmd{Spec} \Leftrightarrow\textmd{ideals}\Rightarrow\textmd{Glaz} \Rightarrow\textmd{f.g. ideals}.$$ Let $M$ be a finitely generated $R$-module. Consider the following conditions: 1. ${\operatorname{ht}}_{M}({\frak{p}})={\operatorname{K.grade}}_R({\frak{p}},M)$ for all prime ideals ${\frak{p}}$ in ${\operatorname{Supp}}_ R(M)$. 2. ${\operatorname{ht}}_{M}({\frak{a}})={\operatorname{K.grade}}_R({\frak{a}},M)$ for all ideals ${\frak{a}}$ of $R$. 3. ${\operatorname{ht}}_{M}({\frak{q}})={\operatorname{K.grade}}_{R_{{\frak{p}}}}({\frak{q}}R_{{\frak{p}}},M_{{\frak{p}}})$ for all prime ideals ${\frak{p}},{\frak{q}}$ in ${\operatorname{Supp}}_ R(M)$ with ${\frak{q}}\subseteq{\frak{p}}$. 4. ${\operatorname{ht}}_{M}({\frak{p}})={\operatorname{K.grade}}_{R_{{\frak{p}}}}({\frak{p}}R_{{\frak{p}}},M_{{\frak{p}}})$ for all prime ideals ${\frak{p}}$ in ${\operatorname{Supp}}_ R(M)$. 5. ${\operatorname{ht}}_{M}({\frak{a}})={\operatorname{K.grade}}_R({\frak{a}},M)$ for all finitely generated ideals ${\frak{a}}$ of $R$. Then $(i)\Leftrightarrow (ii)\Rightarrow (iii)\Rightarrow (iv)\Rightarrow (v)$. [**Proof.**]{} $(i)\Rightarrow (ii)$ Let ${\frak{a}}$ be an ideal of $R$. 
By Proposition \[pro1\] (vi) and Proposition \[pro2\] (i), there exists a prime ideal ${\frak{p}}$ of $R$ containing ${\frak{a}}$ such that ${\operatorname{K.grade}}_R({\frak{a}},M)={\operatorname{K.grade}}_R({\frak{p}},M)$. In view of Lemma \[key\], one can find that$${\operatorname{K.grade}}_R({\frak{a}},M)={\operatorname{K.grade}}_R({\frak{p}},M)={\operatorname{ht}}_{M}({\frak{p}})\geq {\operatorname{ht}}_{M}({\frak{a}})\geq {\operatorname{K.grade}}_{R}({\frak{a}},M),$$ which completes the proof. $(ii)\Rightarrow (i)$ This is trivial. $(ii)\Rightarrow (iii)$ This follows from the following: $${\operatorname{K.grade}}_R({\frak{q}},M)\leq {\operatorname{K.grade}}_{R_{{\frak{p}}}}({\frak{q}}R_{{\frak{p}}},M_{{\frak{p}}})\leq{\operatorname{ht}}_{M_{{\frak{p}}}}({\frak{q}}R_{{\frak{p}}})={\operatorname{ht}}_{M}({\frak{q}}),$$where the last inequality follows from Lemma \[key\]. $(iii)\Rightarrow (iv)$ This is trivial. $(iv)\Rightarrow (v)$ Let ${\frak{a}}$ be a finitely generated ideal of $R$. Then, Proposition \[pro1\] (vii), Proposition \[pro2\] (i) and our assumption imply that $$\begin{array}{ll} {\operatorname{K.grade}}_R({\frak{a}},M)&=\inf\{{\operatorname{K.grade}}_{R_{{\frak{p}}}}({\frak{p}}R_{{\frak{p}}},M_{{\frak{p}}})|{\frak{p}}\in {\operatorname{V}}({\frak{a}})\cap{\operatorname{Supp}}_R M\}\\&=\inf\{{\operatorname{ht}}_{M_{{\frak{p}}}}({\frak{p}}R_{{\frak{p}}})|{\frak{p}}\in {\operatorname{V}}({\frak{a}})\cap{\operatorname{Supp}}_R M\}\\&={\operatorname{ht}}_M({\frak{a}}), \\ \end{array}$$ which completes the proof. $\Box$ In view of [@HM Proposition 4.10], any weak Bourbaki unmixed ring is Cohen-Macaulay in the sense of Hamilton-Marley. Thus, in order to complete the proof of all of the desired implications of the diagram $(\ast)$, we need to state the following. Let $R$ be a Cohen-Macaulay ring in the sense of finitely generated ideals. Then $R$ is Cohen-Macaulay in the sense of Hamilton-Marley. [**Proof.**]{} Let $\underline{x}:=x_1,\cdots,x_\ell$ be a strong parameter sequence on $R$. By the equivalence $(a)\Leftrightarrow (c)$ of [@HM Proposition 4.2], it is enough to show that ${\operatorname{K.grade}}_R(\underline{x}R,R)={\operatorname{p.grade}}_R(\underline{x}R,R)=\ell$. For a finite sequence $\underline{y}:=y_1,\cdots,y_m$ of elements of $R$, [@HM Proposition 3.6] states that ${\operatorname{ht}}(\underline{y}R)\geq m$ if $\underline{y}$ is a parameter sequence on $R$. Now, let ${\frak{q}}\in {\operatorname{V}}(\underline{x}R)$ be such that ${\operatorname{ht}}({\frak{q}})= {\operatorname{ht}}(\underline{x}R)$. Also, from the definition, one has $${\operatorname{K.grade}}_{R_{\frak{q}}}(\underline{x} R_{\frak{q}},R_{\frak{q}})\leq\mu(\underline{x} R_{\frak{q}})\leq\ell.$$ Then, it turns out that $${\operatorname{K.grade}}_R(\underline{x}R,R)\leq {\operatorname{K.grade}}_{R_{\frak{q}}}(\underline{x} R_{\frak{q}},R_{\frak{q}})\leq\ell\leq {\operatorname{ht}}_R({\frak{q}})={\operatorname{ht}}(\underline{x}R)={\operatorname{K.grade}}_R(\underline{x}R,R).$$ Therefore, ${\operatorname{K.grade}}_R(\underline{x}R,R)=\ell$, as claimed. $\Box$ Theorem 3.10 is one of our main results in this subsection. To prove it, we need a couple of lemmas. Let $R$ be a Cohen-Macaulay ring in the sense of (finitely generated) ideals and $x$ a regular element of $R$. Then $R/xR$ is Cohen-Macaulay in the sense of (finitely generated) ideals. In particular, a ring $A$ is Cohen-Macaulay in the sense of (finitely generated) ideals, if either $A[[X]]$ or $A[X]$ is as well.
[**Proof.**]{} Let ${\frak{b}}:={\frak{a}}/xR$ be an ideal (resp. finitely generated ideal) of $R/xR$, where ${\frak{a}}$ is an ideal of $R$ containing $x$. By parts $(i)$ and $(iv)$ of Proposition \[pro1\], one can find that $${\operatorname{K.grade}}_{R/xR}({\frak{b}},R/xR)={\operatorname{K.grade}}_{R}({\frak{a}},R/xR)= {\operatorname{K.grade}}_R({\frak{a}},R)-1.$$ Then it follows that $$\begin{array}{ll} {\operatorname{K.grade}}_R({\frak{a}},R)-1&= {\operatorname{K.grade}}_{R/xR}({\frak{b}},R/xR)\\&\leq{\operatorname{ht}}_{R/xR} ({\frak{b}})\\&\leq{\operatorname{ht}}_{R} ({\frak{a}})-1\\&={\operatorname{K.grade}}_R({\frak{a}},R)-1, \\ \end{array}$$ which completes the proof. $\Box$ \(i) There exists an example of a quasi-local ring $R$ which is Cohen-Macaulay in the sense of Hamilton-Marley while $R/xR$ is not, for some regular element $x$ of $R$; see [@HM Example 4.9]. \(ii) Assume that $(R,{\frak{m}})$ is a quasi local ring, which is equidimensional, semicatenary and weak Bourbaki unmixed. Let $x$ be a regular element of $R$. [@H3 Theorem D] shows that $R/xR$ is weak Bourbaki unmixed. Recall that a module is coherent if it is finitely generated and each of its finitely generated submodules is finitely presented. A ring is coherent if it is coherent as a module over itself. Noetherian rings are coherent. There are many examples of non Noetherian coherent rings. For instance, any non Noetherian valuation domain is a non Noetherian coherent ring. Let $R$ be a coherent ring and $\underline{x}:=x_1,\cdots,x_\ell$ a finite sequence of elements of $R$. Then $H^{i}({\operatorname{Hom}}_R( \mathbb{K}_{\bullet}(\underline{x}),R))$ is a finitely generated $R$-module for all $i$. [**Proof.**]{} Let $\textbf{F}^\bullet:0{\longrightarrow}F^{0}{\longrightarrow}\cdots{\longrightarrow}F^i\stackrel{\varphi^i}{\longrightarrow}F^{i+1}{\longrightarrow}\cdots{\longrightarrow}F^{\ell}{\longrightarrow}0$ be the Koszul complex of $R$ related to $\underline{x}$. Let $i$ be an integer between $0$ and $\ell$. Since ${\operatorname{im}}\varphi^i$ is a finitely generated submodule of the finitely generated free module $F^{i+1}$ and $R$ is coherent, ${\operatorname{im}}\varphi^i$ is finitely presented. Consider the exact sequence $$0{\longrightarrow}\ker\varphi^i{\longrightarrow}F^i{\longrightarrow}{\operatorname{im}}\varphi^i{\longrightarrow}0,$$ in which the maps are the natural ones. Keep in mind that $R$ is coherent. Now, [@G1 Theorem 2.5.1] yields that $\ker\varphi^i$ is finitely presented. From this the claim follows. $\Box$ \(i) The coherence assumption on $R$ in Lemma 3.7 is really needed. To see an example, let $A$ be the $\mathbb{C}$-algebra generated by all degree two monomials of $\mathbb{C}[X_1,X_2,\cdots]:=\bigcup_{n=1}^{\infty} \mathbb{C}[X_1,\cdots,X_n]$ and set $R:= A/(X_1X_2)$. We use small letters to indicate the images in $R$. Then $(0:_Rx_1^2)=(x_2x_i: i\in\mathbb{N})$ is not finitely generated. So the first Koszul homology related to $x_1^2$ is not finitely generated (cf. [@G2 Example 2]). \(ii) If Koszul (co)homology modules are finitely generated, then one can see that the vanishing of the first Koszul homology implies the exactness of the Koszul complex. But there exists an example which does not satisfy this; see [@K Example 2]. Let $R$ be a Cohen-Macaulay ring in the sense of ideals. Then ${\operatorname{wAss}}_R(R)=\min(R)$, where $\min(R)$ is the set of all minimal prime ideals of $R$. [**Proof.**]{} It is well known that $\min(R)\subseteq {\operatorname{wAss}}_R(R)$. Let ${\frak{p}}\in{\operatorname{wAss}}_R(R)$. 
Then [@HM Lemma 2.8] states that ${\operatorname{p.grade}}_{R_{{\frak{p}}}}({\frak{p}}R_{{\frak{p}}},R_{{\frak{p}}})=0$. By applying Proposition \[pro2\] (i), one has ${\operatorname{K.grade}}_{R_{{\frak{p}}}}({\frak{p}}R_{{\frak{p}}},R_{{\frak{p}}})=0$. The inequality ${\operatorname{K.grade}}_R({\frak{p}},R)\leq {\operatorname{K.grade}}_{R_{{\frak{p}}}}({\frak{p}}R_{{\frak{p}}},R_{{\frak{p}}})$ shows that ${\operatorname{K.grade}}_R({\frak{p}},R)=0$. Therefore, ${\operatorname{ht}}_{R}({\frak{p}})=0$, i.e., ${\frak{p}}\in\min (R)$. $\Box$ Now we are in a position to present our next main result. Let $R$ be a coherent ring. If $R$ is Cohen-Macaulay in the sense of ideals, then $R$ is weak Bourbaki unmixed. [**Proof.**]{} By Theorem 3.3 and [@G1 Theorem 2.4.2], $R_{{\frak{p}}}$ is Cohen-Macaulay in the sense of ideals and coherent for all prime ideals ${\frak{p}}$ of $R$. Also, if $R_{{\frak{p}}}$ is weak Bourbaki unmixed for all ${\frak{p}}\in {\operatorname{Spec}}R$, then by [@H2 Theorem 3], $R$ is weak Bourbaki unmixed. Thus, we may and do assume that $R$ is quasi local. Let ${\frak{a}}$ be a proper finitely generated ideal of $R$ with the property that ${\operatorname{ht}}{\frak{a}}\geq \mu({\frak{a}})$. Then, ${\operatorname{K.grade}}_R({\frak{a}},R)\leq \mu({\frak{a}})\leq{\operatorname{ht}}{\frak{a}}$. So $\ell:={\operatorname{K.grade}}_R({\frak{a}},R)= \mu({\frak{a}})={\operatorname{ht}}{\frak{a}}$, since $R$ is Cohen-Macaulay in the sense of ideals. Let $\underline{x}:=x_1,\cdots,x_{\ell}$ be a generating set for ${\frak{a}}$. Now, we show that $\underline{x}$ is a strong parameter sequence. Let $1\leq i< \ell$ and set ${\frak{a}}_i:=(x_1,\cdots,x_i)R$. As the reader might have guessed, we consider the following long exact sequence of $R$-modules and $R$-homomorphisms $$\begin{array}{ll} \cdots {\longrightarrow}H^{j}({\operatorname{Hom}}_R( \mathbb{K}_{\bullet}(x_1,\cdots,x_{i}),R))\stackrel{x_{i+1}}{\longrightarrow}H^{j}({\operatorname{Hom}}_R( \mathbb{K}_{\bullet}(x_1,\cdots,x_{i}),R)){\longrightarrow}\\ H^{j+1}({\operatorname{Hom}}_R( \mathbb{K}_{\bullet}(x_1,\cdots,x_{i+1}),R)){\longrightarrow}H^{j+1}({\operatorname{Hom}}_R( \mathbb{K}_{\bullet}(x_1,\cdots,x_{i}),R)){\longrightarrow}\cdots.\\ \end{array}$$ By Lemma 3.7, $H^{j}({\operatorname{Hom}}_R( \mathbb{K}_{\bullet}(x_1,\cdots,x_{i}), R))$ is finitely generated for all $j$. Also, $x_{i+1}$ belongs to the Jacobson radical of $R$. By using Nakayama’s Lemma, one can see that $${\operatorname{K.grade}}_R({\frak{a}}_i+x_{i+1}R,R)\leq {\operatorname{K.grade}}_R({\frak{a}}_i,R)+1.$$ An easy induction shows that $${\operatorname{K.grade}}_R({\frak{a}}_i+(x_{i+1},\cdots,x_{\ell}),R)\leq {\operatorname{K.grade}}_R({\frak{a}}_i,R)+(\ell-i).$$ On the other hand, ${\operatorname{K.grade}}_R({\frak{a}}_i+(x_{i+1},\cdots,x_{\ell}),R)=\ell$. Hence ${\operatorname{K.grade}}_R({\frak{a}}_i,R)\geq i$. This implies that ${\operatorname{K.grade}}_R({\frak{a}}_i,R)= i$, since ${\frak{a}}_i$ can be generated by $i$ elements. So by [@HM Proposition 3.3 (e)], $x_1,\cdots,x_{i}$ is a parameter sequence on $R$. Thus, $\underline{x}$ is a strong parameter sequence on $R$. In view of Theorem 3.4, $R$ is Cohen-Macaulay in the sense of Hamilton-Marley. Therefore, $\underline{x}$ forms a weak regular sequence on $R$. So Lemma 3.5 implies that $R/{\frak{a}}$ is Cohen-Macaulay in the sense of ideals. Now, let ${\frak{p}}\in{\operatorname{wAss}}_R(R/ {\frak{a}})$. 
Then, Lemma 3.9 shows that ${\operatorname{ht}}_{R/{\frak{a}}}({\frak{p}}/ {\frak{a}})=0$, i.e., ${\frak{p}}\in\min ({\frak{a}})$. $\Box$ **3.C. Examples.** In this subsection, we provide some counter-examples to show that none of the following implications holds: $$\begin{array}{lllllll}&& \textmd{WB} \\ &&\Uparrow \\ &\textmd{f.g. ideals}\Leftarrow\textmd{Max}\Leftrightarrow & \textmd{HM }\Rightarrow \textmd{f.g. ideals} \ \ (\ast,\ast). \\ \end{array}$$ One might ask whether the second statement of Theorem 3.3 is true if ${\operatorname{ht}}_{R}({\frak{m}})={\operatorname{K.grade}}_R({\frak{m}},R)$ for all maximal ideals ${\frak{m}}$ of $R$. This is not the case, as the next example shows. Let $(R,{\frak{m}})$ be a Noetherian local Cohen-Macaulay ring of dimension $d>1$. Let $X(d-1):=\{{\frak{p}}\in{\operatorname{Spec}}R: {\operatorname{ht}}{\frak{p}}\leq d-1\}$. Set $M_{d-1}:= \underset{{\frak{p}}\in X(d-1)}\bigoplus R _{{\frak{p}}}/{\frak{p}}R _{{\frak{p}}}$ and consider $S:=R \ltimes M_{d-1}$, the trivial extension of $R$ by $M_{d-1}$. Then $S$ is a quasi-local ring with the unique maximal ideal ${\frak{n}}:={\frak{m}}\ltimes M_{d-1}$. By inspection of [@HM Example 2.10], we know that ${\operatorname{K.grade}}_{S}({\frak{n}},S)={\operatorname{ht}}({\frak{n}})$. Thus, $S$ is Cohen-Macaulay in the sense of maximal ideals. Again, in light of [@HM Example 2.10], we see that ${\operatorname{K.grade}}_{S}({\frak{a}},S)=0$ for all ideals ${\frak{a}}$ of $S$ with the property that ${\operatorname{rad}}({\frak{a}})\neq{\frak{n}}$. Now, take $a$ in ${\frak{m}}$ but not in $\bigcup\{{\frak{p}}:{\frak{p}}\in\min(R)\}$. One has ${\operatorname{rad}}((a,0)S)\neq {\frak{n}}$ and ${\operatorname{ht}}((a,0)S)\neq0$. This yields that $S$ is not Cohen-Macaulay in the sense of finitely generated ideals. Also, by [@HM Example 4.3], $S$ is not Cohen-Macaulay in the sense of Hamilton-Marley. In view of [@Ber], a ring is called regular if every finitely generated ideal has finite projective dimension. For example, valuation domains are coherent and regular. So they are Cohen-Macaulay in the sense of Hamilton-Marley; see [@HM Theorem 4.8]. Then, the next result completes our list of counter-examples to the diagram $(\ast,\ast)$. Let $(R,{\frak{m}})$ be a valuation domain. Then, the following are equivalent. 1. $R$ is Cohen-Macaulay in the sense of ideals. 2. $R$ is Cohen-Macaulay in the sense of prime ideals. 3. $R$ is Cohen-Macaulay in the sense of Glaz. 4. $R$ is Cohen-Macaulay in the sense of finitely generated ideals. 5. $\dim R\leq1$. 6. $R$ is weak Bourbaki unmixed. 7. $R$ is Cohen-Macaulay in the sense of maximal ideals. [**Proof.**]{} Without loss of generality we can assume that $R$ is not a field. Let $\underline{x}$ be a finite sequence of nonzero elements of ${\frak{m}}$. Since $R$ is a valuation domain, there is an element $r$ such that $rR=(\underline{x})R$. Hence ${\operatorname{K.grade}}_R(\underline{x}R,R)\leq1$. Thus ${\operatorname{K.grade}}_R(\underline{x}R,R)=1$, because $R$ is a domain. Therefore, we record the following statement: $$\begin{array}{lllllll} {\operatorname{K.grade}}({\frak{a}},R)=1 \textit{ for all non-zero proper ideals } {\frak{a}}\textit{ of } R. \ \ (\star) \\ \end{array}$$ The assertions $(i)\Leftrightarrow (ii)$, $(ii)\Rightarrow (iii)$ and $(iii)\Rightarrow(iv)$ hold by Theorem 3.3. $(iv)\Rightarrow (v)$ For a contradiction, assume that $\dim R>1$. 
Since the ideals of $R$ are linearly ordered by inclusion, $R$ has only one prime ideal of height one, say ${\frak{p}}$. Let $x\in {\frak{m}}\setminus{\frak{p}}$. Then ${\operatorname{ht}}(xR)>1$. So in view of $(\star)$, $R$ is not Cohen-Macaulay in the sense of finitely generated ideals. This contradiction shows that $\dim R\leq1$. $(v)\Rightarrow (ii)$ This is obvious. $(i)\Rightarrow (vi)$ Any finitely generated ideal of a valuation domain is principal. So valuation domains are coherent. Therefore, this implication follows by Theorem 3.10. $(vi)\Rightarrow (v)$ It is enough to show that any valuation domain of dimension greater than 1 is not weak Bourbaki unmixed. Assume that $R$ is such a domain. Then there is a chain $0\subsetneqq {\frak{p}}\subsetneqq {\frak{q}}$ of prime ideals of $R$ such that ${\operatorname{ht}}({\frak{p}})=1$. Let $a\in{\frak{p}}\setminus\{0\}$ and consider the ideal ${\frak{a}}:=aR$. Since the ideals of $R$ are linearly ordered by inclusion, $\min({\frak{a}})=\{{\frak{p}}\}$. Assume that $\min ({\frak{a}})={\operatorname{wAss}}_R(R/{\frak{a}})$. Let $b\in{\frak{q}}\setminus{\frak{p}}$. Then $a,b$ is a weak $R$-sequence of length 2, which contradicts $(\star)$. This shows that $\min ({\frak{a}})\neq{\operatorname{wAss}}_R( R/{\frak{a}})$ and consequently $R$ is not weak Bourbaki unmixed. $(ii)\Rightarrow(vii)$ is trivial and the remaining implication $(vii)\Rightarrow(v)$ follows by $(\star)$. $\Box$ Let $(R,{\frak{m}})$ be a unique factorization valuation domain which is not a field. By inspection of $(\star)$ in the proof of Proposition 3.12, one has $\dim R=1$, and so $R$ is Cohen-Macaulay in the sense of each part of Definition \[def1\]. Indeed, let ${\frak{p}}$ be a prime ideal of $R$ with height one. It is enough to show that $R/{\frak{p}}$ is a field. One has ${\frak{p}}=xR$ for some $x$ in ${\frak{p}}$, because $R$ is a unique factorization domain. Let ${\frak{b}}:={\frak{a}}/xR$ be a nonzero proper ideal of $R/ xR$, where ${\frak{a}}$ is an ideal of $R$. Then by $(\star)$ in the proof of Proposition 3.12, we have ${\operatorname{K.grade}}({\frak{b}},R/xR)=1$ and ${\operatorname{K.grade}}({\frak{a}},R)=1$. In light of Proposition 2.2 (i) one has ${\operatorname{K.grade}}_{R/xR}({\frak{b}},R/xR)={\operatorname{K.grade}}_{R}({\frak{a}},R/xR)= {\operatorname{K.grade}}_R({\frak{a}},R)-1=0$. This contradiction shows that $R/xR$ has no nonzero proper ideal. Therefore, $R/{\frak{p}}$ is a field as claimed. Examples of Cohen-Macaulay rings ================================ In this section we construct some examples of non Noetherian Cohen-Macaulay rings. Our first example provides the Cohen-Macaulayness of the ring $R[X_1,X_2,\cdots]:=\bigcup_{i=1}^{\infty}R[X_1,\cdots,X_i]$, when $R$ is Noetherian and Cohen-Macaulay. Such a result shows that at least one of Hamilton’s conditions for an appropriate definition of non Noetherian Cohen-Macaulay rings is fulfilled. Let $R$ be a Noetherian Cohen-Macaulay ring. Then the ring $R[X_1,X_2,\cdots]$ is Cohen-Macaulay in the sense of each part of Definition \[def1\]. [**Proof.**]{} First, we show that $R':=R[X_1,X_2,\cdots]$ is Cohen-Macaulay in the sense of prime ideals. Let ${\frak{p}}$ be a prime ideal of $R'$. We need to show that the equality ${\operatorname{K.grade}}_{R'} ({\frak{p}},R')={\operatorname{ht}}_{R'}({\frak{p}})$ holds. For any positive integer $i$, set $R_i:=R[X_1,\cdots,X_i]$ and consider the prime ideal $\widetilde{{\frak{p}}}_i:={\frak{p}}\cap R_i$. 
Then we have the following chain of subsets of $R'$:$$\widetilde{{\frak{p}}}_1\subseteq \widetilde{{\frak{p}}}_2\subseteq\cdots\subseteq\widetilde{{\frak{p}}}_i \subseteq\widetilde{{\frak{p}}}_{i+1}\subseteq\cdots.$$ Consider the following two cases (1) and (2); exactly one of them must occur. 1. For infinitely many $i$’s, the condition $\widetilde{{\frak{p}}}_{i}R_{i+1}\subsetneqq \widetilde{{\frak{p}}}_{i+1}$ holds. 2. Only for finitely many $i$’s does the condition $\widetilde{{\frak{p}}}_{i}R_{i+1}\subsetneqq \widetilde{{\frak{p}}}_{i+1}$ hold. In case (1), for infinitely many $i$’s the inequality ${\operatorname{ht}}_{R_{i}}(\widetilde{{\frak{p}}}_{i})<{\operatorname{ht}}_{R_{i+1}}(\widetilde{{\frak{p}}}_{i+1})$ is true, since ${\operatorname{ht}}_{R_{i}}(\widetilde{{\frak{p}}}_{i})={\operatorname{ht}}_{R_{i+1}}(\widetilde{{\frak{p}}}_{i}R_{i+1})$. Then for such $i$’s, it turns out that $$\begin{array}{ll} {\operatorname{K.grade}}_{R'} (\widetilde{{\frak{p}}}_{i}R',R')&={\operatorname{K.grade}}_{R_{i}} (\widetilde{{\frak{p}}}_{i},R_{i})\\&={\operatorname{ht}}_{R_{i}}(\widetilde{{\frak{p}}}_{i})\\ &<{\operatorname{ht}}_{R_{i+1}}(\widetilde{{\frak{p}}}_{i+1})\\&={\operatorname{K.grade}}_{R_{i+1}} (\widetilde{{\frak{p}}}_{i+1},R_{i+1})\\&={\operatorname{K.grade}}_{R'} (\widetilde{{\frak{p}}}_{i+1}R',R'), \\ \end{array}$$ where the first equality follows from Proposition \[pro1\] (v) and the second from the Cohen-Macaulayness of $R_i$. Hence ${\operatorname{K.grade}}_{R'} ({\frak{p}}, R')=\infty$ and consequently ${\operatorname{K.grade}}_{R'}({\frak{p}},R')={\operatorname{ht}}_{R'}({\frak{p}})$. In case (2), there is an integer $k>0$ such that $\widetilde{{\frak{p}}}_{k}R_{k+j}=\widetilde{{\frak{p}}}_{k+j}$ for all $j>0$. So $${\frak{p}}=\bigcup_{i\geq1} \widetilde{{\frak{p}}}_{i}=(\widetilde{{\frak{p}}}_{1}\cup\cdots\cup\widetilde{{\frak{p}}}_{k}) \cup(\bigcup_{j\geq1} \widetilde{{\frak{p}}}_{k}R_{k+j}).$$ In particular, ${\frak{p}}$ is finitely generated. Let $\{\alpha_1,\cdots,\alpha_{\ell}\}$ be a generating set for ${\frak{p}}$. Thus, there is a positive integer $m$ such that $\alpha_{j}\in R_m$ for all $1\leq j\leq \ell$. One can see easily that $((\alpha_1,\cdots,\alpha_{\ell})R_m)R'\cap R_m=(\alpha_1,\cdots,\alpha_{\ell})R_m$, because the ring extension $R_m\subseteq R'$ is faithfully flat. In particular, $(\alpha_1,\cdots,\alpha_{\ell})R_m$ is a prime ideal of $R_m$. Now, by [@H1 Lemma 4.1], ${\operatorname{ht}}_{R_m}((\alpha_1,\cdots,\alpha_{\ell})R_m)={\operatorname{ht}}_{R'}({\frak{p}})$. Therefore $$\begin{array}{ll} {\operatorname{ht}}_{R'}({\frak{p}})&={\operatorname{ht}}_{R_m}((\alpha_1,\cdots,\alpha_{\ell})R_m)\\&={\operatorname{K.grade}}_{R_m} ((\alpha_1,\cdots,\alpha_{\ell})R_m, R_m)\\&={\operatorname{K.grade}}_{R'} ((\alpha_1,\cdots,\alpha_{\ell})R', R')\\&={\operatorname{K.grade}}_{R'} ({\frak{p}}, R'). \\ \end{array}$$ So $R'=R[X_1,X_2,\cdots]$ is Cohen-Macaulay in the sense of prime ideals. Due to Theorem 3.3 we know that $R'$ is Cohen-Macaulay in the sense of ideals. Also, in view of Theorem 3.4, $R'$ is Cohen-Macaulay in the sense of Hamilton-Marley. By [@G1 Corollary 2.3.4], $R'$ is coherent. Thus, Theorem 3.10 implies that $R'$ is weak Bourbaki unmixed. $\Box$ Let $R$ be a Noetherian Cohen-Macaulay ring and ${\frak{a}}$ a finitely generated ideal of $R[X_1,X_2,\cdots]$ with the property that ${\operatorname{ht}}{\frak{a}}\geq \mu({\frak{a}})$. 
Then by [@H1 Theorem 4.2], all of the weak associated primes of ${\frak{a}}$ have the same height, i.e., $R[X_1,X_2,\cdots]$ is weak Bourbaki height unmixed. In particular, $R[X_1,X_2,\cdots]$ is weak Bourbaki unmixed. Let $(R,{\frak{m}})$ be a Noetherian local domain and let $R^{+}$ be the integral closure of $R$ in the algebraic closure of its field of fractions. Theorem 4.5 provides the Cohen-Macaulayness of $R^{+}$. To deal with this, we establish the following lemma. Let $f:R\longrightarrow S$ be a flat and integral ring homomorphism. If $R$ is Cohen-Macaulay in the sense of ideals, then $S$ is also Cohen-Macaulay in the sense of ideals. [**Proof.**]{} Let ${\frak{q}}$ be in ${\operatorname{Spec}}S$ and set ${\frak{p}}={\frak{q}}\cap R$. In view of Proposition \[pro1\] (ii), we have $$\begin{array}{ll} {\operatorname{ht}}{\frak{q}}&\leq{\operatorname{ht}}{\frak{p}}\\&= {\operatorname{K.grade}}_{R}({\frak{p}},R)\\&\leq {\operatorname{K.grade}}_{S}({\frak{p}}S,S)\\&\leq {\operatorname{K.grade}}_{S}({\frak{q}},S),\\ \end{array}$$ and so Lemma \[key\] completes the proof. $\Box$ Note that by [@AH Theorem 4.5], $R^+$ is not coherent when $R$ is of dimension at least $3$ and of positive characteristic. So in the next result we cannot apply Theorem 3.10 to it. Let $(R,{\frak{m}})$ be a Noetherian complete local domain. Then the following assertions hold. 1. If $R$ is of prime characteristic $p$, then $R^{+}$ is Cohen-Macaulay in the sense of each part of Definition \[def1\]. 2. If $\dim R\geq4$ and $R$ is of mixed characteristic, then $R^{+}$ is not Cohen-Macaulay in the sense of finitely generated ideals. 3. If $\dim R<3$, then $R^{+}$ is Cohen-Macaulay in the sense of each part of Definition \[def1\]. 4. If $\dim R\geq3$ and $R$ contains a field of characteristic $0$, then $R^+$ is not Cohen-Macaulay in the sense of finitely generated ideals. [**Proof.**]{} By Cohen’s Structure Theorem there exists a complete regular local subring $(A,{\frak{m}}_{A})$ of $R$ such that $R$ is a finitely generated $A$-module. Recall that $R^{+}=A^+$. Then, without loss of generality we can assume that $R$ is regular. \(i) First, we show that $R^{+}$ is Cohen-Macaulay in the sense of ideals. In view of [@HH Theorem 5.15], $R^+$ is a balanced big Cohen-Macaulay $R$-algebra, i.e., every system of parameters of $R$ is a regular sequence on $R^{+}$. Over regular local rings, [@HH 6.7, Flatness] states that any balanced big Cohen-Macaulay module is flat. Then, Lemma 4.3 yields that $R^{+}$ is Cohen-Macaulay in the sense of ideals. Next, we show that $R^{+}$ is weak Bourbaki unmixed. Let ${\frak{a}}$ be a finitely generated ideal of $R^+$ with the property that ${\operatorname{ht}}{\frak{a}}\geq \mu({\frak{a}})$. Then, ${\operatorname{K.grade}}_{R^+}({\frak{a}},R^+)\leq \mu({\frak{a}})\leq{\operatorname{ht}}{\frak{a}}$. So $n:={\operatorname{K.grade}}_{R^+}({\frak{a}},R^+)= \mu({\frak{a}})={\operatorname{ht}}{\frak{a}}$, since $R^+$ is Cohen-Macaulay in the sense of ideals. Let $\{a_1,\cdots,a_n\}$ be a generating set for ${\frak{a}}$. The ring $R^+$ is a direct union of module finite ring extensions of $R$. Such ring extensions are Noetherian, local and complete, since $R$ is local and complete. Let $A$ be one of them which contains $R$ and $a_i$ for all $1\leq i \leq n$. In view of $A^{+}=R^{+}$, we can assume that $a_i\in R$ for all $1\leq i \leq n$. Set ${\frak{b}}:=a_1R+\cdots+a_nR$. Then ${\frak{b}}R^+={\frak{a}}$. 
Because $R^+$ is an integral extension of $R$, we have $n={\operatorname{ht}}{\frak{a}}\leq{\operatorname{ht}}{\frak{b}}\leq n$. So $n= \mu({\frak{b}})={\operatorname{ht}}{\frak{b}}$. This implies that $\{a_1,\cdots,a_n\}$ is part of a system of parameters for $R$. Keep in mind that $R^+$ is a balanced big Cohen-Macaulay $R$-algebra. This says that $\{a_1,\cdots,a_n\}$ is a regular sequence on $R^+$. It follows from Lemma 3.5 and Lemma 3.9 that ${\operatorname{wAss}}_{R^+}(R^+/{\frak{a}})=\min ({\frak{a}})$. \(ii) For a contradiction, assume that $R^{+}$ is Cohen-Macaulay in the sense of finitely generated ideals. Then by Theorem 3.4, $R^{+}$ is Cohen-Macaulay in the sense of Hamilton-Marley. Also, [@AH Proposition 3.6] states that $R^{+}$ is not a balanced big Cohen-Macaulay algebra for $R$. Thus, there exists a system of parameters $\underline{x}:=x_1,\cdots,x_{\ell}$ of $R$ such that $\underline{x}$ is not a regular sequence on $R^+$. For any $1\leq i\leq \ell$ set $\underline{x}_i:=x_1,\cdots,x_i$. Then ${\operatorname{ht}}(\underline{x}_i R)=i$, because $R$ is Cohen-Macaulay. [@Mat Theorem 19.4] says that regular rings are normal. In particular, the going-down theorem holds for the integral extension $R^+/R$. By applying this, one can find that ${\operatorname{ht}}(\underline{x}_i R^+)=i$. So ${\operatorname{K.grade}}_{R^+}(\underline{x}_iR^+,R^+)=i$, because $R^{+}$ is Cohen-Macaulay in the sense of finitely generated ideals. By using [@HM Proposition 3.3 (e)], one can find that $\underline{x}_i$ is a parameter sequence on $R^+$. Therefore, $\underline{x}$ is a strong parameter sequence on $R^+$. Then $\underline{x}$ is a regular sequence on $R^+$, since $R^{+}$ is Cohen-Macaulay in the sense of Hamilton-Marley. This is a contradiction. \(iii) Let $(R,{\frak{m}})$ be a Noetherian local domain of dimension less than $3$. One can see easily that $R^+$ is a balanced big Cohen-Macaulay $R$-algebra. Thus, by the same reasoning as in (i), $R^{+}$ is Cohen-Macaulay in the sense of each part of Definition \[def1\]. \(iv) By our assumptions, one can see that $R^+$ is not a balanced big Cohen-Macaulay $R$-algebra; see e.g. [@R Page 617]. Then, by the same method as in (ii), $R^{+}$ is not Cohen-Macaulay in the sense of finitely generated ideals. $\Box$ Let $R$ be a domain containing a field of characteristic $p > 0$. We let $R_{\infty}$ denote the perfect closure of $R$, that is, $R_{\infty}$ is the ring obtained by adjoining to $R$ the $p^n$-th roots of all its elements. The next result gives the Cohen-Macaulayness of $R_{\infty}$. Let $(R,{\frak{m}})$ be a Noetherian regular local ring of prime characteristic $p$. Then $R_{\infty}$ is Cohen-Macaulay in the sense of each part of Definition \[def1\]. [**Proof.**]{} For each positive integer $n$, set $R_n:=\{x\in R_\infty|x^{p^n}\in R\}$. By using [@BH Corollary 8.2.8], one can see that the $R$-algebra $R_n$ is flat. Since $R_{\infty}=\underset{n}{\varinjlim}R_n$, $R_{\infty}$ is a flat $R$-algebra. Therefore by Lemma 4.3, $R_{\infty}$ is Cohen-Macaulay in the sense of ideals. Let ${\frak{a}}$ be a finitely generated ideal of $R_\infty$ with the property that ${\operatorname{ht}}{\frak{a}}\geq \mu({\frak{a}})$. Then $m:=\mu({\frak{a}})={\operatorname{ht}}{\frak{a}}={\operatorname{K.grade}}_{R_\infty}({\frak{a}},R_\infty)$. Let $\{a_1,\cdots,a_m\}$ be a generating set for ${\frak{a}}$. There is an integer $\ell$ such that $a_i\in R_{\ell}$ for all $1\leq i \leq m$. Set ${\frak{b}}:=a_1R_{\ell}+\cdots+a_mR_{\ell}$. 
In order to pass from $R$ to $R_{\ell}$, assume that $R$ is $d$-dimensional. So ${\frak{m}}$ can be generated by $d$ elements, namely $x_1,\cdots,x_d$. The ring $R_{\ell}$ is local with the maximal ideal $(x_1^{1/p^{\ell}},\cdots,x_d^{1/p^{\ell}})R_{\ell}$. In particular, $R_{\ell}$ is regular. Hence we can replace $R$ by $R_{\ell}$. Also, ${\frak{b}}R_\infty={\frak{a}}$ and $m= \mu({\frak{b}})={\operatorname{ht}}{\frak{b}}$. In view of the equality $\mu({\frak{b}})={\operatorname{K.grade}}_{R}({\frak{b}},R)$ and by [@BH Exercise 1.2.21], one can generate ${\frak{b}}$ by an $R$-regular sequence $\underline{b}:=b_1,\cdots,b_m$. Keep in mind that $R_\infty$ is a flat $R$-algebra. Then $\underline{b}$ forms a regular sequence on $R_\infty$. From this, Lemma 3.5 and Lemma 3.9 we get that $\min({\frak{a}})={\operatorname{wAss}}_{R_\infty}(R_\infty/{\frak{a}})$. Therefore, $R_\infty$ is weak Bourbaki unmixed. $\Box$ The argument of the next result involves the concept of the Generalized Principal Ideal Theorem. By definition, a ring $R$ satisfies GPIT (the Generalized Principal Ideal Theorem) if ${\operatorname{ht}}({\frak{p}})\leq n$ for each prime ideal ${\frak{p}}$ of $R$ which is minimal over an $n$-generated ideal of $R$. Rings with this property are called GPIT rings. For more details on this, see e.g. [@ADEH]. To see an easy example of a non-GPIT ring, let $(V,{\frak{m}})$ be an infinite-dimensional valuation domain. Then, for any positive integer $n$ one can find an element $x_n$ such that ${\operatorname{ht}}(x_n V)=n$. Let $(R,{\frak{m}})$ be a Noetherian local domain of prime characteristic $p$. Then the following assertions hold. 1. If $R$ is complete, then $R^+$ is weak Bourbaki height unmixed. 2. If $R$ is regular, then $R_\infty$ is weak Bourbaki height unmixed. [**Proof.**]{} The proof of (ii) is similar to that of (i). Thus, we only give the proof of (i). To do this, first note that by [@H1 Theorem 3.3], over GPIT rings weak Bourbaki height unmixedness follows from weak Bourbaki unmixedness. Thus, in view of Theorem 4.4 (i), the claim follows by showing that $R^+$ is GPIT. Due to [@ADEH Corollary 2.3] we know that any ring which is integral over a Noetherian domain is GPIT. Therefore $R^+$ is GPIT. $\Box$ Cohen-Macaulayness of rings of invariants ========================================= Let $R$ be a commutative ring and $G$ a finite group of automorphisms of $R$. The subring of invariants is defined by $R^G:= \{ x\in R : \sigma(x)=x \textit{ for all } \sigma\in G\}$. Assume that the order of $G$ is a unit in $R$. Then by a famous result of Hochster and Eagon [@HE Proposition 13], we know that if $R$ is Noetherian and Cohen-Macaulay, then $R^G$ is as well. The main aim of the present section can be regarded as a non Noetherian version of this result. First, we give the proof of Theorem 1.2. To do this, we need a new definition for the notion of Cohen-Macaulayness for arbitrary commutative rings, as desired in Theorem 1.2. \[def2\] Let $\underline{x}:=x_{1},\cdots,x_{\ell}$ be a finite sequence of elements of a ring $R$. 1. For an $R$-module $L$ set $\mathbb{K}_{\bullet}(\underline{x};L):=\mathbb{K}_{\bullet}(\underline{x})\otimes_RL$. Recall that for a pair of integers $m \geq n$, there exists a chain map $$\varphi^{m} _{n} (\underline{x};L) :\mathbb{K}_{\bullet}(\underline{x}^{m};L)\longrightarrow \mathbb{K}_{\bullet}(\underline{x}^{n};L)$$ which is induced by multiplication by $(\prod x_{i})^{m-n}$. 
We call $\underline{x}$ a generalized proregular sequence on $R$ if for each positive integer $n$ and any finitely generated $R$-module $M$, there exists an integer $m \geq n$ such that the maps $$H_{i}(\varphi^{m} _{n} (\underline{x};M)) :H_{i} (\mathbb{K}_{\bullet}(\underline{x}^{m};M))\longrightarrow H_{i} (\mathbb{K}_{\bullet}(\underline{x}^{n};M))$$ are zero for all $i \geq 1$. 2. We say that $\underline{x}$ is a generalized parameter sequence on $R$, if (1) $\underline{x}$ is a generalized proregular sequence, (2) $(\underline{x})R \neq R$, and (3) $H^{\ell}_{\underline{x}} (R)_{{\frak{p}}} \neq 0$ for all prime ideals ${\frak{p}}\in {\operatorname{V}}(\underline{x}R)$. 3. We call $\underline{x}$ a generalized strong parameter sequence on $R$, if $x_{1},\cdots, x_{i}$ is a generalized parameter sequence on $R$ for all $1\leq i \leq \ell$. 4. We say that $R$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley, if each generalized strong parameter sequence on $R$ is a regular sequence on $R$. \(i) Assume that $R$ is a Noetherian ring. Let $\underline{x} :=x_{1},\cdots,x_{\ell}$ be a finite sequence of elements of $R$ and $m\geq n$ a pair of positive integers. [@Str Lemma 4.3.3] says that the morphisms $H_{i}(\varphi^{m} _{n} (\underline{x};R)) :H_{i} (\mathbb{K}_{\bullet}(\underline{x}^{m};R))\longrightarrow H_{i} (\mathbb{K}_{\bullet}(\underline{x}^{n};R))$ are finally null. Now, let $M$ be a finitely generated $R$-module. By making a straightforward modification of [@Str Lemma 4.3.3], one can see that the following homomorphisms $$H_{i}(\varphi^{m} _{n} (\underline{x};M)) :H_{i} (\mathbb{K}_{\bullet}(\underline{x}^{m};M))\longrightarrow H_{i} (\mathbb{K}_{\bullet}(\underline{x}^{n};M))$$ are finally null. Then any finite sequence of elements of $R$ is a generalized proregular sequence. \(ii) Generalized parameter sequences do not coincide with (partial) systems of parameters, even if the ring is Noetherian and local. To see an example, let $\mathbb{F}$ be a field and consider the ring $R:=\mathbb{F}[[X,Y,Z]]/ ((X)\cap(Y,Z))$. We use small letters to indicate the images in $R$. As was shown by [@Mat Theorem 14.1 (ii)], $y$ is a partial system of parameters. Note that $\min(yR)=\{{\frak{p}}\}$, where ${\frak{p}}:=(y,z)$, and so ${\operatorname{ht}}{\frak{p}}=0$. By using the Grothendieck Vanishing Theorem, $H^1_y(R)_{{\frak{p}}}=0$. Therefore, $y$ is not a generalized parameter sequence. \(iii) If $(R,{\frak{m}})$ is a $d$-dimensional Noetherian local ring, then by [@Mat Theorem 14.1 (ii)], there exists a choice $\underline{x}:=x_1,\cdots,x_{d}$ of a system of parameters such that ${\operatorname{ht}}(x_1,\cdots,x_i)=i$ for all $1\leq i\leq d$. Then ${\operatorname{ht}}({\frak{p}})=i$ for all ${\frak{p}}\in\min(x_1,\cdots,x_i)$ and, by applying the Grothendieck non-vanishing theorem, $H^i_{x_1,\cdots,x_i}(R)_{{\frak{p}}}\neq0$. This yields that $\underline{x}$ is a generalized strong parameter sequence. \(iv) By convention, the ideal generated by the empty sequence is the zero ideal and the empty sequence is a regular sequence of length zero over any ring. Let $R$ be a ring. Then the following assertions hold. 1. Assume that $R$ is Noetherian and Cohen-Macaulay. Then the ring $R[X_1,X_2,\cdots]$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley. 2. If $R_{{\frak{p}}}$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley for all prime ideals ${\frak{p}}$ of $R$, then $R$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley. 
[**Proof.**]{} (i) Note that if a ring is Cohen-Macaulay in the sense of Hamilton-Marley, then it is Cohen-Macaulay in the sense of generalized Hamilton-Marley. So (i) follows from Theorem 4.1. \(ii) Let $\underline{x}$ be a generalized strong parameter sequence on $R$ and ${\frak{p}}$ a prime ideal containing $\underline{x}$. Let $N$ be a finitely generated $R_{{\frak{p}}}$-module. One can find a finitely generated $R$-module $M$ such that $M_{{\frak{p}}}\cong N$. Since $\underline{x}$ is a generalized proregular sequence on $R$, for each positive integer $n$ there exists an $m \geq n$ such that the maps $$H_{i}(\varphi^{m} _{n} (\underline{x};M)) :H_{i} (\mathbb{K}_{\bullet}(\underline{x}^{m};M))\longrightarrow H_{i} (\mathbb{K}_{\bullet}(\underline{x}^{n};M))$$ are zero for all $i \geq 1$. On the other hand, localization commutes with the homology functors. Therefore, $\underline{x}$ is a generalized proregular sequence on $R_{{\frak{p}}}$. By [@HM Proposition 3.3 (c)], $\underline{x}$ is a strong parameter sequence on $R_{{\frak{p}}}$. Hence, $\underline{x}$ is a generalized strong parameter sequence on $R_{{\frak{p}}}$. So, $\underline{x}$ is a regular sequence on $R_{{\frak{p}}}$ for all prime ideals ${\frak{p}}$. In particular, $\underline{x}$ is a regular sequence on $R_{{\frak{p}}}$ for all prime ideals ${\frak{p}}$ containing $\underline{x}R$. Therefore, $\underline{x}$ is a regular sequence on $R$. $\Box$ This finishes the preparation for Theorem 1.2 from the introduction. We now restate Theorem 1.2 and give its proof. \[main\] The following assertions hold. 1. A Noetherian ring is Cohen-Macaulay in the original Noetherian sense if and only if it is Cohen-Macaulay in the sense of generalized Hamilton-Marley. 2. Coherent regular rings are Cohen-Macaulay in the sense of generalized Hamilton-Marley. 3. Let $R$ be a Cohen-Macaulay ring in the sense of generalized Hamilton-Marley and $G$ a finite group of automorphisms of $R$ such that the order of $G$ is a unit in $R$. Assume that $R$ is finitely generated as an $R^G$-module. Then $R^G$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley. 4. Let $R$ be a Noetherian Cohen-Macaulay ring. Then the polynomial ring $R[X_1,X_2,\cdots]$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley. 5. If $R_{{\frak{p}}}$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley for all prime ideals ${\frak{p}}$ of $R$, then $R$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley. [**Proof.**]{} (i) Let $R$ be a Noetherian ring. Note that, in view of Remark 5.2 (i), any finite sequence of elements of $R$ is a generalized proregular sequence. First, assume that $R$ is Cohen-Macaulay in the original Noetherian sense. Then by Lemma 5.3 (ii), we may and do assume that $(R,{\frak{m}})$ is local. Let $\underline{x}:=x_1,\cdots,x_{\ell}$ be a generalized strong parameter sequence on $R$. Due to [@HM Remark 3.2] we know that ${\operatorname{ht}}(\underline{x}R)=\ell$. In particular, $\underline{x}$ is a (partial) system of parameters. So, since $R$ is Cohen-Macaulay, $\underline{x}$ is a regular sequence on $R$. This shows that $R$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley. Now, assume that $R$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley. Let ${\frak{a}}$ be an ideal of $R$ of height $\ell$. 
In view of [@BH Theorem A.2, Page 412], one can find a sequence $\underline{x}:=x_1,\cdots,x_{\ell}$ of elements of ${\frak{a}}$ such that ${\operatorname{ht}}(x_1,\cdots,x_i)=i$ for all $1\leq i\leq \ell$ and ${\operatorname{ht}}(\underline{x}R)={\operatorname{ht}}({\frak{a}})$. Then by using [@HM Remark 3.2], $\underline{x}$ is a generalized strong parameter sequence on $R$. Thus, $\underline{x}$ is a regular sequence on $R$, and so ${\operatorname{c.grade}}_R({\frak{a}},R)\geq {\operatorname{ht}}({\frak{a}})$. Therefore, $R$ is Cohen-Macaulay in the original Noetherian sense. \(ii) [@HM Theorem 4.8] says that any coherent regular ring is locally Cohen-Macaulay in the sense of Hamilton-Marley, and so locally Cohen-Macaulay in the sense of generalized Hamilton-Marley. Therefore, Lemma 5.3 (ii) implies (ii). \(iii) Let $\underline{x}:=x_1,\cdots,x_\ell$ be a generalized parameter sequence on $R^G$. In order to show that $\underline{x}$ is a generalized parameter sequence on $R$, we need to show that the following three assertions hold: 1. $\underline{x}$ is a generalized proregular sequence on $R$, 2. $(\underline{x})R \neq R$, and 3. $H^{\ell}_{\underline{x}} (R)_{{\frak{q}}} \neq 0$ for all prime ideals ${\frak{q}}\in {\operatorname{V}}(\underline{x}R)$. Let $M$ be a finitely generated $R$-module. Since $R$ is a finitely generated $R^G$-module, we get that $M$ is also finitely generated as an $R^G$-module. From this one can easily check that $\underline{x}$ is a generalized proregular sequence on $R$. Hence (1) is satisfied. Assertion (2) holds trivially. In order to show (3), assume for a contradiction that $H^{\ell}_{\underline{x}} (R)_{{\frak{q}}}= 0$ for some prime ideal ${\frak{q}}\in {\operatorname{V}}(\underline{x}R)$. It follows from [@Bk Page 324, Proposition 23] that $S^{-1}(R^G)=(S^{-1}R)^G$ for any multiplicatively closed subset $S$ of $R^G$. Set ${\frak{p}}:={\frak{q}}\cap R^G$ and $S=R^G\setminus{\frak{p}}$. So $(R^G)_{{\frak{p}}}\cong(R_{{\frak{p}}})^G$ and ${\frak{p}}\in{\operatorname{V}}(\underline{x}R^G)$. Since $\underline{x}$ is a generalized parameter sequence on $R^G$, we have $$0\neq (H^{\ell}_{\underline{x}} (R^G))_{{\frak{p}}}\cong H^{\ell}_{\underline{x}} ((R^G)_{{\frak{p}}})\cong H^{\ell}_{\underline{x} } ((R_{{\frak{p}}})^G).$$ Also, $H^{\ell}_{\underline{x} } (R_{{\frak{p}}})_{{\frak{q}}R_{{\frak{p}}}}\cong H^{\ell}_{\underline{x} } (R_{{\frak{q}}})$. Then, to simplify the notation, after replacing $R$ by $R_{{\frak{p}}}$ and $R^G$ by $(R^G)_{{\frak{p}}}$, we can assume that $(R^G,{\frak{m}})$ is a quasi local ring with the following properties: $H^{\ell}_{\underline{x}} (R^G)\neq0$, ${\frak{q}}\cap R^G={\frak{m}}$ and $H^{\ell}_{\underline{x}} (R)_{{\frak{q}}}=0$. Let $\sigma:R {\longrightarrow}R$ be an element of $G$ and $y\in R^G$. Then the assignment $r/ y^n\mapsto \sigma(r)/ y^n$ induces an $R^G$-algebra isomorphism $\sigma_y:R_y {\longrightarrow}R_y$. This gives an $R^G$-isomorphism of the Čech complexes $\sigma_1:\Check{\textbf{C}}_{\bullet}(\underline{x},R){\longrightarrow}\Check{\textbf{C}}_{\bullet}(\underline{x},R)$. Let $1 \leq i\leq \ell$. Thus we have an $R^G$-isomorphism of the Čech cohomology modules $$\sigma_2^i:H^i(\Check{\textbf{C}}_{\bullet}(\underline{x},R)){\longrightarrow}H^i(\Check{\textbf{C}}_{\bullet}(\underline{x},R)).$$ Note that $\sigma_2^i(tm)=\sigma (t)\sigma_2^i(m)$ for $t\in R$ and $m\in H^i_{\underline{x}} (R)$. 
From this one can find that the assignment $m/s\mapsto \sigma_2^i(m)/\sigma(s)$ for $s\in R\setminus {\frak{q}}$ and $m\in H^i_{\underline{x}} (R)$, induces the following $R^G$-isomorphism $$\sigma_3^i:H^i_{\underline{x}} (R)_{{\frak{q}}}{\longrightarrow}H^i_{\underline{x}} (R)_{\sigma({\frak{q}})}.$$ Assume that ${\frak{q}}_1$ and ${\frak{q}}_2$ are prime ideals of $R$ lying over ${\frak{m}}$. In view of [@Bk Page 331, Theorem 2 (i)], one can find an element $\sigma$ in $G$ such that $\sigma({\frak{q}}_1)={\frak{q}}_2$. Also, every maximal ideal of $R$ contracts to ${\frak{m}}$. Thus, from the definition of $\sigma_3^{\ell}$, we have $H^{\ell}_{\underline{x}} (R)_{{\frak{n}}}=0$ for all ${\frak{n}}\in\max(R)$ and consequently $H^{\ell}_{\underline{x}} (R)=0$. Consider the Reynolds operator $\rho:R {\longrightarrow}R^G$. It sends $r\in R$ to $\frac{1}{|G|}\Sigma_{g\in G} gr$. It follows that $R^G$ is a direct summand of $R$ as an $R^G$-module. So $H^{\ell}_{\underline{x}} (R^G)=0$, a contradiction. This completes the proof of (3). Now, assume that $\underline{x}$ is a generalized strong parameter sequence on $R^G$. The same reasoning as above shows that $\underline{x}$ is a generalized strong parameter sequence on $R$. Since $R$ is Cohen-Macaulay in the sense of generalized Hamilton-Marley, we get that $\underline{x}$ is a regular sequence on $R$. By applying [@BH Proposition 6.4.4 (c)], we find that $\underline{x}$ is a regular sequence on $R^G$. This completes the proof of (iii). \(iv) and (v) are proved in Lemma 5.3. $\Box$ In the proof of the next result, we use the method of the proofs of Lemma 3.2 (ii) and Lemma 4.1 in [@TZ]. Recall that a group $G$ of automorphisms of $R$ is said to be locally finite if for every $x\in R$ the orbit of $x$ has finite cardinality. Let $R$ be a ring and $G$ a group of automorphisms of $R$. 1. Let ${\frak{a}}$ be an ideal of $R$ and $S$ a pure extension of $R$. Then ${\operatorname{K.grade}}_{R}({\frak{a}}, R)\geq {\operatorname{K.grade}}_S ({\frak{a}}S,S)$. 2. Let ${\frak{a}}$ be an ideal of $R^G$. Assume that there is a Reynolds operator for the extension $R/R^G$. Then ${\operatorname{K.grade}}_{R^G}({\frak{a}}, R^G)\geq {\operatorname{K.grade}}_R ({\frak{a}}R,R)$. 3. Let ${\frak{q}}$ be a prime ideal of $R$ and $G$ a locally finite group of automorphisms of $R$ such that the cardinality of the orbit of $x$ is a unit in $R$ for every $x\in R$. Then ${\operatorname{ht}}({\frak{q}})\leq{\operatorname{ht}}({\frak{q}}\cap R^G)$. The equality holds if $G$ is finite. [**Proof.**]{} (i) Let $\underline{y}:=y_1,\cdots,y_{s}$ be a finite sequence of elements of ${\frak{a}}S$. Then there exists a finite subset $\underline{x}:=x_1,\cdots,x_{\ell}$ of elements of ${\frak{a}}$ such that $\underline{y}S\subseteq\underline{x}S$. In view of [@BH Exercise 10.3.31(a)], one can find that the natural map $H_i (\mathbb{K}_{\bullet}(\underline{x})){\longrightarrow}H_i (\mathbb{K}_{\bullet}(\underline{x})\otimes_RS)$ is injective for all $i$. Then, by the symmetry of Koszul homology and Koszul cohomology, one has ${\operatorname{K.grade}}_{R}(\underline{x}R, R)\geq {\operatorname{K.grade}}_{R}(\underline{x}R, S)$. Now, by Proposition \[pro1\] (iii) and (iv), we find that $$\begin{array}{ll} {\operatorname{K.grade}}_{R}({\frak{a}}, R)&\geq {\operatorname{K.grade}}_{R}(\underline{x}R, R)\\&\geq {\operatorname{K.grade}}_{R}(\underline{x}R, S)\\&={\operatorname{K.grade}}_{S}(\underline{x}S, S)\\&\geq {\operatorname{K.grade}}_{S}(\underline{y}S, S) .\\ \end{array}$$ So the claim follows from the definition. 
\(ii) By using the Reynolds operator, one can see that $R$ is a pure extension of $R^G$. So (ii) follows from (i). \(iii) Since $G$ is locally finite, by [@Bk Page 323, Proposition 22] the ring extension $R/ R^G$ is integral. The first claim follows from this. Let $${\frak{p}}_0\subsetneqq{\frak{p}}_1\subsetneqq\cdots\subsetneqq{\frak{p}}_n={\frak{q}}\cap R^G$$ be a chain of prime ideals of $R^G$. By the lying-over theorem, there exists ${\frak{q}}_0\in{\operatorname{Spec}}(R)$ such that ${\frak{q}}_0\cap R^G={\frak{p}}_0$. Thus, by the going-up theorem, there is a chain of prime ideals ${\frak{q}}_0\subsetneqq{\frak{q}}_1\subsetneqq\cdots\subsetneqq{\frak{q}}_n$ of $R$ such that ${\frak{q}}_i\cap R^G={\frak{p}}_i$. In view of [@Bk Page 331, Theorem 2 (i)], there exists an automorphism $\sigma$ in $G$ such that $\sigma({\frak{q}}_n)={\frak{q}}$. It is clear that $$\sigma({\frak{q}}_0)\subsetneqq\sigma({\frak{q}}_1)\subsetneqq\cdots \subsetneqq\sigma({\frak{q}}_n)={\frak{q}}$$ is a chain of prime ideals of $R$ and so ${\operatorname{ht}}{\frak{q}}\geq{\operatorname{ht}}({\frak{q}}\cap R^G)$. $\Box$ We now apply Lemma 5.5 to obtain the following result on the Cohen-Macaulayness of rings of invariants in the sense of (finitely generated) ideals. Let $R$ be a Cohen-Macaulay ring in the sense of (finitely generated) ideals and $G$ a finite group of automorphisms of $R$ such that the order of $G$ is a unit in $R$. Let ${\frak{a}}$ be a (finitely generated) ideal of $R^G$. Then ${\operatorname{K.grade}}_{R^G}({\frak{a}}, R^G)= {\operatorname{K.grade}}_R ({\frak{a}}R,R)$ and ${\operatorname{ht}}({\frak{a}})={\operatorname{ht}}({\frak{a}}R)$. In particular, $R^G$ is Cohen-Macaulay in the sense of (finitely generated) ideals. [**Proof.**]{} Let ${\frak{a}}$ be a (finitely generated) ideal of $R^G$ and ${\frak{q}}\in {\operatorname{V}}({\frak{a}}R)$ be such that ${\operatorname{ht}}({\frak{a}}R)={\operatorname{ht}}{\frak{q}}$. Thus, by Lemma 5.5 (iii), ${\operatorname{ht}}({\frak{a}}R)={\operatorname{ht}}({\frak{q}}\cap R^G)$. Therefore, Lemma \[key\] and Lemma 5.5 (ii) yield that $${\operatorname{ht}}{\frak{a}}\geq {\operatorname{K.grade}}_{R^G}({\frak{a}}, R^G)\geq {\operatorname{K.grade}}_R ({\frak{a}}R,R)={\operatorname{ht}}({\frak{a}}R)={\operatorname{ht}}({\frak{q}}\cap R^G)\geq{\operatorname{ht}}{\frak{a}},$$ which completes the proof. $\Box$ To complete our desired list of the behavior of rings of invariants under the different types of Cohen-Macaulayness, we state the following result. A consequence of this is given by Corollary 5.8. Let $R$ be a weak Bourbaki (height) unmixed ring and $G$ a finite group of automorphisms of $R$ such that the order of $G$ is a unit in $R$. Then $R^G$ is weak Bourbaki (height) unmixed. [**Proof.**]{} The proof in the weak Bourbaki height unmixed case is similar to that in the weak Bourbaki unmixed case, so we only give the proof of the latter. Let ${\frak{a}}$ be a finitely generated ideal of $R^G$ with the property that ${\operatorname{ht}}{\frak{a}}\geq \mu({\frak{a}})$. Assume that ${\frak{p}}$ belongs to ${\operatorname{wAss}}_{R^G}(R^G/{\frak{a}})$. Then there exists an element $r$ in $R^G$ such that ${\frak{p}}\in\min(({\frak{a}}:_{R^G}r))$. Let ${\frak{q}}$ be any prime ideal of $R$ lying over ${\frak{p}}$. First, we show that ${\frak{q}}\in{\operatorname{wAss}}_{R}(R/{\frak{a}}R)$. To do this, let ${\frak{q}}'$ be a prime ideal of $R$ such that $({\frak{a}}R:_{R}r)\subseteq {\frak{q}}'\subseteq {\frak{q}}$. 
By contraction of this to $R^G$ we get that ${\frak{q}}'\cap R^G= {\frak{q}}\cap R^G$, because ${\frak{a}}R\cap R^G={\frak{a}}$. So ${\frak{q}}'={\frak{q}}$, i.e., ${\frak{q}}\in{\operatorname{wAss}}_{R}(R/{\frak{a}}R)$. Let ${\frak{q}}_0\in{\operatorname{V}}({\frak{a}}R)$ be such that ${\operatorname{ht}}({\frak{a}}R)={\operatorname{ht}}({\frak{q}}_0)$. Then, in view of Lemma 5.5 (iii), $${\operatorname{ht}}({\frak{a}}R)={\operatorname{ht}}({\frak{q}}_0)={\operatorname{ht}}({\frak{q}}_0 \cap R^G)\geq{\operatorname{ht}}({\frak{a}}) \geq \mu({\frak{a}}) \geq \mu({\frak{a}}R).$$This implies that ${\frak{q}}\in\min({\frak{a}}R)$. Now, we show that ${\frak{p}}\in\min({\frak{a}})$. To see this, let ${\frak{p}}'$ be a prime ideal of $R^G$ and assume that ${\frak{a}}\subseteq {\frak{p}}'\subseteq {\frak{p}}$. By the lying-over theorem, there exists ${\frak{q}}'\in{\operatorname{Spec}}(R)$ such that ${\frak{q}}'\cap R^G={\frak{p}}'$. By applying the going-up theorem to this, we find a prime ideal ${\frak{q}}''$ of $R$ such that ${\frak{q}}'\subseteq {\frak{q}}''$ and ${\frak{q}}''\cap R^G={\frak{p}}$. As we saw, one has ${\frak{q}}''\in{\operatorname{wAss}}_{R}(R/{\frak{a}}R)=\min({\frak{a}}R)$. This implies that ${\frak{p}}'= {\frak{p}}$ and consequently ${\frak{p}}\in\min({\frak{a}})$. $\Box$ The statement of the next result involves a non Noetherian version of the concept of Veronese subrings of the polynomial ring $R:=\mathbb{C}[X_1,X_2,\cdots]$. Let $f:=X_{i_1}^{j_1}\cdots X_{i_{\ell}}^{j_{\ell}}$ be a monomial in $R$. The degree of $f$ is defined by $d(f):=\sum_{k=1}^{\ell} j_k$. Let $n$ be a positive integer. We call the $\mathbb{C}$-algebra generated by all monomials of degree $n$ the $n$-th Veronese subring of $R$, and we denote it by $R_n$. Let $n$ be a positive integer and let $R_n$ be the $n$-th Veronese subring of $R:=\mathbb{C}[X_1,X_2,\cdots]$. Then $R_n$ is Cohen-Macaulay in the sense of each part of Definition \[def1\]. [**Proof.**]{} In light of Theorem 4.1 we see that $\mathbb{C}[X_1,X_2,\cdots]$ is Cohen-Macaulay in the sense of each part of Definition \[def1\]. Since $\mathbb{C}$ is an algebraically closed field, for each positive integer $n$ the multiplicative group $\mathbb{C}\setminus\{0\}$ has a subgroup $G$ of order $n$. Let $g$ be in $G$. The assignment $X_i\mapsto g X_i$ induces an action of $G$ on $R$. Assume that $f$ is a monomial in $R$. Then $f$ belongs to $R^G$ if and only if $g^{d(f)}=1$ for all $g\in G$. On the other hand, by [@Ha V. Theorem 5.3], $G$ is cyclic. So $f$ belongs to $R^G$ if and only if $d(f)=\ell n$ for some $\ell\in \mathbb{N}\cup\{0\}$. From this we have $R^G=R_n=\mathbb{C}[f: d(f)\in n\mathbb{N}]$. Due to Theorem 5.6 and Proposition 5.7 we know that $R_n$ is Cohen-Macaulay in the sense of ideals and weak Bourbaki unmixed. Now, the claim follows by Theorem 3.3. $\Box$ It is noteworthy that the converses of the previous results of this section are not true and that their assumptions are really needed. \(i) Let $\mathbb{F}$ be a perfect field of characteristic $2$. In [@Ber], Bertin presented an action of a finite group $G$ of order $4$ on $R:=\mathbb{F}[X,Y,Z,W]$ such that $R^G$ is Noetherian but not Cohen-Macaulay. Thus, in Theorem 5.6 and Proposition 5.7 the unit assumption on $|G|$ is really needed, even if $R$ is Noetherian and regular. \(ii) Let $A$ be a Noetherian normal domain which is not Cohen-Macaulay. In particular, $A$ is a Krull domain. 
A beautiful result of Bergman [@Be Proposition 5.2] states that there is a principal ideal domain $R$ and an infinite cyclic group $G$ such that $R^G=A$. So, in Theorem 5.6 the finiteness assumption on $G$ is really needed, even if $R$ is Noetherian and regular. \(iii) Let $\mathbb{F}$ be a field of characteristic different from $2$ and set $R:=\mathbb{F}[[X,Y]]/(XY,Y^2)$. Then $R$ is not Cohen-Macaulay. The assignments $X\mapsto X$ and $Y\mapsto -Y$ induce an automorphism of $R$; call it $g$. Consider the group of automorphisms generated by $g$ and denote it by $G:=\langle g\rangle$. Then $|G|=2$ and $R^G=\mathbb{F}[[X]]$ (cf. [@F2 Page 448]). Therefore, the converse part of Proposition 5.7 is not true, even if $R^G$ is Noetherian and regular. \(iv) Fogarty [@F2] presented a wild action of a cyclic group $G$ on a local Noetherian ring $R$ such that $ R^G$ is Noetherian and ${\operatorname{depth}}R-{\operatorname{depth}}R^G$ can be arbitrarily large. Thus the assumption in Lemma 5.5 (ii) is really needed. \(v) Nagata constructed a zero-dimensional Noetherian ring $R$ and a finite group $G$ of automorphisms of $R$ such that $R^G$ is non Noetherian; see e.g. the introduction of [@F1]. The ring extension $R/R^G$ is integral, because $G$ is finite. Since $R$ is zero-dimensional, $R^G$ is zero-dimensional. It is clear that any zero-dimensional ring is Cohen-Macaulay in the sense of each part of Definition \[def1\]. Thus, $R^G$ is as well. Therefore, it is possible for $R^G$ to be Cohen-Macaulay without the unit assumption on $|G|$. [99]{} , [M. Hochster]{}, [*Finite tor dimension and failure of coherence in absolute integral closures*]{}, J. Pure Appl. Algebra, [**122**]{}, (1997), 171–184. , [*Grade non Noetherian*]{}, Comm. Algebra, [**8**]{}(9), (1981), 811–840. , [D.E. Dobbs]{}, [P.M. Eakin]{}, [W.J. Heinzer]{}, [*On the generalized principal ideal theorem and Krull domains*]{}, Pacific J. of math., [**146**]{}(2), (1990), 201–215. , [*A theory of grade for commutative rings*]{}, Proc. AMS., [**36**]{}, (1972), 365–368. , [*Groups acting on hereditary rings*]{}, Proc. of London Math. Soc., [**23**]{}, (1971), 365–368. , [*Anneaux cohérents réguliers*]{}, C. R. Acad. Sci. Paris, Sér. A-B, [**273**]{}, (1971). , [*Commutative algebra*]{}, Chapters 1-7, Springer-Verlag, Berlin, 1989. , [J. Herzog]{}, [*Cohen-Macaulay rings*]{}, Rev. Ed. Cambridge univ. Press, [**39**]{}, 1998. , [*Kähler differentials and Hilbert’s fourteenth problem for finite groups*]{}, Amer. J. of math., [**102**]{}(6), (1980), 1159-1175. , [*On the depth of local rings of invariants of cyclic groups*]{}, Proc. of AMS., [**83**]{}, (1981), 448–452. , [*On the $\mu^i$ in a minimal injective resolution II*]{}, Math. Scand., [**41**]{}, (1977), 19-44. , [*Commutative coherent rings*]{}, Springer LNM, [**1371**]{}, 1989. , [*Fixed rings of coherent regular rings*]{}, Comm. Alg., [**20**]{}(9), (1992), 2635–2651. , [*Coherence, regularity and homological dimensions of commutative fixed rings*]{}, In: Commutative algebra, (Trieste, 1992), World Sci. Publ., River Edge, NJ, (1994), 89–106. , [*Homological dimensions of localizations of polynomial rings*]{}, (Knoxville, TN, 1994), In: Zero-dimensional commutative rings, In: Lect. Notes in pure and appl. Math., [**171**]{}, Marcel Dekker, New York, (1995), 209–222. , [*Algebra*]{}, Springer graduate text in math., [**73**]{}, 1974. , [*Unmixedness and generalized principal ideal theorem*]{}, Lect. Notes pure appl. Math., [**241**]{}, (2005), 282–292. 
, [*Weak Bourbaki unmixed rings: A step towards non Noetherian Cohen-Macaulayness*]{}, Rocky Mountain J. of math., [**34**]{}(3), (2004), 963–977. , [*Weak Bourbaki unmixed rings: A step towards non Noetherian Cohen-Macaulayness*]{}, Ph.D. thesis, University of North Carolina, (1999). , [T. Marley]{}, [*Non Noetherian Cohen-Macaulay rings*]{}, J. of algebra, [**307**]{}, (2007), 343–360. , [*Grade-sensitive modules and perfect modules*]{}, London math. Soc., [**29**]{}(3), (1974), 55–76. , [*Canonical elements in local cohomology modules and the direct summand conjecture*]{}, J. of Algebra, [**84**]{}, (1983), 503–553. , [*Cohen-Macaulay rings, invariant theory, and the generic perfection of determinantal loci*]{}, Amer. J. of math., [**93**]{}, (1971), 1020-1058. , [C. Huneke]{}, [*Infinite integral extensions and big Cohen-Macaulay algebras*]{}, Ann. of math., [**135**]{}(2), (1992), 53–89. , [*Regularity conditions in non Noetherian rings*]{}, Trans. AMS., [**155**]{}, (1971), 363–374. , [*Commutative ring theory*]{}, Cambridge studies in advanced math., [**8**]{}, Cambridge, 1989. , [*Finite free resolutions*]{}, Cambridge tracts math., [**71**]{}, 1976. , [*Almost Regular Sequences and the Monomial Conjecture*]{}, Michigan Math. J., [**57**]{}, (2008), 615–623. , [*Proregular sequences, local cohomology, and completion*]{}, Math. Scand., [**92**]{}(2), (2003), 271–289. , [*Homological questions in local algebra*]{}, London math. Lect. Notes series, [**145**]{}, 1990. , [H. Zakeri]{}, [*Action of certain groups on local cohomology modules and Cousin complexes*]{}, Alg. Colloquium, [**4**]{}(8), (2001), 441–454.
--- abstract: 'We demonstrate the generation of counter-rotating cavity solitons in a silicon nitride microresonator using a fixed, single-frequency laser. We demonstrate a dual 3-soliton state with a difference in the repetition rates of the soliton trains that can be tuned by varying the ratio of pump powers in the two directions. Such a system enables a highly compact, tunable dual comb source that can be used for applications such as spectroscopy and distance ranging.' author: - Chaitanya Joshi - Alexander Klenner - Yoshitomo Okawachi - Mengjie Yu - Kevin Luke - Xingchen Ji - Michal Lipson - 'Alexander L. Gaeta' bibliography: - 'counterprop.bib' title: 'Counter-rotating cavity solitons in a silicon nitride microresonator' --- Advancements in optical frequency comb technology over the past two decades have enabled applications in a wide range of fields including precision spectroscopy [@diddams2007], frequency metrology [@udem2002], optical clockwork [@diddams2001; @newbury2011], astronomical spectrograph calibration [@li2008; @steinmetz2008], and microwave signal synthesis [@fortier2011]. Applications benefit from the high precision of the frequencies of the comb lines and require low noise, stable operation [@ye2007]. Stabilized low-noise comb sources were first demonstrated using mode-locked solid state lasers and fiber lasers [@diddams2000; @jones2000]. Over the last decade, on-chip optical frequency comb generation using microresonators has seen significant progress and has been demonstrated in several materials including silica [@delhaye2007; @li2012; @yi2015; @webb2016], crystalline fluorides [@savchenkov2008; @liang2011; @herr2014], silicon nitride (Si$_3$N$_4$) [@foster2011; @wang2013; @joshi2016; @brasch2016; @li2017], hydex [@Peccianti2012], diamond [@hausmann2014], aluminum nitride [@jung2014], silicon [@griffith2015; @yu2016], and AlGaAs [@pu2016]. Low-noise soliton mode-locked microresonator frequency combs have been demonstrated [@herr2014; @wang2016; @li2017; @brasch2016; @yi2015; @webb2016; @joshi2016; @yu2016] by sweeping the relative detuning between the laser and cavity resonance from the blue- to the red-detuned [@Lamont13; @herr2014]. The dynamics of mode-locking have been studied using various approaches to control the effective detuning, including laser frequency tuning [@herr2014; @wang2016; @li2017], ‘power kick’ [@brasch2016; @yi2015; @webb2016], and resonance frequency tuning using integrated heaters [@joshi2016] or free-carrier lifetime control [@yu2016]. Recently, there has been interest in studying the nonlinear dynamics of bidirectionally pumped microresonators [@delbino2017; @yang2017]. For the case in which the pumps have unequal powers, the counter-rotating fields experience different nonlinear phase shifts that leads to unequal detuning from the cavity resonances for the clockwise (CW) and counter-clockwise (CCW) directions. Such behavior can lead to bistability [@delbino2017] and can be exploited to create a gyroscope with enhanced sensitivity to rotation [@silver2017; @wang2014]. For the case in which such a system can be mode-locked it would result in the generation of two soliton trains with different repetition rates in a single microresonator and thus be used as a dual-comb source in a number of applications [@dutt2016; @yudual2016; @suh2016; @pavlov2017; @bernhardt2010; @coddington2016; @link2017; @millot2016]. 
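To make the detuning argument above concrete, the following is a minimal Python sketch of the standard Kerr-shift estimate for counter-rotating fields: each direction sees self-phase modulation from its own circulating power plus cross-phase modulation (with the usual factor of two) from the counter-rotating field, so unequal powers shift the two resonances by different amounts. All parameter values below (nonlinear parameter, ring circumference, circulating powers) are placeholder assumptions chosen only for illustration and are not taken from this work.

```python
import numpy as np

# Hypothetical, order-of-magnitude parameters (not from the paper):
gamma = 1.0              # nonlinear parameter of the waveguide [1/(W*m)]
L = 2 * np.pi * 115e-6   # ring circumference consistent with a ~200 GHz FSR [m]
FSR = 200e9              # free spectral range [Hz]
P_cw, P_ccw = 2.0, 1.0   # assumed circulating powers in the two directions [W]

# Round-trip Kerr phase for each direction: SPM from its own power plus
# XPM (factor 2) from the counter-rotating field.
phi_cw = gamma * L * (P_cw + 2 * P_ccw)
phi_ccw = gamma * L * (P_ccw + 2 * P_cw)

# The differential phase maps to a differential shift of the two cavity
# resonances, i.e. unequal effective pump-resonance detunings.
delta_f = (phi_cw - phi_ccw) * FSR / (2 * np.pi)
print(f"differential resonance shift ~ {delta_f / 1e6:.1f} MHz")
```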
Recently, counter-propagating solitons were generated in a silica microresonator using a single laser that was frequency shifted by two acousto-optic modulators (AOM's) before pumping the resonator in the two directions [@yang2017]. The difference in effective detuning was controlled using the two AOM's and led to a difference in repetition rate for the solitons. While there have also been recent demonstrations of bidirectional mode-locked solid-state [@ideguchi16] and fiber [@kieu08; @gowda2015] laser cavities, a microresonator-based system could be highly compact and fully integrated onto a chip. In this Letter, we present a novel approach for generating counter-rotating trains of solitons in a single microresonator using a single pump laser, without frequency-shifting devices, by thermally tuning the microresonator. By tuning the relative pump powers in the two directions, we can control the repetition rate of the two soliton-modelocked pulse trains. Such a dual-comb source using a single pump laser and a single microresonator eliminates common-mode noise due to relative fluctuations between two resonators and lasers and would enable improved real-time, high signal-to-noise ratio (SNR) measurements of molecular spectra [@coddington2016], time-resolved measurements of fast chemical processes [@fleisher2014], and precise distance measurements [@coddington2009; @suh2017]. ![(a) Experimental setup to generate counter-rotating solitons in a single microresonator using a single pump laser. We characterize the generated counter-rotating solitons (b) individually, measuring the optical spectra and transmitted optical powers in CW and CCW directions and (c) after combining the output in both directions to measure the mixed optical and heterodyned RF signal.[]{data-label="fig:setup"}](setup){width="\linewidth"} In our experiment (Fig. \[fig:setup\](a)), we use a single-frequency laser (1559.79 nm) with a narrow linewidth (1 kHz) as our pump source, which is amplified using a polarization-maintaining (PM) erbium-doped fiber amplifier (EDFA). The remaining experimental setup consists of PM components to ensure that the polarizations of the pump light and of the generated combs are maintained throughout. We split the amplified output using a 50:50 splitter, and the outputs of the splitter are sent to a pair of variable optical attenuators (VOA's) to independently control the pump power in the CW and CCW directions. The pump for the CW and CCW directions is connected to port 1 of the two circulators. Port 2 of both circulators is connected to a pair of PM-lensed fibers to couple light in and out of the chip. We use a Si$_3$N$_4$ ring with a 200-GHz free spectral range (FSR) and a cross section of 950 x 1500 nm that yields anomalous group-velocity dispersion at the pump wavelength, as required for soliton formation [@joshi2016]. The microresonator is undercoupled with an extinction ratio of 0.53, and the resonance frequency of the ring is controlled using integrated platinum resistive heaters. We observe a narrowing of the detuning region corresponding to simultaneous soliton generation in both directions, which we believe occurs due to the nonlinear coupling between the counter-rotating modes. We find that the overlap in the detuning region for generation of the three-soliton state in both directions is sufficiently broad to permit stable operation, in contrast to the single- and two-soliton states, where the detuning region was too narrow for sustained operation. 
The resulting 3-FSR comb spectra indicate three equally-spaced solitons in the cavity for each direction. The generated combs in the CW and CCW direction are coupled out using the lensed fibers at port 3 on the respective circulators, and the optical spectra and transmitted power of each are measured using two optical spectrum analyzers (OSA’s) and fast photodiodes ($\geq$ 12.5 GHz) (Fig. \[fig:setup\](b)). The two soliton trains are then combined using another 50:50 splitter, and the optical and RF properties of the dual comb are measured using an OSA and microwave spectrum analyzer (MSA) (Fig. \[fig:setup\](c)). We generate a 3-soliton state in both directions with resonance tuning of the microresonator at a speed of 200 Hz. In order to tune the cavity resonance frequency close to the pump laser frequency, we apply 98 mW of electrical power to the integrated heater (R = 200 $\Omega$). The pump transmission is recorded as we scan the cavity resonance across the laser, and we observe a low-noise ‘step’ on the red-detuned side characteristic of soliton mode-locking [@herr2014; @joshi2016], which corresponds to the 3-soliton state. We use the thermal tuning method to reach this state deterministically [@joshi2016] by applying a downward current ramp to a fixed DC offset current (Fig. \[fig:scanburst\]). We generate a bidirectional 3-soliton state over pump powers from 1.35 to 6.1 mW in the bus waveguide in each direction and record its properties over this range. ![The transmitted pump power as a downward current ramp is applied followed by a fixed current offset to reach the 3-soliton state. 30 mV corresponds to 3.86 mW in the bus waveguide.[]{data-label="fig:scanburst"}](burst){width="\linewidth"} The generated combs are sent to a pair of OSA’s to record the optical spectrum. To allow for simultaneous measurement of the spectra (Fig. \[fig:specind\]) and pump transmission (Fig. \[fig:scanburst\]), the OSA’s are triggered using a signal from the arbitrary waveform generator that is used to drive the integrated heater. The 3-FSR spaced optical spectra in the CW (Fig. \[fig:specind\](a)) and CCW (Fig. \[fig:specind\](b)) directions show good agreement with the hyperbolic secant pulse profile, as shown by the black dashed curves. ![Measured optical spectra for the (a) CW and (b) CCW directions. We observe a 3-soliton mode-locked comb in both directions, and the sech$^2$ fits are shown with the dashed black curves.[]{data-label="fig:specind"}](specind){width="\linewidth"} We use a 50:50 splitter to combine the two combs and send them to both an OSA and a photodiode (bandwidth $\geq$ 250 MHz) to detect the heterodyne RF signal on a MSA. The measured optical spectrum (Fig. \[fig:mixedsoliton\] (a)) shows a hyperbolic secant spectral profile with a 3-FSR spacing as seen in the individual optical spectra for the each direction (Fig. \[fig:specind\]). However, due to the OSA resolution limit of 1.25 GHz (0.01 nm), the difference in repetition rates is not resolvable. We measure the heterodyned RF signal and observe a RF comb with a spacing of 19 MHz, which indicates a difference in the FSR of 6.3 MHz since the measured RF beatnotes correspond to multiples of 3$\times \Delta f_{\text{r}}$ from the two 3-soliton states. The linewidth of the first RF comb line is $\leq$ 100 kHz measured at a resolution bandwidth (RBW) of 50 kHz \[inset of Fig. \[fig:mixedsoliton\](b)\], which corresponds to a mutual coherence time for the two solitons of $\geq$ 10 $\mu$s. 
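As a simple consistency check of these numbers (a back-of-the-envelope rewriting of the values quoted above, not an additional measurement), the lowest RF beat note and its linewidth translate into $$\Delta f_{\text{r}} = \frac{19\ \text{MHz}}{3} \approx 6.3\ \text{MHz}, \qquad \tau_{\text{coh}} \sim \frac{1}{\Delta \nu_{\text{RF}}} \gtrsim \frac{1}{100\ \text{kHz}} = 10\ \mu\text{s},$$ where the factor of 3 reflects the fact that both combs are 3-soliton (3-FSR) states, so that the heterodyne beat notes appear at multiples of $3\times\Delta f_{\text{r}}$, and $\Delta \nu_{\text{RF}}$ is the measured linewidth of the first RF comb line, used here as a simple inverse-linewidth estimate of the mutual coherence time.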
![(a) Optical spectrum for the dual comb with both the CW and CCW soliton trains combined. We observe a 3-soliton spectrum. The sech$^2$ fit is shown in the dashed black curve. (b) Measured heterodyned RF comb with a sequence of beat notes corresponding to multiples of 3$\times\Delta f_{\text{r}}$ for a power ratio $r$ = 0.67. The inset shows the first beat note in the heterodyned RF comb.[]{data-label="fig:mixedsoliton"}](mixedsoliton){width="\linewidth"} We develop a simple model to predict the difference in repetition rates for the counter-rotating solitons. The self-phase modulation (SPM) and cross-phase modulation (XPM) for the pumps induces a shift in the pump-cavity detuning for the CW and CCW modes due to a change in the effective index. The pump-cavity detuning in each direction ($\delta \omega_{\text{CW}}$, $\delta \omega_{\text{CCW}}$) depends on pump detuning with respect to the cold cavity mode ($\delta \omega_{\text{p}}$) and the intracavity pump powers ($P_{\text{CW}},P_{\text{CCW}}$) as given by, $$\begin{aligned} \label{detuning} \delta \omega_{\text{CW}} &= \delta \omega_{\text{p}} + \frac{\omega_0 n_2}{n_{\text{eff}} A_{\text{eff}}}(P_{\text{CW}} + 2 P_{\text{CCW}}), \\ \delta \omega_{\text{CCW}} &= \delta \omega_{\text{p}} + \frac{\omega_0 n_2}{n_{\text{eff}} A_{\text{eff}}}(P_{\text{CCW}} + 2 P_{\text{CW}}), \end{aligned}$$ where $\omega_0$ is the resonance frequency, $A_{\text{eff}}$ is the mode area, $n_{\text{eff}}$ is the effective index of the waveguide, and $n_2$ is the nonlinear index coefficient [@delbino2017]. Unequal pump powers in the CW and CCW modes yield a difference in the pump-cavity detuning in the CW and CCW direction. The peak power of the generated solitons has a linear dependence on the pump-cavity detuning [@herr2014; @yi2015] as given by, $$\begin{aligned} \label{soliton} P_{\text{sol}} = \frac{2 \ c \ A_{\text{eff}} \ \tau_{\text{r}}}{\omega_0 \ n_2 \ L} \delta \omega\end{aligned}$$ where $L$ is the cavity length, and $\tau_{\text{r}}$ is the round trip time. As a result, unequal pump powers leads to unequal peak powers for the counter-rotating solitons. We assume over each round trip the solitons acquire a nonlinear self-phase shift, as well as a cross-phase shift from the pump fields. Due to the small temporal overlap between the counter-rotating solitons we neglect the XPM from the counter-rotating soliton. If we assume the XPM from the pump fields acts on both solitons equally and cancels out, the unequal peak powers of the solitons in the two directions result in a difference in the nonlinear phase over one round trip that results in a difference in the repetition rates $\Delta f_{\text{r}}$ as given by, $$\begin{aligned} \label{delfsr} \Delta f_{\text{r}} = |f_{\text{CW}} - f_{\text{CCW}}| = g \frac{n_2 \ f_{\text{r}}}{A_{\text{eff}} \ n_{\text{eff}}} |P_{\text{sol,CW}} - P_{\text{sol,CCW}}|,\end{aligned}$$ where $g$ is the factor for the nonlinear phase shift induced by the dissipative soliton on itself. Using Eqs. \[detuning\]-\[delfsr\], $\Delta f_{\text{r}}$ can be expressed in terms of the transmitted pump power $P_{\text{out}}$ in the clockwise direction, the ratio $r = P_{\text{CCW}}/P_{\text{CW}}$ of pump powers, and the ratio $\eta$ of the intracavity pump power to the transmitted pump power, which depends on the losses in the ring and the coupling constant, as well as losses due to coupling from the bus waveguide to the lensed fiber (2 dB), at the circulator (1 dB) and 50:50 splitter (3 dB). 
The value of $\Delta f_{\text{r}}/P_{\text{out}}$ can then be expressed purely in terms of material and waveguide parameters such that, $$\begin{aligned} \label{delfsrnorm} \frac{\Delta f_{\text{r}}}{P_{\text{out}}} = g \frac{2 \ n_2 \ f_{\text{r}}}{n_{\text{eff}}\ A_{\text{eff}}} \ \eta \ |1-r|.\end{aligned}$$ This result suggests that we can control $\Delta f_{\text{r}}$ by simply varying the ratio of the counter-rotating pump powers. Experimentally we use the VOA’s to independently control the pump power in the two directions. The coupled pump power in the bus waveguide in both directions is varied over a range of 1.35 to 6.1 mW. We measure the transmitted pump powers in each direction to determine $P_{\text{out}}$ and $r$. We measure the frequencies of the heterodyned RF peaks and infer the difference $\Delta f_{\text{r}}$. The ratio $\Delta f_{\text{r}}/P_{\text{out}}$ yields a normalized measure of the tunability of the difference in FSR at different power levels. We plot the measured values of $\Delta f_{\text{r}}/P_{\text{out}}$ and the fit to Eq. \[delfsrnorm\] while varying $r$ in (Fig. \[fig:rfbyp\]). We observe reasonable agreement between the theoretically predicted curve and the measured values. The parameter $\eta$ in Eq. \[delfsrnorm\] depends on the coupling constant between the bus waveguide and microresonator and on the waveguide loss. Over a range of power ratios close to unity, we observe locking between the two soliton trains, which is indicative of identical repetition frequencies for the CW and CCW soliton trains. A full understanding of the locking mechanism of the repetition rates over a range of power ratios will require extension of the theoretical analysis to include soliton comb formation dynamics including the coupling between the modes in both directions. We use Eq. \[delfsrnorm\] to fit the red curve in Fig. \[fig:rfbyp\] and from this fit extract the value of $\text{g} \times \eta$ to be 4600. ![The difference in repetition rate normalized to the power in the clockwise mode ($\Delta f_{\text{r}}/P_{\text{out}}$) as a function of the power ratio $r$. Each of the measured points from the experiment is plotted as a black dot. The red curve represents the theoretical curve from Eq. \[delfsrnorm\].[]{data-label="fig:rfbyp"}](tuning){width="\linewidth"} In conclusion, we observe counter-rotating solitons in a single microresonator using a single pump laser. We demonstrate the ability to tune the difference in the repetition frequency of the two soliton trains by varying pump power for the modes in the clockwise and counterclockwise directions. Using a single-frequency laser and a single microresonator eliminates common mode noise in the dual-comb source. With future advances, we envisage a fully integrated tunable dual-comb source with electrical control of both the mode-locking as well as the tuning of repetition rates that would find applications in dual-comb spectroscopy and adaptive distance measurement. **Funding.** Air Force Office of Scientific Research (AFOSR) (FA9550-15-1-0303); National Science Foundation (NSF) (ECS-0335765); Defense Advanced Research Projects Agency (W31P4Q-15-1-0015); A.K. acknowledges a postdoc fellowship from the Swiss National Science Foundation (P2EZP2\_162288) **Acknowledgements.** This work was performed in part at the Cornell Nano-Scale Facility, a member of the National Nanotechnology Infrastructure Network, which is supported by the NSF.
{ "pile_set_name": "ArXiv" }
--- abstract: 'Utilities use demand response to shift or reduce electricity usage of flexible loads, to better match electricity demand to power generation. A common mechanism is peak pricing (PP), where consumers pay reduced (increased) prices for electricity during periods of low (high) demand, and its simplicity allows consumers to understand how their consumption affects costs. However, new consumer technologies like internet-connected smart thermostats simplify real-time pricing (RP), because such devices can automate the tradeoff between costs and consumption. These devices enable consumer choice under RP by abstracting this tradeoff into a question of quality of service (e.g., comfort) versus price. This paper uses a principal-agent framework to design PP and RP rates for heating, ventilation, and air-conditioning (HVAC) to address adverse selection due to variations in consumer comfort preferences. We formulate the pricing problem as a stochastic bilevel program, and numerically solve it by reformulation as a mixed integer program (MIP). Last, we compare the effectiveness of different pricing schemes on reductions of peak load or load variability. We find that PP pricing induces HVAC consumption to spike high (before), spike low (during), and spike high (after) the PP event, whereas RP achieves reductions in peak loads and load variability while preventing large spikes in electricity usage.' author: - 'John Audie Cabrera$^{2}$, Yonatan Mintz$^{1}$, Jhoanna Rhodette Pedrasa$^{2}$, and Anil Aswani$^{1}$[^1][^2][^3]' bibliography: - 'IEEEabrv.bib' - 'hvar.bib' title: '**Designing Real-Time Prices to Reduce Load Variability with HVAC**' --- Introduction ============ High demand variability stresses the electrical grid by increasing the mismatch with supply, and it is costly for utilities because it requires adding redundant power generation. Demand response is an alternative that induces consumers to reduce or shift their consumption by setting prices by time of day [@Newsham2011; @Gyamfi2012; @Gyamfi2013; @Sun2013; @Strbac2008]. For example, peak pricing (PP) reduces the peak demand of electricity by charging consumers reduced (increased) rates for electricity during periods of low (high) demand. This is a common structure for demand response programs because the simplicity of PP allows consumers to understand how their consumption impacts their costs. Real-time pricing (RP) of electricity is less common because historically the complex pricing structure of RP makes it difficult for consumers to match consumption to prices. However, new consumer technologies like internet-connected smart thermostats [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012b; @MaasoumyRosenbergSangiovanni-VincentelliEtAl2014; @ZugnoMoralesPinsonEtAl2013; @BorscheOldewurtelAndersson2013; @vrettos2013predictive] simplify RP, because such devices can automate the tradeoff between costs and consumption. These devices simplify RP by abstracting this tradeoff into a question of quality of service (e.g., comfort) versus price, which is easier for consumers to understand. This paper designs PP and RP electricity rates using realistic, validated models of heating, ventilation, and air-conditioning (HVAC) [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012a], and there are three contributions. 
First, we use a principal-agent model [@LaffontMartimort2009; @aswani2012incentive] to formulate the problem of a utility designing rates for HVAC that responds to prices, where the consumer has an acceptable (but unknown to the utility) comfort level. The challenge is that prices must be designed so that inflexible (with respect to comfort) consumers do not get excessive benefits relative to flexible consumers, since flexible consumers provide more benefits to the utility. Second, we pose the design problem as a mixed integer program (MIP). Third, we present numerically solvable approximations of this MIP, and then evaluate the impact of the resulting PP and RP rates.

PP for HVAC Demand Response
---------------------------

HVAC is arguably the most significant target for demand response since it is the largest source of energy consumption in most buildings [@afram2014]. This is relevant from the standpoint of utilities because HVAC use is obviously correlated with high outdoor temperature, which means that HVAC usage in different buildings is strongly correlated and is an important contributor to peak demand [@mendoza2012]. As a result, many studies have considered different aspects of PP for demand response of HVAC. A large number of demand response programs that have been implemented by utility companies use PP to reduce peak load [@Newsham2011; @Gyamfi2012; @Gyamfi2013; @Sun2013; @Strbac2008], and such programs have been found to provide varying levels of value to utilities. Within the controls literature, the use of model predictive control (MPC) techniques is particularly popular for demand response of HVAC [@kelman2011; @parisio2014; @mintz2016; @mintz2017behavioral] because of the ability of MPC to handle complex constraints.

RP for HVAC Demand Response
---------------------------

Recent work studied RP design for HVAC that automates price-responsiveness. One approach uses stochastic differential equations to design prices [@YangCallawayTomlin2014; @YangCallawayTomlin2015], and this work found a benefit to RP for a simplified HVAC model. In contrast, we consider in this paper the rate design problem using realistic, validated models of HVAC [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012a]. Another body of work [@avci2012residential; @avci2013model] considers RP design using realistic HVAC models. Our paper differs in two substantive ways. The first is that we use a different notion of comfort: Comfort in [@avci2012residential; @avci2013model] was defined using the temperature set-point, whereas in our paper we define comfort using allowable deviations in the temperature from the desired value. The second is that our rate design accounts for adverse selection, that is, the issues caused when an inflexible (with respect to comfort) consumer accepts a rate designed for a flexible consumer.

Outline
-------

Sect. \[sec:cm\] describes our model for the consumer and our model for the electric utility company, including the principal-agent model the utility uses to design the electricity rates. The key feature of the model is the fact that consumers are either flexible or inflexible with regard to their comfort, but this information is hidden from the electric utility. The electricity rate will not be efficient for the utility if it does not account for this information asymmetry (formally known as *adverse selection*). Next, Sect. \[sec:nspp\] describes how to numerically solve the rate design problem using an MIP reformulation of the principal-agent model. 
As part of our approach, we derive relaxations that facilitate fast numerical solution. We conclude with Sect. \[sec:nr\], which numerically solves the pricing problem and then compares the impact of PP and RP on electricity consumption by HVAC.

Model of Consumer and Electric Utility {#sec:cm}
======================================

In this section, we present our model for the consumer and the electric utility. We also formally define the problem of using a principal-agent framework to design either PP or RP electricity prices for HVAC demand response.

Consumer Model
--------------

The first part of our model defines comfort in relation to deviations in room temperature from the desired value: Consumers are inflexible ($\pm 2^\circ$C deviation from desired temperature) or flexible ($\pm 3^\circ$C deviation from desired temperature) in their comfort, and these ranges are from the ASHRAE 55 standard [@Standard2010] that defines quantitative models of occupant comfort. We use $T_d$ to refer to a consumer’s desired room temperature, and $\overline{T},\underline{T}$ are the upper and lower bounds of comfort for the consumer. So if the consumer is inflexible, then $\underline{T} = T_d-2$ and $\overline{T} = T_d+2$. Similarly, if the consumer is flexible, then $\underline{T} = T_d-3$ and $\overline{T} = T_d+3$. The next part of our model describes the room temperature dynamics and provides an energy model for the consumer. We use a linear time-invariant model for room temperature $$T_{n+1} =k_{r}T_{n}+k_{c}u_{n}+k_{w}w_{n}+q_n,\label{eq: dynamic}$$ where $T_{n}$, $u_{n}$, $w_{n}$, $q_n$ are room temperature, HVAC control input, outside temperature, and heating load due to occupancy, respectively, and each time step is a 15-min interval. This model has been validated [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012a]. The total energy usage of the consumer is $\sum_{n=1}^N(b_n + pu_n)$, where $b_n$ is nondeferrable electricity load, $p$ is a constant that converts input $u_n$ to energy consumption [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012a], and $N$ is a horizon. An important component of our model characterizes the HVAC controller, which automates the tradeoff between room temperature and electricity consumption. In particular, we assume that the HVAC is controlled by MPC: $$\label{eqn:mpc} \begin{aligned} \min\ & \textstyle\sum_{n=1}^{N}\big((T_{n}-T_{d})^{2}+\gamma c_{n}u_{n}\big)\\ \mathrm{s.t.}\ & T_{n+1}=k_{r}T_{n}+k_{c}u_{n}+k_{w}w_{n}+q_n\\ & T_n \in [\underline{T},\overline{T}], u_n \in [0,\overline{u}],\quad \text{ for } n = 1,\ldots,N \end{aligned}$$ where $\gamma$ is a constant that trades off temperature and electricity usage, $\overline{u}$ is the maximum control input, and $c_n$ is the price of electricity at time $n$. The last part of the model describes what information is known by the consumer (and implicitly known by the HVAC controller). The variable $$\begin{gathered} \theta = \big\{k_r,k_c,k_w,w_n,q_n,b_n,\gamma,T_d,\underline{T},\overline{T},\overline{u}, \\\text{for } n=1,\ldots,N\big\}\end{gathered}$$ completely characterizes each consumer, and it is known as *type* in the principal-agent literature [@LaffontMartimort2009; @aswani2012incentive]. (The value $p$ is a constant known by everyone.) We assume that the consumer (and HVAC controller) exactly knows the value of $\theta$, and knows the electricity price $\textbf{c} = \{c_1,\ldots,c_N\}$. 
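For concreteness, the following minimal sketch shows how the lower-level controller (\[eqn:mpc\]) can be solved numerically for a single frozen type $\theta$ and price vector $\textbf{c}$. It is written in Python with the cvxpy modeling package purely for illustration (the computations reported in Sec. \[sec:nr\] use MATLAB with CVX and Gurobi), and every numerical value below is a placeholder rather than a validated model parameter; in particular, the sign of $k_c$ is chosen here so that $u_n$ acts as a cooling input.

```python
# Minimal sketch of the consumer-side MPC problem (eqn:mpc): a convex QP in
# (T_1,...,T_N, u_1,...,u_N) for one frozen type theta and one price vector c.
# All numbers are illustrative placeholders, not validated coefficients.
import numpy as np
import cvxpy as cp

N = 96                                   # 24-h horizon of 15-min steps
k_r, k_c, k_w = 0.63, -2.5, 0.10         # placeholder thermal coefficients (k_c < 0: cooling)
T_d = 22.0                               # desired room temperature (deg C)
T_lo, T_hi = T_d - 2.0, T_d + 2.0        # inflexible comfort band (+/- 2 deg C)
u_bar, gamma = 1.0, 0.1                  # max control input, comfort/cost trade-off
T0 = 24.0                                # initial room temperature
w = 30.0 + 2.0 * np.sin(2 * np.pi * np.arange(N) / N)  # outside temperature profile
q = 6.78 * np.ones(N)                    # occupancy heat load
c = 10.0 * np.ones(N)                    # electricity price profile

T = cp.Variable(N + 1)                   # room temperature trajectory
u = cp.Variable(N)                       # HVAC control input

constraints = [T[0] == T0]
for n in range(N):
    constraints += [
        T[n + 1] == k_r * T[n] + k_c * u[n] + k_w * w[n] + q[n],  # dynamics (eq: dynamic)
        T_lo <= T[n + 1], T[n + 1] <= T_hi,                       # comfort band
        0 <= u[n], u[n] <= u_bar,                                 # actuator limits
    ]

objective = cp.Minimize(cp.sum_squares(T[1:] - T_d) + gamma * cp.sum(cp.multiply(c, u)))
problem = cp.Problem(objective, constraints)
problem.solve()
print("J(c; theta) =", problem.value)    # optimal comfort-plus-cost objective
```

Only the optimal value $J(\textbf{c}; \theta)$ and the minimizer $u^*(\textbf{c}; \theta)$ of this problem enter the rate design below, not the particular solver used to obtain them.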
Moreover, we use $J(\textbf{c}; \theta)$ to refer to the minimum value of (\[eqn:mpc\]), and $u^*(\textbf{c}; \theta)$ refers to the minimizer of (\[eqn:mpc\]).

Model of Electric Utility Company
---------------------------------

An important component in the electric utility model is the information asymmetry between the utility and consumers. Specifically, we assume the utility does not know $\theta$ for any single customer. Instead, the utility knows the overall probability distribution for $\theta$. (Recall the utility and consumers know $p$, which is a constant.) We also assume that both the utility and consumers know the electricity price $\textbf{c}$. The next element in the utility model describes the goal of the electricity pricing for demand response. If the goal is to reduce peak load, then the utility aims to minimize $$V_{p} = \textstyle\mathbb{E}_\theta\Big(\sum_{n = t_1}^{t_2} u^*_n(\textbf{c}; \theta)\Big),$$ where $[t_1,t_2]$ is a time range during which the peak load is anticipated by the utility. If the goal is to reduce load variability, then the utility aims to minimize $$V_{l} = \textstyle\mathbb{E}_\theta\Big(\mathrm{var}_n\big(b_n + u_n^*(\textbf{c}; \theta)\big)\Big),$$ where $\mathrm{var}_n(\cdot)$ is the variance over $n=1,\ldots,N$. We will consider designing PP and RP for both goals. The electric utility is interested in designing $\textbf{c}$, and we describe the constraints that characterize PP and RP rates. If the utility is designing PP rates, then this means they are selecting from $$\mathcal{C}_{pp} = \left\{\textbf{c} : \begin{aligned} &c_n = c_{t_1}, &\text{ for } n \in[t_1,t_2]\\ & c_n = c_1, &\text{ for } n \in \{1,\ldots,N\}\setminus[t_1,t_2] \end{aligned}\right\}.$$ This expresses prices that are constant within the peak period $[t_1,t_2]$, and constant (with a possibly different value) outside of the peak period. Similarly, if the utility is designing RP rates, then this means they are selecting from $$\mathcal{C}_{rp} = \left\{\textbf{c} : \begin{aligned} &c_1 = c_N\\ &|c_{n+1}-c_n| \leq \rho, &\text{ for } n =1,\ldots,N-1 \end{aligned}\right\}.$$ This expresses prices that are equal at the beginning and end of the horizon, and such that the rate of change is bounded by a constant $\rho$. Lastly, we use $\textbf{f} = \{f,\ldots,f\}$ to refer to a flat pricing structure, and $f$ in particular refers to the existing electricity price prior to the introduction of the demand response pricing.

Principal-Agent Model for Pricing
---------------------------------

           $k_{r}$   $k_{c}$   $k_{w}$   average $q_n$
  -------- --------- --------- --------- ---------------
  Room 1   0.63      2.64      0.10      6.78
  Room 2   0.43      1.95      0.18      9.44

  : \[tabel:thermal\_coeff\]Temperature Model Coefficients

The last part of the model for the utility describes the principal-agent formulation used to design electricity prices. In particular, we assume the utility solves $$\label{eqn:pam} \begin{aligned} \min\ & \textstyle V + \lambda\cdot\mathbb{E}_\theta\Big(\sum_{n=1}^N\big(f_n^{\vphantom{*}}u_n^*(\mathbf{f}; \theta) - c_n^{\vphantom{*}}u_n^*(\mathbf{c}; \theta)\big)\Big) \\ \mathrm{s.t.}\ & J(\textbf{c}; \theta) \leq J(\textbf{f}; \theta)\\ &\mathbf{c}\in\mathcal{C}\\ & c_n \in [\underline{c},\overline{c}],\quad \text{for } n=1,\ldots,N \end{aligned}$$ to design the electricity rates, where $V$ is either $V_{p}$ (to minimize peak load) or $V_{l}$ (to minimize load variance), and $\mathcal{C}$ is either $\mathcal{C}_{pp}$ (for PP) or $\mathcal{C}_{rp}$ (for RP). 
Note that $\underline{c},\overline{c}$ are bounds on the minimum and maximum electricity rate, respectively. Here, $\sum_{n=1}^N\big(f_nu_n^*(\mathbf{f}; \theta) - c_nu_n^*(\mathbf{c}; \theta)\big)$ is the amount of revenue the utility loses from implementing the new pricing $\textbf{c}$ (relative to the existing rate $\mathbf{f}$), and so this means $\lambda$ is a constant that the utility uses to trade off achieving the demand response goal against revenue loss. We do not include the nondeferrable electricity load $b_n$ when defining revenue loss, because in our setting the electricity rates for the nondeferrable electricity load are different (and left unchanged) from the rates $\textbf{c}$ for HVAC electricity consumption. There are two game-theoretic considerations that must be discussed when defining and solving principal-agent models [@LaffontMartimort2009; @aswani2012incentive]. The constraint $J(\textbf{c}; \theta) \leq J(\textbf{f}; \theta)$ is known as a *participation constraint*, and it ensures that the new electricity rates $\textbf{c}$ are such that the overall utility of the consumer under the new rates $\mathbf{c}$ is equal to or better than the overall utility of the consumer under the original rate $\mathbf{f}$. The second game-theoretic aspect to be discussed is adverse selection. We mitigate adverse selection by minimizing the expectation (with respect to type $\theta$) of the goal $V$ and revenue loss.

Numerical Solution of Pricing Problem {#sec:nspp}
=====================================

This section studies how to solve the principal-agent model (\[eqn:pam\]). The main difficulty is that (\[eqn:pam\]) is a bilevel program [@ColsonMarcotteSavard2007; @aswani2016duality], which means that (\[eqn:pam\]) is an optimization problem in which some variables are solutions to optimization problems themselves. In particular, recall that $u^*(\textbf{c}; \theta)$ is the minimizer to (\[eqn:mpc\]). In order to solve (\[eqn:pam\]), we first show how the problem can be reformulated as a MIP. Then we describe some relaxations that facilitate numerical solution of the MIP.

MIP Reformulation of Pricing Problem
------------------------------------

The key idea in reformulating (\[eqn:pam\]) is to replace the convex optimization problem (\[eqn:mpc\]) by the KKT conditions, which provide constraints that $u^*(\textbf{c}; \theta)$ must satisfy. More specifically, the KKT conditions for (\[eqn:mpc\]) can be written as the following set of mixed integer linear constraints: $$\begin{aligned} &T_{n+1}=k_{r}^{\vphantom{*}}T_{n}^{\vphantom{*}}+k_{c}^{\vphantom{*}}u_{n}^*(\textbf{c}; \theta) +k_{w}^{\vphantom{*}}w_{n}^{\vphantom{*}}+q_n^{\vphantom{*}}\\ & \gamma c_{n}-k_{c}\nu_{n}+\overline{\mu}_{n}-\underline{\mu}_{n}=0\\ &0\leq\overline{\mu}_{n}\leq M\eta_n,\\ &0\leq\underline{\mu}_{n}\leq M\zeta_n\\ &\overline{u}\eta_n+\underline{u}\left(1-\eta_n\right)\leq u_{n}^*(\textbf{c}; \theta)\leq\underline{u}\zeta_n+\overline{u}\left(1-\zeta_n\right)\\ &\eta_n, \zeta_n \in\{0,1\}, \quad \text{for } n = 1,\ldots,N-1 \end{aligned}$$ and also that $$\begin{aligned} &(T_{n}-T_{d})+\nu_{n-1}-k_{r}\nu_{n}+\overline{\xi}_{n}-\underline{\xi}_{n}=0\\ &0\leq\overline{\xi}_{n}\leq Mx_{n}\\ &0\leq\underline{\xi}_{n}\leq My_{n}\\ &\overline{T}x_{n}+\underline{T}\left(1-x_{n}\right)\leq T_{n}\leq\underline{T}y_{n}+\overline{T}\left(1-y_{n}\right)\\ &x_n, y_n \in\{0,1\}, \quad \text{for } n = 2,\ldots,N \end{aligned}$$ where $M > 0$ is a sufficiently large constant [@Fortuny-AmatMcCarl1981]. 
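To illustrate how one of these complementarity pairs is imposed in practice, the fragment below encodes the big-$M$ constraints that link the multiplier $\overline{\mu}_{n}$ of the bound $u_n \leq \overline{u}$ to the binary indicator $\eta_n$ (with $\underline{u}=0$). It is a schematic Python/cvxpy sketch with placeholder dimensions rather than the full pricing MIP, and solving it requires a mixed-integer-capable solver such as Gurobi, which is the solver used for the results in Sec. \[sec:nr\].

```python
# Schematic big-M encoding of one KKT complementarity pair from the MIP
# reformulation above: eta_n = 1 both allows the multiplier mu_bar_n to be
# positive and pins u_n to its upper bound u_bar (the lower bound is 0 here).
# Placeholder dimensions and constants; not the full pricing problem.
import cvxpy as cp

N = 4                                    # tiny horizon, for illustration only
M = 1.0e3                                # big-M constant
u_bar = 1.0

u = cp.Variable(N)                       # lower-level control variable u_n
mu_bar = cp.Variable(N, nonneg=True)     # multiplier of the constraint u_n <= u_bar
eta = cp.Variable(N, boolean=True)       # indicator of an active upper bound

constraints = [
    u >= 0,
    u <= u_bar,
    mu_bar <= M * eta,                   # 0 <= mu_bar_n <= M * eta_n
    u >= u_bar * eta,                    # eta_n = 1 forces u_n = u_bar
]
# The remaining pairs (zeta_n for the lower bound on u_n, and x_n, y_n for the
# temperature band) are encoded analogously, and the stationarity equations are
# added as linear equalities before handing the problem to a MIP solver.
problem = cp.Problem(cp.Minimize(cp.sum(u)), constraints)
# problem.solve(solver=cp.GUROBI)        # requires a mixed-integer solver
```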
The problem (\[eqn:pam\]) becomes an infinite dimensional MIP, after a few more reformulations. The first is to observe that $\mathbb{E}_\theta(f_n^{\vphantom{*}}u_n^*(\mathbf{f}; \theta))$ is a constant, and so can be removed from the objective function. The second is to note that $J(\textbf{f}; \theta)$ is also a constant since it does not depend on any decision variables. The third reformulation is to substitute $J(\textbf{c}; \theta)$ with $\sum_{n=1}^{N}\big((T_{n}^{\vphantom{*}}-T_{d^{\vphantom{*}}})^{2}+\gamma c_{n}^{\vphantom{*}}u_{n}^*(\textbf{c}; \theta)\big)$. Though this yields an infinite dimensional problem, using sample average approximation (SAA) [@kleywegt2002sample; @wang2008sample] to approximate the reformulation gives a finite dimensional MIP.

Relaxation of Pricing Problem
-----------------------------

The reformulated MIP described above is still difficult to solve because it involves nonconvex quadratic terms $c_{n}^{\vphantom{*}}u_{n}^*(\textbf{c}; \theta)$, and so additional relaxations are needed so that the price design problem can be solved using standard numerical optimization software. The quadratic term is relaxed using the McCormick envelope [@McCormick1976] to $$\begin{aligned} r_n \geq \underline{c}u_{n}^*(\textbf{c}; \theta) + \underline{u}c_n - \underline{u}\cdot\underline{c}\\ r_n \geq \overline{c}u_{n}^*(\textbf{c}; \theta) + \overline{u}c_n - \overline{c}\cdot\overline{u}\\ r_n \leq \overline{c}u_{n}^*(\textbf{c}; \theta) + \underline{u}c_n - \overline{c}\cdot\underline{u}\\ r_n \leq \underline{c}u_{n}^*(\textbf{c}; \theta) + \overline{u}c_n - \underline{c}\cdot\overline{u} \end{aligned}$$ for $n = 1,\ldots,N$. With this relaxation, the SAA form of the reformulated problem is a mixed-integer quadratic program (MIQP), which can be solved using existing software. However, numerical solution of MIQP's can be slow. So we next describe two additional relaxations that speed up computation by approximating the MIQP using a mixed-integer linear program (MILP), which can typically be numerically solved faster. First, we replace $(T_n-T_d)^2$ with $3|T_n-T_d|$, since $(T_n-T_d)^2 \leq 3|T_n-T_d|$ when $|T_n-T_d| \leq 3$, as is the case under our assumptions about comfort. Second, we replace $\mathrm{var}_n\big(b_n + u_n^*(\textbf{c}; \theta)\big)$ with $N^{-1}\sum_{n=1}^N|b_n + u_n^*(\textbf{c}; \theta) - m(\theta)|$, where $m(\theta) = \frac{1}{N}\sum_{n=1}^N u_n^*(\textbf{f}; \theta)$. The idea is that we approximate the variance by (a) replacing squares with absolute value, and (b) replacing the mean in the variance $\frac{1}{N}\sum_{n=1}^N u_n^*(\textbf{c}; \theta)$ with the mean $m(\theta)$.

                  **Flat Rate**   **PP Rate**   **RP Rate**
  --------------- --------------- ------------- -------------
  Peak Load       28.3            27.0          27.6
  Load Variance   0.49            0.54          0.42
  Peak Load       19.1            15.3          17.5
  Load Variance   0.25            0.28          0.17

  : Pricing to Reduce Peak Load\[tab:rpl\]

Numerical Results {#sec:nr}
=================

In this section, we numerically solve our MILP relaxation of the pricing problem for a 24-hour horizon. All of the calculations were conducted on a laptop computer with a dual-core 2.5 GHz processor and 8 GB of RAM using MATLAB with the CVX toolbox [@cvx] and the Gurobi solver [@GurobiOptimization2016]. We finish by evaluating the quality of the designed electricity rates, and the results are summarized in Tables \[tab:rpl\] and \[tab:rlv\].

Values of Type Parameters
-------------------------

For scenarios with PP and peak load reduction, we set the peak times to be 1pm–4pm. 
Our bounds on the electricity cost were $7\text{PhP}\leq c_{n}\leq20\text{PhP}$, where PhP is Philippine Pesos. Parameters in the room temperature dynamics (\[eq: dynamic\]) were chosen by uniformly sampling from the parameters in Table \[tabel:thermal\_coeff\]. The first set of parameters is from [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012a], while the second set of parameters was replicated using the same methodology from [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012a] with data from our UP-BRITE testbed located at the University of the Philippines, Diliman. We set the probability that a consumer has high flexibility to 0.2. Scenario generation for outside temperature was performed using data from Weather Underground [@wunderground], scenario generation for heating load due to occupancy was based on occupancy models, and scenario generation for nondeferrable electricity load was based on data from [@nrel].

Results and Discussion for PP
-----------------------------

Results for PP for peak load reduction are shown in Fig. \[fig:pppl\]. PP is effective in reducing the peak load for both the flexible and inflexible consumers, but there is a side effect in which the HVAC has sharp increases in electricity consumption both prior to and after the peak period, as well as a sharp decrease in consumption at the start and end of the peak period. This substantially increases the variability of the load profile. Results for PP for load variance reduction are shown in Fig. \[fig:ppv\]. PP is not effective in decreasing load variability because sharp changes in electricity price induce the HVAC to make sharp changes in consumption.

Results and Discussion for RP
-----------------------------

The results for RP for peak load reduction are shown in Fig. \[fig:rppl\]. The RP is effective in reducing the peak load for both the flexible and inflexible consumers, and it in fact also reduces the variance of the electricity load. The results for RP for load variance reduction are shown in Fig. \[fig:rpv\]. The RP is effective in decreasing the variability of the total electricity load, and it also reduces the peak load for both the flexible and inflexible consumers. The variance in load under this latter contract is lower than the variance under the former contract, but the difference is small.

Conclusion
==========

                  **Flat Rate**   **PP Rate**   **RP Rate**
  --------------- --------------- ------------- -------------
  Peak Load       28.3            27.3          27.6
  Load Variance   0.49            0.49          0.41
  Peak Load       19.1            17.7          17.8
  Load Variance   0.25            0.26          0.17

  : Pricing to Reduce Load Variance\[tab:rlv\]

We studied the problem of designing PP and RP electricity rates using realistic, validated models of HVAC. We used a principal-agent model to formulate the problem of a utility designing rates for HVAC that responds to prices, where the consumer has an acceptable (but unknown to the utility) comfort level. We showed how this problem could be posed as numerically tractable MILP's, and then solved these MILP's to compare the efficacy of different pricing schemes. We found that RP was substantially better at reducing load variability than PP, whereas PP was superior in reducing peak load. Directions for future work include incorporating more detailed consumer models to better understand best practices for the design of incentives for effective demand response.

[^1]: \*This work was supported in part by the Philippine-California Advanced Research Institutes (PCARI) and NSF Award CMMI-1450963.

[^2]: $^{1}$Y. Mintz and A. 
Aswani are with the Department of Industrial Engineering and Operations Research, University of California, Berkeley, CA 94720 USA [ymintz@berkeley.edu, aaswani@berkeley.edu]{} [^3]: $^{2}$J.A. Cabrera and J.R. Pedrasa are with the Electrical and Electronics Engineering Institute, University of the Philippines, Diliman, Quezon City, Philippines 1101 [john\_audie.cabrera@upd.edu.ph, jipedrasa@up.edu.ph]{}
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present a general unified approach for the study of quantum thermal machines operating under periodic adiabatic driving, in contact with thermal reservoirs kept at different temperatures, including both geometric heat engines and refrigerators. In this regime, we will show that many observables characterizing the operating mode and performance of the machine are of geometric nature. Heat pumping by the $ac$ forces and dissipation of energy can be described, respectively, by the antisymmetric and symmetric components of a thermal geometric tensor defined in the space of time-dependent parameters generalized to include the temperature bias. The antisymmetric component can be identified as a Berry curvature, while the symmetric component defines the metric of the manifold. We show that heat flow in and work performed by adiabatic thermal machines, and consequently also their efficiency, are intimately related to these geometric aspects. We illustrate these ideas by discussing two specific cases: a slowly driven qubit asymmetrically coupled to two bosonic reservoirs kept at different temperatures, and a quantum dot driven by a rotating magnetic field and strongly coupled to electron reservoirs with different polarizations. Both examples are already amenable for an experimental verification.' author: - Bibek Bhandari - Pablo Terrén Alonso - Fabio Taddei - Felix von Oppen - Rosario Fazio - Liliana Arrachea bibliography: - 'adiabatic\_geo\_final.bib' title: Geometric properties of adiabatic quantum thermal machines --- Introduction ============ Thermodynamics in quantum nanoscale systems [@scovil1959; @geusic1967; @geva1992; @allahverdyan2000; @hanggi2009; @gemmer2009; @horodecki2013; @binder2019] has been a rapidly growing research topic for some years now, emerging at the intersection of statistical mechanics, nanoscience, quantum information, as well as atomic and molecular physics. A paradigmatic goal in this field is to conceive of and realize thermal machines in the quantum realm, which, like the classical thermodynamic cycles, transform heat to useful work or use work to refrigerate [@alicki1979; @geva1992b; @linden2010; @cycle1; @cycle2; @cycle3; @cycle4; @cycle5; @cycle6; @cycle7; @cycle8; @cycle9; @cycle10]. The development of efficient thermal machines operating in the quantum realm is, in fact, of paramount importance for various fields related to quantum technologies. Numerous theoretical proposals [@pekola2007; @murphy2008; @esposito2009; @janine1; @kurizki2013; @sothmann2014; @kosloff2014; @hofer2016; @benenti2017; @nanomech2; @nanomech3] stimulated experimental efforts on several platforms [@peterson2019; @roche; @pothier], including solid-state electronics [@electro1; @molenkamp2015; @pekola2019; @ronzani] and nanomechanical systems [@nanomech0; @nanomech1; @nanomech4; @nanomech5; @nanomech6], as well as cold atoms and trapped ions [@coldat; @expotto; @superad; @rossnagel2016; @assis2019]. ![Geometrical thermal machine setup. A central, parametrically driven quantum system described by the Hamiltonian ${\cal H_S}$ is coupled to macroscopic reservoirs. A cycle of the machine is completely characterized by a closed path in the parameter space ${\bf X}$. After a complete cycle the averaged power $P$ is dissipated as heat $J^Q_{d,\alpha}$ in the reservoirs. The net transported energy $J^Q_{tr}$ flows from one reservoir to the other. 
[]{data-label="generic-q-machine"}](fig1.pdf){width="\columnwidth"} In its most simplified version a quantum thermal machine is composed of a working substance (typically a few-level quantum system) coupled to two or more thermal baths kept at different temperatures (and possibly at different chemical potentials). Engines and refrigerators can operate under steady-state conditions, as thermoelectric engines, or be controlled by time-periodic perturbations which define a cycle, as in conventional macroscopic thermal machines. An example of the latter is the quantum Otto engine, which has been investigated theoretically [@kosloff84; @geva1992b; @kosloff00; @scully02; @lin03; @kieu04; @Rezek2006; @Quan2007; @henrich2007; @he2009; @abah12; @Campo:2014aa; @alecce2015; @legggio2016; @karimi; @Kosloff2017; @Watanabe2017; @Campisi2016; @camp; @cycle4; @jongmin2019] and realized experimentally [@expotto; @assis2019; @peterson2019]. Understanding how to discriminate and characterize useful work, heat, and dissipated energy in these systems is a fundamental step towards the realization of nanomachines. In fact, unlike the ideal classical thermodynamic cycles, quantum thermal machines typically operate out of equilibrium [@gallego2014; @anders2016], which necessarily implies entropy production and dissipation. In addition to its impact on emerging technologies, the study of quantum heat engines and refrigerators is also of fundamental importance to deepen our understanding of how energy flows and transforms at the nanoscale [@kieu2004; @uzdin2015; @seifert; @benenti2017; @palao2019; @binder; @barra]. In the present work we concentrate on adiabatically driven thermal machines. Their cycle is controlled by time-periodic parameter modulations which are slow compared to the typical time scales associated with the (quantum) working substance (see for example Refs. ). Here we show that in this regime, many properties characterizing the performance of the machine are of geometric nature, in the sense that they depend only on the geometry of the cycle in parameter space. We will refer to these quantum machines as [*geometric thermal machines*]{}. Starting from the seminal works of Aharonov and Bohm [@ab] as well as Berry [@berry], geometric effects have pervaded many areas of physics. In quantum transport, distinct contributions of geometric origin affect charge and energy currents. In the absence of an additional [*dc*]{} bias, the pumped charge in a periodically driven system was shown to be of geometric origin, and can thus be expressed in terms of a closed-path integral in parameter space [@brouwer; @aleiner; @avron], akin to the Berry phase [@berry]. A similar approach was adopted to analyze heat transport in a driven two-level system weakly coupled to bosonic baths [@hanggi]. Closely related to these ideas is the geometric description of driving-induced forces [@berry2010; @miche; @Bode2011; @Bode2012; @Thomas2012; @raul1; @raul2; @raul3; @pablo], including geometric magnetism [@Berry1993; @campisi2], with the extension of geometric response functions to open systems also being discussed in relation to Cooper pair pumping [@campisi3]. Geometric concepts like a [*thermodynamic metric*]{} and a [*thermodynamic length*]{} were recently introduced as promising tools to characterize the dissipated energy and to design optimal driving protocols [@reza; @crooks; @deff; @marti; @marti2; @marti3]. 
Similar ideas are behind the description of the adiabatic time-evolution of many-body ground states of closed systems in terms of a [*geometric tensor*]{} [@anatoli1; @anatoli2; @anatoli3]. The topological characterization of mixed thermal states is also close to these concepts[@diehl1; @diehl2]. This large body of work linking geometry to transport naturally hints at similar connections for thermal machines. In the present paper, under quite general assumptions, we will show that several characteristics of quantum thermal machines operating in the adiabatic regime are of geometric nature. We formulate a unified description in terms of a geometric tensor for all the relevant energy fluxes, which we refer to as [*thermal geometric tensor*]{}. Within this description, pumping and dissipation are, respectively, associated with the antisymmetric and symmetric components of this tensor. We also show that not only heat pumping but also the dissipated heat can be characterized in terms of an integral over a closed path in parameter space. These results apply universally to any periodically and adiabatically driven quantum system in contact with various reservoirs, irrespective of the statistics obeyed by the particles, the strength of the coupling between the system and the reservoir, or the presence of many-body interactions. The article is organized as follows. In Sec. \[model-heat-engine\], we introduce the model of an adiabatic thermal machine. We also introduce the linear-response formalism to treat [*ac*]{} adiabatic and thermal driving. Section \[geoF\] is devoted to the analysis of the thermodynamic behavior of the heat engine. This section contains the principal results of the present work and shows how the performance characteristics of the engine (efficiency, output power, etc.) are of geometric origin. The central results of this approach are captured by Eqs. (\[therpum\]), (\[ptot\]), (\[pumpedheatlambda\]) and (\[worklambda\]) which show that the pumped heat, the concomitant heat-to-work conversion and the dissipated power have a geometric interpretation. In the same section we will also analyze several classes of adiabatic machines depending on the various adiabatic drivings. Following this general formulation, Sec. \[Examples\] focuses on two specific examples of thermal machines, which are particularly relevant for experimental implementations. We first consider a driven qubit which is asymmetrically and weakly coupled to two bosonic thermal baths. We then discuss a driven quantum dot coupled to two electron reservoirs. Conclusions and some additional perspectives related to our work are presented in Sec. \[conclusions\]. The appendices contain further details on the derivation of the main results of the paper and explicit calculations for the examples presented in the main text. Model of a geometric thermal machine {#model-heat-engine} ==================================== A sketch of the geometric thermal machine that we analyze throughout this paper is shown in Fig. \[generic-q-machine\]. It consists of a central region containing the working substance, constituted by a few-level quantum system, coupled to two thermal baths. The quantum system is periodically driven by a set of $N$ slowly-varying parameters $\vec{X}(t)$. The baths are macroscopic reservoirs of bosonic excitations or fermionic particles. The macroscopic variables characterizing the thermal environment such as the bath temperatures can also slowly vary in time. 
We parametrize the bath temperatures as $T_\alpha(t) = T + \delta T_{\alpha}(t)$ (with $\alpha=L,R$ referring to the left and right reservoirs) and define $\Delta T(t)= \delta T_L(t) - \delta T_R(t)$. A (possible) time dependence in the bath temperatures is only included in $\delta T_{\alpha}(t)$. We assume that the right reservoir $R$ is the colder one. As we are interested in the dynamics for slow driving and small temperature biases, it is convenient to define the $N+1$-dimensional vector of “velocities”, $$\dot{{\bf X}}(t) = \left\{ \dot{\vec{X}}(t), \Delta T(t)/T \right\} \label{x-vec}$$ These two types of vector notation (arrow and bold character) appear in several places throughout the paper. For later reference, Table \[notation-table\] summarizes the different symbols used in the text.

  N                                    Number of slowly varying coupling parameters
  ------------------------------------ -----------------------------------------------
  N + 1                                Number of slowly varying coupling parameters including thermal bias
  $\vec{v}$                            Arrows denote $N$-dimensional vectors
  $\bf{v}$                             Bold fonts denote $(N+1)$-dimensional vectors
  $\ell \;, \ell'$                     Label the components of $N$ vectors
  $\mu \;, \nu$                        Label the components of $(N+1)$ vectors
  $ \overleftrightarrow{M} $           $N\times N$ matrix
  $\underline{ \underline{{\bf M}}}$   $(N +1)\times (N+1)$ matrix

  : Notation used in the text \[notation-table\]

The generalization to multi-terminal devices, or to the case in which, in addition to the temperature, other macroscopic variables such as the electrochemical potential difference between reservoirs are also varied, is straightforward. A temperature bias as well as time-dependent system and bath parameters generally induce net heat transport between the reservoirs. At the same time, any driving mechanism generates heat that is dissipated into the reservoirs. Hence, the total heat current entering a given reservoir, averaged over one period of the time-dependent driving, has two components [@lilimos], $$\label{tot} J^{Q}_{\alpha}= J^{Q}_{\rm tr, \alpha}+J^{Q}_{\rm d,\alpha}.$$ Here, $J^{Q}_{\rm tr, \alpha}$ denotes the transport contribution to the current for lead $\alpha$ and satisfies $$\label{net} J^{Q}_{\rm tr, R}=- J^{Q}_{ \rm tr , L} \equiv J^{Q}_{ \rm tr}.$$ The second, dissipative contribution $J^{Q}_{\rm d,\alpha}$ is related to the total power $$\label{dis} P = \sum_{\alpha=L,R} J^{Q}_{ \rm d ,\alpha},$$ generated by the driving forces, including the time-dependent parameters as well as the forces imposing the thermal difference between the reservoirs. The balance between these two contributions is the key to the performance of the thermal machine, which may operate as a heat engine, by transforming heat into work against the time-dependent driving, or as a refrigerator, by using the work performed by the $ac$ driving to pump heat from the cold to the hot reservoir. [*Within the adiabatic linear response regime, there are components of the heat currents and of the performed work which are fully described by geometric coefficients, implying a geometric characterization of quantum thermal machines.*]{} To analyze the performance of these machines, we need to compute the currents. This can be done by conventional many-body techniques, such as the non-equilibrium Green’s function formalism, scattering matrix theory (for systems without many-body interactions), or master equations (for weak coupling between system and reservoirs). 
Although we use these techniques to solve specific examples, we employ a Hamiltonian representation for the temperature difference and a Kubo linear response framework for small $\Delta T$ to derive general results. This enables us to analyze the energy dynamics induced by the thermal driving on the same footing with that induced by the time-dependent driving. Here we follow Luttinger’s approach [@luttinger] to thermal transport which introduces a “gravitational” potential whose gradients induce energy flows akin to the electrical currents induced by gradients of the electrochemical potential. Details of this approach are given in Appendix \[Lutt-theory\]. We then introduce the Hamiltonian ${\cal H}$ governing the system of Fig. \[generic-q-machine\], which can be expressed as $${\cal H}(t)= {\cal H}_S(t) + {\cal H}_{\rm baths} + {\cal H}_{c} + {\cal H}_{\rm th}(t). \label{hamtot}$$ The first term ${\cal H}_S(t)$ is the Hamiltonian of the quantum system. It depends on time through the $N$ slowly and periodically varying parameters (driving potentials) $\vec{X}(t)= \left\{X_{\ell}(t)\right\}$ [with]{} $\ell = 1,\dots,N$, so that ${\cal H}_S(t) \equiv {\cal H}_S[\vec{X}(t)]$. The second term describes the two reservoirs $ {\cal H}_{\rm baths} = {\cal H}_{\rm R}+{\cal H}_{\rm L}$, which are macroscopic systems of bosonic excitations or fermionic particles. In the latter case, they are held at the same chemical potential $\mu_L=\mu_R=\mu$ and should be described by the grand-canonical Hamiltonian, ${\cal H}_{\alpha} \rightarrow {\cal H}_{\alpha} -\mu {\cal N}_{\alpha}$, where ${\cal N}_{\alpha}$ denotes the number of particles in reservoir $\alpha$. The coupling between system and reservoirs, such as tunneling of particles and/or the exchange of energy between system and reservoirs, is captured by ${\cal H}_{c}$. Its form depends on the specific implementation, and some examples will be described later in the paper. The last term in Eq. (\[hamtot\]) accounts for the fact that the two reservoirs are held at different temperatures and derives from the Luttinger formulation of thermal transport. Adapting the definition of Eq. (\[at\]) to the present case, we define $$\label{hth} {\cal H}_{\rm th}(t) = - \sum_{\alpha=L,R} {\cal J}^E_{ \alpha}(t) \xi_{\alpha}(t),$$ where $\xi_{\alpha}(t)$ plays the same role as the thermal vector potential and the operator representing the energy flux entering reservoir $\alpha$ is given by $${\cal J}^E_{ \alpha}= \dot{\cal H}_{\alpha}=-i \left[{\cal H}_{\alpha}, {\cal H} \right]/\hbar.$$ Here, ${\cal H}_{\alpha}$ is the Hamiltonian of reservoir $\alpha$. When the chemical potential is the same for all reservoirs, time averaging the mean value of this operator over one period directly gives the heat current defined in Eq. (\[tot\]), $$J^Q_{\alpha}= \frac{\Omega}{2\pi} \int_0^{2\pi/\Omega} \mathrm{d}t \langle {\cal J}^E_{ \alpha}(t) \rangle.$$ The relation between the Luttinger field and the temperature bias, the counterpart of Eq. (\[adel\]), reads $$\label{dotpsi} \dot{\xi}_{\alpha}(t)= \delta T_{\alpha}(t)/T .$$ Adiabatic linear response {#linear_ad} ------------------------- Our quantum machine operates in a regime in which both the driving parameters $\vec{X}(t)$ and the temperature bias $\delta T_{\alpha}$ (with the associated parameter $\xi_{\alpha} (t)$) vary in time. We assume that all these parameters depend periodically on time with period $\tau=2\pi/\Omega$. 
Adiabatic driving implies that the driving frequency $\Omega$ is small compared to any characteristic frequency of the system’s degrees of freedom as well as the relevant relaxation times associated with the coupling to the reservoirs. We can then regard the velocities at which the parameters are changed and the temperature bias as sufficiently small so that the currents can be computed in linear response. This procedure was previously introduced in Ref. and it is similar to the one of Ref. for closed driven systems. The adiabatic time evolution of any observable ${\cal O}$ is described by the Kubo-like formula $$\begin{aligned} \label{linearo} \langle {\cal O}\rangle(t) = \langle {\cal O} \rangle_t &+& \sum_{\ell=1}^{N} \chi^{\rm ad}_t\left[{\cal O},{\cal F}_{\ell}\right] \dot{X}_{\ell} (t) \nonumber \\ &+& \sum_{\alpha=L,R} \chi^{\rm ad}_t\left[{\cal O},{\cal J}^E_{ \alpha}\right] \dot{\xi}_{\alpha} (t). \end{aligned}$$ Here we have introduced the operator $${\cal F}_{\ell} = - \frac{\partial {\cal H}}{\partial X_{\ell}}, \;\;\;\; {\rm with} \;\; \ell=1,\dots,N$$ which has the interpretation of a force induced by the driving. The [*adiabatic response functions*]{} appearing in Eq. (\[linearo\]) take the form $$\label{chiad} \chi^{\rm ad}_t\left[{\cal O}_1,{\cal O}_2 \right] = - \frac{i}{\hbar} \int_{-\infty}^t \mathrm{d}t^{\prime} (t-t^{\prime}) \langle \left[ {\cal O}_1(t),{\cal O}_2(t^{\prime})\right]\rangle_t,$$ with the mean values evaluated with respect to the equilibrium density matrix of the frozen Hamiltonian ${\cal H}_t$, $\rho_t=\sum_m p_m |m \rangle \langle m|$, where $p_m= e^{-\beta \varepsilon_m}/Z_t$, $\beta=1/k_B T$, and ${\cal H}_t |m\rangle =\varepsilon_m |m\rangle$. Notice that the instantaneous eigenvectors $|m\rangle$ and eigenenergies $\varepsilon_m$ depend on the time $t$. We have also assumed that the perturbations are switched on at $t_0=-\infty$. Within this framework, we can evaluate the adiabatic evolution of any observable. We are particularly interested in the energy current flowing into the coldest reservoir and the induced forces. Similar to the definition in Eq. (\[x-vec\]), we find it convenient to define the $N+1$-dimensional force vector $$\label{cal f} \mathbfcal{F} = (\vec{{\cal F}} , {\cal J}^E_R ).$$ Using this notation, the adiabatic dynamics for the forces and the energy current into the coldest reservoir can be written as $$\langle \mathbfcal{F} \rangle(t) = \langle \mathbfcal{F} \rangle_t + \underline{ \underline{{\bf \Lambda}}}({\vec X}) \cdot \dot{{\bf X}}. \label{linearof}$$ As expected, the physical response depends on the two Luttinger parameters $\xi_L(t)$ and $\xi_R(t)$ only through the temperature bias $\dot{X}_{N+1}(t)= \Delta T(t)/T$, as can be seen using Eqs. (\[ident\]) and (\[fj\]). In Eq. (\[linearof\]), we introduce the response matrix $\underline{ \underline{{\bf \Lambda}}}({\bf X})$ with elements defined as $$\label{lambda} \Lambda_{\mu,\nu}(\vec{ X}) = \Bigg\{\begin{matrix} \chi_t^{\rm ad} \left[ {\cal F}_\mu , {\cal F}_{\nu} \right] & \mu \le N \\ \\ \sum_{\alpha =L,R} \chi_t^{\rm ad} \left[ {\cal J}^E_{\alpha},{\cal F}_\nu \right] & \mu = N+1\\ \end{matrix}$$ Note that in deriving the linear response expression for the current, one should neglect the term ${\cal H}^{\rm th}_t$, which would lead to a “diamagnetic” component of the heat current [@tatara]. The notation in Eq. (\[lambda\]) highlights the fact that the $\Lambda_{\mu,\nu}(\vec{ X})$ depend on time only through the parameters $\vec{X}$. As the coefficients of Eq. 
(\[lambda\]) are evaluated with respect to the frozen equilibrium density matrix, they obey the Onsager relations [@ludovico; @cohen] $$\begin{aligned} \label{onsager} \Lambda_{\mu,\nu}(\vec{ X},\vec{B})&=& s_{\mu} s_{\nu} \Lambda_{\nu,\mu}(\vec{ X},- \vec{B}) ,\end{aligned}$$ where $s_{\nu}=\pm$ for operators ${\cal F}_{\nu}$ which are even/odd under time reversal. The dependence on an applied magnetic field $\vec{B}$, made explicit here, will be suppressed below unless necessary. Adiabatic forces, currents, and entropy production over a cycle {#ener-bal} --------------------------------------------------------------- In the geometric description of the adiabatic thermal machines, the central role is played by integrals of the forces in Eq. (\[linearof\]) over a period, rather than by the instantaneous quantities. First consider the energy current $\langle {\cal J}_R^E \rangle(t)$, which provides a description of the heat fluxes introduced in Eq. (\[net\]) within the adiabatic linear-response formalism. Its average over one period defines the transported heat flux, $J_{\rm tr}^{Q }=J^{Q}_{\rm tr, \Delta T} + J^{Q}_{\rm tr, ac}$. The two components can be explicitly written from Eq. (\[linearof\]) as follows, $$\begin{aligned} \label{therpum} J^{Q}_{\rm tr, \Delta T} &=&\frac{\Omega}{ 2\pi }\int_0^{2\pi/\Omega} \mathrm{d}t\; \Lambda_{N+1,N+1}(\vec{ X}) \dot{X}_{N+1}(t),\nonumber \\ J^{Q}_{\rm tr, ac}&= &\frac{\Omega}{ 2\pi }\int_0^{2\pi/\Omega} \mathrm{d}t \;\sum_{\ell=1}^N \Lambda_{N+1,\ell}(\vec{ X}) \dot{X}_{\ell} (t).\end{aligned}$$ The contribution $J^{Q}_{\rm tr, \Delta T}$ is the heat current flowing in response to a finite temperature bias across the device. The term $J^{Q}_{\rm tr, ac}$ is a pumping contribution to the heat current. The literature on pumping of charge and heat, starting with the seminal paper by Thouless [@Thouless1983], is so vast that it would be impossible to give a proper account of it. A brief overview can be found in the reviews [@Avron2003; @Xiao2010]. One of the key results of the present paper is to show how pumping affects the operation of a quantum heat engine, thus paving the way to observe geometric effects in the operating mode of quantum thermal machines. Notice that the reservoir index $\alpha$ is irrelevant here, since using the identities of Eqs. (\[ident\]) and (\[fj\]) as well as Eq. (\[linearo\]), we can show that in linear response, both contributions satisfy the continuity equation separately. As a consequence, the L and R currents are opposite, as in Eq. (\[net\]). We always use the current flowing into the coldest (R) reservoir as a reference. For a single driving parameter, it is straightforward to show that the pumped heat current $J^{Q}_{\rm tr, ac}$ vanishes: in that case, the time average in Eq. (\[therpum\]) reduces to $\frac{\Omega}{2\pi}\oint \Lambda_{N+1,1}(X_1)\, \mathrm{d}X_1$, the integral of a single-variable function over a closed cycle, which is zero. At least two parameters are necessary for pumping. This was originally noticed in the framework of scattering matrix theory for driven electron systems [@avron; @brouwer]. Moreover, a spatially symmetric system has $\chi_t^{\rm ad} \left[ {\cal J}^E_L, {\cal F}_{\ell} \right] = \chi_t^{\rm ad} \left[ {\cal J}^E_R, {\cal F}_{\ell} \right]$, so that these quantities must vanish in view of Eq. (\[fj\]). Hence, breaking of spatial symmetry is another necessary condition for a non-vanishing pumping contribution to the heat current [@mobu; @lilimos]. The dissipative contribution $J^Q_{\rm d, \alpha}$, accounting for the energy flowing into the reservoirs due to dissipation, is beyond linear response.
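To make the time averages in Eqs. (\[therpum\]) concrete, the following minimal Python sketch evaluates the pumped contribution for a two-parameter harmonic protocol. The response function used there is a smooth toy function chosen purely for illustration (it is not derived from any model discussed in this paper), and the numerical values are arbitrary; the sketch merely shows the structure of the calculation and confirms numerically that driving a single parameter yields no pumped heat.

```python
import numpy as np

# Toy model with N = 2 driving parameters X = (X1, X2).
# Lambda_pump returns (Lambda_{N+1,1}, Lambda_{N+1,2}); this is an assumed
# smooth test function, NOT derived from the microscopic models of the paper.
def Lambda_pump(X1, X2):
    envelope = np.exp(-X1**2 - X2**2)
    return envelope * X2, -envelope * X1

def pumped_heat_flux(amp1, amp2, phi, Omega=1.0, nt=20001):
    """Time average of sum_l Lambda_{N+1,l}(X) dX_l/dt over one period,
    i.e. the second line of Eq. (therpum), for a harmonic protocol."""
    t = np.linspace(0.0, 2.0 * np.pi / Omega, nt)
    X1 = 0.5 + amp1 * np.cos(Omega * t)
    X2 = 0.5 + amp2 * np.cos(Omega * t + phi)
    dX1 = -amp1 * Omega * np.sin(Omega * t)
    dX2 = -amp2 * Omega * np.sin(Omega * t + phi)
    L1, L2 = Lambda_pump(X1, X2)
    integrand = L1 * dX1 + L2 * dX2
    return (Omega / (2.0 * np.pi)) * np.trapz(integrand, t)

print(pumped_heat_flux(0.3, 0.3, np.pi / 2))  # two out-of-phase parameters: finite pumping
print(pumped_heat_flux(0.3, 0.0, np.pi / 2))  # single driven parameter: zero (up to numerics)
```

The first line of Eqs. (\[therpum\]) has the same structure, with $\dot{X}_{N+1}=\Delta T/T$ replacing the parameter velocities.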
The net generated power has components associated with the time-dependent driving forces as well as with the thermal bias, $$\begin{aligned} \label{ptot} P &=& \frac{\Omega}{ 2\pi } \int_0^{2\pi/\Omega} \mathrm{d}t \left(\sum_{\ell=1}^N \langle {\cal F}_\ell \rangle \dot{X}_\ell (t) + \sum_{\alpha=L,R} \langle {\cal J}^E_{\alpha} \rangle(t) \dot{\xi}_{\alpha}(t) \right) = \frac{\Omega}{ 2\pi } \int_0^{2\pi/\Omega} \mathrm{d}t \;\; \dot{{\bf X}} \cdot \underline{ \underline{{\bf \Lambda}}}({\vec X}) \cdot \dot{{\bf X}}.\end{aligned}$$ While Eqs. (\[therpum\]) for the fluxes are linear in $\dot{{\bf X}}$, Eq. (\[ptot\]) is bilinear in these parameters. This reflects the fact that the dissipated heat $J^Q_{\rm d, \alpha}$, defined in Eq. (\[net\]), is at least second order in these quantities – equivalent to being ${\cal O}(\Omega^2)$ [@mobu; @lilimos]. The cross terms proportional to the thermal bias and [*ac*]{} driving usually have opposite signs and cancel one another when evaluating the total power. This happens, in particular, in the absence of a magnetic field with driving forces symmetric under time reversal, as a consequence of the Onsager relations (\[onsager\]). From Eq. (\[dis\]) we have the following expression for the entropy production rate $$\label{s} T \dot{S}= \sum_{\alpha} J^{Q}_{\rm d, \alpha}=P. $$ Substituting Eq. (\[ptot\]), we get $$\label{sdot0} \dot {S}= \frac{\Omega }{2 \pi T} \int_0^{2 \pi/\Omega} \mathrm{d}t \;\; \dot{{\bf X}} (t) \cdot \underline{ \underline{{\bf \Lambda}}}({\vec X}) \cdot \dot{{\bf X}}(t).$$ We present an alternative derivation for the above expression in Appendix \[entropy-rate\]. The forces $\langle {\cal F}_\ell \rangle(t)$ enter the work performed by the thermal machine, as will be discussed in more detail in Sec. \[sec:thermalgeo\] below. We also find it useful to introduce the average of the force over one period, $$F_{\ell}=\frac{\Omega}{2\pi} \int_0^{2\pi/\Omega} \mathrm{d}t \langle {\cal F}_{\ell} \rangle(t)= F_{\ell,\rm BO}+F_{\ell,\rm ar}, \;\;\;\ell=1,\ldots, N. \label{timeaverageforce}$$ The first term of Eq. (\[timeaverageforce\]) corresponds to the instantaneous [*equilibrium*]{} (Born-Oppenheimer) description given by the first term of Eq. (\[linearof\]), while the second term is the first-order [*adiabatic reaction force*]{} defined in Ref. . Geometric characterization {#geoF} ========================== Thermal geometric tensor {#geo-section} ------------------------ It is instructive to decompose the tensor $\Lambda_{\mu,\nu}(\vec{ X})$ into its symmetric and antisymmetric parts, $$\Lambda^{S,A}_{\mu,\nu} = \frac{1}{2}\left( \Lambda_{\mu,\nu} \pm \Lambda_{\nu,\mu}\right).$$ Equation (\[sdot0\]) for the entropy production implies that the symmetric component $\Lambda^S_{\mu,\nu}$ controls dissipation. Since the rate of entropy production $\dot{S}$ is non-negative, the symmetric part $\Lambda^S_{\mu,\nu}$ can be viewed as a metric tensor on the space of thermodynamic states [@crooks; @reza; @marti]. Then, geodesics with respect to this metric correspond to adiabatic trajectories which minimize dissipation [@crooks; @reza; @marti]. This contribution to $\Lambda_{\mu,\nu}(\vec{ X})$ has also been referred to as geometric friction [@Berry1993; @crooks; @campisi2]. We can obtain an explicit expression for $\Lambda_{\mu,\nu}$ from the Lehmann representation (see details in App. \[lehmann\]).
The result for the symmetric component is $$\begin{aligned} \label{lambfs} \Lambda^S_{\mu,\nu}(\vec{ X}) = & & \hbar \pi \lim_{\omega \rightarrow 0} \sum_{n,m} p_m \frac{(\varepsilon_n-\varepsilon_m)^2}{\omega}\mbox{Re}[\langle \partial_{\mu} m|n\rangle \langle n| \partial_{\nu} m\rangle ]\nonumber \\ & & \times \left[\delta(\omega-(\varepsilon_m-\varepsilon_n))- \delta(\omega-(\varepsilon_n-\varepsilon_m))\right].\end{aligned}$$ Here, $|m\rangle$ and $\epsilon_m$ denote the instantaneous eigenstates and eigenenergies, and $p_m$ is the corresponding thermal weight. Similarly, the antisymmetric component can be expressed as $$\begin{aligned} \label{lambfa} \Lambda^A_{\mu,\nu}(\vec{ X}) = 2 \hbar\sum_m p_m \;\mbox{Im} \left[\langle \partial_{\mu} m | \partial_{\nu} m \rangle\right].\end{aligned}$$ In the limit of zero temperature, the sum over $m$ is dominated by the ground state and $\Lambda^A_{\mu,\nu}(\vec{ X})$ reduces to its Berry curvature. For $\Delta T=0$, this component can be viewed as a velocity-dependent force, akin to a Lorentz force, which does not contribute to the net entropy production. This contribution has been referred to as geometric magnetism [@Berry1993; @hanggi; @Bode2011; @Bode2012; @Thomas2012]. It is interesting to compare $\Lambda_{\mu,\nu}$ to the quantum geometric tensor for the instantaneous ground state $|\psi\rangle$ of a closed system [@anatoli2; @anatoli3], $$g_{\mu,\nu} = \left\langle {\partial_\mu \psi} \left | {\partial_\nu \psi}\right. \right\rangle - \left\langle \left. {\partial_\mu \psi}\right | \psi \right\rangle\left\langle \psi\left | {\partial_\nu \psi}\right. \right\rangle.$$ Analogous to $\Lambda_{\mu,\nu}$, the symmetric part of $g_{\mu,\nu}$ defines a metric on the manifold of ground states and the antisymmetric part equals the Berry curvature. The crucial difference between the two tensors is that the quantum geometric tensor is defined for a discrete spectrum, while $\Lambda_{\mu,\nu}$ assumes a continuous spectrum. This does not lead to essential differences for the antisymmetric components of the tensors which are non-dissipative. In contrast, the symmetric part of $\Lambda_{\mu,\nu}$ controls dissipation and therefore vanishes for a discrete (or gapped) spectrum. We can therefore view $\Lambda_{\mu,\nu}$ as the analog of the quantum geometric tensor for systems with continuous spectra. In view of this analogy, we refer to $\Lambda_{\mu,\nu}$ as the [*thermal geometric tensor*]{}. In time reversal symmetric systems subject to driving parameters $\vec X$ which also respect time reversal symmetry, different parts of the thermal geometric tensor are either purely symmetric or antisymmetric. The Onsager relations (\[onsager\]) imply that $\Lambda_{\ell,\ell^{\prime}} = \Lambda_{\ell^{\prime},\ell}$ ($\ell,\ell^{\prime} = 1, \ldots, N$) is purely symmetric (corresponding to geometric friction without geometric magnetism). In contrast, $\Lambda_{N+1,\ell} = - \Lambda_{\ell, N+1}$ (corresponding to geometric magnetism without geometric friction). In systems which break time reversal symmetry, both the symmetric and the antisymmetric components of the thermal geometric tensor are generally nonzero. Thermal machines and geometry {#sec:thermalgeo} ----------------------------- The above analysis implies that there are several purely geometric quantities which enter into the operation of adiabatic quantum thermal machines. An essential quantity is the total heat transported between the leads per cycle, $Q_{\rm tr} =2 \pi J^Q_{\mathrm tr}/\Omega$. 
In a heat engine, this heat is in part converted into useful work, while in a refrigerator this heat is extracted from the colder reservoir. The transported heat takes the form $$ Q_{\rm tr} = \oint \sum_{\ell=1}^N \Lambda_{N+1,\ell} {\mathrm d}X_\ell + \oint {\mathrm d}t \Lambda_{N+1,N+1} \frac{\Delta T}{T}. \label{pumpedheatlambda}$$ The first term on the right hand side again depends only on the path and has a simple physical interpretation. It is just the heat $Q_{\rm tr,ac} = 2 \pi J^Q_{\rm tr,ac}/\Omega$, which is pumped between the reservoirs due to the periodic variation of the parameters $\vec X$, $$Q_{\rm tr,ac} = \oint \sum_{\ell=1}^N \Lambda_{N+1,\ell} {\mathrm d}X_\ell . \label{pumpedheatlambda2}$$ The second term describes the heat current driven by the applied temperature bias as a result of the heat conductance $\Lambda_{N+1,N+1}$ of the system. Notice that the two terms typically have a different dependence on the period $2\pi/\Omega$. Due to its geometric nature, the first term is independent of the period. In contrast, the second term is in general proportional to the period. The pumped heat per cycle is essential for the operation of adiabatic quantum thermal machines. To see this, we compute the work $W = \oint {\mathrm d}{\vec X}\cdot{\vec F}$ performed on the system during one cycle of the [*ac*]{} sources. The forces, as described by Eq. (\[linearof\]), have an instantaneous and a linear-response component. The instantaneous contribution depends only on the parameters $\vec X$ and is evaluated in the absence of the temperature bias. This [*equilibrium*]{} contribution to the force is necessarily conservative (in the mechanical sense) and thus gives a vanishing contribution to the work performed over a cycle. Thus, only the linear-response component contributes to the work per cycle, $$W = \oint {\mathrm d}t \sum_{\ell,\ell'=1}^N {\dot{X}_\ell} \Lambda_{\ell,\ell'}{\dot{X}}_{\ell'} + \oint \sum_{\ell=1}^N {\mathrm d}X_\ell \Lambda_{\ell,N+1} \frac{\Delta T}{T}. \label{worklambda}$$ First consider the second term on the right hand side. For constant $\Delta T/T$, this term is again a purely geometric line integral over a closed contour. Unlike the contribution of the instantaneous component, this term is in general nonconservative and gives a finite contribution when integrated over a closed cycle. The reason is that this term originates from the [*nonequilibrium*]{} contribution to the force which is generated by the temperature bias. This term is essential for the [*heat-to-work*]{} conversion and hence the operation of the thermal machine. In fact, as a result of the Onsager relations (\[onsager\]), the prefactor of this term is very closely related to the pumped heat per cycle. If the system is time reversal invariant (which also requires that the parameters ${\vec X}$ couple to time-reversal-even operators), the Onsager relations imply that $\Lambda_{N+1,\ell} = - \Lambda_{\ell,N+1}$ and the prefactor of $\Delta T/T$ in Eq. (\[worklambda\]) just equals minus the pumped heat between the reservoirs. We can then understand the operation of a heat engine as follows. During one cycle of the machine, the cyclic variation of the parameters pumps heat from the high-temperature to the low-temperature reservoir. The corresponding change in free energy is converted into work $W$ performed on a load (i.e., $W<0$). Here, the load corresponds to an external agent which couples to the dynamics of the parameters $\vec{X}$.
This is analogous to the operating principle of inverted quantum pumps as adiabatic quantum motors [@Bustos2013; @torque4; @motor1; @motor2; @motor3]. Similarly, in a refrigerator, work $W=- Q_{\rm tr,ac}\Delta T/T >0$ must be supplied by the [*ac*]{} sources to overcome the thermal bias and to pump heat $Q_{\rm tr,ac}$ from the low-temperature to the high-temperature reservoir. It is also interesting to discuss this heat-to-work conversion in the context of the entropy production rate defined in Eq. (\[s\]). With the definitions of this section, we can write $$\label{sqw} T\dot{S}= \frac{\Omega}{2 \pi} \left(W + Q_{\rm tr} \frac{\Delta T}{T} \right).$$ The first term corresponds to the total power generated by the ac sources, while the second term corresponds to the power invested in transporting the heat $Q_{\rm tr}$ per cycle in the presence of the thermal bias $\Delta T$. Due to the heat-to-work conversion, the geometric component of $W$ exactly cancels the component $Q_{\rm tr,ac}$ of $Q_{\rm tr}$ in the dissipated power (still assuming time-reversal invariance). Entropy production is then associated with the nongeometric contributions to heat and work. We already commented on the second term in Eq. (\[pumpedheatlambda\]), which describes the effects of a nonzero heat conductance of the device. Similarly, the first term in Eq. (\[worklambda\]) describes frictional losses. Unlike the second term, which can take either sign, this term is always positive. A negative balance of the two terms, $W<0$, can be used to perform work against the load in a heat engine. In a refrigerator, both terms are positive since one has to overcome the frictional losses in addition to pumping heat from the cold to the hot reservoir. It is important to notice that the terms are typically of different orders in the period $2\pi/\Omega$. While the first is inversely proportional to the period, the second is independent of it. Thus, one can often neglect the first term when considering the limit of small frequency $\Omega$. As we will show below, under certain circumstances the first term in Eq. (\[worklambda\]) can also be viewed as a geometric quantity, even though it cannot be immediately rewritten as a line integral. The operation of a heat engine or refrigerator requires that a net amount of heat $Q_{\rm tr,ac}$ be pumped between the reservoirs during a cycle, which in turn requires the force to be nonconservative. Above, we have focused on the case in which $\Delta T/T$ is constant over the cycle. In principle, the conditions for the operation of adiabatic quantum thermal machines can be less stringent if one allows $\Delta T/T$ to vary along the cycle, for instance by coupling the system to different reservoirs at different stages. In the absence of time reversal symmetry, the Onsager relations connect the response functions $\Lambda_{\mu,\nu}$ at different magnetic fields. In this case, there is no general relation between $\Lambda_{N+1,\ell}$ and $\Lambda_{\ell,N+1}$ for a fixed magnetic field, and in addition to the antisymmetric contribution $\Lambda^A_{N+1,\ell} = - \Lambda^A_{\ell,N+1}$, there could also be a symmetric contribution, $\Lambda^S_{N+1,\ell} = \Lambda^S_{\ell,N+1}$. Unlike $\Lambda_{\mu,\nu}^A$, the symmetric $\Lambda_{\mu,\nu}^S$ is associated with entropy production and dissipation according to Eq. (\[ptot\]).
Even if both the dissipative and the nondissipative contributions to the pumped heat flow from the hot to the cold reservoir, the work performed on a load would involve the difference between the antisymmetric and the symmetric contribution. The time average of the forces $\vec F$ as defined in Eq. (\[timeaverageforce\]) also has contributions which are purely geometric. From Eq. (\[linearof\]), the first-order adiabatic reaction component can be readily rewritten as $$F_{\ell, {\rm ar}} = \frac{\Omega}{2\pi}\left\{ \oint \sum_{\ell'=1}^N\Lambda_{\ell,\ell'} {\mathrm d}{X}_{\ell'} + \oint {\mathrm d}t \Lambda_{\ell,N+1} \frac{\Delta T}{T} \right\}. \label{avforceline}$$ Here, the first term on the right hand side is a line integral which is purely geometric in that it depends only on the path. Finally, we remark that under certain conditions, the dissipated component of $W$, corresponding to the first term of Eq. (\[worklambda\]), can also be formally represented in terms of a line integral over a closed path in parameter space. This is not as straightforward as for Eqs. (\[avforceline\]), (\[pumpedheatlambda\]), and (\[worklambda\]) since the power is bilinear in $\dot{\bf X}$. It is, however, possible when there exists a well-defined mapping between $\dot{{\bf X}}$ and ${\bf X}$ as the latter varies along the closed path $\gamma$. In particular, such a mapping exists for the case of periodic driving. For a smooth path $\gamma$, one can write the relations $\dot{X}_{\mu} = \Omega g_{\mu}( \vec{ X})|_{\gamma}$ for all $\mu$, where the functions $g_{\mu}(\vec{ X} )|_{\gamma}$ are defined by eliminating the parametrization in $t$ between $X_{\mu}(t)$ and $\dot{X}_{\mu}(t)$. (For instance, for a harmonic drive $X_{\mu}(t)=X_{\mu,0}+X_{\mu,1}\cos(\Omega t)$ one has $\dot{X}_{\mu}=\mp\Omega\sqrt{X_{\mu,1}^2-\left(X_{\mu}-X_{\mu,0}\right)^2}$ on the two branches of the cycle, which defines $g_{\mu}(\vec{X})|_{\gamma}$.) Then, we can write the dissipated power as a line integral by using these relations to eliminate one of the factors of $\dot{X}_{\mu}$ in Eq. (\[ptot\]). Note that the resulting line integral has a prefactor of $\Omega$, making it explicit that the dissipated power is inversely proportional to the period of the driving, as already mentioned above. The line integrals controlling the operation of adiabatic thermal quantum machines are reminiscent of line integrals over Berry connections. This motivates us to introduce the vector fields $$\label{amu} \vec{A}^{A/S}_{\mu}=\left( \Lambda^{A/S}_{\mu, 1}(\vec{ X}), \ldots, \Lambda^{A/S}_{\mu, N}(\vec{ X})\right)$$ with $\mu=1, \ldots, N+1$ for the rows of the thermal geometric tensor. Similarly, we introduce $$\begin{aligned} \label{atildemu} \tilde{\vec{A}}&=& \sum_{\ell} \left(\tilde{ \Lambda}^S_{\ell,1}(\vec{X}), \ldots, \tilde{\Lambda}^S_{\ell,N}(\vec{X})\right),\end{aligned}$$ where $\tilde{\Lambda}^S_{\mu, \nu}(\vec{ X} )=g_\mu(\vec{X}) \Lambda^S_{\mu, \nu}(\vec{X})$. These vector fields control the pumped heat and the work performed on the system as well as the dissipated power. Thus, they are useful to illustrate the operation of the specific thermal machines which we discuss in Sec. \[Examples\]. In terms of these vector potentials, Eqs. (\[pumpedheatlambda2\]) and (\[worklambda\]) read, respectively, $$\label{q-a} Q_{\rm tr, ac}=\oint \vec{A}_{N+1}(\vec{X}) \cdot d \vec{X},$$ with $\vec{A}_{\mu}(\vec{X}) = \vec{A}^A_{\mu}(\vec{X}) +\vec{A}^S_{\mu}(\vec{X}) $ and $$\label{w-a} W=\oint \left[ \tilde{\vec{A}}(\vec{X}) - \frac{\Delta T}{T} \left(\vec{A}^A_{N+1}(\vec{X}) - \vec{A}^S_{N+1}(\vec{X}) \right)\right] \cdot d \vec{X}.$$ In the latter equation, the last term does not contribute for many systems.
In particular, this is the case in the presence of time-reversal symmetry (including driving parameters $\vec{X}$ coupling to time-reversal-even operators). In such cases, we can write $ W=\oint \tilde{\vec{A}}(\vec{X}) \cdot d \vec{X} - (\Delta T/T)\, Q_{\rm tr, ac}$. Efficiencies ------------ ### Heat engine {#eff_heat} In a heat engine, heat transported from the high- to the low-temperature reservoir is partially converted into useful work. We can then define an efficiency for the heat engine as $$\eta^{\rm (he)} = \frac{-W}{Q_{\rm tr}}. \label{effx_he}$$ This expression can be readily analyzed for a time reversal invariant system with constant $\Delta T/T$. In the limit of adiabatic operation of the heat engine, $\Omega\to 0$, we can neglect the frictional losses to leading order and only the second term on the right hand side of Eq. (\[worklambda\]) contributes to the work performed against the load, $W \simeq - Q_{\rm tr,ac} \Delta T/T$. If the heat transfer across the system is dominated by the geometric contribution, one finds $Q_{\rm tr} \simeq Q_{\rm tr, ac}$, and hence the efficiency approaches the Carnot efficiency, $\eta^{\rm (he)} \simeq \Delta T/T$. This result is obtained in the limit of a negligible heat conductance $\Lambda_{N+1,N+1}\simeq 0$ of the system. This can be realized in a topological quantum pump for which the ground state is separated from the excited states by a gap. Consequently, the symmetric contributions to $\Lambda_{\mu,\nu}$ – including the heat conductance – are strongly suppressed. A finite heat conductance diminishes the efficiency of the heat engine, as do frictional losses described by the first term on the right hand side of Eq. (\[worklambda\]). Note that the contribution of the heat conductance to the transferred heat is proportional to the period of the cycle. This implies that this term is less detrimental to the efficiency as the frequency at which the machine operates increases. However, by increasing the frequency, the effect of the frictional losses becomes larger. ### Refrigerator {#eff_refrigerator} A refrigerator uses work $W$ performed on the system to remove heat from a cold to a hot reservoir. Thus, we can define a corresponding efficiency or coefficient of performance (COP) as $$\eta^{\rm (fr)} =\frac{-Q_{\rm tr}}{W}. \label{effx_fr}$$ Again focusing on a time reversal invariant system with constant $\Delta T/T$, this efficiency approaches the Carnot limit $\eta^{\rm (fr)}_{\rm C}=T/\Delta T$ for zero heat conductance. The efficiency is again reduced by a finite heat conductance since, for a refrigerator, its contribution to the numerator has the opposite sign compared to the pumped heat. ### Heat pump {#eff_pump} Of course, the device can also be used as an adiabatic heat pump in the absence of a thermal bias, $\Delta T=0$. Heat is transported from left to right or vice versa due to the variation of ${\vec X}$. According to Eq. (\[worklambda\]), we need to exert work $W$ associated with dissipation, even if there is no temperature bias. We can then define a corresponding efficiency of heat pumping through $$\label{nobiascop} \eta^{\rm (pump)} = \frac{|Q_{\rm tr, ac}|}{W} .$$ The denominator in this expression is proportional to $\Omega$, so that the efficiency of the heat pump grows as the driving becomes more adiabatic. Examples {#Examples} ======== We now illustrate the general formalism introduced in the previous sections by two driven systems coupled to thermal baths.
One example is referred to as a [*driven qubit*]{} and consists of a generic two-level system with time-dependent energies and inter-level transition matrix elements, coupled to baths of bosonic excitations. This problem will be solved in the limit of weak coupling to the reservoirs. The second example is a [*driven quantum dot*]{}, which consists of a confined structure with two single-electron levels – one per spin orientation – driven by a rotating magnetic field. This problem is solved for weak as well as for strong coupling to spin-polarized electron reservoirs. Driven qubit {#example-qubit} ------------ ![Illustration of the q-bit coupled to two bosonic reservoirs by the Hamiltonian of Eq. (\[qcont\]) with $\hat{\tau}_L=\hat{\sigma}_x$ and $\hat{\tau}_R=\hat{\sigma}_z$, operating as a heat engine. Panel (a): the q-bit is in one of the states $|x, \pm \rangle$ and couples to the reservoir $L$. Panel (b): the q-bit is in one of the states $|z, \pm \rangle$ and couples only to the reservoir $R$. The driving changes the energy difference between the two levels. []{data-label="figure0"}](fig2.pdf){width="\columnwidth"} We consider a generalization of the celebrated spin-boson model, which was introduced in Refs. [@sb1; @sb2]. As in those works, we express the Hamiltonian in terms of the Pauli matrices $\hat{\mbox{$\vec{\sigma}$}}=(\hat{\sigma}_x, \hat{\sigma}_y,\hat{\sigma}_z)$ and a magnetic field $\vec{B}(t)=\left(B_x(t), B_y(t), B_{z}(t) \right)$. In our case, the latter varies periodically in time. The ensuing Hamiltonian reads $$\label{qs} {\cal H}_S(t)= \vec{B}(t) \cdot \hat{\mbox{$\vec{\sigma}$}}. $$ The reservoirs are represented by the Hamiltonians $$\label{qres} {\cal H}_{\alpha}=\sum_{k}\varepsilon_{k\alpha}b_{k\alpha}^\dagger b_{k\alpha}, \;\;\; \alpha=L, R,$$ with $b_{k\alpha}$ and $b_{k\alpha}^\dagger$ being the annihilation and creation operators of a bosonic excitation. The coupling is described by the Hamiltonian ${\cal H}_c={\cal H}_{c,L}+{\cal H}_{c,R}$. Our generalization with respect to previous works is to consider different types of couplings to the $L$ and $R$ reservoirs. This is motivated by the fact that spatial inversion symmetry has to be broken in order to obtain pumping, as mentioned in Section \[ener-bal\]. Concretely, the Hamiltonians read $$\label{qcont} {\cal H}_{c,\alpha}=\sum_{k}V_{k\alpha}\hat{\tau}_{\alpha}\Big(b_{k\alpha}+b_{k\alpha}^\dagger\Big),$$ with $\hat{\tau}_{L}=\hat{\sigma}_{x}$ and $\hat{\tau}_{R}=\hat{\sigma}_{z}$. Hence, the q-bit couples to the $L$ or $R$ reservoir if it is in a state with a non-vanishing projection on the eigenstates $|x,\pm \rangle$ of $\hat{\sigma}_x$ or $|z, \pm \rangle$ of $\hat{\sigma}_z$, respectively. Any other combination of two Pauli matrices with $\hat{\tau}_{L} \neq \hat{\tau}_{R}$ would also be appropriate, as we will discuss in Section IV.A.3. Previous works related to heat engines based on q-bits considered the same type of coupling to the two reservoirs and non-adiabatic driving [@rob1; @rob2; @gab; @segal2005; @nit; @hanggi; @karimi; @liu2017; @wang2018; @yamamoto2018; @newman] . The Hamiltonian for the system of Eq. (\[qs\]) can be transformed to the basis of instantaneous eigenstates $|j \rangle$, such that ${\cal H}_S(t) |j \rangle= E_j(t) |j \rangle$, $j=1,2$, with $E_{1,2}(t)=\mp |\vec{B}|$. 
The resulting transformed Hamiltonian reads $\tilde{\cal H}_{ S}(t)=\hat{U}^{-1}(t) {\cal H}_S(t) \hat{U}(t)$ with $\hat{U}(t)$ being a unitary transformation and $$\label{hs1} \tilde{\cal H}_{ S}(t)=E_{1}(t) |1\rangle \langle 1| + E_{2}(t) |2\rangle \langle 2|,$$ Accordingly, the contact Hamiltonian can be also expressed in this basis as $$\label{qcontch} \tilde{\cal H}_{c,\alpha}(t)=\sum_{k}\sum_{ij}V_{k\alpha}{v}_{\alpha,ij}(t)\hat{\rho}_{ij}(t)\Big(b_{k\alpha}+b_{k\alpha}^\dagger\Big),$$ with ${v}_{\alpha,ij}(t)= \left[\hat{U}^{-1}(t) \hat{\tau}_{\alpha}\hat{U}(t)\right]_{ij}$, $\hat{U}(t)$ being the unitary transformation which diagonalizes the Hamiltonian (\[qs\]), and $\hat{\rho}_{ij}=|i\rangle \langle j |$. Before proceeding to explicit calculations, we can gather some intuition on how the driven q-bit may work as a thermal machine by using the sketch of Fig. \[figure0\]. As a consequence of the driving, the energy of the two levels as well as the coupling to the $L$ and $R$ reservoirs change in time according to Eqs. (\[hs1\]) and (\[qcontch\]), respectively. Panel (a) represents a situation where the q-bit at a given time $t_1$ is in one of the eigenstates of $\hat{\sigma}_x$, hence, it couples to the $L$ reservoir and it is completely decoupled from $R$. Panel (b) illustrates the situation where the q-bit is in an eigenstate of $\hat{\sigma}_z$ at a different time $t_2$, therefore it is coupled to $R$ and decoupled from $L$. In an evolution from $t_1$ to $t_2$ the energy difference $\delta E(t)=E_2(t)-E_1(t)$ changes. A cycle can be realized when the protocol returns the q-bit to the state of the step (a). The paradigmatic Otto cycle corresponds to the extreme situation, where the q-bit is allowed to thermalize with $L$ at the step (a) and with $R$ at the step (b), while it evolves decoupled from the two reservoirs at intermediate times [@karimi; @camp]. For the case of adiabatic driving, the changes take place smoothly and the q-bit is coupled to the two reservoirs at all times. For suitable protocols, the setup may anyway operate as a heat engine or refrigerator, as well as a heat pump. We will analyze in detail protocols with two time-dependent parameters of the form $\vec{B}(t)=\left(B_x(t),0, B_z(t)\right)$, with $$\begin{aligned} B_x(t)&=&B_{x,0}+B_{x,1}\cos (\Omega t+\phi), \nonumber \\ B_z(t) &=&B_{z,0}+B_{z,1}\cos(\Omega t). \end{aligned}$$ These two components of $\vec{B}(t)$ are identified with the time-dependent parameters of Eq. (\[hamtot\]) as follows $$\label{xb} \vec{ X}(t) =\left(X_1(t),X_2(t)\right) \equiv \left(B_z(t),B_x(t) \right).$$ In addition, we will consider a constant difference of temperature $\Delta T$, which defines $\dot{X}_3=\Delta T/T$. We will solve the problem in the limit of very weak coupling between the qubit and the reservoirs (small $V_{k\alpha} $). ### Master equation approach We follow the procedure of Refs. , which consists in solving the time-dependent master equation by performing an adiabatic expansion along the lines of the general formalism of Section \[linear\_ad\]. The basic idea is to describe the evolution of the population probabilities of the eigenstates of $\tilde{\cal H}_{ S}(t)$, represented by the vector $\mathbf{p}(t)= \left( p_1(t), p_2(t) \right)$, in terms of a master equation where the effect of the coupling to the reservoirs is treated at the lowest order of perturbation theory (first order in $|V_{k\alpha}|^2$). 
The master equation reads, $$\label{meex1} \frac{d}{dt}\mathbf{p}(t)=\sum_\alpha\mathbf{M}_\alpha(\vec{B})\cdot \mathbf{p}(t),$$ where $\mathbf{M}_\alpha(\vec{B})$ is a $\text{2}\times \text{2}$ matrix representing the instantaneous transition rates corresponding to the reservoir $\alpha$, which is given by $$\begin{aligned} \mathbf{M}_\alpha(\vec{B})=\begin{bmatrix} -\Gamma_{1\rightarrow 2}^\alpha(\vec{B}) & \Gamma_{2 \rightarrow 1}^\alpha(\vec{B}) \\ \Gamma_{1\rightarrow 2}^\alpha(\vec{B}) & -\Gamma_{2\rightarrow 1}^\alpha(\vec{B}) \end{bmatrix}. \label{tunmatr}\end{aligned}$$ Here we stress that the instantaneous rates depend on time through the parameters $\vec{B}$, as indicated in Eq. (\[xb\]). We have introduced the following definitions $$\begin{aligned} &\Gamma_{1\rightarrow 2}^\alpha(\vec{B})=\lambda_\alpha(\vec{B}) \Big[\gamma_\alpha \Big(\delta E \left(\vec{B}\right)\Big)+\tilde{\gamma}_\alpha\Big(-\delta E\left(\vec{B}\right)\Big)\Big],\nonumber \\ &\Gamma_{2\rightarrow 1}^\alpha(\vec{B})=\lambda_\alpha(\vec{B}) \Big[\tilde{\gamma}_\alpha \Big(\delta E(\vec{B}) \Big)+{\gamma}_\alpha \Big(-\delta E(\vec{B}) \Big)\Big], \label{repre}\end{aligned}$$ with $$\begin{aligned} \gamma_{\alpha}(\varepsilon) &= &n_{\alpha}(\varepsilon)\Gamma_{\alpha}(\varepsilon)/\hbar, \nonumber \\ \tilde{\gamma}_{\alpha}(\varepsilon) &=& [1+n_{\alpha}(\varepsilon)]\Gamma_{\alpha}(\varepsilon)/\hbar,\end{aligned}$$ while $\delta E(\vec{B})=E_2(\vec{B})-E_1(\vec{B})$ and $\lambda_\alpha(\vec{B})=v_{\alpha,12}(\vec{B})v_{\alpha,21}(\vec{B})$. For $\hat{\tau}_L=\hat{\sigma}_x$ and $\hat{\tau}_R=\hat{\sigma}_z$ we have $$\lambda_L(\vec{B})=\frac{B_x^2(t)}{B_z^2(t)+B_x^2(t)}, \;\;\;\;\;\;\lambda_R(\vec{B})=\frac{B_z^2(t)}{B_z^2(t)+B_x^2(t)}. \label{lambda1} $$ $n_{\alpha}(\varepsilon)$ is the Bose-Einstein distribution for bath $\alpha$ and $\Gamma_{\alpha}(\epsilon)$ is the corresponding spectral density, which we assume to be Ohmic $$\label{ombath} \Gamma_{\alpha}(\epsilon)=\Gamma_\alpha\, \epsilon\, e^{-\epsilon/\epsilon_{\rm C}}, \;\;\;\text{with }\epsilon >0,$$ $\epsilon_{\rm C}$ being the cut-off frequency. Since, according to Eq. (\[ombath\]), there are no negative-energy states in the bath, we set $\gamma_\alpha [-\delta E(\vec{B})] = \tilde{\gamma}_\alpha [-\delta E(\vec{B}) ]=0$ (notice that $\delta E(\vec{B})$ is positive by definition). Following Refs. , the population can be expanded in different orders of the driving frequency $\Omega$. Here we keep only the zeroth-order (instantaneous) term $\mathbf{p}^{({\rm i})}$, and first-order (adiabatic) term $\mathbf{p}^{({\rm a})}$ such that $$\mathbf{p}(t)=\mathbf{p}^{({\rm i})}(t)+\mathbf{p}^{({\rm a})}(t). \label{probabil}$$ The solution of the master equation (\[meex1\]) order by order in $\Omega$, leads to $$\begin{aligned} \sum_\alpha\mathbf{M}_\alpha(\vec{B})\cdot\mathbf{p}^{({\rm i})}(t)=0, \label{insad1}\end{aligned}$$ and $$\begin{aligned} \frac{d}{dt}\mathbf{p}^{({\rm i})}(t)=\sum_\alpha\mathbf{M}_\alpha(\vec{B})\cdot\mathbf{p}^{({\rm a})}(t). \label{insad2}\end{aligned}$$ The adiabatic correction can be written in terms of instantaneous contributions as $$\mathbf{p}^{({\rm a})}(t)=\sum_\alpha\left[\bar{\mathbf{M}}_\alpha(\vec{B})\right]^{-1}\cdot\frac{d}{dt}\mathbf{p}^{({\rm i})}(t), \label{papi}$$ where the matrix $\left[\bar{\mathbf{M}}_\alpha(\vec{B})\right]^{-1}$ includes the normalization condition for the adiabatic probabilities [@janine]. 
We obtain two additional equations from the conservation of the probability, namely $\sum_{j}p_{j}^{({\rm i})}(t)=1$ and $\sum_{j}p_{j}^{({\rm a})}(t)=0$. The instantaneous (${\rm i}$), adiabatic (${\rm a}$), and thermal (${\rm th}$) contributions to the heat current flowing in reservoir $\alpha$ as functions of time are given by $$\begin{aligned} J_{\alpha}^{({\rm i/a})}(t) &= &\delta E(\vec{B}) \left[\mathbf{M}_\alpha(\vec{B})\cdot \mathbf{p}^{\rm (i/a)}(t)\right]_{11},\nonumber \\ J_{\alpha}^{({\rm th})}(t) & = &\delta E(\vec{B}) \left[\mathbf{M}_\alpha(\vec{B})\cdot \mathbf{p}^{\rm (i)}_{\Delta T}(t)\right]_{11}, \label{asymJ}\end{aligned}$$ where $\mathbf{p}_{\Delta T}^{({\rm i})}$ is the instantaneous probability vector in the presence of the thermal bias $\Delta T$. We can now calculate the different linear-response components of the heat current defined in Eq. (\[therpum\]) as follows $$J_{{\rm tr,ac}/\Delta T}^{Q} = \frac{\Omega}{2 \pi} \int_0^{2\pi/\Omega} dt \; J_{\rm R}^{({\rm a/th})}(t), \label{J}$$ while the instantaneous component vanishes when averaged over the period. On the other hand, the net work developed by the ac forces, corresponding to Eq. (\[worklambda\]), can also be calculated in the master equation approach. To this end we write the total energy of the qubit at a particular time $t$ as $${E}_{\rm tot}(t)=E_1(t) p_1(t) + E_2(t) p_2(t),$$ where the probabilities are given by the sum of the instantaneous $p_j^{\rm (i)}$, the adiabatic $p_j^{\rm (a)}$, and the thermal $p_j^{\rm (th)}$ components. The time derivative of the total energy contains two contributions, $$\begin{aligned} \frac{d{E}_{\rm tot}}{dt}&=&\sum_{j=1}^2 \left( \frac{dE_j(t)}{dt} p_j(t) + E_j(t) \frac{dp_j(t)}{dt} \right). \end{aligned}$$ These are the power delivered by the ac sources, $${P}(t)=\frac{dE_1(t)}{dt} p_1(t) + \frac{dE_2(t)}{dt} p_2(t),$$ and the heat temporarily stored in the q-bit. Thus, the total work over a cycle reads $$\begin{aligned} \label{wqub} W=\int_0^{2\pi/\Omega} dt \Big(\frac{dE_1}{dt}p_1(t)+\frac{dE_2}{dt}p_2(t)\Big),\end{aligned}$$ where the instantaneous, adiabatic, and thermal components of the probabilities $\mathbf{p}(t)$ all contribute. The contribution due to the instantaneous components represents the work done by the conservative forces, while the other terms will contribute to the non-conservative work defined in Eq. (\[worklambda\]). The explicit expressions for the different components of $\mathbf{p}(t)$ for the driving protocol of Eq. (\[xb\]) are presented in Appendix \[linearr\]. We notice that the terms originating from the coupling Hamiltonian in Eq. (\[qcontch\]) could in principle contribute to $W$ and can be calculated from the time average of $\langle \dot{\tilde{{\cal H}}}_{c,\alpha} \rangle$. However, this term is neglected in the limit of very small $V_{k\alpha}$. In fact, its contribution to the work per cycle is smaller (by at least a factor of $|V_{k\alpha}|$) than the contribution to the work due to $\tilde{\cal H}_S(t)$. ### Geometrical properties {#thadex1} We now derive the expressions corresponding to Eqs. (\[pumpedheatlambda2\]) and (\[worklambda\]) within the formalism of the master equation. These can be derived from Eqs. (\[J\]) and (\[wqub\]).
We get $$\begin{aligned} Q_{\rm tr,ac} &=& \int_0^{2\pi/\Omega} dt \; \mathbf{M}_R^{(h)}(\vec{B})\cdot\mathbf{p}^{({\rm a})}(t),\label{currheatad} \\ W &=&\int_0^{2\pi/\Omega} dt \; \frac{d\mathbf{E}}{dt} \cdot \left[\mathbf{p}^{({\rm a})}(t)+ \mathbf{p}^{({\rm i})}_{\Delta T}(t) \right], \label{currpowad}\end{aligned}$$ where $$\begin{aligned} \mathbf{M}_R^{(h)}(\vec{B})= \delta E(\vec{B}) \begin{bmatrix} -\Gamma_{1\rightarrow 2}^R(\vec{B}) \\ \Gamma_{2\rightarrow 1}^R(\vec{B}) \end{bmatrix}^{T},\end{aligned}$$ and ${\bf E}(\vec{B})=\left(E_1(t),\; E_2(t) \right)$. Using Eq. (\[papi\]) and $$\label{dpi} \frac{d {\bf p}^{(i)}}{dt}= \sum_{\ell=1}^2 \frac{\partial {\bf p}^{(i)}}{\partial B_{\ell}} \dot{B}_{\ell}$$ the pumped heat given by Eq. (\[currheatad\]) can be written as in Eq. (\[pumpedheatlambda2\]), by identifying $$\Lambda_{3,\ell}(\vec{B})=\mathbf{M}^{(h)}_R(\vec{B})\cdot\bar{\mathbf{M}}^{-1}(\vec{ B}) \cdot\frac{\partial{\mathbf{p}}^{({\rm i})}}{\partial B_{\ell}},\;\;\; \ell=1,2.$$ In the present configuration, the explicit calculation of these coefficients shows that $\Lambda_{3,\ell}= -\Lambda_{\ell,3}$, up to a function that vanishes upon integrating over the period. This means that these terms are components of the antisymmetric thermal tensor $\Lambda^A_{\mu,\nu}$. The other components of the tensor can be derived from the first terms ($\propto \mathbf{p}^{({\rm a})}(t)$) of Eq. (\[currpowad\]). More precisely, using Eq. (\[papi\]) with Eq. (\[dpi\]), and expressing $$\begin{aligned} \label{el} \frac{d {\bf E}}{dt}= \sum_{\ell=1}^2 \frac{\partial {\bf E}}{\partial B_{\ell}} \cdot \dot{B}_{\ell},\end{aligned}$$ we find $$\Lambda_{\ell,\ell^{\prime}}(\vec{B})= \frac{\partial {\bf E}}{\partial B_{\ell}} \cdot \bar{\mathbf{M}}^{-1}(\vec{B}) \cdot \frac{\partial {\bf p}^{(i)}}{\partial B_{\ell^{\prime}}} ,\;\;\; \ell,\ell^{\prime}=1,2. \label{lambdallp}$$ We can see that these terms satisfy $\Lambda_{\ell,\ell^{\prime}}=\Lambda_{\ell^{\prime},\ell}$, as explicitly shown in Eq. (\[aplamqbit\]). Hence they are components of the symmetric tensor $\Lambda^S_{\mu,\nu}$. On the other hand, by using the fact that we can define a relation of the form $\dot{B}_{\ell}= g_{\ell}(\vec{B}) \Omega $ for the protocol of Eq. (\[xb\]), we can express the total work in terms of purely geometric quantities, by rewriting Eqs. (\[currheatad\]) and (\[currpowad\]) in terms of the vector potentials of Eqs. (\[amu\]) and (\[atildemu\]). In the present case, they read $$\begin{aligned} \label{aas} \vec{A}^A_3(\vec{B}) &=& \left( \Lambda_{3,1}^A(\vec{B}),\Lambda_{3,2}^A(\vec{B}) \right),\nonumber \\ \vec{A}^S_{\ell}(\vec{B}) &=& \left( \Lambda_{\ell,1}^S(\vec{B}),\Lambda_{\ell,2}^S(\vec{B}) \right), \;\;\; \ell =1,2,\nonumber \\ \tilde{\vec{A}}(\vec{B}) &= & \Omega \sum_{\ell=1}^2 g_{\ell}(\vec{B}) \left(\Lambda_{\ell,1}^S(\vec{B}), \Lambda_{\ell,2}^S(\vec{B}) \right). \end{aligned}$$ We have highlighted the antisymmetric and symmetric character in each case. Notice that, according to the analysis of Sections \[ener-bal\] and \[geoF\], the symmetric component contributes purely to dissipation of energy and entropy production, while the antisymmetric one is related to useful work. In order to characterize the performance of the heat engine and refrigerator as in Eqs. (\[effx\_he\]) and (\[effx\_fr\]), we also need the heat transported in one period as a response to the thermal bias.
It reads $$Q_{\rm tr, \Delta T}= \int_0^{2\pi/\Omega} dt \; J_{\rm R}^{({\rm th})}(t)$$ with $J_{\rm R}^{({\rm th})}(t)$ defined in Eq. (\[asymJ\]). This component is not geometric, and we recall that the total transported heat is $Q_{\rm tr}=Q_{\rm tr, ac} + Q_{\rm tr, \Delta T}$. According to our conventions, the contribution of the first term of Eq. (\[w-a\]) to the contour integral is always positive and is the portion related to the net dissipated power and entropy production due to the ac driving. In contrast, the second one, which also defines $Q_{\rm tr, ac}$ in Eq. (\[q-a\]), can have either sign. In the case of a heat engine, $Q_{\rm tr, ac}$ and $Q_{\rm tr, \Delta T}$ have the same sign, i.e. the pumped heat flows in the same direction as the component induced by the temperature bias. As a consequence, it generates useful work that can be absorbed by the ac sources. Notice that in such a case, the second term of Eq. (\[w-a\]) has a sign opposite to that of the first. In the refrigerator, the situation is reversed. Irrespective of the sign of $Q_{\rm tr, ac}$, which determines whether the system operates as a heat engine or a refrigerator, the crucial quantity to optimize is the integral of $\vec{A}^A(\vec{B})$ over a suitably chosen closed path in the parameter space. ### Results {#reultsqbt} We present some results for specific parameters of the driving protocol defined in Eq. (\[xb\]). We start by analyzing the case with $\Delta T=0$ and showing that a necessary condition for the heat currents to be finite is that the couplings to the left and right reservoirs are different, i.e. $\hat{\tau}_L\neq \hat{\tau}_R$. In fact, let us notice that these couplings determine the functions $\lambda_L(\vec{B})$ and $\lambda_R(\vec{B})$. If we assume symmetric couplings, we have $\lambda_L(\vec{B})=\lambda_R(\vec{B})$ and $\Gamma_L=\Gamma_R$. Therefore, we get ${\bf M}_L(\vec{B})={\bf M}_R(\vec{B})$ in Eq. (\[tunmatr\]). After replacing the latter matrices in Eq. (\[asymJ\]), we get $J^{(a)}_L(t)=J^{(a)}_R(t)$ at all times. This implies that the currents obtained by averaging over one period, i.e. $J^{Q}_{\rm tr,L}$ and $J^{Q}_{\rm tr,R}\equiv J^{Q}_{\rm tr,ac}$, must be equal to zero in order to agree with Eq. (\[net\]). Interestingly, one can check by means of the explicit calculations that the adiabatically pumped heat current $J^{Q}_{\rm tr,ac}$ is zero even if one allows $\Gamma_L$ and $\Gamma_R$ to be different. Moreover, we verified that the magnitude of the pumped heat current depends on the chosen combination of Pauli matrices (see Appendix \[linearr\]). The maximum pumping for the protocol of Eq. (\[xb\]) corresponds to $\mathcal{H}_{\rm c,\alpha}$ containing $\hat{\tau}_L=\hat{\sigma}_x$ and $\hat{\tau}_R=\hat{\sigma}_z$, as in Eq. (\[qcont\]). As a matter of fact, for the other two combinations ($\hat{\tau}_L=\hat{\sigma}_x$, $\hat{\tau}_R=\hat{\sigma}_y$; and $\hat{\tau}_L=\hat{\sigma}_y$, $\hat{\tau}_R=\hat{\sigma}_z$) one obtains half the magnitude. We now turn to analyze the geometric properties, which can be fully characterized by the vector potentials $\vec{A}^{A}(\vec{B})$ and $\tilde{\vec{A}}^{S}(\vec{B})$, entering Eqs. (\[w-a\]) and (\[q-a\]). These vectors are represented with arrows in the parameter space in Fig. \[figcont\]. In Fig. \[figcont\] we show several paths, plotted in blue, corresponding to the protocol of Eq. (\[xb\]) with different relative phases $\phi$.
This provides a visual representation of the magnitude of $Q_{\rm tr, ac}$ and the two types of geometric components of $W$. In all cases, the red arrows represent the vector $\vec{A}^A(\vec{B})$ along the path, while the green arrows represent the vector potential $\tilde{\vec{A}}^S(\vec{B})$ along the same protocol (note that $\tilde{\vec{A}}^{S}(\vec{B})$ is inherently associated with the protocol and cannot be defined outside it). The latter vectors follow the circulation of the path. Thus, they lead to a positive non-vanishing contribution to $W$ for all the values of $\phi$. In contrast, the vectors $\vec{A}^{A}(\vec{B})$ are in general opposite to the circulation of the path along some segments. In particular, for trajectories like the ones corresponding to $\phi=n \pi$, they are parallel to the circulation along half of the path and antiparallel in the other half, leading to a vanishing result of the integral. ![Vectors $\vec{A}^{A}$ and $\tilde{\vec{A}}^{S}$. Black and red arrows represent the vector $\vec{A}_3^{A}(\vec{B})\equiv \left( \Lambda_{3,1}^A(\vec{B}), \Lambda_{3,2}^A(\vec{B}) \right)$ in the parameter space, while the green arrows represent the vector $\tilde{\vec{A}}^{S}$ defined in Eq. (\[atildemu\]). The blue line is the closed path corresponding to the driving protocol in Eq. (\[xb\]) with $B_{x,0}=B_{z,0}=0.2k_{\rm B}T$, $B_{x,1}=B_{z,1}=0.1k_{\rm B}T$. The other parameters are $\Gamma_{\rm L}=\Gamma_{\rm R}=0.2$ and $\epsilon_{\rm C}=100k_{\rm B}T$; they define the spectral properties of the bosonic baths as indicated in Eq. (\[ombath\]).[]{data-label="figcont"}](fig3.pdf){width="\columnwidth"} In Fig. \[fig:currlrs\] we plot the adiabatically pumped heat $Q_{\rm tr,ac}$ (black curve) as a function of the phase lag $\phi$ in the weak pumping limit. The latter corresponds to considering values of $B_{x,1}$ and $B_{z,1}$ small enough so that $\oint \vec{A}^A_3 \cdot d \vec{B}$ in Eqs. (\[w-a\]) and (\[q-a\]) is proportional to the area, in the parameter space, enclosed by the closed contour defining the protocol. Indeed, using Green’s theorem, these integrals can be written as a surface integral of the derivatives of $\vec{A}^A_3$ with respect to $\vec{B}$. When $B_{x,1}$ and $B_{z,1}$ are small, such derivatives do not depend on $\vec{B}$ and can be taken outside the integral. Accordingly, as shown in Fig. \[fig:currlrs\], the pumped heat (black curve) behaves as a sine function of $\phi$, which vanishes at $\phi=0$. In particular, we note that heat is extracted from the reservoir R when $\phi$ is between 0 and $\pi$ and injected for $\pi < \phi < 2\pi$. The dependence of the total work $W$ developed by the ac sources with respect to the phase lag $\phi$ is also plotted in Fig. \[fig:currlrs\] (red curve) using the same parameters as for the pumped heat. We notice that $W$ is finite over the whole range of $\phi$, behaving like a cosine function with a vertical offset; hence it is non-vanishing for all values of $\phi$. ![Adiabatically pumped heat $Q_{\rm tr,ac}$ and total work $W$ versus the phase lag $\phi$ in the weak pumping limit for $\Delta T=0$. Same parameters as in Fig. \[figcont\]. []{data-label="fig:currlrs"}](fig6.pdf){width="0.9\columnwidth"} ![Top panel: Pumped heat and work versus the phase difference between adiabatically driven system parameters for $k_{\rm B}T=0.01\epsilon_{\rm C}$. Bottom panel: normalized pumped heat currents flowing in the left and right leads for $\phi=\pi/2$.
We have used the following parameters: $\Gamma_L=\Gamma_R=1/5$, $B_{z,0}=0.06\epsilon_{\rm C}$ ($B_{z,0}=0.04\epsilon_{\rm C}$ for the dashed lines in the bottom panel), $B_{x,0}=0.03\epsilon_{\rm C}$, $B_{x,1}= B_{z,1}=0.07 \epsilon_{\rm C}$, and $\Delta T=0$.[]{data-label="fig:linres"}](fig4.pdf){width="\columnwidth"} \[srpump\] ![Pumped heat and work versus reference temperature $T$. Inset: efficiency of a heat pump for $\Delta T=0$ as a function of $k_{\rm B}T$. Same parameters as in Fig. \[fig:linres\] for the solid curves.[]{data-label="fig:linresT"}](fig5.pdf){width="\columnwidth"} In what follows, we show some results for the strong pumping regime, corresponding to larger amplitudes of $B_{x,1}$ and $B_{z,1}$. In the top panel of Fig. \[fig:linres\], we plot the heat pumped and the work performed in a period by the ac source as functions of the phase lag $\phi$. As in the case of weak pumping previously analyzed, the pumped heat as well as the work performed by the ac sources are equal to zero at $\phi=0$ and $\pi$, since the contour has no area (see Fig. \[figcont\]). For other parameters, it is difficult to make a simple argument to explain in which direction the heat is pumped. In fact, we see that $Q_{\rm tr,ac}$ changes sign many times between $\phi=0$ and $\phi=2\pi$, whereas $W$ shows multiple positive peaks. In the bottom panel of Fig. \[fig:linres\] we plot the pumped heat in the absence of thermal bias as a function of temperature. For a suitable choice of parameters (corresponding to the solid curves), the direction of the adiabatic heat flow can be reversed simply by increasing the temperature of the reservoirs. In Fig. \[fig:linresT\] we plot the variation of the heat pumped and the work performed by the ac source, namely $Q_{\rm tr,ac}$ and $W$, as functions of the temperature $T$. We note that $W$ is always positive, as expected, and is non-monotonic (displaying a maximum). The data for $Q_{\rm tr,ac}$ are the same as in the bottom panel of Fig. \[fig:linres\], but plotted over a larger range of temperatures. $Q_{\rm tr,ac}$ is non-monotonic too and changes sign, going from negative values for small $T$ to positive values at around $k_BT=0.02\epsilon_{\rm C}$. The inset of Fig. \[fig:linresT\] shows the efficiency $\eta^{\rm (pump)}$, defined in Eq. (\[nobiascop\]), of the system operated as a heat pump as a function of $T$. The non-monotonic behavior simply reflects the fact that, in the strong pumping regime, the heat currents change sign at around $k_BT=0.02\epsilon_{\rm C}$, as shown in Fig. \[fig:linres\]. ![Coefficient of performance for refrigeration (black dashed curve for the absolute value and red curve for the value normalized to the Carnot COP) versus $\Delta T$, for $\hbar \Omega= k_{\rm B}T/100$, and versus $\Omega$ (in the inset), for $\Delta T=T/500 $. We use the following parameters: $\Gamma_{\rm L}=\Gamma_{\rm R}=0.2$, $B_{z,1}=10k_{\rm B}T$, $B_{x,0}=20k_{\rm B}T$, $B_{x,1} = 30k_{\rm B}T$, $B_{z,0}=7 k_{\rm B}T$, $\epsilon_{\rm C}=120 k_{\rm B}T$, $\phi=\pi/2$.[]{data-label="fig:heatpmpcopqbtdt2"}](fig7.pdf){width="0.9\columnwidth"} Finally, in Fig. \[fig:heatpmpcopqbtdt2\] we assess the performance of the driven q-bit as a refrigerator which removes heat from the cold reservoir ($R$) even in the presence of a positive thermal bias $\Delta T$, i.e. for $T_R<T_L$. Given this temperature bias, we focus on a protocol with $\phi=\pi/2$ and the same driving parameters as in Fig.
\[fig:currlrs\], in which case we already know from the analysis of that figure that heat is pumped out of the coldest reservoir and that the heat pumped at zero bias is maximal. We plot the COP $\eta^{\rm(fr)}$, defined in Eq. (\[effx\_fr\]), as a black dashed curve, and the normalized COP $\eta^{\rm(fr)}/\eta_{\rm C}^{\rm (fr)}$ (red curve), as functions of $\Delta T$, where $\eta_{\rm C}^{\rm (fr)} = T/ \Delta T$ is the Carnot COP. Starting from $\Delta T=0$, where $\eta^{\rm(fr)}$ is roughly equal to 1.1, the plot shows that $\eta^{\rm(fr)}$ decreases monotonically with $\Delta T$. This behavior can be understood by recalling that the refrigeration mode results from a competition between the heat flow induced by the temperature difference and the heat pumped against the thermal bias. In fact, $Q_{\rm tr}$ is made up of two components: i) the component $Q_{ \Delta T}=2 \pi J_{{\rm tr},\Delta T}^Q/\Omega $, which is the heat flowing from the hot to the cold reservoir during one period, therefore entering the reservoir $R$ ($Q_{\Delta T}>0$); this component increases linearly with $\Delta T$; and ii) $Q_{\rm tr, ac}$, which is the heat pumped out of the cold reservoir $R$ ($Q_{\rm tr, ac}<0$) and is independent of $\Delta T$. Therefore $Q_{\rm tr}$ remains negative as long as $Q_{ \Delta T}$ is not large enough to compensate $Q_{\rm tr, ac}$. This occurs at $\Delta T\simeq 0.19 \;T$, where the total transported heat $Q_{\rm tr}$ vanishes, i.e. the thermal machine is no longer a refrigerator (a further increase of $\Delta T$ leads to a sign reversal of the heat current). On the other hand, the ratio $\eta^{\rm(fr)}/\eta_{\rm C}^{\rm (fr)}$ (red curve) is bell-shaped, since this ratio equals $\eta^{\rm(fr)}\,\Delta T/T$, which vanishes both at $\Delta T=0$ and at the bias where $Q_{\rm tr}$ (and hence $\eta^{\rm(fr)}$) vanishes. In the inset of Fig. \[fig:heatpmpcopqbtdt2\] we plot the normalized COP as a function of the inverse of the driving frequency $\Omega$. Since $Q_{ \Delta T} \propto \Omega^{-1}$, increasing the frequency – within the adiabatic regime – favors the pumping component $Q_{\rm tr,ac}$ relative to $Q_{\Delta T}$. Notice, however, that by increasing the frequency the dissipative component represented by $\tilde{\vec{A}}$ in Eq. (\[w-a\]) becomes more detrimental to the efficiency. There is thus a compromise between the two effects and an optimal frequency of operation. Driven quantum dot {#example-qdot} ------------------ ![Illustration of the quantum dot driven by a magnetic field and connected to electron reservoirs with different polarizations, represented by different orientations of the paraboloids. The hybridization strength is modified according to the magnetic field’s pointing direction. In (a) the electron hopping between the quantum dot and the right ($z$-polarized) reservoir is favored, as denoted by the thick arrow. In (b) the pointing direction of the magnetic field has changed to $x$ and now the quantum dot is more strongly coupled to the left reservoir. []{data-label="fig:qDotSchematic"}](fig8.pdf){width="\columnwidth"} In this case, the configuration consists of a central quantum dot driven by a time-dependent magnetic field and coupled to electron reservoirs with different polarizations.
For the quantum dot the Hamiltonian ${\cal H}_S$ reads $$\label{qdot} {\cal H}_S(t)= \; \Psi^{\dagger}_d \left[ V_g \; \hat{\sigma}_0- \vec{B}(t) \cdot \hat{\mbox{$\vec{\sigma}$}} \right] \Psi_d,$$ where $\Psi_d^{\dagger} = (d_{\uparrow}^{\dagger}, d_{\downarrow}^{\dagger})$ is a spinor related to the spin degrees of freedom of the electron in the quantum dot, while $d^{\dagger}_{\sigma}$ and $d_{\sigma}$ are respectively the creation and annihilation fermionic operators for these particles. The quantum dot contains two levels as a consequence of the Zeeman splitting introduced by the magnetic field. $\hat{\mbox{$\vec{\sigma}$}} = \left( \hat\sigma_x, \hat\sigma_y, \hat\sigma_z \right)$ is composed of the $2\times2$ Pauli matrices and $\hat{\sigma}_0$ is the identity, while $\vec{B}(t)= \left( B_x(t), B_y(t), B_z(t) \right)$ is the external time-periodic magnetic field and $V_g$ is a gate voltage, which rigidly shifts the energies of the two levels. The reservoirs are represented by systems of non-interacting fermions. The electrons in the $\alpha$ reservoir are spin-polarized along the magnetization $\vec{m}_{\alpha}$. The Hamiltonian ${\cal H}_{\alpha}$ which describes the reservoir reads $$\label{dqres} {\cal H}_{\alpha}=\sum_{k\alpha} \Psi_{k\alpha}^\dagger \left[ \varepsilon_{k \alpha} - \vec{ m}_{ \alpha} \cdot \hat{\mbox{$\vec{\sigma}$}} \right] \Psi_{k\alpha}, \;\;\; \alpha=L, R,$$ where $\Psi_{k\alpha}^\dagger= \left( c_{k\alpha, \uparrow}^\dagger, c_{k\alpha, \downarrow}^\dagger \right)$ are spinors composed by the fermionic creation/annihilation operators $c_{k\alpha, \sigma}^\dagger$ and $c_{k\alpha,\sigma}$. We assume that both reservoirs have chemical potential $\mu_L=\mu_R=0$. The coupling between the quantum dot and the reservoirs is represented by $$\label{qcontd} {\cal H}_{c,\alpha}=\sum_{k\alpha,\sigma =\uparrow, \downarrow}V_{k\alpha, \sigma } \left( c^{\dagger}_{k\alpha, \sigma} d_{\sigma} + d_{\sigma}^{\dagger} c_{k\alpha, \sigma} \right).$$ In order to solve the problem, it is convenient to change the basis of ${\cal H}_{\alpha}$ to the one where the quantization axis for the spin coincides with the direction of $\vec{m}_{\alpha}$. This is accomplished by the transformation $\left(c^{\dagger}_{k\alpha, \uparrow}, c^{\dagger}_{k\alpha, \downarrow} \right) = \hat{U}^{\alpha} \left(c^{\dagger}_{k\alpha, +}, c^{\dagger}_{k\alpha, - } \right)$. In the new basis the Hamiltonians for the reservoirs and the couplings read $$\label{dqres1} {\cal H}_{\alpha}=\sum_{k\alpha, s=\pm} c_{k\alpha,s}^\dagger \varepsilon_{k \alpha,s} c_{k\alpha,s}, \;\;\; \alpha=L, R,$$ and $$\label{qcontd1} {\cal H}_{c,\alpha}=\sum_{k\alpha,s=\pm,\sigma =\uparrow, \downarrow} v_{k_{\alpha}s, \sigma}\left( c^{\dagger}_{k\alpha, s} d_{\sigma} + H. c. \right),$$ with $ v_{k_{\alpha}s, \sigma}=U^{\alpha}_{s,\sigma} V_{k\alpha, \sigma }$. As discussed in Section \[ener-bal\], in order to have a non-vanishing pumping component we need to break spatial symmetry. We achieve this by considering different polarizations in the reservoirs. For concreteness, we consider the $L$ reservoir polarized along the positive $x$, and the $R$ one polarized along the positive $z$ direction. An illustration of the whole setup is sketched in Fig. \[fig:qDotSchematic\]. This device bears resemblance to the driven q-bit discussed in Section \[example-qubit\]. In fact, only the electrons with spins $z,\uparrow$ ($x, \uparrow$) can tunnel between the quantum dot and the $R$ ($L$) reservoir. 
Therefore, when the magnetic field polarizes the quantum dot along the positive $x$ direction, the tunneling of the electrons between the quantum dot and the $L$ reservoir is optimal, while the tunnel between the dot and the $R$ reservoir is optimal when the electron in the dot is polarized along the positive $z$ direction. The main difference between the present setup and the q-bit studied in Section \[example-qubit\] is the nature of the reservoirs, which is fermionic in the present case, while it is bosonic in the previous one. This difference is crucial from the technical point of view, because in the case of the quantum dot we will be able to solve the problem for arbitrary coupling between the driven system and the reservoirs. In addition, the quantum dot has a gate voltage, which moves its energy levels upwards or downwards in energy, thus tuning different parts of the spectrum of the quantum dot into the relevant transport window – $\sim k_B T$– around the chemical potential of the reservoirs. This ingredient can be used to improve the performance, as we will discuss in Section \[q-dot-results\]. Besides these differences, we expect the operation to be similar in both cases, at least within the regime where the coupling between the driven system and the reservoirs is very weak. The heat-engine operational mode in the present case could be practically realized by implementing the time-dependent magnetic field by means of a rotating classical magnetic moment. The dynamics of the latter realizes the load of the heat engine. In such a case, a pumped heat $Q_{\rm tr,ac}$ flowing in the direction of the heat current induced by the thermal bias, will generate a torque and exert work on the magnetic moment, akin to the spin torque induced by an electrical bias [@torque1; @torque2; @torque3; @torque4]. We will consider the same driving protocol as in the previous example, which is defined in Eq. (\[xb\]), without focusing on the detailed mechanism generating the magnetic field. As in the previous example, we will show results for the heat pump and refrigerator modes. ### Green’s function approach We can solve the problem exactly for arbitrary strength of the coupling between the quantum dot and the reservoirs by recourse to Green’s functions. We will use the equilibrium finite-temperature formalism to evaluate the frozen susceptibilities and compute the response functions from Eq. (\[lambda\]). This problem could be also exactly solved by recourse to the non-equilibrium Schwinger - Keldysh formalism in the Floquet representation and afterwards consider the expansion in small $\hbar \Omega$ and $\Delta T$ as in Refs. [@ludovico; @lilimos] arriving at the same results as the ones we present here. 
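Before summarizing the results, it is useful to keep in mind the structure of the frozen problem. A minimal numerical sketch (with illustrative parameter values and helper names of our own choosing) of the frozen dot Hamiltonian of Eq. (\[qdot\]) and of its instantaneous splitting along a two-parameter cycle of the form of Eq. (\[xb\]) reads:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_frozen(B, Vg=0.0):
    """Frozen dot Hamiltonian of Eq. (qdot) in the spin basis: Vg*1 - B.sigma."""
    return Vg * s0 - (B[0] * sx + B[1] * sy + B[2] * sz)

def B_protocol(t, Omega=1.0, phi=np.pi / 2,
               Bx0=0.4, Bx1=0.4, Bz0=0.4, Bz1=0.4):
    """Two-parameter driving: Bx = Bx0 + Bx1*cos(Omega*t + phi), Bz = Bz0 + Bz1*cos(Omega*t)."""
    return np.array([Bx0 + Bx1 * np.cos(Omega * t + phi),
                     0.0,
                     Bz0 + Bz1 * np.cos(Omega * t)])

# Instantaneous Zeeman splitting 2|B(t)| along one driving cycle; adiabaticity
# requires hbar*Omega to stay well below this gap at all times.
ts = np.linspace(0.0, 2 * np.pi, 201)
gaps = [np.ptp(np.linalg.eigvalsh(H_frozen(B_protocol(t)))) for t in ts]
print(min(gaps), max(gaps))
```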
We briefly summarize the results below and show some details on the calculations in Appendix \[aplam\], $$\begin{aligned} \label{lambdadot} \Lambda^A_{3,\ell}(\vec{B}) &=& -\frac{1}{h} \int d \varepsilon \frac{ d f(\varepsilon)}{d \varepsilon} \varepsilon \mbox{Tr} \left[ \hat{\Gamma}_{R} \hat{\rho}(\varepsilon) \hat{\sigma}_\ell \hat{\rho} (\varepsilon) \right] , \;\ell=1,2\nonumber\\ \Lambda^S_{\ell,\ell'}(\vec{B}) &=& -\frac{1}{h} \int d \varepsilon \frac{ d f(\varepsilon)}{d \varepsilon} \mbox{Tr} \left[ \hat{\sigma}_{\ell} \hat{\rho} (\varepsilon) \hat{\sigma}_{\ell'} \hat{\rho}(\varepsilon) \right],\; \ell,\ell'=1,2 \nonumber\\ \Lambda^S_{3,3}(\vec{B}) &=& -\frac{1}{h} \int d \varepsilon \frac{ d f(\varepsilon)}{d \varepsilon} \varepsilon^2 \mbox{Tr} \left[\hat{\Gamma}_{R} \hat{G}_t (\varepsilon) \hat{\Gamma}_L \hat{G}_t^{\dagger}(\varepsilon) \right],\end{aligned}$$ where $f(\varepsilon)=1/\left(e^{\varepsilon/(k_B T)} +1\right)$ is the Fermi-Dirac distribution function. We have also introduced the hybridization matrix $\hat{\Gamma}_{\alpha}$, with elements $$(\hat{\Gamma}_\alpha)_{\sigma,\sigma'} = 2 \pi \sum_{k \alpha, s=\pm} U^{\alpha}_{\sigma,s} U^{\alpha}_{\sigma^{\prime},s} |V_{k\alpha}|^2 \delta (\varepsilon -\varepsilon_{k\alpha,s}).$$ We consider $L$ ($R$) reservoirs fully polarized with spins along the positive $x$ ($z$) directions and a constant density of states. Thus, $\Gamma_{\alpha} \simeq \sum_{k \alpha} |V_{k\alpha}|^2 \delta (\varepsilon -\varepsilon_{k\alpha,+})$ and $\hat{\Gamma}_{\alpha} \simeq \Gamma_{\alpha} \hat{\tau}_{\alpha}$, with $$\label{gammatau} \hat{\tau}_L \equiv \frac{1}{2} \left( \hat{\sigma}_x + \hat{\sigma}_0\right), \;\;\;\; \hat{\tau}_R \equiv \frac{1}{2} \left( \hat{\sigma}_z+ \hat{\sigma}_0 \right).$$ The local density of states is described by the matrix $$\hat{\rho} (\varepsilon) = - 2 \mbox{Im}[\hat{G}_t(\vec{B},\varepsilon) ] = \hat{G}_t(\vec{B},\varepsilon) \hat{\Gamma}\left[ \hat{G}_t(\vec{B},\varepsilon) \right]^{\dagger},$$ which depends on the frozen Green’s function $$\label{gdot} \hat{G}_t(\vec{B},\varepsilon) = \left( \varepsilon - \vec{B}(t) \cdot \hat{\mbox{$\vec{\sigma}$}} + i \hat{\Gamma}/2 \right)^{-1},$$ with $\hat{\Gamma}= \hat{\Gamma}_L + \hat{\Gamma}_R$. In Eqs. (\[lambdadot\]) we have highlighted the symmetric or antisymmetric nature of the components in each case. The fact that the components $\Lambda_{3,\ell}(\vec{B})$ are purely antisymmetric while $\Lambda_{\ell,\ell^{\prime}}(\vec{B})$ are purely symmetric is a consequence of Onsager relations in combination with symmetry properties of the setup. These properties can be directly verified from the explicit calculations of Appendix \[aplam\]. The last component $\Lambda^S_{3,3}(\vec{B})$ is proportional to the thermal conductance. The symmetry properties of $\Lambda_{\mu,\nu}(\vec{B})$ are the same as in the q-bit example of Section \[example-qubit\]. Thus, the definitions of the vector potentials in the present case are the same as in Eq. (\[aas\]). ### Results {#q-dot-results} We carry out a similar analysis to the one for the q-bit example given in Section \[example-qubit\]. We consider the same two-parameter driving protocol as before, with $\vec{B}(t)=\left(B_x(t), 0, B_z(t)\right)$ given by Eq. (\[xb\]). As mentioned before, for the case of $V_g=0$ and weak coupling to the reservoirs, we expect a similar behavior to the case of the qubit. 
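The energy integrals in Eq. (\[lambdadot\]) are straightforward to evaluate numerically. The sketch below (our own helper names, illustrative parameters, $V_g=0$ and units with $\hbar=k_B=1$, so that $h=2\pi$) computes the antisymmetric component $\Lambda^A_{3,\ell}$ at a given point of the parameter space, with $\hat\sigma_1=\hat\sigma_x$ and $\hat\sigma_2=\hat\sigma_z$, the operators conjugate to the two driving fields:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
h, kT = 2 * np.pi, 1.0                      # hbar = k_B = 1, energies in units of k_B T

GammaL = GammaR = 0.4 * kT
GamL, GamR = GammaL * 0.5 * (sx + s0), GammaR * 0.5 * (sz + s0)   # Eq. (gammatau)
Gam = GamL + GamR

def G_frozen(B, eps):
    """Frozen retarded Green's function of Eq. (gdot) for B = (Bx, Bz)."""
    return np.linalg.inv(eps * s0 - (B[0] * sx + B[1] * sz) + 0.5j * Gam)

def dfde(eps):
    """Derivative of the Fermi function, df/de = -sech^2(e/2kT)/(4kT)."""
    return -1.0 / (4 * kT * np.cosh(eps / (2 * kT)) ** 2)

def Lambda_A_3l(B, ell, eps=np.linspace(-25 * kT, 25 * kT, 5001)):
    """Lambda^A_{3,ell} of Eq. (lambdadot), with ell = 1 (sigma_x) or 2 (sigma_z)."""
    sig = sx if ell == 1 else sz
    vals = []
    for e in eps:
        G = G_frozen(B, e)
        rho = G @ Gam @ G.conj().T          # local density-of-states matrix
        vals.append(dfde(e) * e * np.trace(GamR @ rho @ sig @ rho).real)
    return -np.trapz(vals, eps) / h

print(Lambda_A_3l((0.4 * kT, 0.4 * kT), ell=1))
```

Evaluating this quantity on a grid of $(B_x,B_z)$ produces vector-field maps of the type discussed below, and its line integral along the driving path gives the geometric (pumped) contribution.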
In Figure \[fig:pumping\] we present the pumped heat $Q_{\rm tr, ac}(\phi)$ and the work developed by the ac sources $W$ for $\Delta T=0$, as function of the driving phase difference $\phi$ between the two ac components of the magnetic field. As in the qubit case analyzed in Section \[example-qubit\], for small amplitudes of the driving, $Q_{\rm tr, ac}$ is proportional to the area enclosed by the contour defined by the protocol. For this reason, the pumped heat behaves as $\propto \sin(\phi)$ and the generated work as $\propto \cos(\phi)$ plus a constant. These functions are the same as in the case of the driven qubit shown in Fig. \[fig:currlrs\]. For larger values of the driving amplitude the pumped heat departs from this behavior. However, $Q_{\rm tr, ac}(\phi)$ vanishes for $\phi=0,\pi$ for any value of $B_{x,1}=B_{z,1}$. ![ Pumped heat $Q_{\rm tr, ac} = Q_{\rm tr}$ (upper panel) and work done by the ac sources $W$ (lower panel) for $\Delta T=0$ as functions of the phase difference in the protocol defined by $B_x(t)= B_{x,0}+B_{x,1} \cos(\Omega t + \phi)$, $B_z(t)=B_{z,0}+B_{z,1} \cos (\Omega t)$ with $B_{x,0} = B_{z,0}=0.4 k_B T$ and $B_{x,1}=B_{z,1}=B_1 k_B T $. $\Gamma_L=\Gamma_R=0.4 k_B T$ and $\hbar \Omega = k_B T/800$. The plot with $B_1=0.1$ is multiplied by a factor $20$ in order to be shown in the same scale.[]{data-label="fig:pumping"}](fig9.pdf){width="0.9\columnwidth"} In Fig. \[fig:weak\] we further explore the comparison between the driven quantum dot and the driven q-bit. In particular, we show the behavior of the pumped heat as a function of the coupling to the reservoirs, assuming $\Gamma_L=\Gamma_R=\Gamma$ and the same parameters and driving protocol of Fig. \[fig:currlrs\]. We can verify that as the latter parameter approaches the limit $\Gamma \rightarrow 0$, the value of the pumped heat of the quantum dot approaches the one of the qubit case shown in Fig. \[fig:currlrs\]. There is some quantitative difference, which can be traced back to the fact that the type of couplings are not exactly the same (notice the matrix elements entering the couplings of the quantum dot are those of Eq. (\[gammatau\]), while in the qubit we have considered $\hat{\sigma}_{x,z}$). We see that the strength of the coupling has a significant impact on the behavior of the pumped heat. For the present parameters, we observe an inversion in the direction of the pumped heat as the coupling increases and overcomes $\Gamma \sim |\vec{B}|$, at which the width of the levels of the quantum dot becomes comparable to the energy difference between them. ![Pumped heat $Q_{\rm tr, ac} = Q_{\rm tr}$ for the quantum dot with the same parameters as the q-bit operating with the protocol of Eq. (\[xb\]) shown in Fig. \[fig:currlrs\] with $\phi=\pi/2$. []{data-label="fig:weak"}](fig10.pdf){width="\columnwidth"} ![image](fig11.pdf){width="\textwidth"} ![Vector fields $\vec{A}^A_3(\vec{B})$ (cyan) and $\tilde{\vec{A}}^S(\vec{B})$ (red) over a closed path (solid blue curve) for the configuration shown in the lower panel of Fig. \[fig:flor\] ($\Gamma_R=0.1 \Gamma_L$). The driving protocol defining the path is $B_x(t)= B_{x,0}+B_{x,1} \cos(\Omega t + \phi)$, $B_z(t)=B_{z,0}+B_{z,1} \cos (\Omega t)$ with $B_{x,0}=1.5 \Gamma_R,\; B_{z,0}= \Gamma_R$, $B_{x,1}=B_{z,1}= \Gamma_R$, $\phi= \pi/2$. 
The black arrows represent $\vec{A}_3(\vec{B})$ outside the defined protocol.[]{data-label="fig:protocol"}](fig12.pdf){width="\columnwidth"} ![Coefficient of performance for refrigeration (absolute in dashed black and normalized to the Carnot value in red) versus $\Delta T$ for the protocol of Fig. \[fig:protocol\] for $\hbar \Omega= \Gamma_R/200$ Inset: Normalized coefficient of performance for refrigeration as a function of $\hbar \Omega$ for $\Delta T = T/150 $.[]{data-label="fig:eff_dT"}](fig13.pdf){width="\columnwidth"} We now focus on the properties in the operation of the quantum-dot machine that are different from the weakly coupled driven q-bit. To this end, we further analyze the structure of the vector potentials $\vec{A}_{\mu}^{S/A}(\vec{B})$ and $\tilde\vec{A}^{S/A}(\vec{B})$ in Eq. (\[aas\]) with the tensor $\Lambda_{\mu,\nu}(\vec{B})$ of Eq. (\[lambdadot\]). The vector map for $\vec{A}^A_3(\vec{B})$ in the parameter space for a given temperature $T$ is shown in Fig. \[fig:flor\]. This representation is useful to visualize the symmetries of the setup and to select the driving protocol that maximizes the contour integral $\oint \vec{A}^A_3(\vec{B}) \cdot d\vec{B}$. In the left panel the quantum dot is contacted with the same strength to both reservoirs ($\Gamma_L=\Gamma_R$), $L$ being polarized along positive $x$ and $R$ along positive $z$ direction, as indicated in the sketch of Fig. \[fig:qDotSchematic\]. In the middle panel, the contact is stronger to $L$ than to $R$ ($\Gamma_R=0.1 \Gamma_L$). Consequently, we can visualize a higher intensity of the field $\vec{A}_3^A$ along the $B_x$ than along the $B_z$ direction. Both left and middle plots have $V_g=0$, in which case the Hamiltonian of Eq. (\[qdot\]) is symmetric under the simultaneous transformations $\Psi^{\dagger}_d \rightarrow \Psi_d$ and $\vec{B} \rightarrow -\vec{B}$. The first one is a particle-hole transformation, under which the heat current changes the sign. Consequently, the field maps of Fig. \[fig:flor\] present the symmetry $\vec{A}^A_3(\vec{B})=-\vec{A}_3^A(-\vec{B})$. In the right panel, we can visualize that the breaking of the particle-hole symmetry by a gate voltage introduces a strong asymmetry in the vector field. With the picture of Fig. \[fig:flor\] in mind, we can readily design a closed trajectory that optimizes pumping. The latter corresponds to a path that goes parallel to the vector field within the region where its intensity is high, and closes antiparallel to the vector field in a very low-intensity region. An example of such a trajectory is shown in Fig. \[fig:protocol\]. The corresponding vectors $\tilde\vec{A}^S(\vec{B})$ along the trajectory are also shown in cyan. Trajectories leading to high efficiencies of the machine would have as small dissipation as possible, in addition to high values of heat pumping. While the optimization of the pumping can be easily achieved by recourse to the vector field representation $\vec{A}^A_3(\vec{B})$, it is not easy to optimize a trajectory to decrease the integral over $\tilde\vec{A}^S(\vec{B})$. However, we know that this quantity can be reduced by decreasing the pumping frequency $\Omega$. In Fig. \[fig:eff\_dT\] we illustrate the behavior of the COP of the driven quantum dot operating as a refrigerator. Overall, this quantity follows a similar behavior as a function of $\Delta T/T$ and $\Omega$ as the one of the qubit (see Fig. \[fig:heatpmpcopqbtdt2\]). Therefore, most of the comments and remarks presented in the analysis of Fig. 
\[fig:heatpmpcopqbtdt2\] apply also here. However, it is several orders of magnitude higher in the present case, achieving values as large as $14\;\%$ of $\eta_{\rm C}^{\rm (fr)}$. The key to this improvement is the selection of an appropriate pumping protocol, taking advantage of the extra features introduced by the existence of the gate voltage $V_g$ in the present problem. ![Vector fields $\vec{A}^S_1(\vec{B})$ (left) and $\vec{A}^S_2(\vec{B})$ (right) following Eqs. (\[aas\]) and (\[lambdadot\]), for the parameters of the right panel of Fig. \[fig:flor\]. []{data-label="fig:mag"}](fig14.pdf){width="\columnwidth"} ![Components of the geometric magnetization $m_{1,2}^{\rm geo}$, defined in Eq. (\[n12\]) as functions of the phase-lag $\phi$ corresponding to paths of the form $B_x(t)= B_{x,0}+B_{x,1} \cos(\Omega t + \phi)$, $B_z(t)=B_{z,0}+B_{z,1} \cos (\Omega t)$ with $ B_{x,0}=1.5 \Gamma_R,\; B_{z,0}= \Gamma_R$, $B_{x,1}=B_{z,1}= \Gamma_R$, on the vector fields of Fig. \[fig:mag\]. []{data-label="fig:magphi"}](fig15.pdf){width="\columnwidth"} We close this section by analyzing the geometric component of the first-order adiabatic reaction force defined in Eq. (\[avforceline\]). In the present problem, the latter coincides with the magnetic moment of the quantum dot. For $\Delta T=0$, the magnetic moment of the quantum dot is given by $$\begin{aligned} \label{n12} m_{\ell} &=&\frac{\Omega}{2\pi} \int_0^{2 \pi/\Omega} dt \langle \Psi^{\dagger}_d \hat{\sigma}_{\ell} \Psi_d \rangle (t) = m^{\rm BO}_{\ell}+m^{\rm geo}_{\ell}, \nonumber \\ m^{\rm geo}_{\ell}&=& \frac{\Omega}{2\pi} \oint \vec{A}^S_{\ell}(\vec{B}) \cdot d \vec{B},\end{aligned}$$ with $\hat{\sigma}_{x,z}$ for $\ell \equiv 1,2$, respectively. Here, $m^{\rm BO}_{\ell}$ is the average over one period of the instantaneous magnetization corresponding to the equilibrium frozen Hamiltonian, while $m^{\rm geo}_{\ell}$ is the geometric component, corresponding to the first-order adiabatic reaction force of Eq. (\[avforceline\]). The vectors $\vec{A}^S_{\ell}(\vec{B})$ are calculated from Eq. (\[lambdadot\]) as defined in Eq. (\[aas\]). Interestingly, the symmetric component of the thermal geometric tensor, which defines the dissipation, is directly related in the present problem to a local physical quantity, which is the quantum dot geometric magnetization [@pablo]. The latter is experimentally accessible. In fact, notice that the component $m^{\rm BO}_{\ell}$ does not explicitly depend on the driving frequency, while the second term has an explicit linear dependence on $\Omega$. Therefore, in a concrete experimental measurement of the quantum dot magnetization, both components should be distinguishable from one another. The associated vector fields $\vec{A}^S_{\ell}(\vec{B})$ are shown in Fig. \[fig:mag\] for configurations with stronger coupling to the $L$ ($x$-polarized) reservoir than to the $R$ ($z$-polarized) one and a finite gate voltage $V_g$, with the same values of the parameters as in the right panel of Fig. \[fig:flor\]. In this representation, we can visualize a higher intensity of the fields along $B_x, B_z >0$ relative to $B_x, B_z <0$, as a consequence of the polarization of the reservoirs along the positive $x$ and $z$ axes. The amplitudes of $\vec{A}^S_1(\vec{B})$, shown in the left panel, are larger than those of $\vec{A}^S_2(\vec{B})$, shown in the right panel, due to the larger coupling to the reservoir polarized along $x$.
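A minimal sketch of how the geometric magnetization of Eq. (\[n12\]) can be estimated numerically is the following; it assumes, following Eq. (\[aas\]), that the components of $\vec{A}^S_{\ell}$ in the $(B_x,B_z)$ plane are given by $\Lambda^S_{\ell,1}$ and $\Lambda^S_{\ell,2}$ of Eq. (\[lambdadot\]), and it uses illustrative parameter values, $V_g=0$ and helper names of our own choosing:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
h, kT = 2 * np.pi, 1.0                         # hbar = k_B = 1
GammaR = 0.1 * kT
GammaL = 10 * GammaR                           # Gamma_R = 0.1 Gamma_L, as in the text
Gam = GammaL * 0.5 * (sx + s0) + GammaR * 0.5 * (sz + s0)
sig = {1: sx, 2: sz}

def rho_frozen(B, eps):
    """Frozen local density-of-states matrix built from Eq. (gdot)."""
    G = np.linalg.inv(eps * s0 - (B[0] * sx + B[1] * sz) + 0.5j * Gam)
    return G @ Gam @ G.conj().T

def Lambda_S(B, l, lp, eps=np.linspace(-25 * kT, 25 * kT, 1501)):
    """Symmetric component Lambda^S_{l,lp} of Eq. (lambdadot)."""
    vals = []
    for e in eps:
        r = rho_frozen(B, e)
        vals.append((-1.0 / (4 * kT * np.cosh(e / (2 * kT)) ** 2)) *
                    np.trace(sig[l] @ r @ sig[lp] @ r).real)
    return -np.trapz(vals, eps) / h

# Closed path as in the protocol of Fig. [fig:protocol]:
Omega, phi = 1.0, np.pi / 2
ts = np.linspace(0.0, 2 * np.pi / Omega, 121)
Bx = 1.5 * GammaR + GammaR * np.cos(Omega * ts + phi)
Bz = GammaR + GammaR * np.cos(Omega * ts)

def m_geo(l):
    """m^geo_l = (Omega/2pi) * closed line integral of A^S_l . dB, Eq. (n12)."""
    A1 = np.array([Lambda_S((bx, bz), l, 1) for bx, bz in zip(Bx, Bz)])
    A2 = np.array([Lambda_S((bx, bz), l, 2) for bx, bz in zip(Bx, Bz)])
    integrand = A1 * np.gradient(Bx, ts) + A2 * np.gradient(Bz, ts)
    return Omega / (2 * np.pi) * np.trapz(integrand, ts)

print(m_geo(1), m_geo(2))
```

Apart from the overall prefactor $\Omega/2\pi$, the contour integral itself depends only on the path in parameter space, which is the geometric character alluded to above.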
The result of calculating the integrals over closed trajectories with different phase lags $\phi$ between the components $B_x$ and $B_z$ is shown in Fig. \[fig:magphi\]. As in the case of the pumped heat, both components of the magnetization vanish at $\phi=0,\pi$. Summary and conclusions {#conclusions} ======================= We have presented a general description of the geometrical properties of quantum thermal machines under the effect of adiabatic periodic driving and a small thermal bias due to the contact to reservoirs at different temperatures. The cyclic time-dependence is introduced via classical variables, varying slowly in time, that enter the quantum Hamiltonian of the system. We show that the operation of the thermal machine, consisting of a few-level quantum system, is fully characterized by the thermal tensor $\Lambda_{\mu,\nu}$ defined in Section \[geo-section\]. The formal derivation of this tensor is obtained by means of the adiabatic linear response theory complemented by Luttinger’s representation of the thermal bias. The symmetric component of $\Lambda_{\mu,\nu}$ characterizes the total rate of entropy production, thus controlling the dissipation of all the sources involved in the operation of the machine. When the system is driven by two or more periodically-varying parameters, it is possible to obtain pumping of heat between reservoirs, even in the absence of a temperature bias. The heat pumped, the work performed on the system, and the dissipated power can be described by means of vector fields defined through the thermal tensor. In particular, the pumped heat by the driving and the work performed can be expressed in a purely geometric form as line integrals of those vector fields over the closed paths which represent the driving cycles in the parameter space. In the presence of a thermal bias, these two quantities allow the characterization of a thermal machine which realizes heat-to-work conversion. We have illustrated these ideas using two paradigmatic quantum systems coupled to two thermal reservoirs. The first example consists of a qubit, whose energy levels and inter-level tunneling depend harmonically on time, attached to two bosonic reservoirs kept at different temperatures. The second example is a quantum dot coupled to electronic reservoirs and driven by a harmonically time-dependent and rotating magnetic field. The two examples are solved with different techniques, while two driving parameters are assumed. In the case of the qubit we rely on the master equation approach, valid for weak coupling to the reservoirs, while in the case of the quantum dot we solve the problem exactly for arbitrary coupling by recourse to linear response and Green’s function formalisms. The two problems are very similar qualitatively and quantitatively when the driven system is weakly coupled to the reservoirs. In the two cases, we have calculated the vector fields responsible for the geometric characterization of the systems as thermal machines. We have computed the heat pumped and the work as functions of: i) phase lag between the two driving parameters, ii) the reference temperature, and iii) the coupling between system and reservoir (for the second example). The efficiency of the thermal machines has been analyzed in terms of the temperature difference between reservoirs, the average temperature, and the frequency of the driving parameters in both cases. 
Finally, in the second example, we have shown how the representation of the pumped heat by means of vector fields can be used to identify the cycles that maximize it, thus improving the performance of a thermal machine. Acknowledgements ================ We acknowledge support from PIP-2015-CONICET, PICT-2014, PICT-2018, Argentina (PTA and LA), Simons-ICTP-Trieste associateship and the Alexander von Humboldt Foundation, Germany (LA), Deutsche Forschungsgemeinschaft through CRC 183 (FvO), ICTP Federation agreement (PTA), grant CNR-CONICET (BB, LA and FT), the Oxford Martin Programme (RF). We thank the Dahlem Center for Complex Quantum Systems, Berlin and the International Center for Theoretical Physics, Trieste, for hospitality. Appendices ========== Luttinger theory of thermal transport {#Lutt-theory} ===================================== The idea of expressing the thermal difference in a Hamiltonian language was originally introduced by Luttinger [@luttinger]. Here, we follow the revised version of Luttinger’s theory presented by Tatara in Ref. , which we briefly review and adapt in order to deal with a Hamiltonian containing a tunneling contact between the central system and the reservoirs at which the thermal difference is applied. Luttinger’s theory is formulated in the continuum starting from a Hamiltonian ${\cal H}_{\rm E}(t)= \int \mathrm{d}{\bf r} h({\bf r}) \psi({\bf r}, t) $, where $ \psi({\bf r}, t) $ is a “gravitational” potential. Gradients of the latter induce energy flows ${\bf j}^E$ akin to the electrical currents induced by gradients of the electric potential. Such energy flows obey a continuity equation $\dot{ h}({\bf r}) =- \partial_{\bf r} \cdot {\bf j}^E({\bf r})$ as a consequence of energy conservation, which motivates the definition $$\label{lut} {\cal H}_{\rm Lutt}(t) = \int_{-\infty}^t \mathrm{d}t^{\prime} \int \mathrm{d}{\bf r} \; {\bf j}^E(t^{\prime} ) \cdot \partial_{\bf r} \psi({\bf r}, t),$$ with $ \partial_{\bf r} \psi({\bf r}, t)= \partial_{\bf r} T/T$. Such formulation is consistent with the rate of change of the entropy production, $$\dot{S}= - \int \mathrm{d}{\bf r} \frac{1}{T} \partial_{\bf r}\cdot \langle {\bf j}^E(t) \rangle= - \int \mathrm{d}{\bf r} \langle{\bf j}^E(t)\rangle \cdot \frac{\partial_{\bf r} T}{T^2},$$ through the relation $\langle {\cal H}_{\rm Lutt}(t) \rangle= T S$. Ref.  considers the alternative Hamiltonian $$\label{at} {\cal H}_{ A_T}(t) = - \int d{\bf r} \; {\bf j}^E(t^{\prime} ) \cdot \vec{A}_T({\bf r},t).$$ The Hamiltonians of Eqs. (\[lut\]) and (\[at\]) coincide in the long-time average. In fact, $\int_{-\infty}^{+\infty} \mathrm{d}t \; {\cal H}_{\rm Lutt}(t)= \int_{-\infty}^{+\infty} \mathrm{d}t \; {\cal H}_{A_T}(t)$ with $$\label{adel} \partial_t \vec{A}_T({\bf r},t)= \partial_{\bf r} \psi({\bf r}, t)= \partial_{\bf r} T/T.$$ In this way, $\vec{A}_T({\bf r},t)$ and $\psi({\bf r}, t)$ behave, respectively, in a similar way as the vector and scalar potentials of electromagnetism. Identities satisfied by the adiabatic susceptibilities {#relat} ====================================================== In order to prove the identities of Eq. 
(\[ident\]), satisfied by the adiabatic susceptibilities for the thermal driving corresponding to the [*frozen*]{} Hamiltonian ${\cal H}_t$, we proceed by writing the following equation satisfied by the current operators, $${\cal J}^E_L(t) + {\cal J}^E_R(t) = \dot{\cal H}_S(t),$$ where $\dot{\cal H}_S$ encloses all the terms of ${\cal H}_t$ corresponding to the central system and contacts between system and reservoirs. All the operators are expressed in Heisenberg representation with respect to ${\cal H}_t$ $$\label{eqlr} \sum_{\alpha, \beta=L,R} \chi_t^{\rm ad} \left[ {\cal J}^E_{\alpha} , {\cal J}^E_{\beta}\right] = \chi_t^{\rm ad} \left[ \dot{\cal H}_S, \dot{\cal H}_S\right]=0.$$ In order to prove that the right-hand side (rhs) of this equation is zero we start from the definition of the adiabatic susceptibility, $$\label{chiadlim} \chi_t^{\rm ad} \left[ \dot{\cal H}_S, \dot{\cal H}_S\right]=-i \lim_{\omega \rightarrow 0} \partial_{\omega} \chi_{\dot{S},\dot{S}}(\omega) = \lim_{\omega \rightarrow 0} \frac{\mbox{Im}[\chi_{\dot{S},\dot{S}}(\omega)]}{\omega},$$ being $\chi_{\dot{S},\dot{S}}(\omega)$ the Fourier transform of the susceptibility $\chi_{\dot{S},\dot{S}}(t-t^{\prime})=-i\theta(t-t^{\prime}) \langle \left[ \dot{\cal H}_S(t), \dot{\cal H}_S(t^{\prime})\right]\rangle_t$. Since all the mean values correspond to the equilibrium frozen Hamiltonian ${\cal H}_t$, we have $\chi_{\dot{S},\dot{S}}(t-t^{\prime})= \partial_t \partial_{t^{\prime}} \chi_{S,S}(t-t^{\prime})$, being $\chi_{S,S}(t-t^{\prime})=-i\theta(t-t^{\prime}) \langle \left[ {\cal H}_S(t), {\cal H}_S(t^{\prime})\right]\rangle_t$. Hence, $$\label{chids} \chi_{\dot{S},\dot{S}}(\omega) = - \omega^2 \chi_{S,S}(\omega).$$ For a system with a bounded spectrum, $\chi_t^{\rm ad} \left[ \dot{\cal H}_S, \dot{\cal H}_S\right] =0$ when the limit $\omega \rightarrow 0$ is evaluated in Eq. (\[chiadlim\]). In fact, introducing the Lehmann representation in $\chi_{S,S}(\omega)$ and using (\[chids\]) and (\[chiadlim\]) we get $$\begin{aligned} \label{chissd} & & \chi_t^{\rm ad} \left[ \dot{\cal H}_S, \dot{\cal H}_S\right] =\pi \lim_{\omega \rightarrow 0} \omega^2 \sum_{n,m } p_m |\langle m|H_S | n\rangle|^2\nonumber \\ & &\;\;\;\;\;\;\; \times \left[\delta(\omega-(\varepsilon_m-\varepsilon_n))- \delta(\omega-(\varepsilon_n-\varepsilon_m))\right] \end{aligned}$$ with ${\cal H}_t |m\rangle = \varepsilon_m |m\rangle$. In the latter equation $ |\langle m|H_S | n\rangle|^2$ is finite for a system with a bounded spectrum, while $\sum_{n,m }\left[\delta(\omega-(\varepsilon_m-\varepsilon_n))- \delta(\omega-(\varepsilon_n-\varepsilon_m))\right] $ is the density of states for the excitations of the full system. Typically, the latter function is gapped or has a power-law behavior $\sim |\omega|^{\gamma}$ with $\gamma >0$, which proves the rhs of Eq. (\[eqlr\]). Using Eq. (\[eqlr\]), we get the identities of Eq. (\[ident\]). A similar argument can be elaborated for the identities related to the response functions combining energy currents and ac-driving forces. In that case, we can prove $$\begin{aligned} & & \sum_{\alpha=L,R} \chi_t^{\rm ad} \left[ {\cal J}^E_{\alpha} , {\cal F}_l\right] = \chi_t^{\rm ad} \left[ \dot{\cal H}_S, {\cal F}_l \right]=0,\nonumber \\ & & \sum_{\alpha=L,R} \chi_t^{\rm ad} \left[ {\cal F}_l, {\cal J}^E_{\alpha} \right] = \chi_t^{\rm ad} \left[ {\cal F}_l, \dot{\cal H}_S \right]=0,\end{aligned}$$ following similar reasoning as with Eq. (\[eqlr\]). 
Summarizing, the adiabatic response functions in which the energy current enters are $$\begin{aligned} \label{ident} \chi_t^{\rm ad} \left[ {\cal J}^E_{\alpha} , {\cal J}^E_{\alpha}\right] &=& \chi_t^{\rm ad}\left[ {\cal J}^E_{\bar{\alpha}} , {\cal J}^E_{\bar{\alpha}}\right] \nonumber \\ \chi_t^{\rm ad} \left[ {\cal J}^E_{\alpha} , {\cal J}^E_{\alpha}\right] &=& - \chi_t^{\rm ad}\left[ {\cal J}^E_{\alpha} , {\cal J}^E_{\bar{\alpha}}\right], \\ \chi_t^{\rm ad} \left[ {\cal F}_{l} , {\cal J}^E_{\alpha} \right] &=& - \chi_t^{\rm ad} \left[ {\cal F}_{l} , {\cal J}^E_{\bar{\alpha}}\right], \nonumber\\ \chi_t^{\rm ad} \left[ {\cal J}^E_{\alpha}, {\cal F}_{l} \right] &=& - \chi_t^{\rm ad} \left[ {\cal J}^E_{\bar{\alpha}}, {\cal F}_{l} \right],\label{fj}\end{aligned}$$ up to some function that vanishes when averaging over one period. In the above equations $\bar{\alpha}$ denotes the reservoir opposite to $\alpha$. Entropy production rate {#entropy-rate} ======================= In what follows, we present a microscopic derivation of the expression for the entropy production rate associated to the combined effect of the time-dependent and thermal driving in the adiabatic regime. ### ac driving We start by analyzing the effect of the time-dependent driving. To this end, we can proceed along the lines of Refs. [@deff; @ludo] and start from the definition of von Neumann entropy $$\label{vn} S(t)= -k_B \mbox{Tr}\left[ \rho(t) \mbox{ln} \rho(t) \right].$$ We also introduce the following auxiliary function, $$S[{\cal H}_t]= k_B \mbox{Tr}\left[ \rho(t) \left( \beta{\cal H}_t +\mbox{ln}Z_t \right) \right],$$ with $Z_t= \mbox{Tr}\left[e^{-\beta {\cal H}_t}\right]$. Under a small change in the parameter space, ${\bf X}(t) \rightarrow {\bf X}(t+\delta t)$, the Hamiltonian evolves to $${\cal H}_{t+\delta t} = {\cal H}_{t}+ \frac{\partial {\cal H}(t)}{\partial {\bf X} } \cdot \dot{\bf X}(t) \delta t = {\cal H}_{t}- \mathbfcal{F}\cdot \dot{\bf X}(t) \delta t.$$ Consequently, $$S[{\cal H}_{t+\delta t}]= k_B \mbox{Tr}\left[ \rho(t+\delta t) \left( \beta{\cal H}_{t+\delta t} +\mbox{ln}Z_{t+\delta t} \right) \right].$$ The change in the latter function is $\delta S[{\cal H}]=S[ {\cal H}_{t+\delta t}]-S[ {\cal H}_{t}]$, which keeping terms up to first order in $\delta t$ explicitly reads $$\begin{aligned} & & \delta S[{\cal H}]= k_B \beta \left\{ \mbox{Tr}\left[ \rho(t+\delta t) {\cal H}_t \right] - \mbox{Tr}\left[ \rho(t) {\cal H}_t \right] \right\} + \\ & & \;\;\;\;\;\;\; k_B \mbox{ln} Z_{t+\delta t} - k_B \mbox{ln} Z_{t} - k_B \beta \mbox{Tr}\left[ \rho(t ) \mathbfcal{F} \right] \cdot \dot{\bf X}(t) \delta t. \nonumber \label{delst} \end{aligned}$$ In the last Eq. we have used $\mbox{Tr}\left[ \rho(t+\delta t) \right] = \mbox{Tr}\left[ \rho(t) \right]=1$. We can identify the first term with a change in the internal energy, $$U=\mbox{Tr}\left[ \rho(t) {\cal H}_t \right],$$ i. e. $\delta U=\mbox{Tr}\left[ \rho(t+\delta t) {\cal H}_t \right]-\mbox{Tr}\left[ \rho(t) {\cal H}_t \right]$, as well as the change in the internal free energy, $$F= - k_B T \; \mbox{ln} Z_{t}.$$ The other terms are related to the work developed in the change of the time-dependent parameters [@balian], $$\delta W= -\mbox{Tr}\left[ \rho(t ) \mathbfcal{F} \right] \cdot \dot{\bf X}(t) \delta t.$$ Therefore, Eq. (\[delst\]) can be expressed as follows $$T \delta S[{\cal H}]= \delta U - \delta F + \delta W.$$ Following Ref. 
[@deff; @kawai; @horo; @espo], we define the non-equilibrium entropy production as the following difference $$\delta S^{\rm neq} = \delta S - \delta S[{\cal H}],$$ and we evaluate it for a protocol $\delta {\cal C}$ in the parameter space starting in ${\bf X}(t_0)$ and ending in ${\bf X}(\tau)$, which consists in a sequence of the previous small changes. Using Eq. (\[vn\]) and (\[delst\]), and introducing the definition of the relative entropy $S\left[\rho(t) ||\rho_t \right]= S(t)+k_B\mbox{Tr}\left[ \rho(t) \mbox{ln} \rho_{t} \right] $, the non-equilibrium entropy change can be written as in Ref. [@deff] $$\begin{aligned} \label{sneq} \delta S^{\rm neq} &= &S\left[\rho({\tau}) ||\rho_{\tau} \right] - S\left[\rho(t_0) ||\rho_{t_0} \right] \nonumber \\ & & +k_B\beta \int_{\delta {\cal C}} \mathrm{d}t \mbox{Tr}\left[ \rho(t) \mathbfcal{F} \right] \cdot \dot{\bf X}(t). \end{aligned}$$ Lehmann representation for the thermoadiabatic tensor {#lehmann} ===================================================== Performing a Fourier transform in the adiabatic susceptibilities entering the of Eq. (\[lambda\]), we see that the elements of this tensor can be expressed as $$\label{lambd} \Lambda_{\mu,\nu}(\vec{ X}) = -i \partial_{\omega} \chi_{\mu,\nu}(\omega)|_{\omega=0}= \lim_{\omega \rightarrow 0} \frac{\mbox{Im}[\chi_{\mu,\nu}(\omega)]}{\omega},$$ being $\chi_{\mu,\nu}(\omega)$ the Fourier transform of the susceptibility $\chi_{\mu,\nu}(t-t^{\prime})=-i\theta(t-t^{\prime}) \langle \left[ {\cal F}_{\mu}(t), {\cal F}_{\nu}(t^{\prime})\right]\rangle_t$. Using the notation ${\cal F}_{\mu} = - \partial_{\mu} {\cal H}_t$ and expressing the susceptibility in the Lehmann representation we have $$\begin{aligned} \label{chi} \chi_{\mu,\nu}(\omega) &=&\hbar \sum_{n,m } p_m \left( \varepsilon_m-\varepsilon_n\right)^2 \left[ \frac{\langle \partial_{\mu} m|n\rangle \langle n| \partial_{\nu} m\rangle }{\omega - (\varepsilon_m-\varepsilon_n)+i \eta} \right. \nonumber\\ & & \left. \;\;\;\;\;\;\;\;\;\;\;\;\;\;- \frac{\langle \partial_{\nu} m|n\rangle \langle n | \partial_{\mu} m\rangle }{\omega - (\varepsilon_n-\varepsilon_m)+i \eta} \right], \end{aligned}$$ with $\eta=0^+$. We have used the following identities calculated from ${\cal H}_t |n\rangle =\varepsilon_n |n\rangle$ and $\langle n| \partial_{\mu} \left( {\cal H}_t |m \rangle \right)$ $$\begin{aligned} \label{elem} \langle n|\partial_{\mu} {\cal H}_t |m\rangle & = & \left(\varepsilon_m -\varepsilon_n\right)\langle n|\partial_{\mu} m \rangle + \delta_{n,m} \partial_{\mu} \varepsilon_m, \nonumber \\ \langle m |\partial_{\mu} {\cal H}_t |n \rangle & = & \left(\varepsilon_m -\varepsilon_n\right)\langle \partial_{\mu} m| n \rangle + \delta_{n,m} \partial_{\mu} \varepsilon_m,\end{aligned}$$ Calculating the derivative as indicated in Eq. 
(\[lambd\]), we have $\Lambda_{\mu,\nu}(\vec{ X})= \Lambda^A_{\mu,\nu}(\vec{ X})+\Lambda^S_{\mu,\nu}(\vec{ X})$, with the symmetric and antisymmetric components given by $$\begin{aligned} \label{lambf} \Lambda^S_{\mu,\nu}(\vec{ X}) = & & \hbar \pi \lim_{\omega \rightarrow 0} \sum_{n,m}p_m \frac{(\varepsilon_n-\varepsilon_m)^2}{\omega}\mbox{Re}[\langle \partial_{\mu} m|n\rangle \langle n| \partial_{\nu} m\rangle ]\nonumber \\ & & \times \left[\delta(\omega-(\varepsilon_m-\varepsilon_n))- \delta(\omega-(\varepsilon_n-\varepsilon_m))\right]\nonumber \\ & & \Lambda^A_{\mu,\nu}(\vec{ X}) = 2 \hbar\sum_m p_m \;\mbox{Im} \left[\langle \partial_{\mu} m | \partial_{\nu} m \rangle\right].\end{aligned}$$ Driven qubit: Calculation of currents and power for different spin couplings {#linearr} ============================================================================ Coupling: $\hat{\tau}_L=\hat{\sigma}_x$ and $\hat{\tau}_R=\hat{\sigma}_z$ ------------------------------------------------------------------------- The different components of $\mathbf{p}(t)$ for the driving protocol of Eq. (\[xb\]) with $\hat{\tau}_L=\hat{\sigma}_x$ and $\hat{\tau}_R=\hat{\sigma}_z$ can be calculated by solving Eqs. (\[insad1\]) and (\[insad2\]). They read $$\begin{aligned} \label{appprob} &{p}_{1}^{\rm (i)}=\frac{1}{1+e^{- \delta E/k_{\rm B}T}}, \nonumber \\ &{p}_{1,\Delta T}^{({\rm i})}=\frac{\delta E\Big(-{B_z^2}\Gamma_L+B_x^2\Gamma_R\Big)\operatorname{sech}^2\left(\frac{\delta E}{2k_{\rm B}T}\right)}{4k_{\rm B}\Big({B_z^2}\Gamma_L+B_x^2\Gamma_R\Big)} \frac{\Delta T}{T^2},\nonumber \\ &{p}_{1}^{({\rm a})}=-\frac{d{p}_1^{({\rm i})}}{dt}\frac{\delta E\tanh\left(\frac{\delta E}{2k_{\rm B}T}\right)e^{\delta E/\epsilon_{\rm C}}}{4\Big(B_z^2\Gamma_L+B_x^2\Gamma_R\Big)},\nonumber \\ &p_2^{\rm (i)}=1-p_{1}^{\rm (i)},\;\;{p}_{2,\Delta T}^{({\rm i})}=-{p}_{1,\Delta T}^{({\rm i})},\;\;{p}_{2}^{({\rm a})}=-{p}_{1}^{({\rm a})}.\end{aligned}$$ The heat currents are $$\begin{aligned} &J_L^{({\rm a})}(t)=\frac{dp_1^{({\rm i})}}{dt}\frac{\delta E\, B_z^2\,\Gamma_L}{B_z^2\Gamma_L+B_x^2\Gamma_R},\nonumber \\ &J_R^{({\rm a})}(t)=\frac{dp_1^{({\rm i})}}{dt}\frac{\delta E\, B_x^2\,\Gamma_R}{B_z^2\Gamma_L+B_x^2\Gamma_R}.\end{aligned}$$ Coupling: $\hat{\tau}_L=\hat{\sigma}_x$ and $\hat{\tau}_R=\hat{\sigma}_y$ ------------------------------------------------------------------------- In this case, the adiabatic probabilities can be written as $$\begin{aligned} &p_{1}^{({\rm a})}=-\frac{dp_1^{({\rm i})}}{dt} \frac{\delta E e^{\delta E/\epsilon_{\rm C}}\tanh \left(\delta E/k_{\rm B}T\right)}{4B_z^2\Gamma_L+\delta E^2\Gamma_R},\nonumber \\ &p_{2}^{({\rm a})}=-p_1^{({\rm a})}.\end{aligned}$$ In the absence of a bias, the instantaneous contribution to the current vanishes, and the only contributions come from adiabatic corrections. The adiabatic heat currents flowing into the left and right leads are given by $$\begin{aligned} &J_{\rm L}^{({\rm a})}(t)=\frac{dp_1^{({\rm i})}}{dt} \frac{4\delta E B_{z}^2\,\Gamma_L}{4B_z^2\Gamma_L+\delta E^2 \Gamma_R},\nonumber \\ &J_{\rm R}^{({\rm a})}(t)=\frac{dp_1^{({\rm i})}}{dt} \frac{\delta E^3\Gamma_R}{4B_z^2\Gamma_L+\delta E^2\Gamma_R}. \label{curradiabatic}\end{aligned}$$ Using the modulation in Eq.
(\[xb\]) with $\phi =\frac{\pi}{2}$, we obtain $$\begin{aligned} &\frac{dp_1^{({\rm i})}}{dt}=\frac{-\Omega \operatorname{sech}^2\left(\frac{\delta E}{2k_{\rm B}T}\right)}{2k_{\rm B}T\delta E} \Big[2B_{z,0} B_{z,1} \sin(\Omega t)\nonumber\\ &~~~~~~~~~+2B_{x,0}B_{x,1}\cos(\Omega t) +\Big(B_{z,1}^2-B_{x,1}^2\Big)\sin(2\Omega t)\Big]. \label{dpt}\end{aligned}$$ Plugging Eq. (\[dpt\]) into Eqs. (\[curradiabatic\]), the time-averaged adiabatic heat currents can be written as a function of the different parameters $$\begin{gathered} J_{\rm tr,L}^{{\rm Q}}=\frac{1}{\tau}\int_0^\tau J_{\rm L}^{({\rm a})} (t)\, dt\\ =\frac{k_{\rm B}T\Omega}{2\pi}\int_0^{2\pi}dx f\Big[\frac{\epsilon_0}{k_{\rm B}T},\frac{\epsilon_1}{k_{\rm B}T},\frac{\Delta_0}{k_{\rm B}T},\frac{\Delta_1}{k_{\rm B}T},\frac{\Gamma_\alpha}{k_{\rm B}T},x\Big], \label{linear}\end{gathered}$$ where $f$ is a dimensionless function which depends on all the parameters of the driving modulation and on the coupling strengths with the leads. A similar expression can be obtained for the heat current flowing into the right contact. In particular, the adiabatic heat currents are linear in the driving frequency as observed in Eq. (\[linear\]). Symmetry properties of $\Lambda_{\ell,\ell^{\prime}}$ ----------------------------------------------------- For $\Delta T =0$, we can rewrite the work $W$ as $$W=\int_0^{2\pi/\Omega} dt \Big[\frac{dE_1}{dt}p_1^{({\rm a})}+\frac{dE_2}{dt}p_2^{({\rm a})}\Big]$$ and, by using the normalization condition $\sum_jp_j^{({\rm a})}=0$ and the fact that $E_1(t)=-E_2(t)$, we find $$W=2\sum_{j}\int_0^{2\pi/\Omega} dt \;\frac{dE_2}{dX_j}\dot{X}_jp_{2}^{(a)},$$ where $X_1(t)$ and $X_2(t)$ are the two driving parameters of the q-bit. Moreover, using the fact that $\delta E=2E_2$, we find $$\begin{aligned} W&=\int_0^{2\pi/\Omega} dt \;{\zeta}(\mathbf{B})\sum_{j,k}\frac{d\delta E}{dX_j}\frac{dp_2^{({\rm i})}}{dX_k}\dot{X}_{j}\dot{X}_k \nonumber \\ &=\int_0^{2\pi/\Omega} dt \;{\zeta}(\mathbf{B})\sum_{j,k}\frac{d\delta E}{dX_j}\frac{dp_2^{({\rm i})}}{d\delta E}\frac{d\delta E}{dX_k}\dot{X}_{j}\dot{X}_k \label{poweraa}\end{aligned}$$ where ${\zeta}(\mathbf{B})$ is defined by the relation $p_2^{({\rm a})}=\zeta(\mathbf{B})\frac{dp_2^{({\rm i})}}{dt}$ (see Eq. (\[appprob\])), which is a consequence of Eq. (\[papi\]). Comparing Eq. (\[poweraa\]) with Eq. (\[worklambda\]), we obtain: $$\label{aplamqbit} \Lambda_{12}(\mathbf{B})=\Lambda_{21}(\mathbf{B})={\zeta}(\mathbf{B})\frac{d\delta E}{dX_1}\frac{dp_{2}^{({\rm i})}}{d\delta E}\frac{d\delta E}{dX_2}.$$ Eqs. (\[aplamqbit\]) and (\[lambdallp\]) have the same form. Driven quantum dot - calculation of the thermal geometric tensor {#aplam} ================================================================ We need to calculate the following coefficients $$\begin{aligned} \label{lambdalj} \Lambda_{\mu,\nu}(t) &=& \frac{1}{\hbar} \int_{-\infty}^{+\infty} d t^{\prime} (t-t^{\prime}) \chi_{\mu,\nu}(t-t^{\prime}) \nonumber \\ &=& -\lim_{\omega \rightarrow 0} \frac{\mbox{Im}\left[ \chi_{\mu,\nu}(\omega) \right] }{\hbar \omega} ,\;\;\mu,\nu=1,2,3\end{aligned}$$ with $$\chi_{\mu,\nu}(t-t^{\prime})= - i \theta(t-t^{\prime}) \langle \left[ {\cal F}_\mu(t), {\cal F}_\nu(t^{\prime}) \right] \rangle,$$ where $ {\cal F}_{1,2}=\Psi^{\dagger}_d \; \hat{\sigma}_{x,z} \Psi_d$ and ${\cal F}_3={\cal J}_{Q,R}= -i \sum_{k_R,s,\sigma}\varepsilon_{k_R,s} v_{k_R,s,\sigma} c^{\dagger}_{k_R,s} d_{\sigma} + H. c. $.
We can calculate (\[lambdalj\]) following standard procedures based on the formalism of imaginary-time Green’s functions. We can define $\hat{\cal G}(\tau) = - \langle T_{\tau} \left[ \Psi_d(\tau) \Psi^{\dagger}_d(0) \right] \rangle$ and $\hat{\cal G}_{k_R,d}(\tau) = - \langle T_{\tau} \left[ \Psi_{k_R} (\tau) \Psi^{\dagger}_d(0) \right] \rangle$, where $ T_{\tau} $ denotes ordering along the imaginary axis. In terms of this, it is possible to write $$\begin{aligned} \label{chimat} \chi_{\ell,\ell^{\prime}}(iq_n) &=&\frac{1}{\beta} \sum_{ik_n} \mbox{Tr} \left[\hat{\sigma}_\ell \hat{\cal G}(ik_n+iq_n) \hat{\sigma}_{\ell^{\prime}} \hat{\cal G}(ik_n)\right],\nonumber \\ \chi_{3,\ell}(iq_n) &=&\frac{1}{\beta} \sum_{ik_n}\sum_{k_R} \mbox{Tr} \left\{ \hat{\varepsilon}_{k_R} \hat{v}_R \left[ i \hat{\cal G}(ik_n+iq_n) \hat{\sigma}_\ell \right. \right. \nonumber \\ & & \left. \left. \times \hat{\cal G}_{d,k_R}(ik_n) -i\hat{\cal G}_{k_R,d}(ik_n) \hat{\sigma}_\ell \hat{\cal G}(ik_n-iq_n) \right] \right\} \nonumber \\ &=& - \chi_{\ell,3}(iq_n), \;\;\;\;\;\; \;\ell,\ell^{\prime}=1,2,\nonumber \\ \chi_{3,3}(iq_n) &=&- \frac{1}{\beta} \sum_{ik_n}\sum_{k_R, k^{\prime}_L} \nonumber \\ & & \mbox{Tr} \left\{ \hat{\varepsilon}_{k_R} \hat{v}_R \hat{\cal G}_{k_R,d}(ik_n+iq_n)\hat{\varepsilon}_{k^{\prime}_R} \hat{v}_L \hat{\cal G}_{k^{\prime}_L,d}(ik_n) \right. \nonumber \\ & & \left. + \hat{\cal G}_{d,k_R}(ik_n+iq_n)\hat{\varepsilon}_{k_R} \hat{v}_R \hat{\cal G}_{d,k^{\prime}_L}(ik_n) \hat{\varepsilon}_{k^{\prime}_L} \hat{v}_L \right\} \end{aligned}$$ with $\varepsilon_{k_{\alpha},s,s^{\prime}}= \varepsilon_{k_{\alpha},s} \delta_{s,s^{\prime}}$, $\alpha=L,R$, $q_n= 2 \pi n/\beta$ and $k_n= (2n+1) \pi /\beta$. It is convenient to introduce the spectral representation $$\begin{aligned} \label{greenmat} & & \hat{\cal G}(ik_n)= \int \frac{ d\varepsilon}{2 \pi} \frac{\hat{\rho}_t(\varepsilon)}{ik_n- \varepsilon}, \nonumber \\ & &\hat{\cal G}_{k_{\alpha},d}(ik_n)= \int \frac{ d\varepsilon}{2 \pi} \frac{\hat{\rho}_{k_{\alpha},d}(\varepsilon)}{ik_n- \varepsilon}\end{aligned}$$ with $$\begin{aligned} & & \hat{\rho}_t(\varepsilon)= -2 \mbox{Im}\left[\hat{G}_t(\varepsilon)\right]= \hat{G}_t(\varepsilon)\hat{\Gamma} \left[\hat{G}_t(\varepsilon)\right]^{\dagger}, \\ & & \hat{\rho}_{k_{\alpha},d}(\varepsilon)= \hat{\rho}^0_{k_{\alpha}}(\varepsilon) \hat{v}_{\alpha} \hat{\rho}_t(\varepsilon)+ \hat{\rho}^0_{k_{\alpha}}(\varepsilon) \hat{v}_{\alpha} \hat{\rho}_t (\varepsilon),\end{aligned}$$ where $\rho^0_{k_{\alpha}, s,s^{\prime}}(\varepsilon)= 2 \pi \delta_{s,s^{\prime}} \delta(\varepsilon-\varepsilon_{k_{\alpha},s})$ and $G^0_{k_{\alpha},s,s^{\prime}}(\varepsilon) = \delta_{s,s^{\prime}} \left( \varepsilon + i \eta-\varepsilon_{k_{\alpha},s} \right)^{-1}$. The retarded frozen Green’s function of the quantum dot in contact to the reservoirs is given in Eq. (\[gdot\]), while $\hat{\Gamma}= - 2 \mbox{Im}\left[\hat{G}_t(\varepsilon)^{-1}\right]= \sum_{\alpha}\hat{\Gamma}_{\alpha}$ is the hybridization matrix accounting for the contact between the quantum dot and the reservoirs, being $ \hat{\Gamma}_{\alpha} = \sum_{k_{\alpha}} \hat{v}_{k_{\alpha}} \hat{\rho}^0_{k_{\alpha}}\hat{v}_{k_{\alpha}} $. Using Eq. (\[greenmat\]) into Eq. 
(\[chimat\]), after some algebra and performing the analytic continuation to the real axis we get $$\begin{aligned} \Lambda_{\ell,{\ell^{\prime}}}(t) &=& - \frac{1}{h} \int d \varepsilon \frac{df (\varepsilon)}{d\varepsilon} \mbox{Tr} \left[\hat{\sigma}_{\ell} \hat{\rho}_t(\varepsilon) \hat{\sigma}_{\ell^{\prime}} \hat{\rho}_t(\varepsilon) \right],\;\ell,\ell^{\prime}=1,2 \nonumber \\ \Lambda_{3,\ell}(t) &=& - \frac{1}{h} \int d \varepsilon \; \varepsilon \frac{df (\varepsilon)}{d\varepsilon} \mbox{Tr} \left[\hat{\Gamma}_R \hat{\rho}_t(\varepsilon) \hat{\sigma}_\ell \hat{\rho}_t(\varepsilon) \right],\nonumber \\ &=& - \Lambda_{\ell,3}(t) \;\;\;\;\;\; \ell=1,2, \nonumber \\ \Lambda_{3,3}(t) &=& - \frac{1}{h} \int d \varepsilon \; \varepsilon^2 \frac{df (\varepsilon)}{d\varepsilon} \mbox{Tr} \left[\hat{\Gamma}_R \hat{G}_t(\varepsilon) \hat{\Gamma}_L \hat{G}_t^{\dagger}(\varepsilon)\right],\end{aligned}$$
{ "pile_set_name": "ArXiv" }
--- abstract: 'It is known that the topological entropy of a continuous interval map $f$ is positive if and only if the type of $f$ for Sharkovskii’s order is $2^d p$ for some odd integer $p\ge 3$ and some $d\ge 0$; and in this case the topological entropy of $f$ is greater than or equal to $\frac{\log\lambda_p}{2^d}$, where $\lambda_p$ is the unique positive root of $X^p-2X^{p-2}-1$. For every odd $p\ge 3$, every $d\ge 0$ and every $\lambda\ge\lambda_p$, we build a piecewise monotone continuous interval map that is of type $2^dp$ for Sharkovskii’s order and whose topological entropy is $\frac{\log\lambda}{2^d}$. This shows that, for a given type, every possible finite entropy above the minimum can be reached provided the type allows the map to have positive entropy. Moreover, if $d=0$ the map we build is topologically mixing.' author: - Sylvie Ruette bibliography: - '../../tex/biblio/biblio.bib' date: 'June 9, 2019' title: 'Interval maps of given topological entropy and Sharkovskii’s type' --- Introduction ============ In this paper, an interval map is a continuous map $f\colon I\to I$ where $I$ is a compact nondegenerate interval. A point $x\in I$ is periodic of period $n$ if $f^n(x)=x$ and $n$ is the least positive integer with this property, i.e. $f^k(x)\ne x$ for all $k\in\Lbrack 1,n-1\Rbrack$. Let us recall Sharkovskii’s theorem and the definitions of Sharkovskii’s order and type [@Sha] (see e.g. [@R3 Section 3.3]). \[defi:sharkovskii\] *Sharkovskii’s order* is the total ordering on ${{\mathbb N}}$ defined by: $$3\lhd 5\lhd 7\lhd 9\lhd\cdots\lhd 2\cdot 3 \lhd 2\cdot 5\lhd\cdots\lhd 2^2\cdot 3\lhd 2^2\cdot 5\lhd\cdots\lhd 2^3\lhd 2^2 \lhd 2 \lhd 1$$ (first, all odd integers $n>1$, then $2$ times the odd integers $n>1$, then successively $2^2\times$, $2^3\times$, …, $2^k\times$ $\ldots$ the odd integers $n>1$, and finally all the powers of $2$ by decreasing order). $a \rhd b$ means $b\lhd a$. The notation $\unlhd, \unrhd$ will denote the order with possible equality. \[theo:Sharkovsky\] If an interval map $f$ has a periodic point of period $n$ then, for all integers $m\rhd n$, $f$ has periodic points of period $m$. Let $n\in{{\mathbb N}}\cup\{2^\infty\}$. An interval map $f$ is *of type $n$ (for Sharkovskii’s order)* if the periods of the periodic points of $f$ form exactly the set $\{m\in{{\mathbb N}}\mid m\unrhd n\}$, where the notation $\{m\in{{\mathbb N}}\mid m\unrhd 2^\infty\}$ stands for $\{2^k\mid k\ge 0\}$. It is well known that an interval map $f$ has positive topological entropy if and only if its type is $2^d p$ for some odd integer $p\ge 3$ and some $d\ge 0$ (see e.g. [@R3 Theorem 4.58]). The entropy of such a map is bounded from below (see theorem 4.57 in [@R3]). \[theo:lambdap\] Let $f$ be an interval map of type $2^d p$ for some odd integer $p\ge 3$ and some $d\ge 0$. Let $\lambda_p$ be the unique positive root of $X^p-2X^{p-2}-1$. Then $\lambda_p>\sqrt{2}$ and $h_{top}(f)\ge \frac{\log \lambda_p}{2^d}$. This bound is sharp: for every $p,d$, there exists a interval map of type $2^dp$ and topological entropy $\frac{\log \lambda_p}{2^d}$. These examples were first introduced by Štefan, although the entropy of these maps was computed later [@Ste; @BGMY]. Moreover, it is known that the type of a topological mixing interval map is $p$ for some odd integer $p\ge 3$ (see e.g. [@R3 Proposition 3.36]). The Štefan maps of type $p$ are topologically mixing [@R3 Example 3.21]. 
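The constants appearing in Theorem \[theo:lambdap\] are easy to evaluate numerically. A minimal sketch (the function names below are ours) computes $\lambda_p$ by bisection and the resulting entropy lower bound $\frac{\log\lambda_p}{2^d}$:

```python
import math

def lambda_p(p, tol=1e-12):
    """Unique positive root of P_p(X) = X^p - 2*X^(p-2) - 1, for odd p >= 3."""
    P = lambda x: x**p - 2 * x**(p - 2) - 1
    lo, hi = 1.0, 2.0          # P(1) = -2 < 0 and P(2) = 2**(p-1) - 1 > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if P(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def min_entropy(p, d):
    """Lower bound log(lambda_p) / 2**d for the entropy of a map of type 2**d * p."""
    return math.log(lambda_p(p)) / 2**d

print(lambda_p(3), (1 + math.sqrt(5)) / 2)     # lambda_3 is the golden ratio
for p in (3, 5, 7, 9):
    print(p, lambda_p(p), min_entropy(p, 0), min_entropy(p, 1))
# lambda_p decreases towards sqrt(2) as p grows, in agreement with lambda_p > sqrt(2)
```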
We want to show that the topological entropy of a piecewise monotone map can be equal to any real number, the lower bound of Theorem \[theo:lambdap\] being the only restriction. First, for every odd integer $p\ge 3$ and every real number $\lambda\ge \lambda_p$, we are going to build a piecewise monotone map $f_{p,\lambda}\colon [0,1]\to [0,1]$ such that its type is $p$ for Sharkovskii’s order, its topological entropy is $\log\lambda$, and the map is topologically mixing. Then we will show that for every odd integer $p\ge 3$, every integer $d\ge 0$ and every real number $\lambda\ge \lambda_p$, there exists a piecewise monotone interval map $f$ such that its type is $2^d p$ for Sharkovskii’s order and its topological entropy is $\frac{\log\lambda}{2^d}$. Notations --------- We say that an interval is *degenerate* if it is either empty or reduced to one point, and *nondegenerate* otherwise. When we consider an interval map $f\colon I\to I$, every interval is implicitly a subinterval of $I$. Let $J$ be a nonempty interval. Then $\partial J:=\{\inf J, \sup J\}$ are the endpoints of $J$ (they may be equal if $J$ is reduced to one point) and $|J|$ denotes the length of $J$ (i.e. $|J|:=\sup J-\inf J$). Let ${{\rm mid}}(J)$ denote the middle point of $J$, that is, ${{\rm mid}}(J):=\frac{\inf J+\sup J}2$. An interval map $f\colon I\to I$ is *piecewise monotone* if there exists a finite partition of $I$ into intervals such that $f$ is monotone on each element of this partition. An interval map $f$ has a *constant slope* $\lambda$ if $f$ is piecewise monotone and if on each of its pieces of monotonicity $f$ is linear and the absolute value of the slope coefficient is $\lambda$. Štefan maps =========== We recall the definition of the Štefan maps of odd type $p\ge 3$. Let $n\ge 1$ and $p:=2n+1$. The Štefan map $f_p\colon [0,2n]\to [0,2n]$, represented in Figure \[fig:type-2n+1\], is defined as follows: it is linear on $[0,n-1]$, $[n-1,n]$, $[n,2n-1]$ and $[2n-1,2n]$, and $$f_p(0):=2n,\ f_p(n-1):=n+1,\ f_p(n):=n-1,\ f_p(2n-1):=0,\ f_p(2n):=n.$$ Note that $n=1$ is a particular case because $0=n-1$ and $n=2n-1$. ![On the left: the map $f_3$. On the right: the map $f_p$ with $p=2n+1>3$.[]{data-label="fig:type-2n+1"}](stefan-p) Next proposition summarises the properties of $f_p$, see [@R3 Example 3.21] for the proof. The map $f_p$ is topologically mixing, its type for Sharkovskii’s order is $p$ and $h_{top}(f)=\log\lambda_p$. Moreover, the point $n$ is periodic of period $p$, and $f_p^{2k-1}(n)=n-k$ and $f_p^{2k}(n)=n+k$ for all $k\in\Lbrack 1,n\Rbrack$. Mixing map of given entropy and odd type {#sec:oddtype} ======================================== For every odd integer $p\ge 3$ and every real number $\lambda\ge \lambda_p$, we are going to build a piecewise monotone continuous map $f_{p,\lambda}\colon [0,1]\to [0,1]$ such that its type is $p$ for Sharkovskii’s order, its topological entropy is $\log\lambda$, and the map is topologically mixing. We will write $f$ instead of $f_{p,\lambda}$ when there is no ambiguity on $p,\lambda$. The idea is the following: we start with the Štefan map $f_p$, we blow up the minimum into an interval and we define the map of this interval in such a way that the added dynamics increases the entropy without changing the type. At the same time, we make the slope constant and equal to $\lambda$, so that the entropy is $\log \lambda$ according to the following theorem [@ALM Corollary 4.3.13], which is due to Misiurewicz-Szlenk [@MS2], Young [@You] and Milnor-Thurston [@MT2]. 
\[theo:entropie-penteCte\] Let $f$ be a piecewise monotone interval map. Suppose that $f$ has a constant slope $\lambda\ge 1$. Then $h_{top}(f)=\log\lambda$. We will also need the next result (see the proof of Lemma 4.56 in [@R3]). \[lem:lambdap\] Let $p\ge 3$ be an odd integer and $P_p(X):=X^p-2X^{p-2}-1$. Then $P_p(X)$ has a unique positive root, denoted by $\lambda_p$. Moreover, $P_p(x)<0$ if $x\in [0,\lambda_p[$ and $P_p(x)>0$ if $x> \lambda_p$. Let $\chi_p(X):=X^{p-1}-X^{p-2}-\sum_{i=0}^{p-3}(-X)^i$. Then $P_p(X)=(X+1)\chi_p(X)$, and thus $\chi_p(x)<0$ if $x\in [0,\lambda_p[$ and $\chi_p(x)>0$ if $x> \lambda_p$. Definition of the map --------------------- We fix an odd integer $p\ge 3$ and a real $\lambda\ge \lambda_p$. Recall that $\lambda_p>\sqrt2>1$ (Theorem \[theo:lambdap\]). We are going to define points ordered as follows: $$\begin{gathered} x_{p-2}<x_{p-4}<\cdots <x_1<x_0<x_2<x_4<\cdots<x_{p-3} \le t<x_{p-1},\\ \text{with}\quad x_{p-2}=0,\ x_{p-3}=\frac1\lambda\quad\text{and} \quad x_{p-1}=1.\end{gathered}$$ The points $x_0,\ldots, x_{p-1}$ will form a periodic orbit of period $p$, that is, $f(x_i)=x_{i+1\bmod p}$ for all $i\in\Lbrack 0, p-1\Rbrack$. In the following construction, the case $p=3$ is degenerate. The periodic orbit is reduced to $x_1<x_0<x_2$ with $x_1=0, x_0=\frac1\lambda, x_2=1$. We only have to determine the value of $t$. The map $f\colon [0,1]\to [0,1]$ is defined as follows (see Figure \[fig:f\]): - $f(x):=1-\lambda x$ for all $x\in[0,\frac{1}{\lambda}]=[x_{p-2},x_{p-3}]$ (so that $f(0)=1$ and $f(\frac{1}{\lambda})=0$), - $f(x):=\lambda(x-t)$ for all $x\in [t,1]$ (so that $f(t)=0$ and $f(1)=\lambda(1-t)$), - definition on $[\frac{1}{\lambda},t]$: we want to have $f([\frac{1}{\lambda},t])\subset [0,x_{p-4}]$ (note that $x_{p-4}$ is the least positive point among $x_0,\ldots,x_{p-1}$), with $f$ of constant slope $\lambda$, in such a way that all the critical points except at most one are sent by $f$ on either $0$ or $x_{p-4}$. If $t=\frac{1}{\lambda}$, there is nothing to do. If $t>\frac{1}{\lambda}$, we set $\ell:= t-\frac{1}{\lambda}$ (length of the interval), $k:=\left\lfloor \frac{\lambda \ell}{2x_{p-4}}\right\rfloor$, $$\label{eq:defJi} J_i:=\left[\frac{1}{\lambda}+(i-1) \frac{2x_{p-4}}{\lambda}, \frac{1}{\lambda}+ i \frac{2x_{p-4}}{\lambda}\right] \quad\text{for all }i\in\Lbrack 1, k\Rbrack,$$ $$\label{eq:defK} K:=\left[\frac{1}{\lambda}+ k\frac{2x_{p-4}}{\lambda}, t\right].$$ If $p=3$, we replace $x_{p-4}$ (not defined) by $1$ in the above definitions. It is possible that there is no interval $J_1,\ldots, J_k$ (if $k=0$) or that $K$ is reduced to the point $\{t\}$. On each interval $J_1,\ldots, J_k$, $f$ is defined as the tent map of summit $x_{p-4}$: $f(\min J_i)=0$, $f$ is increasing of slope $\lambda$ on $[\min J_i,{{\rm mid}}(J_i)]$ (thus $f({{\rm mid}}(J_i))=x_{p-4}$ because of the length of $J_i$), then $f$ is decreasing of slope $-\lambda$ on $[{{\rm mid}}(J_i),\max J_i]$ and $f(\max J_i)=0$. On $K$, $f$ is defined as a tent map with a summit $< x_{p-4}$: $f(\min K)=0$, $f$ is increasing of slope $\lambda$ on $[\min K,{{\rm mid}}(K)]$, then $f$ is decreasing of slope $-\lambda$ on $[{{\rm mid}}(K),\max K]$ and $f(\max K)=0$. ![The map $f$ for $p=5$ and $\lambda=2$. \[fig:f\]](fixed-entropy-type-fig2){width="15cm"} In this way, we get a map $f$ that is continuous on $[0,1]$, piecewise monotone, of constant slope $\lambda$.
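A minimal sketch of this assembly is given below; the function name is ours, and the routine takes $\lambda$, the point $t$ and the value of $x_{p-4}$ as inputs, since these are only determined in the next paragraphs (for $p=3$ the role of $x_{p-4}$ is played by $1$, as noted above). It follows the definitions of $f$ on $[0,\frac{1}{\lambda}]$, on $[t,1]$ and on the intervals (\[eq:defJi\]) and (\[eq:defK\]):

```python
import math

def f_p_lambda(x, lam, t, x_pm4):
    """Sketch of the map f_{p,lambda} on [0,1]; x_pm4 stands for x_{p-4} (use 1 if p = 3)."""
    if x <= 1.0 / lam:
        return 1.0 - lam * x                 # slope -lambda, f(0) = 1, f(1/lambda) = 0
    if x >= t:
        return lam * (x - t)                 # slope +lambda, f(t) = 0, f(1) = lambda*(1 - t)
    # middle part [1/lambda, t]: k tent maps of summit x_{p-4} on J_1,...,J_k, then a lower tent on K
    width = 2.0 * x_pm4 / lam                # length of each J_i, Eq. (defJi)
    k = math.floor(lam * (t - 1.0 / lam) / (2.0 * x_pm4))
    s = x - 1.0 / lam
    if s < k * width:                        # inside some J_i
        u = s % width
        return lam * min(u, width - u)
    u = s - k * width                        # inside K, Eq. (defK)
    LK = t - (1.0 / lam + k * width)
    return lam * min(u, LK - u)

# Illustrative check for p = 3, lambda = 2: x_1 = 0, x_0 = 1/2, x_2 = 1 and, anticipating
# the determination of t below, t = 1 - 1/lambda^2 = 3/4; the orbit 1/2 -> 0 -> 1 -> 1/2 has period 3.
lam, t = 2.0, 0.75
for x in (0.5, 0.0, 1.0):
    print(x, f_p_lambda(x, lam, t, x_pm4=1.0))
```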
It remains to define $t$ and the points $\{x_i\}_{0\le i\le p-4}$ (recall that $x_{p-3}=\frac{1}{\lambda}$, $x_{p-2}=0$ and $x_{p-1}=1$). We want these points to satisfy: $$x_0\ =\ \lambda(1-t)\label{eq:t}$$ and $$\label{eq:xi} \left\{\begin{array}{lcl} x_1&=&1-\lambda x_0\\ x_2&=&1-\lambda x_1\\ &\vdots&\\ x_{p-3}&=&1-\lambda x_{p-4} \end{array}\right.$$ and to be ordered as follows: $$\begin{gathered} x_{p-2}<x_{p-4}<\cdots <x_1<x_0<x_2<x_4<\cdots<x_{p-3} \label{eq:ordre-xi}\\ \frac{1}{\lambda} \le t<x_{p-1}.\label{eq:ordre-t}\end{gathered}$$ If $p=3$, the system (\[eq:xi\]) is empty, and equation (\[eq:ordre-xi\]) is satisfied because it reduces to $0=x_1<x_0=\frac{1}{\lambda}$. According to the definition of $f$, the equations (\[eq:t\]), (\[eq:xi\]), (\[eq:ordre-xi\]) and (\[eq:ordre-t\]) imply that $f(x_i)=x_{i+1}$ for all $i\in\Lbrack 0, p-2\Rbrack$ and $f(x_{p-1})=x_0$. We are going to show that the system (\[eq:xi\]) is equivalent to: $$\label{eq:formule-xi} \forall i\in\Lbrack 0,p-4\Rbrack,\quad x_i=\frac{(-1)^i}{\lambda^{p-i-2}} \sum_{j=0}^{p-i-3}(-\lambda)^j.$$ We use a descending induction on $i$. $\bullet$ According to the last line of (\[eq:xi\]), $x_{p-4}=\frac{1}{\lambda}(1-x_{p-3})=\frac{1}{\lambda^2}(\lambda-1)$. This is (\[eq:formule-xi\]) for $i=p-4$. $\bullet$ Suppose that (\[eq:formule-xi\]) holds for $i$ with $i\in \Lbrack 1,p-4\Rbrack$. By (\[eq:xi\]), $x_i=1-\lambda x_{i-1}$, thus $$\begin{aligned} x_{i-1}&=-\frac{1}{\lambda}(x_i-1)\\ &=-\frac{(-1)^i}{\lambda^{p-i-1}}\left(\sum_{j=0}^{p-i-3}(-\lambda)^j- (-1)^i\lambda^{p-i-2}\right)\end{aligned}$$ Since $p$ is odd, $-(-1)^i\lambda^{p-i-2}=(-\lambda)^{p-i-2}$. Hence $$x_{i-1}=\frac{(-1)^{i-1}}{\lambda^{p-i-1}}\sum_{j=0}^{p-i-2}(-\lambda)^j,$$ which gives (\[eq:formule-xi\]) for $i-1$. This ends the proof of (\[eq:formule-xi\]). Equation (\[eq:t\]) is equivalent to $t=1-\frac{1}{\lambda}x_0$. Thus, using (\[eq:formule-xi\]), we get $$\label{eq:formule-t} t=\frac{1}{\lambda^{p-1}}\left(\lambda^{p-1}-\sum_{j=0}^{p-3}(-\lambda)^j \right).$$ Conclusion: with the values of $x_0,\ldots, x_{p-4}$ and $t$ given by (\[eq:formule-xi\]) and (\[eq:formule-t\]), the system of equations (\[eq:t\])-(\[eq:xi\]) is satisfied (and there is a unique solution). It remains to show that these points are ordered as stated in (\[eq:ordre-xi\]) and (\[eq:ordre-t\]). Let $i$ be in $\Lbrack 0, p-6\Rbrack$. By (\[eq:formule-xi\]), we have $$\begin{aligned} x_{i+2}-x_i&=\frac{(-1)^i}{\lambda^{p-i-2}}\left( \lambda^2\sum_{j=0}^{p-i-5}(-\lambda)^j-\sum_{j=0}^{p-i-3}(-\lambda)^j\right)\\ &=\frac{(-1)^i}{\lambda^{p-i-2}}\left( \sum_{j=2}^{p-i-3}(-\lambda)^j-\sum_{j=0}^{p-i-3}(-\lambda)^j\right)\\ &=\frac{(-1)^i}{\lambda^{p-i-2}}(\lambda-1)\end{aligned}$$ Since $\lambda-1>0$, we have, for all $i\in \Lbrack 0, p-6\Rbrack$, - $x_i<x_{i+2}$ if $i$ is even, - $x_{i+2}<x_i$ if $i$ is odd. By (\[eq:formule-xi\]), $x_{p-4}=\frac{\lambda-1}{\lambda^2}$. Since $\lambda>1$, $x_{p-4}>0=x_{p-2}$. Again by (\[eq:formule-xi\]), $$\begin{aligned} x_0-x_1&=\frac{1}{\lambda^{p-2}}\left( \sum_{j=0}^{p-3}(-\lambda)^j+\lambda\sum_{j=0}^{p-4}(-\lambda)^j\right)\\ &=\frac{1}{\lambda^{p-2}}\left( \sum_{j=0}^{p-3}(-\lambda)^j-\sum_{j=1}^{p-3}(-\lambda)^j\right)\\ &=\frac{1}{\lambda^{p-2}}>0\end{aligned}$$ thus $x_1<x_0$. Moreover, $$x_{p-3}-x_{p-5}=\frac{1}{\lambda}-\frac{\lambda^2-\lambda+1}{\lambda^3} =\frac{\lambda-1}{\lambda^3}>0$$ thus $x_{p-5}<x_{p-3}$. These inequalities imply (\[eq:ordre-xi\]). By (\[eq:formule-t\]), we have $$t-\frac{1}{\lambda}=\frac{1}{\lambda^{p-1}}\left( \lambda^{p-1}-\lambda^{p-2}-\sum_{j=0}^{p-3}(-\lambda)^j\right)= \frac{1}{\lambda^{p-1}}\cdot \chi_p(\lambda),$$ where $\chi_p$ is defined in Lemma \[lem:lambdap\]. According to this lemma, $\chi_p(\lambda)\ge 0$ (with equality iff $\lambda= \lambda_p$) because $\lambda \ge \lambda_p$. This implies that $t\ge \frac{1}{\lambda}$ (with equality iff $\lambda=\lambda_p$).
Moreover, if $t\ge 1$, then $x_0=\lambda(1-t)\le 0$, which is impossible by ; thus $t<1$. Therefore, the inequalities  hold. Finally, we have shown that the map $f_{p,\lambda}=f$ is defined as wanted. Entropy ------- $h_{top}(f_{p,\lambda})=\log\lambda$. This result is given by Theorem \[theo:entropie-penteCte\] because, by definition, $f_{p,\lambda}$ is piecewise monotone of constant slope $\lambda$ with $\lambda>1$. Type ---- \[lem:periode-pseudo-recouvrement\] Let $g\colon [0,1]\to [0,1]$ be a continuous map. Let ${{\mathcal A}}$ be a finite family of closed intervals that form a pseudo-partition of $[0,1]$, that is, $$\bigcup_{A\in{{\mathcal A}}}A=[0,1]\quad\text{and}\quad\forall A,B\in{{\mathcal A}},\ A\neq B \Rightarrow {{\rm Int}\left(A\right)}\cap {{\rm Int}\left(B\right)}=\emptyset.$$ We set $ \partial{{\mathcal A}}=\bigcup_{A\in{{\mathcal A}}}\partial A. $ Let ${{\mathcal G}}$ be the oriented graph whose vertices are the elements of ${{\mathcal A}}$ and in which there is an arrow $A\dashrightarrow B$ iff $g(A)\cap {{\rm Int}\left(B\right)}\neq \emptyset$. Let $x$ be a periodic point of period $q$ for $g$ such that $\{g^n(x)\mid n\ge 0\}\cap \partial {{\mathcal A}}=\emptyset$. Then there exist $A_0,\ldots, A_{q-1}\in{{\mathcal A}}$ such that $A_0\dashrightarrow A_1\dashrightarrow\cdots\dashrightarrow A_{q-1} \dashrightarrow A_0$ is a cycle in the graph ${{\mathcal G}}$. For every $n\ge 0$, there exists a unique element $A_n\in{{\mathcal G}}$ such that $g^n(x)\in {{\rm Int}\left(A_n\right)}$ because $\{g^n(x)\mid n\ge 0\}\cap \partial {{\mathcal A}}=\emptyset$. We have $g^n(x)\in A_n$ and $g^{n+1}(x)\in{{\rm Int}\left(A_{n+1}\right)}$, thus $g(A_n)\cap {{\rm Int}\left(A_{n+1}\right)}\neq\emptyset$; in other words, there is an arrow $A_n\dashrightarrow A_{n+1}$ in ${{\mathcal G}}$. Finally, $A_q=A_0$ because $g^q(x)=x$. The map $f_{p,\lambda}$ is of type $p$ for Sharkovskii’s order. According to the definition of $f=f_{p,\lambda}$, $x_0$ is a periodic point of period $p$. It remains to show that $f$ has no periodic point of period $q$ with $q$ odd and $3\le q<p$. We set $I_1:=\langle x_0,x_1\rangle$, $I_i:=\langle x_{i-2},x_i\rangle$ for all $i\in\Lbrack 2,p-2\Rbrack$ and $I_{p-1}:=[t,1]$, where $\langle a,b\rangle$ denotes the convex hull of $\{a,b\}$ (i.e. $\langle a,b\rangle=[a,b]$ or $[b,a]$). The intervals $J_i, K$ have been defined in and . The family ${{\mathcal A}}:=\{I_1,\ldots, I_{p-1},J_1,\ldots, J_k, K\}$ is a pseudo-partition of $[0,1]$. Let ${{\mathcal G}}$ be the oriented graph associated to ${{\mathcal A}}$ for the map $g=f$ as defined in Lemma \[lem:periode-pseudo-recouvrement\]. If $f(A)\supset B$, the arrow $A \dashrightarrow B$ is replaced by $A\to B$ (full covering). The graph ${{\mathcal G}}$ is represented in Figure \[fig:Grecouvrement\]; a dotted arrow $A \dashrightarrow B$ means that $f(A)\cap {{\rm Int}\left(B\right)}\neq \emptyset$ but $f(A)\not\supset B$ (partial covering). ![Covering graph ${{\mathcal G}}$ associated to $f$. \[fig:Grecouvrement\] ](fixed-entropy-type-fig1) The subgraph associated to the intervals $I_1,\ldots, I_{p-1}$ is the graph associated to a Štefan cycle of period $p$ (see [@R3 Lemma 3.16]). The only additional arrows with respect to the Štefan graph are between the intervals $J_1,\ldots, J_k, K$ on the one hand and $I_{p-2}$ on the other hand. There is only one partial covering, which is $K\dashrightarrow I_{p-2}$. Let $q$ be an odd integer with $3\le q<p$. 
We easily see that, in this graph, there is no primitive cycle of length $q$ (a cycle is primitive if it is not the repetition of a shorter cycle): the cycles not passing through $I_1$ have an even length, whereas the cycles passing through $I_1$ have a length either equal to $1$, or greater than or equal to $p-1$. Moreover, if $x$ is a periodic point of period $q$, then $\{f^n(x)\mid n\ge 0\}\cap \partial {{\mathcal A}}=\emptyset$ (because the periodic points in $\partial{{\mathcal A}}$ are of period $p$). According to Lemma \[lem:periode-pseudo-recouvrement\], $f$ has no periodic point of period $q$. Conclusion: $f$ is of type $p$ for Sharkovskii’s order. Mixing ------ \[prop:melange\] The map $f_{p,\lambda}$ is topologically mixing. This proof is inspired by [@R3 Lemmas 2.10, 2.11] and their use in [@R3 Example 2.13]. We will use several times that the image by $f=f_{p,\lambda}$ of a nondegenerate interval is a nondegenerate interval (and thus all its iterates are nondegenerate). Let $A$ be a nondegenerate closed interval included in $[0,1]$. We are going to show that there exists an integer $n\ge 0$ such that $f^n(A)=[0,1]$. We set $${{\mathcal C}}_0:=\bigcup_{i=1}^k\partial J_i \cup \{t\},\quad {{\mathcal C}}_1:=\{{{\rm mid}}(J_i)\mid i\in\Lbrack 1,k\Rbrack\},\quad c_K:={{\rm mid}}(K).$$ The set of critical points of $f$ is ${{\mathcal C}}_0\cup {{\mathcal C}}_1\cup \{c_K\}$. **Step 1:** there exists $i_0\ge 0$ such that $f^{i_0}(A)\cap ({{\mathcal C}}_0\cup{{\mathcal C}}_1)\ne\emptyset$ and there exists $n_0\ge 0$ such that $0\in f^{n_0}(A)$. Let $$\begin{gathered} J_i':=[\min J_i,{{\rm mid}}(J_i)]\text{ and } J_i'':=[{{\rm mid}}(J_i),\max J_i] \quad\text{for all }i\in\Lbrack 1,k\Rbrack,\\ {{\mathcal F}}:=\left\{\left[0,\frac{1}{\lambda}\right], [t,1], K\right\}\cup\{J_i', J_i''\mid i\in\Lbrack 1,k\Rbrack\}.\end{gathered}$$ If $A\subset B$ for some $B\in {{\mathcal F}}$ and $B\ne K$, then $|f(A)|=\lambda |A|$. If $A\subset K$, then $|f(A)|\ge \frac{\lambda |A|}2$ and $f(A)\subset I_{p-2}$, thus $|f^2(A)|=\lambda |f(A)|\ge \frac{\lambda^2}2 |A|$. We have $\lambda>1$ and $\frac{\lambda^2}2>1$ because $\lambda>\sqrt{2}$ (Theorem \[theo:lambdap\]). If for all $i\ge 0$, there exists $A_i\in{{\mathcal F}}$ such that $f^i(A)\subset A_i$, then what precedes implies that $\lim_{i\to+\infty}|f^i(A)|=+\infty$. This is impossible because $f^i(A)\subset [0,1]$. Thus there exist $i_0\ge 0$ and $c\in {{\mathcal C}}_0\cup {{\mathcal C}}_1$ such that $c\in f^{i_0}(A)$. If $c\in {{\mathcal C}}_0$, then $f(c)=0$, and hence $0\in f^{i_0+1}(A)$. If $c\in {{\mathcal C}}_1$, then $f(c)=x_{p-4}$ and hence $0\in f^{i_0+3}(A)$. This ends step 1. **Step 2:** there exist $n_1\ge n_0$ and $j\in\Lbrack 1, p-1\Rbrack$ such that $f^{n_1}(A)\supset I_j$. Recall that $I_1=[x_1,x_0]$, $I_i=\langle x_{i-2},x_i\rangle$ for all $2\le i\le p-2$ and $I_{p-1}=[t,1]=[t,x_{p-1}]$. We set $I_0:=I_1$. By definition, for all $0\le i\le p-1$, there exists $\delta_i>0$ such that $I_i=\langle x_i, x_i+(-1)^{i+1}\delta_i\rangle$. Moreover, $f$ is linear of slope $-\lambda$ on each of the intervals $I_0,\ldots, I_{p-2}$ and of slope $+\lambda$ on $I_{p-1}$. We set $B_{-2}:=f^{n_0}(A)$. This is a nondegenerate closed interval containing $0$, thus there exists $b>0$ such that $B_{-2}=[0,b]$ with $0=x_{p-2}$. We set $B_i:=f^{i+2}(B_{-2})$ for all $i\ge -2$, and we define $m\ge -2$ as the least integer such that $B_m$ is not included in a interval of the form $I_j$ (such an integer $m$ exists by step 1). 
If $b>x_{p-4}$, then $B_{-2}\supset I_{p-2}$ and $m=-2$. Otherwise, $B_{-2}\subset I_{p-2}$ and $B_{-1}=[1-\lambda b, 1] =[x_{p-1}-\lambda b, x_{p-1}]$ because $f|_{I_{p-2}}$ is of slope $-\lambda$. If $1-\lambda b<t$, then $B_{-1}\supset I_{p-1}$ and $m=-1$. Otherwise, $B_{-1}\subset I_{p-1}$ and $B_0=[x_0-\lambda^2 b, x_0]$ because $f|_{I_{p-1}}$ is of slope $+\lambda$. We go on in a similar way. - If $m>0$, then $B_0\subset I_0$ and $B_1=[x_1,x_1+\lambda^3 b]$.\ - If $m>1$, then $B_1\subset I_1$ and $B_2=[x_2-\lambda^4 b,x_2]$. - - If $m>p-3$, then $B_{p-3}\subset I_{p-3}$ and $B_{p-2}=\left\langle x_{p-2}, x_{p-2}+(-1)^{p+1}\lambda^p b\right\rangle =[0,\lambda^p b]$. Notice that $B_{p-2}$ is of the same form as $B_{-2}$. What precedes implies that $$\begin{gathered} \forall i\in\Lbrack -2, m\Rbrack,\ B_i= \left\langle x_{i\bmod p}, x_{i\bmod p}+(-1)^{r+1}\lambda^{i+2}b\right\rangle, \text{ where }i=qp+r,\ r\in\Lbrack 0, p-1\Rbrack,\\ \forall i\in\Lbrack -2, m-1\Rbrack,\ B_i\subset I_{i\bmod p},\\ B_m\supset I_{m\bmod p}.\end{gathered}$$ This ends step 2 with $n_1:=n_0+m+2$ and $j:=m$. **Step 3:** there exists $n_2\ge n_1$ such that $f^{n_2}(A)=[0,1]$. Let $n_1\ge 0$ and let $j\in\Lbrack 1,p-1\Rbrack$ be such that $f^{n_1}(A)\supset I_j$ (step 2). In the covering graph of Figure \[fig:Grecouvrement\], we see that there exists an integer $q\ge 0$ such that, for every vertex $C$ of the graph, there exists a path of length $q$, with only arrows of type $\to$, starting from $I_j$ and ending at $C$. This implies that $f^q(I_j)=[0,1]$, that is, $f^{n_1+q}(A)=[0,1]$. We have shown that, for every nondegenerate closed interval $A\subset [0,1]$, there exists $n$ such that $f^n(A)=[0,1]$. We conclude that $f$ is topologically mixing. General case ============ Square root of a map -------------------- We first recall the definition of the so-called *square root* of an interval map. If $f\colon [0,b]\to [0,b]$ is an interval map, the square root of $f$ is the continuous map $g\colon [0,3b]\to [0,3b]$ defined by - $\forall x\in [0,b]$, $g(x):=f(x)+2b$, - $\forall x\in [2b,3b]$, $g(x):=x-2b$, - $g$ is linear on $[b,2b]$. The graphs of $g$ and $g^2$ are represented in Figure \[fig:type-2n-g-g2\]. ![The left side represents the map $g$, which is the square root of $f$. The right side represents the map $g^2$.[]{data-label="fig:type-2n-g-g2"}](ruette-fig46) The square root map has the following properties, see e.g. [@R3 Examples 3.22 and 4.62]. \[prop:squareroot\] Let $f$ be an interval map of type $n$, and let $g$ be the square root of $f$. Then $g$ is of type $2n$ and $h_{top}(g)=\frac{h_{top}(f)}2$. If $f$ is piecewise monotone, then $g$ is piecewise monotone too. Piecewise monotone map of given entropy and type ------------------------------------------------ \[theo:main\] Let $p\ge 3$ be an odd integer, let $d$ be a non negative integer and $\lambda$ a real number such that $\lambda\ge \lambda_p$. Then there exists a piecewise monotone map $f$ whose type is $2^dp$ for Sharkovskii’s order and such that $h_{top}(f)=\frac{\log\lambda}{2^d}$. If $d=0$, the map $f$ can be built in such a way that it is topologically mixing. If $d=0$, we take $f=f_{p,\lambda}$ defined in Section \[sec:oddtype\]. If $d>0$, we start with the map $f_{p,\lambda}$, then we build the square root of $f_{p,\lambda}$, then the square root of the square root, etc. 
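As an illustration of the square-root construction just recalled, here is a minimal sketch (ours; the function name is an assumption) that builds $g$ on $[0,3b]$ from an interval map $f$ on $[0,b]$ and can be iterated, as in the proof above, to double the type and halve the entropy at each step.

```python
# Sketch of the "square root" of an interval map f : [0, b] -> [0, b].
def square_root_map(f, b):
    def g(x):
        if x <= b:
            return f(x) + 2 * b          # g = f + 2b on [0, b]
        if x >= 2 * b:
            return x - 2 * b             # g(x) = x - 2b on [2b, 3b]
        y0 = f(b) + 2 * b                # linear on [b, 2b],
        return y0 * (2 * b - x) / b      # joining (b, f(b)+2b) to (2b, 0)
    return g

tent = lambda x: 1.0 - abs(2.0 * x - 1.0)    # any interval map on [0, 1]
g = square_root_map(tent, 1.0)               # defined on [0, 3]
gg = square_root_map(g, 3.0)                 # iterate: defined on [0, 9]
```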
According to Proposition \[prop:squareroot\], after $d$ steps we get a piecewise monotone interval map $f$ of type $2^d p$ and such that $h_{top}(f)=\frac{h_{top}(f_{p,\lambda})}{2^d}= \frac{\log\lambda}{2^d}$. For every positive real number $h$, there exists a piecewise monotone interval map $f$ such that $h_{top}(f)=h$. Let $d\ge 0$ be an integer such that $\frac{\log \lambda_3}{2^d}\le h$ and set $\lambda:=\exp(2^dh)$. Then $\lambda\ge \lambda_3$ and, according to Theorem \[theo:main\], there exists a piecewise monotone interval map $f$ of type $2^d3$ such that $h_{top}(f)=\frac{\log \lambda}{2^d}=h$.
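For concreteness, here is a small sketch (ours) of the parameter choice made in this proof: given a target entropy $h>0$, it returns the smallest admissible $d$ and the corresponding $\lambda=\exp(2^dh)\ge\lambda_3$.

```python
# Sketch: choose d and lambda for a prescribed entropy h, as in the corollary's proof.
import math

lambda_3 = (1 + math.sqrt(5)) / 2        # positive root of X^3 - 2X - 1

def parameters_for_entropy(h):
    d = 0
    while math.log(lambda_3) / 2 ** d > h:
        d += 1                           # smallest d with log(lambda_3)/2^d <= h
    return d, math.exp(2 ** d * h)       # then lambda >= lambda_3 and log(lambda)/2^d = h

print(parameters_for_entropy(0.1))       # e.g. h = 0.1 gives d = 3, lambda = exp(0.8)
```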
--- abstract: 'The emergence of rotational bands is observed in no-core configuration interaction (NCCI) calculations for the odd-mass $\isotope{Be}$ isotopes ($7\leq A \leq 13$) with the JISP16 nucleon-nucleon interaction, as evidenced by rotational patterns for excitation energies, quadrupole moments, and $E2$ transitions. Yrast and low-lying excited bands are found. The results demonstrate the possibility of well-developed rotational structure in NCCI calculations using a realistic nucleon-nucleon interaction.' address: - 'Department of Physics, University of Notre Dame, Notre Dame, Indiana 46556-5670, USA' - 'Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011-3160, USA' author: - 'M. A. Caprio' - 'P. Maris' - 'J. P. Vary' title: | Emergence of rotational bands in *ab initio* no-core\ configuration interaction calculations of light nuclei --- No-core configuration interaction ,Nuclear rotation ,JISP16 21.60.Cs ,21.10.-k ,21.10.Re ,27.20.+n Introduction {#sec-intro} ============ Nuclei exhibit a wealth of collective phenomena, including clustering, rotation, and pairing [@rowe2010:collective-motion; @bohr1998:v1; @bohr1998:v2]. Collective dynamics have been extensively modeled in phenomenological descriptions [@rowe2010:collective-motion; @bohr1998:v2; @iachello1987:ibm; @eisenberg1987:v1]. Some forms of collectivity may also be obtained microscopically in the conventional (valence) shell model, *e.g.*, Elliott $\grpsu{3}$ rotation [@elliott1958:su3-part1; @harvey1968:su3-shell]. However, observing the emergence of collective phenomena directly from first principles — that is, in a fully *ab initio* calculation of the nucleus, as a many-body system in which all the constituent protons and neutrons participate, with realistic interactions — remains as an outstanding challenge. Recent developments in large-scale calculations have brought significant progress in the $\textit{ab initio}$ description of light nuclei [@pieper2004:gfmc-a6-8; @neff2004:cluster-fmd; @hagen2007:coupled-cluster-benchmark; @navratil2009:ncsm; @bacca2012:6he-hyperspherical]. In *ab initio* no-core configuration interaction (NCCI) approaches — such as the no-core shell model (NCSM) [@navratil2000:12c-ab-initio; @navratil2000:12c-ncsm; @navratil2009:ncsm; @vary2009:ncsm-mfdn-scidac09; @barrett:ncsm], no-core Monte Carlo shell model (MCSM) [@abe2012:fci-mcsm-ncfc], or no-core full configuration (NCFC) [@maris2009:ncfc] methods — the nuclear many-body bound-state eigenproblem is formulated as a matrix diagonalization problem. The Hamiltonian is represented with respect to a basis of antisymmetrized products of single-particle states, generally harmonic oscillator states, and the problem is solved for the full system of $A$ nucleons, *i.e.*, with no inert core. In practice, such calculations must be carried out in a finite space, obtained by truncating the many-body basis according to a maximum allowed number ${{N_\text{max}}}$ of oscillator excitations above the lowest oscillator configuration (*e.g.*, Ref. [@navratil2009:ncsm]). With increasing ${{N_\text{max}}}$, the results converge towards those which would be achieved in the full, infinite-dimensional space for the many-body system. Computational restrictions limit the extent to which converged calculations can be obtained for the observables needed for the identification of collective phenomena. 
In particular, the observables most indicative of rotational collectivity — $E2$ matrix elements — present special challenges for convergence in an NCCI approach [@bogner2008:ncsm-converg-2N; @cockrell2012:li-ncfc], due to their sensitivity to the large-radius asymptotic portions of the nuclear wave function. Nonetheless, some promising suggestions of collective phenomena, *e.g.*, deformation and clustering, have already been obtained in *ab initio* calculations [@wiringa2000:gfmc-a8; @neff2008:clustering-nuclei; @cockrell2012:li-ncfc; @kanadaenyo2012:amd-cluster; @shimizu2012:mcsm; @maris2012:mfdn-ccp11]. In this letter, we observe the emergence of collective rotation in *ab initio* NCCI calculations for the $\isotope{Be}$ isotopes, using the realistic JISP16 nucleon-nucleon interaction [@shirokov2007:nn-jisp16]. Evidence for rotational band structure is found in the calculated excitation energies, quadrupole moments, and $E2$ transition matrix elements. In NCCI calculations of the even-mass $\isotope{Be}$ nuclei, yrast sequences of angular momenta $0$, $2$, $4$, $\ldots$ arise with calculated properties resembling those of $K=0$ ground-state rotational bands (see Ref. [@maris2012:mfdn-hites12] for a preliminary report of comparable results for $\isotope[12]{C}$). However, the most distinctive, well-developed, and systematic rotational band structures are observed in calculations for odd-mass nuclei. Given the same range of excitation energies and angular momenta, the low-lying $\Delta J=1$ bands in the odd-mass nuclei provide a richer set of energy and electromagnetic observables. We therefore focus here on the odd-mass $\isotope{Be}$ isotopes, specifically, with $7\leq A \leq 13$. After a brief review of the properties expected in nuclear rotational structure (Sec. \[sec-rot\]), the results for rotational bands in NCCI calculations of these $\isotope{Be}$ isotopes are presented (Sec. \[sec-results\]). Preliminary results for $\isotope[9]{Be}$ were reported in Ref. [@maris2012:mfdn-ccp11]. Rotation {#sec-rot} ======== We first review nuclear collective rotation and its expected signatures [@rowe2010:collective-motion; @bohr1998:v2]. Under the assumption of adiabatic separation of the rotational degree of freedom, a nuclear state may be described in terms of an *intrinsic state*, as viewed in the non-inertial intrinsic frame, together with the rotational motion of this intrinsic frame. For axially symmetric structure, in particular, the intrinsic state $\tket{\phi_K}$ is characterized by definite angular momentum projection $K$ along the intrinsic symmetry axis. The full nuclear state $\tket{\psi_{JKM}}$, with total angular momentum $J$ and projection $M$, then has the form $$\begin{gathered} \label{eqn-psi} \tket{\psi_{JKM}}=\Bigl[\frac{2J+1}{16\pi^2(1+\delta_{K0})}\Bigr]^{1/2} \int d\vartheta\,\bigl[{{\mathscr{D}}}^J_{MK}(\vartheta)\tket{\phi_K;\vartheta} {\\} +(-)^{J+K}{{\mathscr{D}}}^J_{M-K}(\vartheta)\tket{\phi_{\bar{K}};\vartheta}\bigr],\end{gathered}$$ where $\vartheta$ represents the Euler angles for rotation of the intrinsic state, and $\tket{\phi_{\bar{K}}}$ is the ${{\mathscr{R}}}_2$-conjugate intrinsic state, which has angular momentum projection $-K$ along the symmetry axis. The most recognizable features in the spectroscopy of rotational states reside not in the states taken individually but in the relationships among the different states $\tket{\psi_{JKM}}$ sharing the same intrinsic state $\tket{\phi_K}$. 
These states constitute members of a rotational band, with angular momenta $J=K$, $K+1$, $\ldots$, except with only even $J$ (or only odd $J$, depending upon the intrinsic ${{\mathscr{R}}}_2$ symmetry) for $K=0$ bands. Within a rotational band, energies follow the pattern $$\label{eqn-EJ} E(J)=E_0+AJ(J+1),$$ where, in terms of the moment of inertia $\cal{J}$ about an axis perpendicular to the symmetry axis, $A\equiv\hbar^2/(2\cal{J})$. For $K=1/2$ bands, the Coriolis contribution to the kinetic energy results in an energy staggering given by $$\label{eqn-EJ-stagger} E(J)=E_0+A\bigl[J(J+1)+a(-)^{J+1/2}(J+\tfrac12)\bigr],$$ where $a$ is the Coriolis decoupling parameter. Reduced matrix elements $\trme{\psi_{J_fK}}{Q_2}{\psi_{J_iK}}$ of the electric quadrupole operator $Q_2$ between states within a band are entirely determined by the rotational structure, except for the overall normalization, which is proportional to the intrinsic quadrupole moment $eQ_0\equiv(16\pi/5)^{1/2}\tme{\phi_K}{Q_{2,0}}{\phi_K}$. In particular, quadrupole moments within a band are obtained as $$\label{eqn-Q} Q(J)=\frac{3K^2-J(J+1)}{(J+1)(2J+3)}Q_0,$$ and reduced transition probabilities as $$\label{eqn-BE2} B(E2;J_i\rightarrow J_f)=\frac{5}{16\pi} \tcg{J_i}{K}{2}{0}{J_f}{K}^2 (eQ_0)^2.$$ In obtaining these results, $Q_2$ can be taken to be any operator of the form $Q_{2\mu}=\sum_{i=1}^Ae_i r_{i}^2Y_{2\mu}(\uvec{r}_{i})$ and may therefore represent the electromagnetic $E2$ transition operator, mass quadrupole tensor, neutron quadrupole tensor, *etc.*, depending upon the choice of coefficients $e_i$ (see Sec. \[sec-results\]). Results {#sec-results} ======= An NCCI calculation is defined by the interaction for the nuclear system and by the truncated many-body space in which the calculation is carried out. The present calculations use the JISP16 interaction [@shirokov2007:nn-jisp16], which is a two-body interaction derived from neutron-proton scattering data and adjusted via a phase-shift equivalent transformation to describe light nuclei without explicit three-body interactions. The bare interaction is used, without renormalization to the truncated space [@maris2009:ncfc]. The Coulomb interaction has been omitted from the Hamiltonian, to ensure exact conservation of isospin, thereby simplifying the spectrum. (The primary effect of the Coulomb interaction, when included, is to induce a shift in the overall binding energies, which is irrelevant to identification of rotational band structure.) These calculations are carried out for oscillator truncations ranging from ${{N_\text{max}}}=10$ for $\isotope[7]{Be}$ to ${{N_\text{max}}}=7$ for $\isotope[13]{Be}$, with basis oscillator $\hbar\Omega$ parameters near the variational minimum ($\hbar\Omega=20\text{--}22.5\,{{\mathrm{MeV}}}$). The proton-neutron $M$-scheme code MFDn [@sternberg2008:ncsm-mfdn-sc08; @maris2010:ncsm-mfdn-iccs10; @aktulga2012:mfdn-ep2012] has been used for the many-body calculations. The calculated excitation energies for low-lying states of the odd-mass $\isotope{Be}$ isotopes (with $7\leq A \leq 13$) are shown in Figs. \[fig-Ex-odd-nat\] and \[fig-Ex-odd-un\]. For each nucleus, there are two parity spaces to consider, shown separately in these two figures (energies are taken relative to the lowest state of the same parity). We refer to the parity of the lowest allowed oscillator configuration (negative for $\isotope[7,9,11]{Be}$, positive for $\isotope[13]{Be}$) as the *natural parity* (Fig. 
\[fig-Ex-odd-nat\]) and that obtained by promoting one nucleon by one shell as the *unnatural parity* (Fig. \[fig-Ex-odd-un\]). The NCCI bases for these spaces consist of states with even and odd numbers of oscillator excitations, respectively, above the lowest configuration. While the lowest unnatural parity states normally lie at significantly higher energy than those of natural parity, they are calculated to lie within a few ${{\mathrm{MeV}}}$ of the lowest natural parity states in the isotopes $\isotope[9,11,13]{Be}$ [@maris2012:mfdn-ccp11], which are therefore included in Fig. \[fig-Ex-odd-un\]. Note that parity inversion arises for $\isotope[11]{Be}$, *i.e.*, the ground state is experimentally [@npa1990:011-012] in the unnatural parity space, and both spaces are near-degenerate in calculations at finite ${{N_\text{max}}}$ (see Ref. [@navratil2009:ncsm]). The minimal isospin ($T=T_z$) spectrum is shown in each case. To facilitate identification of rotational bands, it is helpful to plot the calculated excitation energies with respect to $J(J+1)$, so that energies within an ideal rotational band would lie on a straight line — or, for $K=1/2$ bands, staggered about a straight line. For the candidate $K=1/2$ bands in Figs. \[fig-Ex-odd-nat\] and \[fig-Ex-odd-un\], an energy fit is obtained by adjusting the parameters of (\[eqn-EJ-stagger\]) to the first three bandmembers (the remainder of the line is thus an extrapolation). For the remaining bands, a straight line fit of (\[eqn-EJ\]) to all bandmembers is shown. The yrast and near-yrast states yield the most immediately recognizable sets of candidate bandmembers. Yrast rotational bands (with bandmembers indicated by solid black squares in Figs. \[fig-Ex-odd-nat\] and \[fig-Ex-odd-un\]) are found with $K=1/2$ except in the natural parity space of $\isotope[9]{Be}$ \[Fig. \[fig-Ex-odd-nat\](b)\], for which the yrast band has $K=3/2$. The density of states rapidly increases off the yrast line, hindering identification of candidate bands and furthermore suggesting that the rotational states may be fragmented by mixing with nearby states. Nonetheless, several excited candidate bands (indicated by solid red squares) can also be clearly identified, with $1/2\leq K \leq 5/2$, once transition strengths have been taken into account. For the yrast $K=1/2$ bands, as a result of Coriolis decoupling, it should be noted that alternate bandmembers are raised in energy into a region of higher density of states, which complicates identification and is conducive to fragmentation. The energy staggering in the calculated yrast band of the $\isotope[7]{Be}$ natural parity space \[Fig. \[fig-Ex-odd-nat\](a)\] — in which the $J=3/2$, $7/2$, $\ldots$ levels are lowered, and the $J=1/2$, $5/2$, $\ldots$ levels are raised — corresponds to a negative value of the decoupling parameter. Note that the staggering is sufficiently pronounced that the two lowest-$J$ bandmembers are inverted, as experimentally observed for this nucleus [@npa2002:005-007]. Then, positive values of the decoupling parameter are instead obtained for the remaining $K=1/2$ bands. It is interesting to compare these *ab initio* results for the yrast bands in the natural parity spaces (Fig. \[fig-Ex-odd-nat\]) with the Nilsson model predictions [@nilsson1955:model; @mottelson1959:nilsson-model]. 
Specifically, the calculated yrast bands have $K=1/2$ ($a\approx-1.4$) for $\isotope[7]{Be}$, $K=3/2$ for $\isotope[9]{Be}$, $K=1/2$ ($a\approx+1.2$) for $\isotope[11]{Be}$, and $K=1/2$ ($a\approx+3.1$) for $\isotope[13]{Be}$. The expected Nilsson $[Nn_z\Lambda\Omega]$ asymptotic quantum number assignments (see Fig. 5-1 of Ref. [@bohr1998:v2]) and corresponding Nilsson values of the decoupling parameter are $[110\tfrac12]$ ($a\approx-1$) for $\isotope[7]{Be}$, $[101\tfrac32]$ for $\isotope[9]{Be}$, $[101\tfrac12]$ ($a\approx0$) for $\isotope[11]{Be}$, and $[220\tfrac12]$ ($a\approx+1$) for $\isotope[13]{Be}$. We see consistency not only in the $K$ ($=\Omega$) quantum numbers for the band but also in the *qualitative* trend of the decoupling parameters for these bands. (The Nilsson values for $a$ [@nilsson1955:model; @mottelson1959:nilsson-model] consider mixing of spherical orbitals only within a single spherical oscillator shell, which is sufficient for a weakly-deformed oscillator-like mean field. However, they should not be expected to provide *quantitative* accuracy for a nucleon in, say, the mean field produced by a double-$\alpha$ $\isotope[8]{Be}$ core.) The quadrupole moments for all states within the candidate bands are shown in Figs. \[fig-Q-odd-nat\] and \[fig-Q-odd-un\], both for the yrast bands (black squares) and for the excited bands (red diamonds). The values are normalized to $Q_0$, to facilitate comparison with the rotational predictions for $Q(J)/Q_0$ from (\[eqn-Q\]) (shown as curves in each plot). The value of $Q_0$ used for normalization has in each case been obtained simply from the quadrupole moment of the lowest-energy bandmember of nonvanishing quadrupole moment. (Thus, for $K=1/2$ bands, since the quadrupole moment of the $J=1/2$ bandhead vanishes identically, either the $J=3/2$ or $5/2$ bandmember is used for normalization, according to the staggering.) Quadrupole moments in Figs. \[fig-Q-odd-nat\] and \[fig-Q-odd-un\] are calculated using both the proton (filled symbols) and neutron (open symbols) quadrupole tensors.[^1] Finally, in-band transition strengths are shown in Fig. \[fig-BE2-odd\], again as obtained for both proton (solid symbols) and neutron (open symbols) quadrupole operators, and for $\Delta J=2$ transitions (upper curves, solid) and $\Delta J=1$ transitions (lower curves, dashed). The various $K=1/2$ bands are superposed in Fig. \[fig-BE2-odd\](a), and the $K=3/2$ bands are shown in Fig. \[fig-BE2-odd\](b). Transition strengths are normalized as $B(E2;J\rightarrow J-\Delta J)/(eQ_0)^2$, for comparison with the rotational values from (\[eqn-BE2\]). The same $Q_0$ values are used as in Figs. \[fig-Q-odd-nat\] and \[fig-Q-odd-un\], *i.e.*, obtained from $Q(3/2)$ or $Q(5/2)$. Therefore, no free normalization parameters remain for the $B(E2)$ strengths in Fig. \[fig-BE2-odd\]. For instance, it may be observed that the values for $B(E2;3/2\rightarrow 1/2)/(eQ_0)^2$ in Fig. \[fig-BE2-odd\](a) cluster at the rotational value, indicating that the calculated $B(E2;3/2\rightarrow 1/2)$ strengths are in the proper relation to the calculated $Q(3/2)$ moment or $Q(5/2)$ moment, as appropriate, consistent with adiabatic rotation. The level of resemblance between the calculated energies, quadrupole moments, and transition strengths for the candidate bands and the expected rotational values in Figs. 
\[fig-Ex-odd-nat\]–\[fig-BE2-odd\], while clearly not perfect, indicates a remarkably clean separation of rotational and intrinsic degrees of freedom in these *ab initio* NCCI calculations. One should bear in mind that quadrupole moments of *arbitrarily* chosen states in the spectrum fluctuate not only in magnitude but also in sign, and that calculated $E2$ strengths among arbitrarily chosen pairs of states fluctuate by many orders of magnitude. (The $3/2\rightarrow 1/2$ transitions in Fig. \[fig-BE2-odd\] are enhanced by factors of $\sim1.1$–$17$ relative to the typical Weisskopf single-particle estimate [@weisskopf1951:estimate].) It is worth highlighting a few notable features from the band structures in Figs. \[fig-Ex-odd-nat\]–\[fig-BE2-odd\]: \(1) The $K=1/2$ yrast bands in the unnatural parity spaces (Fig. \[fig-Ex-odd-un\]) can be traced to $J$ values as high as $\sim13/2$. For instance, for $\isotope[13]{Be}$ \[Fig. \[fig-Ex-odd-un\](c)\], the energies of the $J=7/2$, $9/2$, $11/2$, and $13/2$ bandmembers all agree with the rotational values, from (\[eqn-EJ-stagger\]), to within $0.4\,{{\mathrm{MeV}}}$, and a $J=15/2$ bandmember can also be reasonably identified (within $1.5\,{{\mathrm{MeV}}}$ of the rotational energy). The quadrupole moments \[Fig. \[fig-Q-odd-un\](c)\] of the $J=3/2$ and $J=5/2$ bandmembers (the latter is used to determine $Q_0$ in the figure) are in the expected rotational ratio, from (\[eqn-Q\]), to within $1.1\%$ for protons or $0.4\%$ for neutrons. The quadrupole moments for the higher bandmembers are highly consistent between protons and neutrons and have the expected sign, but they gradually fall off from the rotational values, approaching zero for the $J=15/2$ bandmember. \(2) For the yrast and low-lying rotational bands in the natural parity spaces (Fig. \[fig-Ex-odd-nat\]), rotational behavior appears to terminate at generally lower angular momentum. For instance, for $\isotope[11]{Be}$ \[Fig. \[fig-Ex-odd-nat\](c)\], the $K=1/2$ yrast band terminates at $J=7/2$ on the basis of energies: the lowest calculated $J=7/2$ state lies within $0.5\,{{\mathrm{MeV}}}$ of the expected energy extrapolated for an yrast bandmember, but the lowest calculated $J=9/2$ state is $11\,{{\mathrm{MeV}}}$ too high in energy to be an yrast bandmember. The terminating angular momentum expected in a simple valence $p$-shell or NCCI ${{N_\text{max}}}=0$ description is, in fact, $J=7/2$. The quadrupole moments \[Fig. \[fig-Q-odd-nat\](c)\] suggest that the viability of a rotational description may end even earlier, at $J=5/2$. Similar comments may be made about the yrast and excited bands in $\isotope[7]{Be}$ and $\isotope[9]{Be}$ \[Fig. \[fig-Ex-odd-nat\](a,b)\], where the quadrupole moments \[Fig. \[fig-Q-odd-nat\](a,b)\] are in close agreement with rotational values through $J=7/2$, but then begin to deviate significantly at $J=9/2$. \(3) To some extent in the quadrupole moments, but especially in the $\Delta J=2$ transition strengths for the $K=1/2$ bands \[Fig. \[fig-BE2-odd\](a)\], one may observe that the $E2$ matrix element strengths start at the expected rotational values for low $J$ but then systematically fall off below the rotational values at higher $J$. This trend signals deviation from a strict *adiabatic* rotational picture, as described in Sec. \[sec-rot\], but it is also, at least qualitatively, in agreement with more microscopic treatments of nuclear rotation. 
Specifically, $E2$ matrix elements within an Elliott $\grpsu{3}$ band decline in strength as band termination is approached (see discussion in Ref. [@harvey1968:su3-shell]). A similar falloff can be obtained in $\grpsptr$ symplectic calculations [@draayer1984:spsm-20ne] of rotational bands (see Fig. 6 of Ref. [@rowe1985:micro-collective-sp6r]). Whether or not such $\grpsu{3}$ or $\grpsptr$ rotational mechanisms are at play in the present NCCI results awaits full analysis in an $\grpsu{3}$/$\grpsptr$ symmetry-adapted implementation of the NCCI approach [@dytrych2008:sp-ncsm]. As we explore the interpretation of NCCI results in a rotational context, it is interesting to note that straightforward fragmentation of the rotational strength over two calculated levels can be observed for the $J=7/2$ member of the excited band in the natural parity space of $\isotope[11]{Be}$ \[Fig. \[fig-Ex-odd-nat\](c)\]. Transitions into and out of this state are fragmented in the approximate proportion $0.4:0.6$. However, the summed strengths, shown in Fig. \[fig-BE2-odd\](b), which combine the fragmented transitions involving this level, are in near-perfect agreement with rotational values. We also note that a staggering may be observed in the $\Delta J=2$ transition strengths, to a greater or lesser degree, for the various $K=1/2$ bands \[Fig. \[fig-BE2-odd\](a)\]. Such staggering is in fact consistent with the adiabatic rotational picture, once the $\tme{\phi_K}{Q_{2,2K}}{\phi_{\bar{K}}}$ cross term in the rotational $E2$ matrix element, neglected in (\[eqn-BE2\]), is taken into account \[see (6.38) of Ref. [@rowe2010:collective-motion]\]. In well-deformed rotor nuclei, this contribution is commonly ignored, on the presumption that $\tme{\phi_K}{Q_{2,0}}{\phi_K}\sim Q_0$ is strongly enhanced while $\tme{\phi_K}{Q_{2,2K}}{\phi_{\bar{K}}}$ is of single-particle strength [@rowe2010:collective-motion]. However, in light nuclei, where the collective enhancement is weaker, such a single-particle contribution may be expected to be nonnegligible in comparison, and to be of approximately the magnitude seen in Fig. \[fig-BE2-odd\](a). Conclusion {#sec-concl} ========== The principal challenge in identifying collective structure in NCCI calculations with realistic interactions lies in the weak convergence of the relevant observables. Eigenvalues and other calculated observables are dependent upon both the truncation ${{N_\text{max}}}$ and the oscillator length parameter (or $\hbar\Omega$) for the NCCI basis. Although it is possible to extrapolate the values of calculated observables to their values in the full, infinite space [@bogner2008:ncsm-converg-2N; @maris2009:ncfc; @cockrell2012:li-ncfc; @furnstahl2012:ho-extrapolation; @coon2012:nscm-ho-regulator], such methods are still in their formative stages, especially for the crucial $E2$ observables. It is therefore particularly notable that quantitatively well-developed and robust signatures of rotation may be observed in the present results. That this is possible reflects the distinction between convergence of *individual* observables, taken singly, and convergence of *relative* properties, such as ratios of excitation energies or ratios of quadrupole matrix elements. It is these latter relative properties which are essential to identifying rotational dynamics and which are found to be sufficiently converged to yield stable rotational patterns at currently achievable ${{N_\text{max}}}$ truncations, as illustrated in Fig. 
\[fig-Q-Nmax-9Be\] for the $K=3/2$ ground-state band of $\isotope[9]{Be}$. From the results of Sec. \[sec-results\], it is seen that rotational structure is pervasive in *ab initio* NCCI calculations of light nuclei, occurring in the yrast and near-yrast regions of all the spectra considered for the $\isotope{Be}$ isotopic chain. With suitable extrapolation methods in place, a salient test of *ab initio* calculations and interactions will then be quantitative prediction of collective rotational parameters, such as the intrinsic quadrupole moment, for direct comparison with experiment. One may observe that the present discussion represents a phenomenological rotational analysis, in the traditional experimental sense, but of a large set of observables taken from *ab initio* calculations of nuclei. Having full access to the calculated wavefunctions, we may also hope to extract information on the collective structure of the nuclear eigenstates from other measures of the wave function correlations, such as density distributions [@cockrell2012:li-ncfc] and symmetry decompositions [@dytrych2008:sp-ncsm-deformation]. Natural questions include the origin of rotation in the $\isotope{Be}$ isotopes — for instance, the extent to which it might arise from relative motion of alpha clusters or perhaps from $\grpsu{3}$ rotation in an extended multi-shell valence space — and the relevance of some form of Nilsson-like strong-coupling picture for rotation in *ab initio* calculations of odd-mass light nuclei. Indeed, the proton and neutron density distributions found in Ref. [@maris2012:mfdn-ccp11] for the ground state of $\isotope[9]{Be}$ suggest the emergence of two alpha clusters, with the additional neutron in a $\pi$ orbital. Acknowledgements {#acknowledgements .unnumbered} ================ Discussions with A. O. Macchiavelli and P. Fallon are gratefully acknowledged. This work was supported by the Research Corporation for Science Advancement through the Cottrell Scholar program, by the US Department of Energy under Grants No. DE-FG02-95ER-40934, DE-FC02-09ER41582 (SciDAC/UNEDF), DESC0008485 (SciDAC/NUCLEI), and DE-FG02-87ER40371, and by the US National Science Foundation under Grant No. 0904782. Computational resources were provided by the National Energy Research Supercomputer Center (NERSC), which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. References {#references .unnumbered} ========== [^1]: The proton quadrupole tensor, defined as $Q_{2\mu,p}=\sum_{i=1}^Zr_{p,i}^2Y_{2\mu}(\uvec{r}_{p,i})$, is the operator used in calculation of the physically observable electromagnetic moments and transitions. However, the rotational relations (\[eqn-Q\]) and (\[eqn-BE2\]) are equally applicable to matrix elements of the neutron quadrupole tensor, $Q_{2\mu,n}=\sum_{i=1}^Nr_{n,i}^2Y_{2\mu}(\uvec{r}_{n,i})$. These therefore provide a valuable complementary set of observables for purposes of investigating whether or not the nuclear wave functions satisfy the conditions of adiabatic rotational separation, particularly relevant, due to the high neutron-proton asymmetry, in the neutron-rich $\isotope{Be}$ isotopes.
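As an illustrative aside (a sketch of ours, not part of the above analysis), the rotational reference values used in the figures follow directly from (\[eqn-EJ-stagger\]) and (\[eqn-Q\]); the snippet below evaluates them for a $K=1/2$ band with arbitrary illustrative parameters $A$, $a$ and $E_0$.

```python
# Sketch: rotational-model reference values for a K = 1/2 band (illustrative A, a, E0).
import numpy as np

A, a, E0, K = 0.5, 1.2, 0.0, 0.5           # arbitrary illustrative values

for J in np.arange(0.5, 6.0, 1.0):          # J = 1/2, 3/2, ..., 11/2
    E = E0 + A * (J * (J + 1) + a * (-1) ** int(J + 0.5) * (J + 0.5))   # (eqn-EJ-stagger)
    Q_over_Q0 = (3 * K ** 2 - J * (J + 1)) / ((J + 1) * (2 * J + 3))    # (eqn-Q)
    print(f"J = {J:4.1f}   E = {E:6.2f}   Q/Q0 = {Q_over_Q0:+.3f}")
```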
--- author: - | B. Darquié, M.P.A. Jones, J. Dingjan, J. Beugnon, S. Bergamini,\ Y. Sortais, G. Messin, A. Browaeys$^{\ast}$, P. Grangier\ \ \ \ \ title: 'Controlled Single-Photon Emission from a Single Trapped Two-Level Atom' --- By illuminating an individual rubidium atom stored in a tight optical tweezer with short resonant light pulses, we create an efficient triggered source of single photons with a well-defined polarization. The measured intensity correlation of the emitted light pulses exhibits almost perfect antibunching. Such a source of high rate, fully controlled single photon pulses has many potential applications for quantum information processing. Implementing a deterministic or conditional two qubit quantum gate is a key step towards quantum computation. Deterministic gates generally require a strong interaction between the particles that are used to carry the physical qubits [@Zoller]. Recently, controlled-not gates have been realized using trapped ions and incorporated in elaborate quantum algorithms [@teleportation_wineland; @teleportation_blatt; @code_correcteur_wineland]. So far, individually addressed two qubit gates have not been demonstrated with neutral atoms. Promising results have been obtained on entangling neutral atoms using cold controlled collisions in an optical lattice [@bloch], but the single qubit operations are difficult to perform in such a system. Another approach is to bypass the requirement for a direct interaction between the qubits, and use instead an interference effect and a measurement-induced state projection to create the desired operation [@KLM; @Dowling]. An interesting recent development of this idea is to use photon detection events for creating entangled states of two atoms [@protsenko02b; @simon; @duan03]. This provides “conditional" quantum gates, where the success of the logical operation is heralded by appropriate detection events. These schemes can be extended to realize a full controlled-not gate, or a Bell-state measurement, or more generally to implement conditional unitary operations [@protsenko02b; @Beige]. They could be implemented by using, for instance, trapped ions [@Blinov], or atoms in microscopic dipole traps. These proposals require the controlled emission of indistinguishable single photons by at least two identical emitters. Various single photon sources have been implemented using solid-state systems as well as atoms or ions. Solid-state systems such as single molecules, nitrogen-vacancy centers in diamond or quantum dots allow high single photon rates [@source_photon_unique]. However, realizing truly identical sources is a major problem for such systems, due to inhomogeneities in both the environment of the emitters, and the emitters themselves. Another approach is provided by sources based on neutral atoms [@rempe2002; @kimble2004] or ions [@walther2004] strongly coupled to a mode of a high-finesse optical cavity. Such sources are spectrally narrow, and the photons are emitted into a well-defined spatial mode, thus opening the way to coherent coupling of the quantum state of single atoms and single photons. However, the rate at which the system can emit photons is limited by the cavity and is often low in practice. Moreover, the need to achieve the strong coupling regime of cavity quantum electrodynamics remains a demanding experimental requirement. We present a triggered single-photon source based on a single rubidium atom trapped at the focal point of a high-numerical-aperture lens (N.A. = $0.7$). 
We also show that we have full control of the optical transition by observing Rabi oscillations. Under these conditions our system is equivalent to the textbook model formed by a two-level atom driven by monochromatic light pulses. Previous work has shown that by using holographic techniques one can create arrays of dipole traps, each containing a single atom, which can be addressed individually [@bergamini]. The work presented here can therefore be directly scaled to two or more identical emitters. We trap the single rubidium 87 atom at the focus of the lens using a far-detuned optical dipole trap (810 nm), loaded from an optical molasses. The same lens is also used to collect the fluorescence emitted by the atom (Fig. 1). The experimental apparatus is described in more detail in references [@Schlosser01; @Schlosser02]. A crucial feature of our experiment is the existence of a “collisional blockade" mechanism [@Schlosser02] which allows only one atom at a time to be stored in the trap: if a second atom enters the trap, both are immediately ejected. In this regime the atom statistics are sub-Poissonian and the trap contains either one or zero (and never two) atoms, with an average atom number of $0.5$. The trapped atom is excited with $4$ ns pulses of laser light, resonant with the $S_{1/2},\, F=2 \rightarrow P_{3/2},\, F^{\prime} = 3$ transition at $780.2$ nm. The laser pulses are generated by frequency doubling pulses at 1560 nm, generated by using an electro-optic modulator to chop the output of a continuous-wave diode laser. A fiber amplifier is used to boost the peak power of the pulses prior to the doubling crystal. The repetition rate of the source is $5$ MHz. Fluorescence photons are produced by spontaneous emission from the upper state, which has a lifetime of $26$ ns. The pulsed laser beam is $\sigma^+$-polarized with respect to the quantization axis defined by a magnetic field applied during the excitation. The trapped atom is optically pumped into the $F=2,\, m_F =+2$ ground state by the first few laser pulses. It then cycles on the $F=2,\, m_F=+2 \rightarrow F^{\prime}=3,\, m_F^{\prime}=+3$ transition, which forms a closed two-level system emitting $\sigma^+$-polarized photons. Impurities in the polarization of the pulsed laser beam with respect to the quantization axis, together with the large bandwidth of the exciting pulse ($250$ MHz), result in off-resonant excitation to the $F'=2$ upper state, leading to possible de-excitation to the $F=1$ ground state. To counteract this, we add a repumping laser resonant with the $F=1 \rightarrow F' = 2$ transition. We check that our two-level description is still valid in the presence of the repumper by analyzing the polarization of the emitted single photons (see supporting online text for further details). The overall detection and collection efficiency for the light emitted from the atom is measured to be $0.60 \pm 0.04\%$. This is obtained by measuring the fluorescence rate of the atom for the same atomic transition driven by a continuous-wave probe beam, and confirmed by a direct measurement of the transmission of our detection system (see supporting online text). For a two-level atom and exactly resonant square light pulses of fixed duration $T$, the probability for an atom in the ground state to be transferred to the excited state is $\sin^2(\Omega T/2)$, the Rabi frequency $\Omega$ being proportional to the square root of the power. 
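The following rough sketch (ours, not the authors' model) illustrates this relation: it evaluates $\sin^2(\Omega T/2)$ for a $4$ ns pulse with an arbitrary assumed proportionality constant between $\Omega$ and the square root of the power, neglects spontaneous emission during the pulse, and also averages over roughly $10\%$ peak-power fluctuations of the kind discussed below.

```python
# Rough sketch: two-level excitation probability after a resonant 4 ns square pulse,
# with Omega proportional to sqrt(power) (arbitrary constant) and ~10% power jitter.
import numpy as np

T = 4e-9                                        # pulse duration (s)
power = np.linspace(0.0, 25.0, 400)             # arbitrary units
omega = 2 * np.pi * 60e6 * np.sqrt(power)       # assumed proportionality constant (rad/s)

p_ideal = np.sin(omega * T / 2) ** 2            # textbook sin^2(Omega*T/2) result

rng = np.random.default_rng(0)
noise = np.clip(rng.normal(1.0, 0.10, size=(5000, 1)), 0.0, None)   # ~10% power jitter
p_noisy = np.mean(np.sin(omega * np.sqrt(noise) * T / 2) ** 2, axis=0)
# Plotting p_ideal and p_noisy against power mimics the contrast reduction seen in Fig. 2.
```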
Therefore the excited state population and hence the fluorescence rate oscillates as the intensity is increased. To observe these Rabi oscillations, we illuminate the trapped atom with the laser pulses during $1$ ms. We keep the length of each laser pulse fixed at $4$ ns, with a repetition rate of $5$ MHz, and measure the total fluorescence rate as a function of the laser power. The Rabi oscillations are clearly visible on our results (see Fig. 2). From the height of the first peak and the calibrated detection efficiency measured previously, we derive a maximum excitation efficiency per pulse of $95\pm 5\%$. The reduction in the contrast of the oscillations at high laser power is mostly due to fluctuations of the pulsed laser peak power. This is shown by the theoretical curve in Fig. 2, based on a simple two-level model. This model shows that the $10\%$ relative intensity fluctuations that we measured on the laser beam are enough to smear out the oscillations as observed. The behavior of the atom in the time domain can be studied by using time resolved photon counting techniques to record the arrival times of the detected photons following the excitation pulses, thus constructing a time spectrum. By adjusting the laser pulse intensity, we observe an adjustable number of Rabi oscillations during the duration of the pulse, followed by the free decay of the atom once the laser has been turned off. The effect of pulses close to $\pi$, $2\pi$ and $3\pi$ are displayed as inserts on Fig. 2, and show the quality of the coherent control achieved on a single atom. In order to use this system as a single photon source, the laser power is set to realize a $\pi$ pulse. To maximize the number of single photons emitted before the atom is heated out of the trap, we use the following sequence. First, the presence of an atom in the dipole trap is detected in real-time using its fluorescence from the molasses light. Then, the magnetic field is switched on and we trigger an experimental sequence that alternates $115\,\mu$s periods of pulsed excitation with $885\,\mu$s periods of cooling by the molasses light (Fig. 3). The repumping laser remains on throughout, and the trap lifetime during the sequence is measured to be $34$ ms. After $100$ excitation/cooling cycles, the magnetic field is switched off and the molasses is turned back on, until a new atom is recaptured and the process begins again. On average, three atoms are captured per second under these conditions. The average count rate during the excitation is $9600$ s$^{-1}$, with a peak rate of $29000$ s$^{-1}$ (corresponding to twice the first peak in Fig. 3) To characterize the statistics of the emitted light, we measure the second order temporal correlation function, using a Hanbury Brown and Twiss type set-up. This is done using the beam splitter in the imaging system (Fig. 1), which sends the fluorescence light to two photon-counting avalanche photodiodes that are connected to a high-resolution time-to-digital conversion counting card in a start-stop configuration (resolution of about 1 ns). The card is gated so that only photons scattered during the $115\,\mu$s periods of pulsed excitation are counted, and the number of coincidence events is measured as a function of delay. The histogram obtained after $4$ hours of continuous operation is displayed in Fig. 4, and shows a series of spikes separated by the period of the excitation pulses ($200$ ns). The $1/e$ half width of the peaks is $27 \pm 3$ ns, in agreement with the lifetime of the upper state. 
No background correction is done on the displayed data. The small flat background is attributed to coincidences between a fluorescence photon, and an event coming either from stray laser light (about $175$ counts/sec), or dark counts of the avalanche photodiodes (about $150$ counts/sec). When these events are corrected for, the integrated residual area around zero delay is $3.4\%\, \pm\, 1.2\%$ of the area of the other peaks. We calculate [@calculs] that under our experimental conditions, the probability to emit exactly one photon per pulse is $0.981$ whereas the probability to emit two photons is $0.019$. These two-photon events would show up in the correlation curve as coincidences close to zero delay (still with no coincidences at exactly zero delay). From our calculation, the value for the ratio of the area around zero delay compared to the others is $3.7\%$, in excellent agreement with the experimental results. Finally, we discuss the coherence properties of the emitted photons, necessary for entanglement protocols based on the interference between two emitted photons, either from the same atom or from different atoms. As our collection optics are diffraction-limited, the outgoing photons should be in a single spatial mode of the electromagnetic field. As far as temporal coherence is concerned, the main limiting factor appears to be the motion of the atom in the trap, which can be controlled by optimized cooling sequences. We then anticipate that our source should be Fourier-limited by the lifetime of the excited state. We are now working to characterize the coherence of our single-photon source, and to use it to observe multiple atom interference effects. [30]{} P. Zoller, J.I. Cirac, L. Duan, J.J. García-Ripoll, in [*Les Houches 2003: Quantum entanglement and information processing*]{}, D. Est[è]{}ve, J.-M. Raimond, J. Dalibard, Eds. (Elsevier, Amsterdam, 2004), pp 187-222. M. Riebe [*et al.*]{}, [*Nature*]{} [**429**]{} 734 (2004) M.D. Barrett [*et al.*]{}, [*Nature*]{} [**429**]{}, 737 (2004). J. Chiaverini [*et al.*]{}, [*Nature*]{} [**432**]{}, 602 (2004). O. Mandel [*et al.*]{}, [*Nature*]{} [**425**]{}, 937 (2003). E. Knill, R. Laflamme, G.J. Milburn, [*Nature*]{} [**409**]{}, 46 (2001). J.P. Dowling, J.D. Franson, H. Lee, G.J. Milburn, [*Quantum Information Processing*]{} [**3**]{}, 205 (2004). I. Protsenko, G. Reymond, N. Schlosser, P. Grangier, [*Phys. Rev. A*]{} [**66**]{}, 062306 (2002). C. Simon, W.T.M. Irvine, [*Phys. Rev. Lett.*]{} [**91**]{}, 110405 (2003). L.-M. Duan, H.J. Kimble, [*Phys. Rev. Lett.*]{} [**90**]{}, 253601 (2003). Y.L. Lim, A. Beige, L.C. Kwek, in preparation (available at http://arXiv.org/abs/quant-ph/0408043). B.B. Blinov, D.L. Moehring, L.-M. Duan, C. Monroe, [*Nature*]{} [**428**]{}, 153 (2004). , P. Grangier, B. Sanders, J. Vukovic, Eds. [*New J. Phys.*]{} [**6**]{}, 85 to 100, 129 and 163 (2004). A. Kuhn, M. Hennrich, G. Rempe, [*Phys. Rev. Lett.*]{} [**89**]{}, 067901 (2002). J. McKeever [*et al.*]{}, [*Science*]{} [**303**]{}, 1992 (2004). M. Keller, B. Lange, K. Hayasaka, W. Lange, H. Walther, [*Nature*]{} [**431**]{}, 1075 (2004). S. Bergamini [*et al.*]{}, [*J. Opt. Soc. Am. B*]{} [**21**]{} 1889, (2004). N. Schlosser, G. Reymond, I. Protsenko, P. Grangier, [*Nature*]{} [**411**]{}, 1024 (2001) N. Schlosser, G. Reymond, P. Grangier, [*Phys. Rev. Lett.*]{} [**89**]{}, 023005 (2002). We have performed a full calculation of the second order correlation function using the Heisenberg-Langevin equations, which gives the areas of the peaks. 
We have also calculated the photon emission probabilities using density matrix and Monte Carlo methods, which lead to the same result. We thank Patrick Georges for his assistance in designing the pulsed laser system. This work was supported by the European Union through the IST/FET/QIPC project “QGATES" and the Research Training Network “CONQUEST". Supporting Online Material {#supporting-online-material .unnumbered} =========================== www.sciencemag.org\ Supporting online text Polarization analysis of the emitted single photons {#polarization-analysis-of-the-emitted-single-photons .unnumbered} --------------------------------------------------- As explained in the main text, polarization imperfections lead to a depumping process to the $F=1$ ground state. A repumping laser is used to counteract this process and minimize deviations from the two-level behavior. To check the validity of our two-level description, we have investigated the effect of impurities in the polarization of the pulsed laser beam with respect to the quantization axis. We measure that on average the atom is pumped into the $F=1$ ground state by spontaneous emission after 120 excitations. In the presence of the repumping light, a rate equation model of the repumping process (including the repumping laser as well as the pulsed excitation) shows that the atom spends more than 90% of its time on the cycling $F=2,m_F=+2 \rightarrow F'=3,m_F'=+3$ transition, as desired. In addition, we have measured the polarization of the emitted light, using a polarizer to select fluorescence light either polarized perpendicularly ($\bot$) or parallel ($\|$) to the quantization axis. For a narrow collection angle, $\bot$ would correspond to the circularly polarized photons emitted on the cycling transition, and $\|$ to $\pi$ polarized photons. Here, the measured contrast $(R_{\bot} - R_{\|})/(R_{\bot}+R_{\|})= 72 (\pm 2)\%$, where $R_{\bot}$ and $R_{\|}$ are the count rates perpendicular and parallel to the quantization axis respectively. The largest part of this depolarization is actually due to the very large numerical aperture (N.A. = 0.7) of the collection lens, which decreases to $77\,\%$ the maximum contrast obtainable for purely $\sigma$-polarized fluorescence. From the measured contrast of $72\%$, we calculate that $3\%$ of the collected photons are $\pi$-polarized and we attribute this to photons induced by the depumping-repumping processes. This number is compatible with the results of the rate equation model discussed in the previous paragraph. Collection and detection efficiency {#collection-and-detection-efficiency .unnumbered} ----------------------------------- The overall collection and detection efficiency of $(0.60\pm0.04)\%$ is obtained by measuring the fluorescence rate of the atom as a function of the power of a continuous-wave probe beam. Since the saturated photon emission rate for a closed two-level system is $\Gamma/2$, where $\Gamma$ is the inverse of the natural lifetime, the collection and detection efficiency can be obtained directly from the measured count rate. This value is compatible with that obtained from a direct evaluation of the transmission of our detection system. The transmission of our lens is measured to be $87\%$ and its collection solid angle is $0.15\times 4\pi$ sr. Because the emission pattern for $\sigma^+$-polarized photons is not isotropic, the effective solid angle of collection must be corrected by a factor of $85\%$. 
The transmission of the optical elements in the imaging system is $58\%$. Finally the light passes through a pinhole before illuminating the avalanche photodiode. The largest uncertainty is in the combination of the pinhole transmission and photodiode quantum efficiency, which is estimated to be around 10%. Multiplying all factors gives an overall collection and detection efficiency compatible with the $0.6 \%$ quoted above. [**Figure 1.**]{} Schematic of the experiment. The same lens is used to focus the dipole trap and collect the fluorescence light. The fluorescence is separated by a dichroic mirror and imaged onto two photon counting avalanche photodiodes (APD), placed after a beam-splitter (BS). The insert shows the relevant hyperfine levels and Zeeman sublevels of rubidium $87$. The cycling transition is shown by the arrow. Also shown is the nearby $F'=2$ level responsible for the depumping. [**Figure 2.**]{} Total count rate (squares) as a function of the average power of the pulsed beam, for a fixed pulse length of $4$ ns and a repetition rate of $5$ MHz. The solid line is a theoretical curve using a simple two-level model that includes spontaneous emission and intensity fluctuations. The inserts show the time spectra for the laser intensities corresponding to $\pi$, $2\pi$ and $3\pi$ pulses. [**Figure 3.**]{} Fluorescence signal measured by one of the two photodiodes during the experimental sequence, averaged over $22958$ cycles. Peaks are observed corresponding to the $115\,\mu$s periods of pulsed excitation, separated by periods of lower fluorescence induced by the molasses light during the $885\,\mu$s of cooling. The exponential decay of the signal is due the lifetime of the atom in the trap, which is $34$ ms under these conditions. Insert: A close-up of the signal clearly shows the alternating excitation and cooling periods. [**Figure 4.**]{} Histogram of the time delays in the start-stop experiment. The histogram has been binned $4$ times leading to a $4.7$ ns time resolution. No correction for background has been made. The absence of a peak at zero delay shows that the source is emitting single photons. During the $4$-hour experimental run, $43895$ sequences were completed, which corresponds to a total of $505$ seconds of excitation. A total of $4.83\times 10^6$ photons were detected by the two photodiodes. ![image](spsfig1.eps) ![image](spsfig2.eps) ![image](spsfig3.eps) ![image](spsfig4.eps)
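As a quick arithmetic cross-check of the collection and detection budget quoted in the supporting text above (all factors are the quoted values; the $10\%$ entry is the stated pinhole-transmission times quantum-efficiency estimate), a short sketch of ours:

```python
# Multiply the quoted factors to recover the overall collection/detection efficiency.
factors = {
    "lens transmission": 0.87,
    "collection solid angle (fraction of 4*pi)": 0.15,
    "sigma+ emission-pattern correction": 0.85,
    "imaging-optics transmission": 0.58,
    "pinhole transmission x APD quantum efficiency": 0.10,
}
efficiency = 1.0
for value in factors.values():
    efficiency *= value
print(f"overall efficiency ~ {efficiency:.2%}")   # ~0.64%, compatible with 0.60 +/- 0.04%
```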
{ "pile_set_name": "ArXiv" }
--- abstract: 'An interactive image retrieval system learns which images in the database belong to a user’s query concept, by analyzing the example images and feedback provided by the user. The challenge is to retrieve the relevant images with minimal user interaction. In this work, we propose to solve this problem by posing it as a binary classification task of classifying all images in the database as being relevant or irrelevant to the user’s query concept. Our method combines active learning with graph-based semi-supervised learning (GSSL) to tackle this problem. Active learning reduces the number of user interactions by querying the labels of the most informative points, and GSSL allows the use of abundant unlabeled data along with the limited labeled data provided by the user. To efficiently find the most informative point, we use an uncertainty-sampling-based method that queries the label of the point nearest to the decision boundary of the classifier. We estimate this decision boundary using our adaptive-threshold heuristic. To utilize huge volumes of unlabeled data, we use an efficient approximation-based method that reduces the complexity of GSSL from $O(n^3)$ to $O(n)$, making GSSL scalable. We make the classifier robust to the diversity and noisy labels associated with images in large databases by incorporating information from multiple modalities, such as visual information extracted from deep-learning-based models and semantic information extracted from WordNet. High F1 scores within a few relevance feedback rounds in our experiments with concepts defined on the AnimalWithAttributes and Imagenet (1.2 million images) datasets indicate the effectiveness and scalability of our approach.' author: - Akshay Mehra - Jihun Hamm - Mikhail Belkin bibliography: - 'sample-bibliography.bib' title: 'Fast Interactive Image Retrieval using large-scale unlabeled data' --- CCS Concepts: Computing methodologies → Active learning settings; Computing methodologies → Semi-supervised learning settings. Appendix ======== Names of concepts defined on Imagenet ------------------------------------- The following are the different concepts we defined on the Imagenet dataset for evaluating our method:\ *wheel, makeup, poodle, elephant, shore, hosiery, setter, fox, wolf, bear, free-reed instrument, source of illumination, flower, heron, soft-finned fish, coraciiform bird, bridge, domestic cat, crocodilian reptile, bowl, guitar, piece of cloth, sled dog, thrush, sailboat, seabird, stork, citrus, frozen dessert, piano*.
Names of concepts defined on AWA -------------------------------- The following attributes from the list of 85 attributes were combined to create different concepts on AWA:\ *blue hairless strain teeth, blue tough skin bulbous, blue tough skin flippers, brown hairless hooves, orange spots quadrupedal, white spots flippers, yellow spots claws, white spots small, brown tough skin flippers, black tough skin hands, blue hairless tail, brown hairless flippers, brown spots claws, yellow furry big, white spots claws, yellow spots meat teeth, black hairless small, blue hairless big, white stripes small, black hairless strain teeth, black stripes meat teeth, brown tough skin hands, white stripes paws, orange spots lean, brown tough skin claws, white hairless hooves, orange furry chew teeth, black tough skin strain teeth, brown spots small, black spots lean* Mean Average Precision Curves ----------------------------- A precision/recall curve plots precision and recall for all possible thresholds. The curve decreases regularly between the points of (highest precision, lowest recall) and (lowest precision, highest recall). A slowly decreasing curve is considered ideal. Mean average precision is a value that summarizes the precision/recall curve. This value is equivalent to the area under the precision/recall curve and is independent of the threshold. Here we report the change in average precision scores after adding a single point queried by active learning. Figure \[fig:ap-scores\] contrasts the performance of our active learning method with adaptive threshold against active learning with constant threshold and against not using active learning at all, for the AWA and Imagenet datasets. The comparison suggests that our method of using active learning with adaptive threshold is better than the other methods. Figure \[fig:ap-scores\] compares the performance of our method using only visual features, only semantic features, and the combination of visual and semantic features against an SVM with only visual features. Our method performs well here too. Effect of different step size $\alpha$ -------------------------------------- Figure \[fig:img-ap\] shows the cross-validation results for choosing the step size $\alpha$ in Algorithm 3. This is important since we want a step size that is neither too low nor too high. A low step size will reduce the speed of convergence to the decision boundary of the concept, and a high value will lead to instability in the initial runs. To do cross-validation, we choose 30 concepts different from the ones chosen for evaluation and run our method of adaptive threshold with different values of $\alpha$. Since $\alpha = 2$ gives the best performance, we use this value in our experiments. ![F1 Scores for 30 concepts defined on the AWA dataset. The graph shows the effect of different step sizes.[]{data-label="fig:img-ap"}](AWA_step.pdf){width="35.00000%" height="0.20\textheight"} F1 scores for individual concepts --------------------------------- In this section we show the F1 scores for individual concepts. We can see that our method performs well on all the different concepts. The roughness in the curves for Imagenet is attributed to the fact that the $f^*$ values change quite a lot when we try to estimate the ranking of a million points based on just a few points.
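The adaptive threshold tuned with the step size $\alpha$ above defines the current estimate of the decision boundary, and the query-selection step described in the abstract then reduces to picking the unlabeled image whose GSSL score $f^*$ lies closest to that threshold. The snippet below is a minimal, generic sketch of this uncertainty-sampling step; the function and variable names are illustrative, and the adaptive-threshold update of Algorithm 3 itself is not reproduced here.

```python
import numpy as np

def query_most_uncertain(scores, threshold, labeled_idx):
    """Pick the unlabeled point whose score f* is closest to the current
    decision threshold (uncertainty sampling); a generic sketch, not the
    paper's exact Algorithm 3."""
    candidates = np.setdiff1d(np.arange(len(scores)), labeled_idx)
    distance_to_boundary = np.abs(scores[candidates] - threshold)
    return candidates[np.argmin(distance_to_boundary)]

# toy usage: ten images, current threshold at 0.0, two images already labeled
scores = np.array([0.9, -0.8, 0.05, 0.4, -0.02, 0.7, -0.5, 0.3, -0.1, 0.6])
print(query_most_uncertain(scores, threshold=0.0, labeled_idx=[0, 1]))  # -> 4
```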
(Per-concept F1-score plot grids for the 30 concepts on each dataset appear here in the original layout.) Average Precision scores for individual concepts ------------------------------------------------ In this section we show the Average Precision values for individual concepts. We can see that our method with the adaptive threshold performs well on all the different concepts compared with the constant threshold and with the approach that does not use active learning at all. (Per-concept Average Precision plot grids for the 30 concepts on each dataset appear here in the original layout.)
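For completeness, the threshold-independent average precision reported in the two preceding subsections can be computed directly from a ranking of the database by classifier score. The sketch below is the standard computation of this quantity and is not taken from the paper's code; the toy scores and labels are illustrative.

```python
import numpy as np

def average_precision(scores, labels):
    """Average precision: precision evaluated at the rank of every relevant
    image, then averaged; equivalent to an area under the precision/recall
    curve and independent of any fixed threshold."""
    order = np.argsort(-np.asarray(scores))      # rank images by decreasing score
    ranked_labels = np.asarray(labels)[order]
    hits = np.cumsum(ranked_labels)              # relevant images retrieved so far
    ranks = np.arange(1, len(ranked_labels) + 1)
    precision_at_hits = hits[ranked_labels == 1] / ranks[ranked_labels == 1]
    return precision_at_hits.mean()

# toy example: six images, three of them relevant to the concept
print(round(average_precision([0.9, 0.2, 0.75, 0.4, 0.1, 0.65],
                              [1,   0,   0,    1,   0,   1   ]), 3))  # -> 0.806
```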
{ "pile_set_name": "ArXiv" }
--- abstract: 'The Roth’s two-pole approximation has been used by the present authors to investigate the role of $d-p$ hybridization in the superconducting properties of an extended $d-p$ Hubbard model. Superconductivity with singlet $d_{x^2-y^2}$-wave pairing is treated by following the Beenen and Edwards formalism. In this work, the Coulomb interaction, the temperature and the superconductivity have been considered in the calculation of some relevant correlation functions present in the Roth’s band shift. The behavior of the order parameter as a function of temperature and hybridization, as well as the effects of the Coulomb interaction and of the Roth’s band shift on superconductivity, are studied.' author: - | E. J. Calegari and S. G. Magalhães[^1]\ \ [*Laboratório de Mecânica Estatística e Teoria da Matéria Condensada*]{}\ [*Universidade Federal de Santa Maria, 97105-900 Santa Maria, RS, Brazil*]{}\ \ A. A. Gomes\ \ [*Centro Brasileiro de Pesquisas Físicas-CBPF*]{}\ [*Rua Xavier Sigaud 150, 22290-180, Rio de Janeiro, RJ, Brazil*]{} title: Superconductivity in a two dimensional extended Hubbard model --- Introduction {#intro} ============ After almost two decades of intense research on the cuprates, there are still plenty of open questions in this problem. However, it is recognized that the electrons which move on the $CuO_2$ planes are the most relevant for describing their physical properties [@ref4]. In the undoped regime, these compounds are insulators and exhibit antiferromagnetic order at sufficiently low temperatures [@ref4; @ref3]. The physical properties of the insulating phase can be well described by the Heisenberg model [@ref3]. Upon doping, these systems suppress the antiferromagnetic order and become superconductors. In this scenario there is no doubt that the $d$-$d$ electron correlations play a fundamental role. The study of the electronic structure near the Fermi level $\varepsilon_F$ in such strongly correlated systems is very important for understanding their physical properties [@ref1]. Earlier angle-resolved photoemission experiments (ARPES) have shown the presence of flat bands close to $\varepsilon_F$ in a region centered around the point $(\pi,0)$ in the $p$-type cuprates like $Bi_2$$Sr_2$$CuO_6$ and $YBa_2$$Cu_3$$O_y$ [@ref1; @ref2]. Due to the presence of strong correlations, the one-band Hubbard model [@ref5] can be used to study some physical properties of these cuprate compounds. Bulut [*et al.*]{} [@ref6; @ref7] have performed Monte Carlo calculations in the one-band Hubbard model. Their results show bands with a flat region near the $(\pi,0)$ point for a given doping, which agreed with the previously mentioned ARPES results [@ref1]. Beenen and Edwards [@ref9], using the Roth’s two-pole approximation [@ref8] in the one-band Hubbard model, have studied the normal state of the model obtaining flat quasi-particle bands, which agree well with those found with Monte Carlo simulations [@ref6; @ref7]. The Roth’s two-pole approximation has been proposed to improve the Hubbard-I approximation [@ref5] by considering a decoupling scheme which produces an additional energy shift (the Roth’s band shift) in the peaks of the spectral function. That result is in agreement with those obtained by Harris and Lange [@ref8.1], who looked at the moments of individual peaks in the spectral function. The presence of the exchange term $\langle S_iS_j\rangle$ gives the Roth’s band shift an explicit spin dependence.
As a consequence, the Roth’s method raises the possibility of magnetic solutions in the Hubbard model, a feature that is not present in the Hubbard-I approximation. Recently, due to the good agreement between the Roth’s and the Monte Carlo data, Beenen and Edwards have extended the Roth’s two-pole approximation in order to investigate the superconducting properties of the one-band Hubbard model. Their main achievement has been to show the emergence of pairing with $d_{x^{2}-y^{2}}$ symmetry for a given amount of doping. In that approach, the gap equation for $d$-wave symmetry depends on a particular four-operator correlation function which, in principle, can be found by extending the Roth’s formalism to obtain two-particle Green’s functions. However, the authors have introduced two decoupling schemes to calculate the gap. The first one (the factorization procedure) has been formulated to treat the problem for intermediate values of $U$ (the Coulomb interaction) and it provides an upper estimate for the gap and $T_{c}$ (the critical temperature). The second one is adapted to the very large $U$ scenario and preserves the proper limit for $U\rightarrow\infty$, where the gap function vanishes. Nevertheless, the one-band models neglect the presence of the oxygen sites. Due to the strong correlations at the $Cu$-sites, the oxygen sites may be occupied by holes when the system is doped [@ref3]. For instance, the Hubbard one-band model suffers from some limitations in describing the low-energy physical properties of the cuprate superconductors [@ref5.1]. In the doped regime, the one-band Hubbard model gives a wrong description of various properties, such as the asymmetric magnetic doping-temperature phase diagram [@ref5.2]. Therefore, a model which also takes into account the oxygen can be more adequate to treat the cuprate systems in the doped regime [@Emery]. This raises the question of whether it is possible to extend the Beenen and Edwards analysis to investigate $d$-wave symmetry superconductivity when the hybridization is present. Recently, the present authors have used the extended Hubbard model [@Emery] with the Roth’s method to study the role of hybridization in the superconductivity, following closely the approach introduced by Beenen and Edwards [@ref11]. As discussed in the references [@ref9] and [@ref10], the flattening of the bands is directly related to the band shift. The presence of flat bands at the Fermi level $\varepsilon_F$ in the $p$-type cuprates [@ref1] suggests a high density of states at the Fermi level, which can favor pair formation. Therefore, considering that the $d$-electrons are the elements mainly responsible for the density of states at the Fermi level, it has been assumed in Ref. [@ref11] that the $d-d$ pairs are the most relevant ones for superconductivity [@ref13]. The band shift plays an important role in the study of the superconducting properties of the model using the Roth’s or similar procedures. In reference [@ref11], the factorization procedure [@ref9] has been used to investigate the effects of the hybridization on the superconductivity. It has been shown [@ref11] that the hybridization has strong effects on the shift and, therefore, on some superconducting physical properties such as the critical temperature $T_{c}$.
However, as a first approach, in reference [@ref11], the band shift has been evaluated taking into account the hybridization effects, but disregarding temperature effects and superconducting properties and, most importantly, it has been considered in the limit $U\rightarrow \infty$. As a consequence of this limit, many correlation functions which appear in the shift vanish. The important point is that these correlation functions are very relevant for correctly including the hybridization effects. Therefore, it would be necessary to calculate the shift with finite $U$ in order to include the hybridization effects in a more complete way. In this work, the superconductivity problem has been studied using the Roth’s method, following closely reference [@ref9], but adapted to the $d$-$p$ extended Hubbard model. Here, special attention is devoted to the effects of the hybridization and superconductivity on the band shift. In order to have the effects of the hybridization included properly in the superconductivity, the gap function is obtained using the factorization procedure [@ref11] and the shift is evaluated with finite $U$. This procedure is justified because it preserves some correlation functions present in the band shift, which are non-vanishing for finite $U$. As a consequence, it captures the effects of the hybridization properly. Some preliminary results of this approach have been given in Ref. [@ref12]. There are some shortcomings in the Beenen and Edwards approach [@ref14; @Avella]. For instance, the $d_{x^{2}-y^{2}}$ pairing is quite dependent on the choice of the decoupling scheme for the correlation functions related to the gap. However, in the present work, the main goal is to study the effects of hybridization. Therefore, as discussed in the previous paragraph, the natural choice is the decoupling scheme for intermediate $U$, which is also the simplest one. In that procedure one can find, at least, a better estimate for the gap (and therefore for $T_{c}$) as a function of hybridization within the same decoupling scheme. The paper is organized as follows. In section \[sec:2\], the model is introduced and a short introduction to the Roth’s method [@ref8] is given. Also, some analytic expressions for the quasi-particle bands and the Green’s functions are derived. In section \[sec:3\], the factorization procedure proposed by Beenen and Edwards [@ref9] is applied to the present case. In section \[sec:4\], the band shift is discussed in detail. The numerical results are shown and discussed in section \[sec:5\]. Finally, in section \[sec:6\], a short summary and some concluding remarks are given. General formulation {#sec:2} =================== The model considered here assumes overlapping bands. It is characterized by a narrow $d$-like band with a large density of states and a wide $p$-like band with a low density of states. The extended Hubbard model is defined as: $$\begin{aligned} H&=&\sum_{i,\sigma }(\varepsilon _{d}-\mu)d_{i\sigma }^{\dag}d_{i\sigma }+\sum_{i,j,\sigma }t_{ij}^{d}d_{i\sigma }^{\dag}d_{j\sigma }+ U\sum_{i}n_{i\uparrow}^{d}n_{i\downarrow}^{d}\nonumber\\ & &+\sum_{i,\sigma }(\varepsilon _{p}-\mu)p_{i\sigma }^{\dag}p_{i\sigma }+\sum_{i,j,\sigma }t_{ij}^{p}p_{i\sigma }^{\dag}p_{j\sigma }\nonumber\\ & &+\sum_{i,j,\sigma }t_{ij}^{pd}\left( d_{i\sigma }^{\dag}p_{j\sigma }+p_{i\sigma }^{\dag}d_{j\sigma }\right) \label{eq2.0}\end{aligned}$$ where $\mu$ is the chemical potential.
The $d_{i\sigma }^{\dag}$, $d_{i\sigma}$ and $p_{i\sigma }^{\dag}$, $p_{i\sigma}$ are the creation and annihilation operators of the $d$- and $p$-electrons, respectively, with spin $\sigma$ at a lattice site $i$. The $\varepsilon_{d}$ and $\varepsilon_{p}$ are the centers of the on site energies of the occupied orbitals of the copper and oxygen respectively. The second term of the Hamiltonian given in Eq. (\[eq2.0\]) describes a narrow $d$-band with a hopping amplitude $t^d$. The Hamiltonian (\[eq2.0\]) considers also a $p$-band which is wider than the $d$-band. The following relation between $t^d$ and $t^p$ can be established $t^p=\alpha t^d$ with $\alpha > 1$. Also, $t^d < 0$ to coincide the bottom of the $d$- and $p$-bands with the $\Gamma$ point $k_x=k_y=0$ as suggested by experimental results [@ref1]. The third term corresponds to the Coulomb interaction $U$ that represents the repulsion between two holes in the same $d$-orbital. The last term of the Hamiltonian (\[eq2.0\]) is the $d-p$ hybridization and describes the nearest neighbor hopping process between the $d$-orbital of the Cu-atom and the $p$-orbital of the O-atom. Considering a rectangular two dimensional lattice, the unperturbed $d$- and $p$-energy bands are given by $$\varepsilon_{\vec{k}}^d = 2t^d(\cos(k_xa)+\cos(k_ya)) \label{eq2.01}$$ and $$\varepsilon_{\vec{k}}^p = 2t^p(\cos(k_xa)+\cos(k_ya)) \label{eq2.02}$$ where $a$ is the lattice constant. In this work, the Hamiltonian given in Eq. (\[eq2.0\]) has been investigated using the Roth’s two-pole approximation [@ref8] to obtain the Green’s function in the Zubarev’s formalism. In the Roth’s procedure, a set of operators $\left\{A_{n}\right\}$ is introduced in order to describe the relevant one particle excitations of the system. These operators satisfy in some approximation the following relation: $$\left[ A_{n},H\right] _{\left( -\right) }=\sum_{m}K_{nm}A_{m} \label{eq2.1}.$$ Anticommuting both sides of Eq. (\[eq2.1\]) with each operator of the set $\left\{A_{n}\right\}$ and taking the thermal average, the equation (\[eq2.1\]) becomes: $$E_{nm}=\sum_mK_{nm}N_{nm} \label{eq2.2}$$ where $E_{nm}$ and $N_{nm}$ are the energy and normalization matrices, given by $$E_{nm}=\left\langle\left[ \left[ A_{n},H\right] _{\left( -\right) }, A_{m}^{\dag}\right] _{\left( +\right) }\right\rangle \label{eq2.3}$$ and $$N_{nm}=\langle [ A_{n},A_{m}^{\dag}]_{\left( +\right) }\rangle \label{eq2.4}.$$ In matrix notation, Eq. (\[eq2.2\]) is written as $\bf{E}=\bf{K\cdot N}$, where, if $\bf N$ is nonsingular, then the $\bf K$ matrix can be obtained. With the equation of motion (in the Zubarev’s formalism) of the Green’s function $$G_{nm}\left( \omega \right) =\langle\langle A_n;A_{m}^{\dagger}\rangle\rangle_{\omega } \label{eq2.5}$$ and the Eqs. (\[eq2.1\])-(\[eq2.4\]), it is possible to obtain the following general Green’s functions $$\langle\langle A_n;B\rangle\rangle_{\omega }=\sum_m\widetilde{G}_{nm}(\omega) \langle [ A_{m},B]_{\left( +\right) }\rangle \label{eq2.6}.$$ In the particular case, where $B=A_m^{\dagger}$, the elements of the Green’s function matrix $\bf G$ are given by Eq. (\[eq2.5\]). 
Thus, using the matrices $\bf E$ and $\bf N$, the matrix $\bf G$ is given by $${\bf G}\left( \omega \right) =\widetilde{\bf G}(\omega){\bf N} \label{eq2.7}$$ where $$\widetilde{\bf G}\left( \omega \right) ={\bf N}(\omega {\bf N}-{\bf E})^{-1} \label{eq2.8}.$$ Considering the fact that the operators of the set $\{A_n\}$ describe the particle excitations of the system, the choice of these operators is very relevant to study the physical properties of the system. In order to discuss superconductivity, Beenen and Edwards, in their approach with the one-band Hubbard model, mixed electron and hole operators and evaluated anomalous correlation functions [@ref9]. Therefore, using a set of four operators $\{c_{i\sigma },n_{i-\sigma }c_{i\sigma },c_{i-\sigma }^{\dag},$ $n_{i\sigma }c_{i-\sigma }^{\dag}\}$, it has been obtained a four-pole approximation to the Green’s functions. However, in order to discuss the role of the hybridization, it is necessary to adapt the formalism to include a $p$-operator in the original set of operators used by Beenen and Edwards. Thus, the new set of operators is given by $$\left\{d_{i\sigma },n_{i-\sigma }^{d}d_{i\sigma },d_{i-\sigma }^{\dag}, n_{i\sigma }^{d}d_{i-\sigma }^{\dag},p_{i\sigma }\right\} \label{eq2.9}.$$ In the present work, only the singlet pairing is considered, and particularly the [*d*]{}-wave symmetry. In this particular case, $\langle d_{i-\sigma }d_{i\sigma }\rangle = 0 $ and $\displaystyle\sum_l\langle d_{i-\sigma }d_{l\sigma }\rangle = 0 $, where $l$ are the nearest neighbors of $i$. Using the set of operators given in Eq. (\[eq2.9\]), and introducing the symmetries discussed above, the elements of the energy matrix defined in Eq. (\[eq2.3\]) can be obtained as: [$$\begin{aligned} & &{\bf E_5}=\nonumber \\ & &\left[ \begin{array}{ccc} {\bf E_2} &\begin{array}{lr} ~0~~~~~~ & 0 \\ \\ ~0 & {\overline{\gamma}_{k}} \end{array} &\begin{array}{r} V_{k}^{dp} \\ \\ n_{-\sigma}^dV_{k}^{dp} \end{array}\\ \\ \begin{array}{lr} 0~~~~~ & 0 \\ \\ 0 & {\overline{\gamma}_{k}}^* \end{array} &-{\bf E_2} &\begin{array}{c} 0 \\ \\ 0 \end{array}\\ \\ \begin{array}{lr} V_{k}^{pd} ~~ & n_{-\sigma}^dV_{k}^{pd} \end{array} &\begin{array}{lr} ~~0~~~~~~~~ & 0 \end{array} &\begin{array}{r} ~~~\varepsilon_{p}-\mu + \varepsilon_{k}^{p} \end{array} \end{array} \right]\nonumber \\ \label{eq2.16}\end{aligned}$$ ]{} where $V_{k}^{dp}$ and $V_{k}^{pd}$ are the Fourier transform of $t_{ij}^{dp}$ and $t_{ij}^{pd}$ respectively. The matrix ${\bf E_2}$ present in the energy matrix ${\bf E_5}$ is given by: $$\begin{aligned} {\bf E_2}=\left[ \begin{tabular}{ccc} $\overline{\varepsilon}_{d} +\varepsilon_{k}^{d} + Un_{-\sigma}^d$ & $(\overline{\varepsilon}_{d} + \varepsilon_{k}^{d}+U)n_{-\sigma}^d$\\ \\$(\overline{\varepsilon}_{d} + \varepsilon_{k}^{d}+U)n_{-\sigma}^d$ & $Un_{-\sigma}^d + \Gamma_{k-\sigma}$ \end{tabular} \right] %\nonumber \\ \label{eq2.10}\end{aligned}$$ where $\overline{\varepsilon}_d=\varepsilon_{d}-\mu$. It is assumed that the system considered here is translationally invariant, then $n_{-\sigma}^d=n_{i-\sigma}^d$. The quantity $\Gamma_{k-\sigma}$ is the Fourier transform of $$\Gamma_{ij-\sigma}=(\varepsilon_{d}-\mu)n_{-\sigma}^d\delta_{ij} + t_{ij}^d (n_{-\sigma }^d)^2 + n_{-\sigma}^{d}( 1-n_{-\sigma }^{d})W_{ij-\sigma } . \label{eq2.11}$$ In Eq. 
(\[eq2.11\]), the band shift $W_{ij-\sigma }$ is defined as: $$W_{ij-\sigma }=\frac{t_{ij}^{d}\left( \langle n_{i-\sigma }^{d}n_{j-\sigma }^{d}\rangle-(n_{-\sigma }^{d})^{2}\right) +\Lambda _{ij\sigma }}{n_{-\sigma }^{d}( 1-n_{-\sigma }^{d}) } \label{eq2.12}$$ where $\Lambda _{ij\sigma }$ can be separated into two explicit contributions $$\Lambda _{ij\sigma }=\Lambda _{ij\sigma }^{d}+\Lambda _{ij\sigma }^{pd} \label{eq2.13}.$$ The terms $\Lambda_{ij\sigma}^{d}$ and $\Lambda_{ij\sigma}^{pd}$ are associated with the hopping $t_{ij}^{d}$ and the hybridization $t_{ij}^{pd}$, respectively. The hybridized term of $\Lambda _{ij\sigma }$ may be written as $$\Lambda _{ij\sigma }^{pd}=\sum_{l}t_{il}^{pd} [2\langle p_{l-\sigma }^{\dagger}n_{i\sigma}^{d}d_{i-\sigma } \rangle -\langle p_{l-\sigma }^{\dagger}d_{i-\sigma } \rangle]\delta _{ij} \label{eq2.14},$$ and the part associated to the hopping $t_{ij}^{d}$ is given by $$\begin{aligned} \Lambda _{ij\sigma }^{d}&=& \sum_lt_{il}^d\left \{\langle n_{i\sigma }^{d}d_{i-\sigma }^{\dagger}d_{l-\sigma }\rangle +\langle n_{i\sigma }^{d}d_{l-\sigma }^{\dagger}d_{i-\sigma }\rangle\right.\nonumber \\ & & +\left.\langle d_{i-\sigma }^{\dagger} d_{l-\sigma }n_{i-\sigma}^{d}\rangle-\langle d_{l-\sigma }^{\dagger}d_{i-\sigma } n_{i-\sigma }^{d}\rangle\right\}\delta_{ij}\nonumber \\ & & -~t_{ij}^d\{\langle d_{j\sigma }^{\dagger}d_{j-\sigma }^{\dagger} d_{i-\sigma }d_{i\sigma }\rangle+\langle d_{j\sigma}^{\dagger}d_{i-\sigma }^{\dagger} d_{j-\sigma }d_{i\sigma }\rangle\}.\nonumber \\ \label{eq2.15}\end{aligned}$$ The calculation of the correlation functions presented in Eqs. (\[eq2.14\]) and (\[eq2.15\]) will be discussed in detail in section \[sec:4\]. One of the most important elements of the matrix ${\bf E_5}$ is $E_{24}={\overline{\gamma}_{k}}$, where $$\overline{\gamma }_{k}=\sum_{\langle l\rangle i}t_{il}^{d} e^{i\vec{k}\cdot(\vec{R}_l-\vec{R}_i)}\overline{\gamma }_{il} \label{eq2.17}$$ and $$\overline{\gamma }_{il}=\langle n_{i-\sigma }^dd_{l\sigma }d_{l-\sigma }+n_{l\sigma }^dd_{i\sigma }d_{i-\sigma }\rangle \label{eq2.18}.$$ The correlation function $\overline{\gamma }_{k}$ gives the gap of the superconductor state in the $d-$wave case. The elements of the normalization matrix ${\bf N_5}$ are given from Eq. (\[eq2.4\]) as: $$N_{11}=N_{33}=N_{55}=1 \label{eq2.19}$$ and $$N_{12}=N_{21}=N_{22}=N_{34}=N_{43}=N_{44}=n_{-\sigma }^d. \label{eq2.20}$$ The remaining elements of the normalization matrix ${\bf N_5}$, due to the [*d*]{}-wave symmetry and the anticommution rules, have been found to be zero. Using the energy and the normalization matrices ${\bf E_5}$ and ${\bf N_5}$, respectively, the matrix Green’s function $\bf G_5$ defined in Eq. (\[eq2.7\]) can be obtained. For simplicity, only the *most relevant* elements (for the purposes of this work) of this $(5\times 5)$ ${\bf G_5}$ matrix are shown. Following the Roth’s notation [@ref8], the correlation function $\langle BA\rangle$ is related to the Green’s function $\langle\langle A;B\rangle\rangle_{\omega }$ as: $$\langle BA\rangle={\cal F}_{\omega}\langle\langle A;B\rangle\rangle_{\omega } \equiv\frac{1}{2\pi i}\oint d\omega f(\omega)\langle\langle A;B\rangle\rangle_{\omega } \label{eq2.21},$$ where $f(\omega)$ is the Fermi function. The chemical potential $\mu$ is obtained in the standard way, using the element $G_{\vec {k}\sigma}^{11}$ of the matrix ${\bf G_5}$ and the relation given in Eq. (\[eq2.21\]). 
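The “standard way” referred to here amounts to adjusting $\mu$ until the occupation computed from the Green’s function matches the desired filling. The sketch below illustrates this root-finding step with a simple single-band stand-in for the full $G_{\vec{k}\sigma}^{11}$ calculation; the dispersion, temperature and target filling are placeholder values, not the parameters used in this paper.

```python
import numpy as np
from scipy.optimize import brentq

def fermi(e, T):
    return 1.0 / (np.exp(np.clip(e / T, -60, 60)) + 1.0)

def occupation(mu, T=0.05, t=-1.0, L=64):
    """Toy stand-in for the filling computed from G^{11}: a single
    tight-binding band on an L x L k-grid instead of the full five-pole
    Green's function."""
    k = 2 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(k, k)
    eps = 2 * t * (np.cos(kx) + np.cos(ky))     # square-lattice dispersion
    return 2 * fermi(eps - mu, T).mean()        # factor 2 for the two spin projections

n_target = 0.8                                  # desired total occupation n_T (illustrative)
mu = brentq(lambda m: occupation(m) - n_target, -8.0, 8.0)
print(f"mu = {mu:.4f},  n(mu) = {occupation(mu):.4f}")
```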
The matrix element $G_{\vec {k}\sigma}^{11}$ is given by $$G_{k\sigma }^{11}(\omega)=\frac{(\omega - E_{55})\left[A\left(\omega\right)- (\omega + E_{11}){\overline{\gamma}_{k}}^2\right]} {\overline{D}\left( \omega \right)}, \label{eq2.22}$$ where $E_{11}$ and $E_{55}$ are elements of the energy matrix $\bf E_5$, defined in Eq. (\[eq2.16\]). In Eq. (\[eq2.22\]), it is also necessary to introduce the following definitions: $$\begin{aligned} A(\omega)&=&(n_{-\sigma }^d)^2(1-n_{-\sigma }^d)^2\nonumber\\ & &\times (\omega^3 + \alpha_{k\sigma}^{(1)}\omega^2 + \alpha_{k\sigma}^{(2)}\omega + \alpha_{k\sigma}^{(3)}) \label{eq2.23}\end{aligned}$$ with $$\alpha_{k\sigma}^{(1)}=E_{11} \label{eq2.24},$$ $$\alpha_{k\sigma}^{(2)}={\cal{Z}}_{k\sigma}^{(1)}{\cal{Z}}_{k\sigma}^{(2)}- ({\cal{Z}}_{k\sigma}^{(1)}+{\cal{Z}}_{k\sigma}^{(2)})({\cal{Z}}_{k\sigma}^{(1)} +{\cal{Z}}_{k\sigma}^{(2)}-E_{11}) \label{eq2.25}$$ and $$\alpha_{k\sigma}^{(3)}=-{\cal{Z}}_{k\sigma}^{(1)}{\cal{Z}}_{k\sigma}^{(2)} ({\cal{Z}}_{k\sigma}^{(1)}+{\cal{Z}}_{k\sigma}^{(2)}-E_{11}) \label{eq2.26}.$$ The quantities ${\cal{Z}}_{k\sigma}^{(1)}$ and ${\cal{Z}}_{k\sigma}^{(2)}$ are defined as $${\cal{Z}}_{k\sigma}^{(1)}=\frac{U+2(\varepsilon_{d}-\mu)+\varepsilon_{k}^{d}+W_{k-\sigma}}{2} -\frac{\Delta_{k\sigma}}{2} \label{eq2.27}$$ and, $${\cal{Z}}_{k\sigma}^{(2)}={\cal{Z}}_{k\sigma}^{(1)}+\Delta_{k\sigma} \label{eq2.28}.$$ In the particular case, when $\varepsilon_d$, $\overline{\gamma}_{k}$ and $t_{ij}^{pd}$ are zero, ${\cal{Z}}_{k\sigma}^{(1)}$ and ${\cal{Z}}_{k\sigma}^{(2)}$ represent the quasi-particle bands in the paramagnetic normal state of the one-band Hubbard model. The term $\Delta_{k\sigma}$ is given by: $$\Delta_{k\sigma}=\sqrt{(U+W_{k-\sigma }-\varepsilon_{k}^{d})^2 +4n_{-\sigma}^dU(\varepsilon_{k}^{d}-W_{k-\sigma })} \label{eq2.29}$$ where $W_{k-\sigma }$ is the Fourier transform of $W_{ij-\sigma }$ given in Eq. (\[eq2.12\]). The denominator of the Green’s function $G_{\vec{k}\sigma}^{11}$ given in Eq. (\[eq2.22\]) is defined as: $$\begin{aligned} \overline{D}(\omega)&=&(\omega - E_{55})D(\omega)- V_k^{dp}V_k^{pd}\left[A(\omega) -(\omega + E_{11}){\overline{\gamma}_{k}}^2\right]\nonumber \\ \label{eq2.30}\end{aligned}$$ where $$D(\omega)={\cal{D}}(\omega)-{\overline{\gamma}_{k}}^2(\omega^2-E_{11}^2) \label{eq2.31}$$ with $$\begin{aligned} {\cal{D}}(\omega)&=&[(\omega-E_{11})(\omega n_{-\sigma}^d-E_{22})-(\omega n_{-\sigma}^d-E_{12})^2]\nonumber\\ & &\times[(\omega+E_{11})(\omega n_{-\sigma}^d+E_{22})-(\omega n_{-\sigma}^d+E_{12})^2].\nonumber\\ \label{eq2.32}\end{aligned}$$ In Eq. (\[eq2.32\]), $E_{12}$ and $E_{22}$ are elements of the energy matrix $\bf E_2$ given in Eq. (\[eq2.10\]). The use of a set of five operators $A_n$ results in a five-pole approximation to the Green’s functions. Then, the $\overline{D}(\omega)$ defined in Eq. (\[eq2.30\]) may be also written as: $$\begin{aligned} \overline{D}\left( \omega \right)&=&(n_{-\sigma }^d)^2(1-n_{-\sigma }^d)^2(\omega-E_{1k}) (\omega-E_{2k})(\omega-E_{3k})\nonumber \\ & &\times(\omega-E_{4k})(\omega-E_{5k}) \label{eq2.33}\end{aligned}$$ where the quasi-particle bands $E_{pk}$ (with $p=1,..,5$) satisfy $\overline{D}=det(\omega{\bf{N_5}}-{\bf{E_5}})=0$. 
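Numerically, the condition $\overline{D}=det(\omega{\bf{N_5}}-{\bf{E_5}})=0$ is a generalized eigenvalue problem for the pair $({\bf E_5},{\bf N_5})$ at each $\vec{k}$, which is a convenient way to obtain the five poles. The sketch below illustrates this with placeholder $5\times 5$ matrices, not the actual matrices of Eq. (\[eq2.16\]).

```python
import numpy as np
from scipy.linalg import eig

# The quasi-particle energies at a given k are the roots of
# det(omega * N - E) = 0, i.e. the generalized eigenvalues of (E, N).
# The 5x5 matrices below are placeholders, not the E_5, N_5 of Eq. (eq2.16).

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
E = (A + A.T) / 2                          # symmetric "energy" matrix
N = np.diag([1.0, 0.6, 1.0, 0.6, 1.0])     # nonsingular "normalization" matrix

omega, _ = eig(E, N)                       # solves E v = omega N v
omega = np.sort(omega.real)
print("quasi-particle poles:", np.round(omega, 3))

# check: det(omega*N - E) is (numerically) zero at each pole
print("max |det(w N - E)| over the poles:",
      max(abs(np.linalg.det(w * N - E)) for w in omega))
```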
Therefore, the resulting Green’s function can be written as a sum of five terms: $$G_{k\sigma }^{11}(\omega)=\sum_{s=1}^5\frac{Z_{p\vec{k}\sigma}}{\omega-E_{p\vec{k}\sigma}} \label{eq10.3}$$ where $Z_{p\vec{k}\sigma}$ express the spectral weights which satisfy $$Z_{1\vec{k}\sigma}+Z_{2\vec{k}\sigma}+Z_{3\vec{k}\sigma}+Z_{4\vec{k}\sigma}+Z_{5\vec{k}\sigma}=1 \label{eq10.4}.$$ Calculation of the gap function using the factorization procedure {#sec:3} ================================================================= In the case of $d$-wave symmetry, the traditional correlation function $\langle d_{i-\sigma }d_{i\sigma }\rangle$ is always zero. Therefore, this correlation function can not be used to determinate the pairing gap in the $d$-wave channel [@ref9; @ref14]. In the factorization procedure proposed by Beenen and Edwards in Ref. [@ref9], the correlation function given by Eq. (\[eq2.18\]) is rewritten as $$\overline{\gamma }_{il}=[\langle d_{i-\sigma}^{\dagger}d_{l-\sigma}\rangle +\langle d_{l\sigma}^{\dagger}d_{i\sigma}\rangle]\langle d_{i-\sigma}d_{l\sigma}\rangle \label{eq3.0}$$ where the symmetry $\overline{\gamma }_{il}=\overline{\gamma }_{li}$ is conserved, and the products $d_{l\sigma }d_{l-\sigma }$ and $d_{i-\sigma }d_{i\sigma }$ are split up. It is also introduced $$n_{01\sigma}^d=\langle d_{i-\sigma}^{\dagger}d_{l-\sigma}\rangle= \langle d_{l\sigma}^{\dagger}d_{i\sigma}\rangle \label{eq3.1}$$ which allows to rewrite Eq. (\[eq3.0\]) as $$\overline{\gamma }_{il}=2n_{01\sigma}^d\langle d_{i-\sigma}d_{l\sigma}\rangle \label{eq3.2}$$ where $n_{01\sigma}^d$ can be calculated from $G_{k\sigma }^{11}$. Considering the [*d*]{}-wave symmetry, the Fourier transform of $\overline{\gamma }_{il}$ given by Eq. (\[eq2.17\]) becomes $$\overline{\gamma }_{k}=\overline{g}\left[\cos{(k_xa)}-\cos{(k_ya)}\right] \label{eq3.3}$$ where $$\overline{g}=2t^{d}\overline{\gamma} \label{eq3.4}$$ is the gap-function amplitude. Due to the $d$-wave symmetry, $\overline{\gamma }_{il}=+\overline{\gamma }$ for $\vec{R}_i-\vec{R}_l$ in the $x$ direction and $\overline{\gamma }_{il}=-\overline{\gamma }$ when $\vec{R}_i-\vec{R}_l$ is in the $y$ direction. The Fourier transform of the correlation function $\langle d_{i-\sigma }d_{l\sigma }\rangle$ is given by $$\langle d_{i-\sigma}d_{l\sigma}\rangle=\frac{1}{L}\sum_{k} e^{i\vec{k}\cdot(\vec{R}_l-\vec{R}_i)}\langle d_{k-\sigma}d_{k\sigma}\rangle \label{eq3.5}$$ where $L$ is the number of sites in the system. The correlation function $\langle d_{k-\sigma}d_{k\sigma}\rangle $ can be evaluated using the Green’s function $G_{k\sigma}^{13}$ and the relation given by Eq. (\[eq2.21\]). The Green’s function $G_{k\sigma}^{13}$ can be rewritten as: $$G_{k\sigma }^{13}(\omega)=-\overline{\gamma }_{k}U^2F_{k\sigma }^{13} \label{eq3.6}$$ where $$F_{k\sigma }^{13}(\omega)=\frac{(n_{-\sigma }^d)^2(1-n_{-\sigma }^d)^2(\omega - E_{55})} {\overline{D}\left( \omega \right)} \label{eq3.7}$$ and $\overline{D}(\omega)$ is defined in Eq. (\[eq2.30\]). Combining the equation (\[eq2.17\]) with the Eqs. 
(\[eq3.2\]) to (\[eq3.6\]), the gap equation can be written as: $$\overline{\gamma}_{k}=-\overline{\gamma}_{k}2n_{01\sigma}^dt^{d}U^2I_{\sigma} \label{eq3.8}$$ where $$I_{\sigma}=\frac{1}{2\pi i}{\displaystyle \oint} f(\omega) F_{\sigma }(\omega)d\omega \label{eq3.81}$$ with $$F_{\sigma}(\omega)=\frac{1}{L} \sum_{\vec{q}}\left[\cos{(\vec{q}_xa)}-\cos{(\vec{q}_ya)} \right]^2 F_{q\sigma }^{13}(\omega) \label{eq3.9}.$$ Definition and calculation of the band shifts {#sec:4} ============================================= Using the definition (\[eq2.13\]) in Eq. (\[eq2.12\]), the band shift $W _{ij\sigma }$ can be written as: $$W _{ij-\sigma }=W_{ij-\sigma }^{d}+W_{ij-\sigma }^{pd} \label{eq4.0}$$ where $$W_{ij-\sigma }^d=\frac{t_{ij}^{d}[ \langle n_{i-\sigma }^{d}n_{j-\sigma }^{d}\rangle-(n_{-\sigma }^{d})^{2}] +\Lambda _{ij\sigma }^d}{n_{-\sigma }^{d}( 1-n_{-\sigma }^{d}) } \label{eq4.1}$$ and $$W_{ij-\sigma }^{pd}=\frac{\Lambda _{ij\sigma }^{pd}}{n_{-\sigma }^{d}( 1-n_{-\sigma }^{d}) } \label{eq4.2}.$$ The quantity $\Lambda_{ij\sigma }^{pd}$ is given by Eq. (\[eq2.14\]). The correlation function $\langle p_{l-\sigma }^{\dagger}d_{i-\sigma } \rangle$ present in $\Lambda_{ij\sigma }^{pd}$ can be obtained from de Green’s function $$G_{k\sigma }^{15}(\omega)=\frac{\left[A\left(\omega\right)- (\omega + E_{11}){\overline{\gamma}_{k}}^2\right]V_k^{pd}} {\overline{D}\left( \omega \right)} \label{eq4.3}.$$ The remaining correlation function $\langle p_{l-\sigma }^{\dagger}n_{i\sigma}^{d}d_{i-\sigma } \rangle$ present in $\Lambda_{ij\sigma }^{pd}$ is calculated from the Green’s function $$G_{k\sigma }^{25}(\omega)=\frac{n_{-\sigma }^d\left[B\left(\omega\right)- (\omega + E_{11}){\overline{\gamma}_{k}}^2\right]V_k^{dp}} {\overline{D}\left( \omega \right)} \label{eq4.4},$$ where $$B\left(\omega\right)=A\left(\omega\right) + n_{-\sigma }^d(1-n_{-\sigma }^d)^2U{\cal{D}}_1(\omega) \label{eq4.5}$$ with $A\left(\omega\right)$ defined in Eq. (\[eq2.23\]). The quantity ${\cal{D}}_1(\omega)$, in terms of the elements of the energy matrix (\[eq2.16\]), is given by: $${\cal{D}}_1(\omega)=(\omega-E_{11})(\omega n_{-\sigma}^d-E_{22})-(\omega n_{-\sigma}^d-E_{12})^2 \label{eq4.6}.$$ The Green’s function $G_{k\sigma }^{25}$ tends to zero as $U\rightarrow \infty$, consequently, the correlation function $\langle p_{l-\sigma }^{\dagger}n_{i\sigma}^{d}d_{i-\sigma }\rangle$ also vanishes recovering the result of Ref. [@ref6] for $\Lambda _{ij\sigma }^{pd}$. The quantity $\Lambda_{ij\sigma}^d$ present in Eq. (\[eq4.1\]) is given by Eq. (\[eq2.15\]). The Fourier transform of $W_{ij\sigma}^d$ is given by: $$W_{k\sigma}^d=\sum_{\langle j \rangle i} e^{i\vec{k}\cdot(\vec{R}_j-\vec{R}_i)}W_{ij\sigma}^d \label{eq4.7}.$$ Substituting Eq. (\[eq2.15\]) into Eq. (\[eq4.1\]) and then putting the result into Eq. (\[eq4.7\]), the Fourier transform of $W_{ij-\sigma}^d$ can be written as: $$\begin{aligned} W_{k\sigma}^d&=&-\frac{1}{n_{\sigma}^d(1-n_{\sigma}^d)}\sum_{j\neq 0}t_{0j}^{d} \langle d_{0\sigma }^{\dagger}d_{j\sigma}(1-n_{0-\sigma }^{d}-n_{j-\sigma }^{d})\rangle\nonumber \\ & &+\sum_{j\neq 0}t_{0j}^{d}e^{i\vec{k}\cdot\vec{R}_j}\left \{\langle n_{j\sigma }^{d}n_{0\sigma}^{d} \rangle -\langle n_{0\sigma }^{d}\rangle^{2}\right. \nonumber \\ & &\left. 
+~\langle d_{j\sigma }^{\dagger}d_{j-\sigma } d_{0-\sigma }^{\dagger}d_{0\sigma }\rangle - \langle d_{j\sigma}^{\dagger}d_{j-\sigma }^{\dagger} d_{0-\sigma }d_{0\sigma }\rangle \right\} \label{eq4.8}.\end{aligned}$$ The correlation functions present in $W_{k\sigma}^d$ are evaluated following the original Roth’s procedure [@ref8]. Introducing extra operators $B_{i\sigma}$, the correlation functions of the form $\langle A_nB_{i\sigma} \rangle$ can be calculated by using Eqs. (\[eq2.6\]) and (\[eq2.21\]). In Refs. [@ref9; @ref11], the sum present in Eq. (\[eq2.6\]) has been considered only over the operators which describe the normal state of the system. In the present work, the sum also includes the hole operators which describe the superconducting properties of the system. Thus, $W_{k\sigma}^d$ is given by: $$n_{\sigma}^d(1-n_{\sigma}^d)W_{\vec{k}\sigma }^d=h_{1\sigma} +\sum_{j\neq 0}t_{0j}^{d}e^{i\vec{k}\cdot\vec{R}_j}(h_{2j\sigma} +h_{3j\sigma}) \label{eq4.9}$$ where the term $h_{3j\sigma}$ is directly related to the gap function $\overline{\gamma}_{\vec{k}}$ through the Green’s functions $G_{\vec{k}\sigma}^{13}$ and $G_{\vec{k}\sigma}^{14}$ (see Appendix A). The quantities $h_{1\sigma}$, $h_{2j\sigma}$ and $h_{3j\sigma}$ are given in Appendix A. Results {#sec:5} ======= In this section, the numerical results obtained in this work are presented. One of the most important parameters of the model given in Eq. (\[eq2.0\]) is the $d-p$ hybridization [@ref11; @ref15], which is defined as $$V_{\vec{k}}^{dp}=-iV_0^{dp}[\sin(k_xa)-\sin(k_ya)] \label{eq5.0}.$$ In this work, as in Ref. [@ref15], the hybridization has been assumed $\vec{k}$-independent, $(V_0^{dp})^2\equiv \langle V_{\vec{k}}^{dp} V_{\vec{k}}^{pd} \rangle $, where $\langle ...\rangle$ is the average over the Brillouin zone. In Ref. [@ref16], Sengupta and Ghatak have also used a $\vec{k}$-independent hybridization due to the fact that the pairs occur within a small energy interval around the Fermi level, so the dispersion of the hybridization can be neglected. The total occupation number is given by $n_T=n_{\sigma}^d+n_{-\sigma}^d$, where $n_{\sigma}^d$ is obtained by combining $G_{k\sigma }^{11}$ (Eq. (\[eq2.22\])) and the relation given in Eq. (\[eq2.21\]). The charge-transfer energy $\Delta=\varepsilon_p - \varepsilon_d$ is positive. This means that the first hole added to the system will energetically prefer to occupy the $d$-orbital of the copper ions [@ref4]. All results presented in this section are obtained with $\varepsilon_d =0$ and $\varepsilon_p =3.6$ eV. Consequently, $\Delta=3.6$ eV, as estimated in Ref. [@ref17].

Table \[tab:1\]: Approximations used to evaluate the band shift $W_{\vec{k}\sigma}$.

|                            | $U$      | $\overline{\gamma}_{\vec{k}}$ | $T$      | $V_0^{pd}$ | $h_{3j\sigma}$ |
|----------------------------|----------|-------------------------------|----------|------------|----------------|
| Beenen and Edwards [@ref9] | $finite$ | 0                             | 0        | 0          | 0              |
| Ref. [@ref11]              | $\infty$ | 0                             | 0        | $finite$   | 0              |
| Present work               | $finite$ | $finite$                      | $finite$ | $finite$   | $finite$       |

As discussed in Refs. [@ref9; @ref8], the band shift $W_{\vec{k}\sigma}$ (see Eq. (\[eq4.0\])) can be evaluated considering different approximations. In the limit $U\rightarrow \infty$, some terms of the band shift vanish (see Ref. [@ref8]). In Ref. [@ref11], the present authors estimate $W_{\vec{k}\sigma}$ in the limit of $U\rightarrow \infty$, but with finite $U$ in other parts of the problem. In Ref. [@ref9], Beenen and Edwards evaluated $W_{\vec{k}\sigma}$ in the normal state (where $\overline{\gamma}_{\vec{k}}=0$) and considering $T$ equal to zero and finite $U$, using the one-band Hubbard model (hybridization null).
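As an aside on the $\vec{k}$-independent hybridization adopted above: if $V_{\vec{k}}^{pd}$ is taken as the complex conjugate of $V_{\vec{k}}^{dp}$ in Eq. (\[eq5.0\]) (an assumption consistent with a Hermitian hybridization term), the Brillouin-zone average $\langle V_{\vec{k}}^{dp} V_{\vec{k}}^{pd}\rangle$ reduces exactly to $(V_0^{dp})^2$, since the zone average of $[\sin(k_xa)-\sin(k_ya)]^2$ equals one. A quick numerical check of this statement:

```python
import numpy as np

# Brillouin-zone average of V_k^{dp} V_k^{pd} for
# V_k^{dp} = -i V0 [sin(kx a) - sin(ky a)], with V_k^{pd} assumed to be its conjugate.
V0, a, L = 0.5, 1.0, 400
k = np.linspace(-np.pi / a, np.pi / a, L, endpoint=False)
kx, ky = np.meshgrid(k, k)

Vdp = -1j * V0 * (np.sin(kx * a) - np.sin(ky * a))
avg = np.mean(Vdp * np.conj(Vdp)).real

print(avg, V0**2)   # both ~0.25: the k-independent value is simply V0^2
```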
In the present work, the correlation functions present in $W_{\vec{k}\sigma}$ given in Eq. (\[eq4.8\]) are evaluated following closely the procedure used by Roth in Ref. [@ref8]. Nevertheless, here the hole operators given in the set of Eq. (\[eq2.9\]) are also used to evaluate the correlation functions. As a consequence, a new term ($h_{3j\sigma}$) appears in $W_{\vec{k}\sigma}$ (see Eq. (\[eq4.9\])). The approximations used to evaluate $W_{\vec{k}\sigma}$ are shown in Table \[tab:1\]. In figure \[fig1\](a), the quasi-particle bands $E_{pk}$, with $p=1..5$ (see Eq. (\[eq2.33\])), are plotted along the symmetry lines $(0,0)-(\pi,\pi)-(\pi,0)-(0,0)$ in the two-dimensional Brillouin zone. The quasi-particle energies $E_{pk}$, in the superconducting state, are relative to the chemical potential $\mu$. The circles show the $\varepsilon_{\vec{k}}^p$ band, where the center of $\varepsilon_{\vec{k}}^p$ is shifted by $\varepsilon_p = 3.6$ eV relative to the zero of energy. All results shown in this paper are obtained with $t^p=2t^d$. The dashed line corresponds to the noninteracting $(U=0)$ band $\varepsilon_{\vec{k}}^d$ relative to the noninteracting chemical potential. Figure \[fig1\](b) shows the superconducting gap between the electron and hole bands in the neighborhood of the $(\pi,0)$ point, while on the $k_x=k_y$ diagonal (Fig. \[fig1\](a)) the gap is zero. This fact reflects the $d$-wave symmetry proposed in this work. The dashed lines show the absence of the gap in the normal state. In figure \[fig1\](c), the region near the $(\pi,\pi)$ point shows the gaps produced by the $d-p$ hybridization $V_0^{pd}$. The dashed lines show the result for $V_0^{pd}=0$. In figure \[fig1.1\], the electron and hole quasi-particle bands are shown for two different values of hybridization. As can be observed, the hybridization shifts the quasi-particle bands to lower energy, breaking the symmetry in relation to the $\vec{k}$ axis. Figure \[fig2\] shows the spectral weights $Z_{p\vec{k}\sigma}$ for two different hybridizations. The dashed line corresponds to the sum of the five spectral weights, which is equal to one (see Eq. (\[eq10.4\])). In figure \[fig2\](b), the effects of the hybridization on the spectral weights are shown. Such effects cause a small change in the chemical potential and consequently in the superconductivity. Figure \[fig3\] shows the behavior of the gap function amplitude $\overline{g}$ as a function of the hybridization $V_0^{pd}$. It is clear that $\overline{g}$ decreases with increasing $V_0^{pd}$. The analysis of the function $F_{\sigma}(\omega)$ introduced in Eq. (\[eq3.81\]) and defined in Eq. (\[eq3.9\]) is important for understanding the behavior of the gap function amplitude shown in figure \[fig3\]. Figure \[fig4\] shows the function $F_{\sigma}(\omega)$ for $T=0$ and two different values of hybridization. As can be seen in the dashed line, the magnitude of the function $F_{\sigma}(\omega)$ decreases when the hybridization is enhanced. Moreover, the function is shifted to lower energy, breaking the symmetry with respect to $\omega =0$. The symmetry breaking has also been observed in figure \[fig1.1\] for the electron and hole quasi-particle bands. For $T=0$, the product $f(\omega)F_{\sigma}(\omega)$ given in Eq. (\[eq3.81\]) vanishes when $\omega > 0$. That is because the Fermi function $f(\omega)$ is zero for that range of $\omega$.
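The $d$-wave structure discussed around figure \[fig1\](b) can also be checked directly from the form factor $\cos(k_xa)-\cos(k_ya)$ of Eq. (\[eq3.3\]) along the same symmetry path used for the quasi-particle bands: it vanishes identically on the $k_x=k_y$ diagonal and has its largest magnitude at $(\pi,0)$. The sketch below is purely illustrative (the path discretization and the value of $t^d$ are not the paper's parameters):

```python
import numpy as np

a, td = 1.0, -1.0                     # illustrative lattice constant and d-hopping

def kpath(n=100):
    """(0,0) -> (pi,pi) -> (pi,0) -> (0,0), n points per segment."""
    G, M, X = np.array([0.0, 0.0]), np.array([np.pi, np.pi]), np.array([np.pi, 0.0])
    segs = [(G, M), (M, X), (X, G)]
    return np.concatenate([np.linspace(p, q, n, endpoint=False) for p, q in segs] + [[G]])

ks = kpath()
eps_d = 2 * td * (np.cos(ks[:, 0] * a) + np.cos(ks[:, 1] * a))   # bare d-band on the path
dwave = np.cos(ks[:, 0] * a) - np.cos(ks[:, 1] * a)              # d-wave form factor

print("bare d-band range on the path: [%.1f, %.1f]" % (eps_d.min(), eps_d.max()))
print("max |form factor| on the (0,0)-(pi,pi) diagonal:", np.abs(dwave[:100]).max())  # -> 0.0
print("form factor at (pi,0):", dwave[200])   # -> -2.0, the largest magnitude on the path
```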
As a consequence of the shift and the suppression of $F_{\sigma}(\omega)$, the value of $I_{\sigma}$, which is given by the integral in Eq. (\[eq3.81\]), decreases when the hybridization increases. However, from Eq. (\[eq3.8\]), a minimum value of $I_{\sigma}$ is necessary to obtain a nonzero solution for $\overline{\gamma}$. But for very strong values of hybridization, the minimum value of $I_{\sigma}$ is not reached and only the zero solution exists. According to this analysis, there is a critical value of hybridization ($V_{0c}^{pd}$), above which the superconductivity is suppressed. Similar results, which show a critical value for the hybridization, were also obtained in Ref. [@ref18], for a $\vec{k}$-dependent hybridization and using the Hartree-Fock approximation for the electron-electron interaction. In Refs. [@ref13; @ref18], although high $T_c$ was not considered, the hybridization effects play an important role for resonant states. The discussion above is also valid if the temperature $T$ is raised with $V_{0}^{pd}$ kept constant. The only difference is that in this case the Fermi function acquires a smooth slope, changing the product $f(\omega)F_{\sigma}(\omega)$. The effect of the temperature on the Fermi function causes a decrease of $I_{\sigma}$ and consequently of $T_c$. Since the hybridization is directly related to the applied pressure [@ref13], the transition temperature $T_c$ may have a dependence on pressure through the hybridization. However, the pressure dependence of $T_c$ is very complicated in high-temperature superconductors. As discussed in Ref. [@ref13], at least in conventional superconductivity, where the electron pairing is mediated by phonons, two effects are responsible for the pressure dependence of $T_c$. The first one is related to the lattice vibrations, while the second one comes from the electronic contribution. As the pressure is increased, the lattice vibrations tend to increase $T_c$, whereas the effects of the electronic contribution associated with the hybridization cause a decrease of $T_c$. Figure \[fig5\] shows the gap function amplitude $\overline{g}$ as a function of temperature $T$ for several values of hybridization. If the hybridization is enhanced, the critical temperature $T_c$ decreases. Therefore, the present results agree with the discussion above in the scenario where the electronic effects dominate. This behavior of $T_c$ is also shown in figure \[fig6\], in phase diagrams displaying $T_c$ versus the total occupation number $n_T$. The numerical results obtained show that there is a critical value of hybridization where the superconducting phase vanishes. These results agree with the ones obtained by the authors of Ref. [@ref19] for heavy-fermion superconductivity with an $X$-boson treatment. The solid lines in figures \[fig6\](a)-(b) show the present result for $T_c$, where the effects of the temperature, the superconductivity and the Coulomb interaction have been included in the calculation of the band shift $W_{\vec{k}\sigma}$. The dashed lines correspond to the results obtained in Ref. [@ref11], where the band shift has been evaluated considering $T=0$, $\overline{\gamma}_{\vec{k}}=0$ and $U\rightarrow \infty$ (see table \[tab:1\]). The difference between the results can be explained by the analysis of Eq. (\[eq2.14\]), where some of the correlation functions present in the band shift vanish in the $U\rightarrow\infty$ limit. In Eq.
(\[eq2.14\]), the correlation function $\langle p_{l-\sigma }^{\dagger}n_{i\sigma}^{d}d_{i-\sigma }\rangle$, which is directly related to the hybridization effects in the band shift, vanishes for $U\rightarrow\infty$. It is important to highlight that the correlation functions in Eq. (\[eq2.14\]) are both negative. Therefore, for large $U$, the correlation function $\langle p_{l-\sigma }^{\dagger}n_{i\sigma}^{d}d_{i-\sigma }\rangle$ decreases and the hybridized shift $W_{\sigma}^{pd}$ is enhanced. However, for intermediate values of $U$, both correlation functions remain finite. As a consequence, the hybridization effects in the band shift and, therefore, in the superconductivity are weakened. Figures \[fig6\](c)-(d) show the present results when the value of $U$ is increased. The main consequence is, within the factorization procedure, to shift the window of doping where superconductivity is found, as in Ref. [@ref9]. In figure \[fig7\](a), the chemical potential is shown as a function of the total occupation number $n_T$ for $U=12|t^d|$ and two different hybridizations. In Ref. [@ref16], the authors criticized the Roth’s method because the compressibility $k=\frac{\partial n_T}{\partial\mu}$ is negative in the vicinity of half-filling in the Beenen and Edwards result. In Ref. [@ref14], by using a composite operator approach and imposing the Pauli principle, the authors have shown that the compressibility remains negative. However, they also showed that the pairing decreases the strength of the negative compressibility. In the present work, a careful study of the nature of the negative compressibility and of the effect of the hybridization near half-filling in the Roth’s approximation has been carried out. It has been verified that the most important contribution to the negative compressibility comes from the spin term $\langle S_jS_i\rangle$ present in the $d$-part $W_{\vec{k}\sigma}^d$ of the Roth’s band shift $W_{\vec{k}\sigma}$ (see Eqs. \[eq4.8\] and \[Apx26\]). In reference [@ref9], it has been shown that the correlation function $\langle S_jS_i\rangle$ plays an important role in the flattening of the quasi-particle bands. The correlation function $\langle S_jS_i\rangle$ increases with occupation and its effect is pronounced near half-filling. Nevertheless, when the hybridization is present, the numerical results show that it acts to suppress the negative compressibility near half-filling. Because the hybridization considered here is $\vec{k}$-independent [@ref15], the hybridization term $W_{\sigma}^{pd}$ of the band shift is constant within the Brillouin zone. Its main effect is to shift the poles of the Green’s functions and consequently to change the value of the chemical potential, suppressing the negative compressibility. In figure \[fig7\](a), it is clear that the effect of the hybridization on the chemical potential decreases the negative compressibility. Figure \[fig7\](b) shows the gap function amplitude $\overline{g}$ as a function of the total occupation number. This result agrees with those obtained in figure \[fig3\], where $\overline{g}$ decreases with increasing $V_{0}^{pd}$. Conclusions {#sec:6} =========== In this work, the Roth’s two-pole approximation is extended to study the superconducting properties of the extended Hubbard model given in Eq. (\[eq2.0\]). The quality of the Roth’s two-pole approximation had been investigated in a previous work by Beenen and Edwards [@ref9].
In their work, they showed the remarkable agreement between the Roth’s and the Monte Carlo results [@ref6; @ref7] for the one-band Hubbard model in the paramagnetic normal state. Moreover, the flat bands obtained with the Roth’s procedure show a qualitative agreement with the ARPES experimental data [@ref1] in cuprates. It is important to point out that the flattening observed in the quasi-particle bands, which produces a peak in the density of states, can be connected with the Van Hove scenario. In cuprate systems the Van Hove singularity is present in the vicinity of the Fermi energy. Therefore, it is believed that the Van Hove scenario plays a fundamental role in clarifying the mechanism which drives the transition to superconductivity in these interesting materials [@ref20]. The accuracy of the Roth’s results is closely related to the adequate evaluation of the band shift. Therefore, the focus of the present work has been to evaluate the Roth’s band shift taking into account relevant effects such as the Coulomb interaction, temperature, superconductivity and hybridization. Also, the effect of the hybridization on the superconducting properties of the model has been studied. This work has been carried out following the factorization procedure proposed by Beenen and Edwards [@ref9]. In order to study superconductivity, Beenen and Edwards proposed to include hole operators in the original set of operators that describes the normal state of the system. These operators can introduce the pairing formation in the $d$-band. The factorization procedure proposed by Beenen and Edwards [@ref9] and the $d$-wave symmetry are considered to obtain the gap function amplitude. The hybridization effects are considered by also including a $p$-operator. Thus, the set of operators is enlarged to five, which results in a five-pole approximation to the Green’s functions. The hybridization effects present in the band shift come from some correlation functions. The important point is that part of them vanish when $U\rightarrow\infty$, as was done in Ref. [@ref11]. In order to consider the hybridization effects properly, the band shift should be obtained for finite $U$. In fact, the obtained phase diagrams show that superconducting order exists in a larger range of doping when compared with the $U\rightarrow\infty$ limit [@ref11], for the same hybridization. Therefore, this result suggests that, in the $U\rightarrow\infty$ limit, the hybridization effects are overestimated. That is the ultimate justification for the use of the factorization procedure [@ref9], which is valid for intermediate values of $U$ for the gap function. The Beenen and Edwards [@ref9] results are recovered by taking $V_0^{pd}=0$ in the present work. The hybridization $V_0^{pd}$ breaks the symmetry between the electron and hole quasi-particle bands with respect to the $\vec{k}$ axis. Also, the gap amplitude function $\overline{g}$ and the critical temperature $T_c$ are suppressed with increasing hybridization $V_0^{pd}$. The results show that the chemical potential does not change significantly away from half-filling. However, near half-filling, it is shown that the negative compressibility decreases with increasing $V_0^{pd}$. The correlation functions present in the $d$-part of the band shift $W_{\vec{k}\sigma}$ were discussed in detail. When the hole operators are also considered to obtain these correlation functions, a new term appears in the $d$-part of the band shift $W_{\vec{k}\sigma}$.
The new term is directly associated with the superconducting properties of the system. Nevertheless, this term is quite small and therefore may be disregarded in the calculation of the band shift. Acknowledgments {#acknowledgments .unnumbered} --------------- The authors are grateful to the Grupo de Física Estatística-IFM, Universidade Federal de Pelotas, where part of the numerical calculations was performed. This work was partially supported by the Brazilian agencies CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) and FAPERGS (Fundação de Amparo à Pesquisa do Rio Grande do Sul). Appendix ======== The correlation functions present in the band shift $W_{\vec{k}\sigma}^d$ can be evaluated by introducing extra $B$ operators, as in the original Roth’s procedure. Combining Eq. (\[eq2.6\]) and the relation given in Eq. (\[eq2.21\]), it is possible to write $$\langle BA_n \rangle = {\cal{F}}_{\omega}\sum_m \widetilde{G}_{nm}(\omega) \langle [A_m,B]_{(+)}\rangle \label{Apx1},$$ where $A_n$ and $A_m$ are members of the set of operators given in Eq. (\[eq2.9\]). To evaluate $\langle n_{j\sigma }^{d}n_{0\sigma}^{d}\rangle - (n_{0\sigma}^{d})^2$, it has been necessary to introduce the following $B$ operators: $$B_{\vec{k}j\sigma}^{(1)}=\frac{1}{\sqrt{L}}\sum_ie^{-i\vec{k}\cdot\vec{R}_i}n_{i+j\sigma}^{d}d_{i\sigma }^{\dagger} \label{Apx2}$$ and $$B_{\vec{k}j\sigma}^{(2)}=\frac{1}{\sqrt{L}}\sum_ie^{-i\vec{k}\cdot\vec{R}_i}n_{i+j-\sigma}^{d}d_{i\sigma }^{\dagger} \label{Apx3}.$$ By considering the operator given in Eq. (\[Apx2\]), the correlation function $\langle n_{j\sigma }^{d}n_{0\sigma}^{d}\rangle$ can be written as: $$\langle n_{j\sigma }^{d}n_{0\sigma}^{d}\rangle= \frac{1}{L}\sum_{\vec k}\langle B_{\vec{k}j\sigma}^{(1)}d_{\vec{k}\sigma}\rangle \label{Apx4}$$ where the right-hand side of Eq. (\[Apx4\]) may be obtained using the relation given by Eq. (\[Apx1\]). Therefore, it is necessary to evaluate the anticommutators $[A_{m},B_{\vec{k}l\sigma}^{(1)}]_{(+)}$ for the set of operators $A_{m}$ given in Eq. (\[eq2.9\]).
For $m=1..5$, the $A$ operators are given by: $$A_{1\vec{k}\sigma}=\frac{1}{\sqrt{L}}\sum_le^{i\vec{k}\cdot\vec{R}_l}d_{l\sigma} \label{Apx5},$$ $$A_{2\vec{k}\sigma}=\frac{1}{\sqrt{L}}\sum_le^{i\vec{k}\cdot\vec{R}_l}n_{l-\sigma}^dd_{l\sigma} \label{Apx6},$$ $$A_{3\vec{k}\sigma}=\frac{1}{\sqrt{L}}\sum_le^{i\vec{k}\cdot\vec{R}_l}d_{l-\sigma}^{\dagger} \label{Apx7},$$ $$A_{4\vec{k}\sigma}=\frac{1}{\sqrt{L}}\sum_le^{i\vec{k}\cdot\vec{R}_l}n_{l\sigma}^dd_{l-\sigma}^{\dagger} \label{Apx8}$$ and $$A_{5\vec{k}\sigma}=\frac{1}{\sqrt{L}}\sum_le^{i\vec{k}\cdot\vec{R}_l}p_{l\sigma} \label{Apx9}.$$ Thus, the following results have been obtained $$\langle [A_{1\vec{k}\sigma},B_{\vec{k}j\sigma}^{(1)}]_{(+)}\rangle=n_{0\sigma}^{d} -e^{i\vec{k}\cdot \vec{R}_j}\langle d_{0\sigma}^{\dagger}d_{j\sigma}\rangle \label{Apx9.1},$$ $$\langle [A_{2\vec{k}\sigma},B_{\vec{k}j\sigma}^{(1)}]_{(+)} \rangle=\langle n_{0-\sigma }^{d}n_{j\sigma}^{d}\rangle -e^{i\vec{k}\cdot \vec{R}_j}\langle d_{0\sigma}^{\dagger}n_{j-\sigma}^{d}d_{j\sigma}\rangle \label{Apx10},$$ $$\langle [A_{3\vec{k}\sigma},B_{\vec{k}j\sigma}^{(1)}]_{(+)}\rangle=0 \label{Apx11},$$ $$\langle [A_{4\vec{k}\sigma},B_{\vec{k}j\sigma}^{(1)}]_{(+)}\rangle= -\langle n_{j\sigma }^{d}d_{0\sigma}^{\dagger} d_{0-\sigma}^{\dagger}\rangle \label{Apx12},$$ $$\langle [A_{5\vec{k}\sigma},B_{\vec{k}j\sigma}^{(1)}]_{(+)}\rangle=0 \label{Apx13}$$ where, it has been assumed that the brackets are real and unchanged when the indices 0 and $j$ are interchanged. Also, due to translational invariance of the system, $n_{0\sigma}^d=n_{j\sigma}^d$. Considering the relations given by Eqs. (\[Apx1\]) and (\[Apx4\]) with the results from Eq. (\[Apx9.1\]) to Eq. (\[Apx13\]), the correlation function $\langle n_{j\sigma }^{d}n_{0\sigma}^{d}\rangle$ can be written as: $$\begin{aligned} \langle n_{l\sigma }^{d}n_{0\sigma}^{d}\rangle&=&\alpha_{\sigma}n_{\sigma}^d -\alpha_{j\sigma}n_{0j\sigma}^d + \beta_{\sigma}\langle n_{0-\sigma }^{d}n_{j\sigma}^{d}\rangle -\beta_{j\sigma}m_{j\sigma}\nonumber \\ & &+ ~\beta_{\sigma}^{(1)}\langle n_{j\sigma }^{d}d_{0-\sigma}^{\dagger} d_{0\sigma}^{\dagger}\rangle \label{Apx14}\end{aligned}$$ where $n_{0\sigma}^d=n_{\sigma}^d$. In Eq. (\[Apx14\]), it has been introduced the following definitions: $$n_{0j\sigma}^d=\langle d_{0\sigma}^{\dagger}d_{j\sigma}\rangle=\frac{1}{L} \sum_{\vec k}{\cal F}_{\omega}G_{\vec{k}\sigma}^{11}e^{i\vec{k}\cdot \vec{R}_j} \label{Apx15},$$ $$m_{j\sigma}=\langle d_{0\sigma}^{\dagger}n_{j-\sigma }^{d}d_{j\sigma}\rangle=\frac{1}{L} \sum_{\vec k}{\cal F}_{\omega}G_{\vec{k}\sigma}^{12}e^{i\vec{k}\cdot \vec{R}_j} \label{Apx16},$$ $$\alpha_{j\sigma}=\frac{1}{L} \sum_{\vec k}{\cal F}_{\omega}\widetilde{G}_{\vec{k}\sigma}^{11}e^{i\vec{k}\cdot \vec{R}_j} \label{Apx17},$$ $$\beta_{j\sigma}=\frac{1}{L} \sum_{\vec k}{\cal F}_{\omega}\widetilde{G}_{\vec{k}\sigma}^{12}e^{i\vec{k}\cdot \vec{R}_j} \label{Apx18}$$ and $$\beta_{j\sigma}^{(1)}=\frac{1}{L} \sum_{\vec k}{\cal F}_{\omega}\widetilde{G}_{\vec{k}\sigma}^{14}e^{i\vec{k}\cdot \vec{R}_j} \label{Apx19}$$ where $G_{\vec{k}\sigma}^{11}$ is given in Eq. (\[eq2.22\]). 
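In practice, the ${\cal F}_{\omega}$ operation appearing in these definitions is evaluated by closing the contour around the poles of the Green's functions: for a pole expansion $G(\omega)=\sum_p Z_p/(\omega-E_p)$ one obtains $\sum_p Z_p f(E_p)$. The sketch below shows how such a lattice sum can be carried out numerically with toy pole data, not the actual $G_{\vec{k}\sigma}^{11}$ of Eq. (\[eq2.22\]).

```python
import numpy as np

def fermi(e, T=0.05):
    return 1.0 / (np.exp(np.clip(e / T, -60, 60)) + 1.0)

def real_space_correlator(E, Z, kx, ky, Rj):
    """(1/N) sum_k exp(i k.Rj) sum_p Z_p(k) f(E_p(k)): the F_omega operation
    applied to a pole expansion of G(k, omega). E and Z have shape (npoles, N)."""
    phase = np.exp(1j * (kx * Rj[0] + ky * Rj[1]))
    return np.mean(phase * np.sum(Z * fermi(E), axis=0))

# toy data: a single band split into two poles of equal spectral weight
nk = 64
k = 2 * np.pi * np.arange(nk) / nk
kx, ky = [g.ravel() for g in np.meshgrid(k, k)]
eps = -2.0 * (np.cos(kx) + np.cos(ky))
E = np.array([eps - 1.0, eps + 1.0])      # two toy poles per k-point
Z = np.full_like(E, 0.5)                  # spectral weights summing to one

print(real_space_correlator(E, Z, kx, ky, Rj=(0, 0)).real)   # on-site occupation per spin
print(real_space_correlator(E, Z, kx, ky, Rj=(1, 0)).real)   # nearest-neighbour correlator
```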
The remaining Green’s functions $G_{\vec{k}\sigma}^{12}$ and $G_{\vec{k}\sigma}^{14}$ are given respectively by $$G_{k\sigma }^{12}(\omega)=\frac{n_{-\sigma}^d(\omega - E_{55})\left[B\left(\omega\right)- (\omega + E_{11}){\overline{\gamma}_{k}}^2\right]} {\overline{D}\left( \omega \right)} \label{Apx19.1}$$ and $$\begin{aligned} G_{k\sigma }^{14}(\omega)&=&(n_{-\sigma}^d)^2(1-n_{-\sigma}^d)^2U\overline{\gamma}_{k}\nonumber\\ & &\times\frac{(\omega - E_{55})(\omega + E_{11}-Un_{-\sigma}^d)} {\overline{D}\left( \omega \right)} \label{Apx19.2}\end{aligned}$$ where $B\left(\omega\right)$ is defined in Eq. (\[eq4.5\]) and $\overline{D}\left( \omega \right)$ in Eq. (\[eq2.33\]). It is also necessary to define $$\widetilde{G}_{\vec{k}\sigma }^{11}(\omega)=\frac{G_{\vec{k}\sigma}^{11}(\omega) - G_{\vec{k}\sigma}^{12}(\omega)}{1-n_{-\sigma}^d}, \label{Apx19.3}$$ $$\widetilde{G}_{\vec{k}\sigma }^{12}(\omega)=\frac{G_{\vec{k}\sigma}^{12}(\omega) - n_{-\sigma}^dG_{\vec{k}\sigma}^{11}(\omega)}{n_{-\sigma}^d(1-n_{-\sigma}^d)} \label{Apx19.4}$$ and $$\widetilde{G}_{\vec{k}\sigma }^{14}(\omega)=\frac{G_{\vec{k}\sigma}^{14}(\omega) - n_{-\sigma}^dG_{\vec{k}\sigma}^{13}(\omega)}{n_{-\sigma}^d(1-n_{-\sigma}^d)} \label{Apx19.5}$$ where $G_{\vec{k}\sigma}^{13}$ is given in Eq. (\[eq3.6\]). The correlation function $\langle n_{0-\sigma }^{d}n_{j\sigma}^{d}\rangle$ present in Eq. (\[Apx14\]) can be obtained by repeating the procedure above using the operator $B_{\vec{k}j\sigma}^{(2)}$ (given by Eq. (\[Apx3\])). Thus, $$\begin{aligned} \langle n_{j-\sigma }^{d}n_{0\sigma}^{d}\rangle&=&\alpha_{\sigma}n_{-\sigma}^d +\beta_{\sigma}\langle n_{0-\sigma }^{d}n_{j-\sigma}^{d}\rangle + \alpha_{j\sigma}^{(1)}n_{0j\sigma}^{(1)}\nonumber\\ & & + \beta_{j\sigma}^{(1)}m_{j\sigma}^{(1)} -\beta_{\sigma}^{(1)}\langle n_{j-\sigma }^{d}d_{0\sigma}^{\dagger} d_{0-\sigma}^{\dagger}\rangle \label{Apx19.6}\end{aligned}$$ where $$n_{0j\sigma}^{(1)}=\langle d_{j-\sigma}d_{0\sigma}\rangle=\frac{1}{L} \sum_{\vec k}{\cal F}_{\omega}G_{\vec{k}\sigma}^{13}e^{i\vec{k}\cdot \vec{R}_j} \label{Apx20},$$ $$m_{j\sigma}^{(1)}=\langle d_{j-\sigma}n_{j\sigma }^{d}d_{0\sigma}\rangle=\frac{1}{L} \sum_{\vec k}{\cal F}_{\omega}G_{\vec{k}\sigma}^{14}e^{i\vec{k}\cdot \vec{R}_j} \label{Apx21},$$ and $$\alpha_{j\sigma}^{(1)}=\frac{1}{L} \sum_{\vec k}{\cal F}_{\omega}\widetilde{G}_{\vec{k}\sigma}^{13}e^{i\vec{k}\cdot \vec{R}_j} \label{Apx22}$$ with $$\widetilde{G}_{\vec{k}\sigma }^{13}(\omega)=\frac{G_{\vec{k}\sigma}^{13}(\omega) - G_{\vec{k}\sigma}^{14}(\omega)}{1-n_{-\sigma}^d}. \label{Apx21.1}$$ Reversing the spin labels, i.e., $\sigma \rightarrow -\sigma$, in Eq. (\[Apx19.6\]) and substituting the result into Eq. (\[Apx14\]), one obtains $$\begin{aligned} \langle n_{j\sigma }^{d}n_{0\sigma}^{d}\rangle&=&\frac{\alpha_{\sigma}n_{\sigma}^d -\alpha_{j\sigma}n_{0j\sigma}^d + \beta_{\sigma}\alpha_{-\sigma}n_{\sigma}^d -\beta_{j\sigma}m_{j\sigma}}{1-\beta_{\sigma}\beta_{-\sigma}}\nonumber \\ & &+ \frac{1}{1-\beta_{\sigma}\beta_{-\sigma}} \left [\beta_{\sigma}(\alpha_{j-\sigma}^{(1)}n_{0j-\sigma}^{(1)} +\beta_{j-\sigma}^{(1)}m_{j-\sigma}^{(1)})\right.\nonumber \\ & &\left.+(\beta_{\sigma}\beta_{-\sigma}^{(1)} +\beta_{\sigma}^{(1)})\langle n_{j\sigma }^{d}d_{0-\sigma}^{\dagger} d_{0\sigma}^{\dagger}\rangle \right ] \label{Apx23}.\end{aligned}$$ To evaluate the last two correlation functions present in Eq. 
(\[eq4.8\]), the following operators have been introduced $$B_{\vec{k}j\sigma}^{(3)}=\frac{1}{\sqrt{L}}\sum_ie^{-i\vec{k}\cdot\vec{R}_i} d_{i+j\sigma}^{\dagger}d_{i+j-\sigma }d_{i-\sigma }^{\dagger} \label{Apx24}$$ and $$B_{\vec{k}j\sigma}^{(4)}=\frac{1}{\sqrt{L}}\sum_ie^{-i\vec{k}\cdot\vec{R}_i} d_{i+j\sigma}^{\dagger}d_{i+j-\sigma }^{\dagger}d_{i-\sigma } \label{Apx25}.$$ Using $B_3$, and following the procedure outlined above, the correlation function $\langle d_{j\sigma }^{\dagger}d_{j-\sigma }d_{0-\sigma }^{\dagger}d_{0\sigma }\rangle$ is given by $$\begin{aligned} \langle S_jS_0\rangle&=&\langle d_{j\sigma }^{\dagger}d_{j-\sigma }d_{0-\sigma }^{\dagger}d_{0\sigma }\rangle= -\frac{1}{1+\beta_{\sigma}}\left[ \alpha_{j\sigma}n_{0j-\sigma}^d\right. \nonumber \\ & & \left.+ \beta_{j\sigma}m_{j-\sigma} -\alpha_{j\sigma}^{(1)}n_{0j-\sigma}^{(1)} - \beta_{j\sigma}^{(1)}m_{j-\sigma}^{(1)}\right]. \nonumber \\ \label{Apx26}\end{aligned}$$ Similarly, using $B_4$ $$\begin{aligned} \langle d_{j\sigma }^{\dagger}d_{j-\sigma }^{\dagger}d_{0-\sigma }d_{0\sigma }\rangle&=& \frac{\alpha_{j\sigma}n_{0j-\sigma}^d +\beta_{j\sigma}(n_{0j-\sigma}^d -m_{j-\sigma} ) } {1-\beta_{\sigma}}\nonumber \\ & &+\frac{\beta_{\sigma}^{(1)}\langle n_{0\sigma }^dd_{j\sigma }^{\dagger}d_{j-\sigma }^{\dagger} \rangle}{1-\beta_{\sigma}} \label{Apx27}\end{aligned}$$ where the $d$-wave symmetry has been considered, therefore, $\langle d_{j\sigma }^{\dagger}d_{j-\sigma }^{\dagger}\rangle=0$. The four $B^{(p)}$ operators introduced up to now are exactly the same operators used by Roth in Ref. [@ref8] to obtain the band shift $W_{k\sigma}$ in the normal state and without hybridization. However, in the present work, due to the presence of the hole operators (see Eq. (\[eq2.9\])), a new $B$ operator, which is given by $$B_{\vec{k}j\sigma}^{(5)}=\frac{1}{\sqrt{L}}\sum_ie^{-i\vec{k}\cdot\vec{R}_i} d_{i\sigma}^{\dagger}d_{i+j\sigma }d_{i+j-\sigma } \label{Apx28},$$ has been introduced. With this operator, the correlation function $\langle n_{0\sigma }^dd_{j\sigma }^{\dagger}d_{j-\sigma }^{\dagger} \rangle$ present in Eq. (\[Apx27\]) may be evaluated. Thus, $$\begin{aligned} \langle d_{0\sigma}^{\dagger}d_{j-\sigma }d_{j\sigma }d_{0\sigma}\rangle&=& \frac{\alpha_{j\sigma}^{(1)}n_{0j\sigma}^d +\beta_{j\sigma}^{(1)}(n_{0j\sigma}^d -m_{j\sigma} ) } {1-\beta_{\sigma}}.\nonumber \\ \label{Apx29}\end{aligned}$$ Substituting the result (\[Apx29\]) into Eq. (\[Apx27\]), the correlation function $\langle d_{j\sigma }^{\dagger}d_{j-\sigma }^{\dagger}d_{0-\sigma }d_{0\sigma }\rangle$ can be rewritten as: $$\begin{aligned} \langle d_{j\sigma }^{\dagger}d_{j-\sigma }^{\dagger}d_{0-\sigma }d_{0\sigma }\rangle&=& \frac{\alpha_{j\sigma}n_{0j-\sigma}^d +\beta_{j\sigma}(n_{0j-\sigma}^d -m_{j-\sigma} ) } {1-\beta_{\sigma}}\nonumber \\ & &+\frac{\beta_{\sigma}^{(1)}[\alpha_{j\sigma}^{(1)}n_{0j\sigma}^d +\beta_{j\sigma}^{(1)}(n_{0j\sigma}^d -m_{j\sigma} )]}{(1-\beta_{\sigma})^2}.\nonumber \\ \label{Apx30}\end{aligned}$$ The result given in Eq. (\[Apx29\]) can be used in Eq. 
(\[Apx23\]) to obtain $$\begin{aligned} \langle n_{j\sigma }^{d}n_{0\sigma}^{d}\rangle&=& (n_{0j\sigma}^d)^2 - \frac{\alpha_{j\sigma}n_{0j\sigma}^d + \beta_{j\sigma}m_{j\sigma}} {1-\beta_{\sigma}\beta_{-\sigma}}\nonumber \\ & &+ \frac{1}{1-\beta_{\sigma}\beta_{-\sigma}} \left [\beta_{\sigma}(\alpha_{-\sigma}^{(1)}n_{0j-\sigma}^{(1)} +\beta_{j-\sigma}^{(1)}m_{j-\sigma}^{(1)})\right.\nonumber \\ & &\left.-\frac{\alpha_{j\sigma}^{(1)}n_{0j\sigma}^d (\beta_{\sigma}\beta_{-\sigma}^{(1)}+\beta_{\sigma}^{(1)})}{1-\beta_{\sigma}}\right.\nonumber \\ & &-\left.\frac{\beta_{j\sigma}^{(1)}(n_{0j\sigma}^d-m_{j\sigma}) (\beta_{\sigma}\beta_{-\sigma}^{(1)}+\beta_{\sigma}^{(1)})} {1-\beta_{\sigma}}\right ] \label{Apx31}.\end{aligned}$$ Finally, substituting the results (\[Apx29\]), (\[Apx30\]) and (\[Apx31\]) into Eq. (\[eq4.9\]), the following result is obtained $$n_{-\sigma}^d(1-n_{-\sigma}^d)W_{\vec{k}\sigma }^d=h_{1\sigma} +\sum_{j\neq 0}t_{0j}^{d}e^{i\vec{k}\cdot\vec{R}_j}(h_{2j\sigma} +h_{3j\sigma}) \label{Apx32}$$ where $$h_{1\sigma}= -\sum_{j\neq 0}t_{0j}^{d}(n_{j0\sigma }^{d} - 2m_{j\sigma }) \label{Apx33},$$ $$\begin{aligned} h_{2j\sigma}&=&-\left\{\frac{\alpha_{j\sigma}n_{0j\sigma}^d + \beta_{j\sigma}m_{j\sigma}} {1-\beta_{\sigma}\beta_{-\sigma}} + \frac{\alpha_{j\sigma}n_{0j-\sigma}^d + \beta_{j\sigma}m_{j-\sigma}}{1+\beta_{\sigma}}\right.\nonumber\\ & &+\left.\frac{\alpha_{j\sigma}n_{0j-\sigma}^d +\beta_{j\sigma}(n_{0j-\sigma}^d -m_{j-\sigma} ) }{1-\beta_{\sigma}}\right\}, \label{Apx34}\end{aligned}$$ $$\begin{aligned} h_{3j\sigma} &=&\phi_{j\sigma}\left\{\frac{\alpha_{j\sigma}^{(1)}n_{0j\sigma}^d +\beta_{j\sigma}^{(1)}(n_{0j\sigma}^d -m_{j\sigma} )}{1-\beta_{\sigma}}\right\}\nonumber\\ & &- \frac{\beta_{\sigma}[\alpha_{j-\sigma}^{(1)}n_{0j-\sigma}^{(1)} +\beta_{j-\sigma}^{(1)}m_{j-\sigma}]}{1-\beta_{\sigma}\beta_{-\sigma}}\nonumber\\ & & -\frac{\alpha_{j\sigma}^{(1)}n_{0j-\sigma}^{(1)} + \beta_{j\sigma}^{(1)} m_{j-\sigma}^{(1)}}{1+\beta_{\sigma}} \label{Apx35}\end{aligned}$$ with $$\phi_{j\sigma} =\frac{\beta_{\sigma}^{(1)}}{1-\beta_{\sigma}} +\frac{\beta_{\sigma}\beta_{-\sigma}^{(1)}+\beta_{\sigma}^{(1)}}{1-\beta_{\sigma}\beta_{-\sigma}} \label{Apx36}.$$ E. Dagotto, Rev. Mod. Phys. [**66**]{} (1993) 763. M.B. Zolfl, Th. Maier, Th. Pruschke and J. Keller, Eur. Phys. J. B. [**7**]{} (1999) 377. D.S. Dessau et al., Phys. Rev. Lett. [**71**]{} (1993) 2781. E. Dagotto et al., Phys. Rev. Lett. [**73**]{} (1994) 728. J. Hubbard, Proc. R. Soc. London, A [**276**]{} (1963) 238. N. Bulut, D.J. Scalapino and S.R. White, Phys. Rev. B [**50**]{} (1994) 7215. N. Bulut, D.J. Scalapino and S.R. White, Phys. Rev. Lett. [**73**]{} (1994) 748. J. Beenen and D.M. Edwards, Phys. Rev. B [**52**]{} (1995) 13636. L.M. Roth, Phys. Rev. [**184**]{} (1969) 451. A.B. Harris and R.V. Lange, Phys. Rev. [**157**]{} (1967) 295. Z. B. Huang and H. Q. Lin, Phys. Rev. B [**63**]{} (2001) 115112. M.B. Zolfl, Th. Maier, Th. Pruschke and J. Keller, Eur. Phys. J. B. [**13**]{} (2000) 47. V. J. Emery, Phys. Rev. Lett. [**58**]{} (1987) 2794. E.J. Calegari, S.G. Magalhães and A.A. Gomes, Intern. Journ. of Modern Phys. B, Vol. [**18**]{} No. 2 (2004) 241. T. Herrmann and W. Nolting, J. Magn. Magn. Mater. [**170**]{} (1997) 253. G. M. Japiassu, M. A. Continentino and A. Troper, Phys. Rev. B [**45**]{} (1992) 2986. E.J. Calegari, S.G. Magalhães and A.A. Gomes, Physica B Vol. [**359-361C**]{} (2005) 560. Tudor D. Stanescu, Ivar Martin and Philip Philips, Phys. Rev. B [**62**]{} (2000) 4300. A. Avella, F. Mancini, D. Villani, L. 
Siurakshina, V.Y. Yushankhai, Intern. Journ. of Modern Phys. B, Vol. [**12**]{} No. 1 (1998) 81. E.J. Calegari, S.G. Magalhães and A.A. Gomes, Intern. Journ. of Modern Phys. B, Vol. [**16**]{} No. 26 (2002) 3895. K. Sengupta and S.K. Ghatak, Phys. Lett. A [**186**]{} (1994) 419. Hybertsen et al., Phys. Rev. B [**39**]{} (1989) 9028. G. M. Japiassu, M. A. Continentino and A. Troper, J. Appl. Phys. [**73**]{} (1993) 6648. L.H.C.M. Nunes [*et al.*]{}, Phys. Rev. B [**68**]{} (2003) 134511. R.S. Markiewicz, J. Phys. Chem. Solids [**58**]{} (1997) 1179. [^1]: ggarcia@ccne.ufsm.br
--- abstract: 'Interference of multiple photons via a linear-optical network has profound applications for quantum foundations, quantum metrology and quantum computation. In particular, a boson sampling experiment with a moderate number of photons becomes intractable even for the most powerful classical computers, and will lead to “quantum supremacy". Scaling up from small-scale experiments requires highly indistinguishable single photons, which may be prohibitively demanding for many physical systems. Here we experimentally demonstrate a time-resolved version of boson sampling by using photons not overlapping in their frequency spectra from three atomic-ensemble quantum memories. Time-resolved measurement enables us to observe nonclassical multiphoton correlation landscapes. An average fidelity over several interferometer configurations is measured to be 0.936(13), which is mainly limited by high-order events. Symmetries in the landscapes are identified to reflect symmetries of the optical network. Our work thus provides a route towards quantum supremacy with distinguishable photons.' author: - 'Xu-Jie Wang$^{1,\,2,\,3,\,*}$' - 'Bo Jing$^{1,\,2,\,3,\,*}$' - 'Peng-Fei Sun$^{1,\,2,\,3}$' - 'Chao-Wei Yang$^{1,\,2,\,3}$' - 'Yong Yu$^{1,\,2,\,3}$' - 'Vincenzo Tamma$^{4,\,5}$' - 'Xiao-Hui Bao$^{1,\,2,\,3}$' - 'Jian-Wei Pan$^{1,\,2,\,3}$' bibliography: - 'myref.bib' title: 'Time-resolved boson sampling with photons of different colors' --- Universal linear-optical quantum computing [@Kok2007a] is generally considered to be challenging in the near future. An intermediate quantum computing model, namely “boson sampling", which requires less demanding experimental overheads [@aaronson2011computational], has attracted intensive experimental interest in recent years [@spring2013boson; @broome2013photonic; @tillmann2013experimental; @Crespi2013c; @Spagnolo2014; @Carolan2014; @Carolan2015; @Bentivegna2015; @wang2017high]. A boson sampling machine can be realized by interfering many single photons through a linear optical network. Sampling the output photon distribution is strongly believed to be intractable for a classical computer for large photon numbers [@aaronson2011computational; @Rohde2012]. For experimental realizations, photon indistinguishability is crucially important, since for distinguishable photons the computational complexity collapses to a polynomial scaling, which becomes tractable for a classical computer [@Rohde2012]. Requiring complete overlap in the photonic spectra as a way to achieve photon indistinguishability may impose a challenge for many types of photon sources, particularly solid-state single-photon emitters [@Aharonovich2016]. The inhomogeneous distribution of the complex mesoscopic environments in the solid state tends to cause frequency distinguishability for photons created from different emitters. Photon indistinguishability and interference are also very important fundamentally. The Hong-Ou-Mandel (HOM) dip is a beautiful manifestation of the interference of two identical photons [@alley1986proceedings; @Hong1987; @shih1988new; @Walmsley2017]. When two photons differ in color, it was shown that high interference visibility can be recovered by using time-resolved measurements [@Legero2003b; @Legero2004]. Perfect coalescence can still happen when photons are detected simultaneously in the two output modes. It was later shown that perfect entanglement swapping by interfering color-different photons is also possible by using time-resolved measurement and active feed-forward [@Zhao2014]. 
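To make the two-photon time-resolved picture concrete, the following sketch (our own toy model, not the exact treatment of the cited works) evaluates the coincidence probability density $P(t_1,t_2)\propto|\xi_1(t_1)\xi_2(t_2)-\xi_1(t_2)\xi_2(t_1)|^2/4$ behind a balanced beam splitter for two photons with identical exponentially decaying envelopes but carrier frequencies differing by $\Delta\nu$: the density vanishes at $t_1=t_2$ (coalescence) and beats with period $1/\Delta\nu$ away from it. The numbers below are merely of the same order as those quoted later in the experiment.

```python
import numpy as np

# Illustrative two-photon sketch: time-resolved coincidence density
# behind a 50:50 beam splitter for two photons with the same
# exponentially decaying envelope but carrier frequencies differing
# by delta_nu.  P(t1,t2) ~ |xi1(t1)xi2(t2) - xi1(t2)xi2(t1)|^2 / 4.

def envelope(t, gamma):
    """One-sided exponential amplitude envelope with decay rate gamma."""
    return np.where(t >= 0, np.sqrt(gamma) * np.exp(-gamma * t / 2), 0.0)

def coincidence_density(t1, t2, delta_nu, gamma):
    xi = lambda t, w: envelope(t, gamma) * np.exp(-1j * w * t)
    w1, w2 = 0.0, 2 * np.pi * delta_nu        # only the difference matters
    amp = xi(t1, w1) * xi(t2, w2) - xi(t2, w1) * xi(t1, w2)
    return np.abs(amp) ** 2 / 4

# Hypothetical numbers of the same order as in the experiment:
delta_nu = 39.4e6           # Hz, frequency difference between the photons
gamma = 2 * np.pi * 12.9e6  # 1/s, single-photon linewidth ~ 12.9 MHz

tau = np.linspace(-150e-9, 150e-9, 601)       # t1 - t2
t2 = 60e-9                                    # fix one detection time
dens = coincidence_density(t2 + tau, t2, delta_nu, gamma)

print("density at tau = 0 :", dens[len(tau) // 2])   # ~0: coalescence
print("beat period  (ns)  :", 1e9 / delta_nu)        # ~25.4 ns
```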
Very recently, Tamma and Laibacher showed that by using polarization- and time-resolved detections at the output of a random linear optical network, a much richer multiphoton correlation landscape can be observed for a boson sampling experiment with photons not overlapping in their temporal or frequency spectra and photons with different polarizations [@tamma2015multiboson]. They also proved that the computational complexity is at least as hard as in standard boson sampling [@laibacher2015physics; @tamma2016multi; @laibacher2018toward]. In this paper, we report for the first time a time-resolved boson sampling experiment when no overlap occurs between the photonic spectra [@tamma2015multiboson; @laibacher2015physics]. We make use of three cold atomic ensembles to create three independent single photons, which are injected into a linear optical network with an adjustable internal phase. At the output ports of the network, the three photons are detected in a time-resolved manner. We observe different kinds of multiphoton correlation landscapes as we change the phase configuration. The observed coincidence landscapes agree very well with theoretical calculations. We also find that symmetries in the multiphoton coincidence landscape can reveal symmetries of the optical network [@laibacher2017spectrally]. Our work shows that multiphoton interference with distinguishable photons can be recovered by using time-resolved measurements, and thus provides a route towards demonstrating quantum supremacy [@Preskill2012; @Harrow2017; @laibacher2018toward] with nonidentical photons. ![**Experimental setup. a**, Schematic diagram of one atomic-ensemble quantum-memory setup used to create a single photon. A ring cavity, which mainly consists of one partially reflecting mirror (PR, $R\simeq 80\%$) and two highly reflecting mirrors (HR, $R\ge 99.9\%$), is used to enhance the single-photon generation rate. The half- and quarter-wave plates ($\lambda /2$ and $\lambda /4$) in the cavity are used for polarization compensation. The cavity is intermittently locked by a piezoelectric ceramic transducer (PZT). In our boson sampling experiment, we make use of three similar setups to create three single photons. **b**, Linear optical network implementing boson sampling. The network is mainly composed of three beam splitters (BS, $R:T=1:1$ or $2:1$). The heralded single-photon efficiency from the atoms to the network is 45%. **c**, Atomic levels used in the single-photon source. Atoms are prepared in state $|1,-1\rangle_{F,mF}$ by optical pumping during each write process. After applying the write pulse, write-out photons are collected in two orthogonal polarizations; once the spinwave is produced in state $|2,+1\rangle_{F,mF}$, a $\pi$-pulse is applied to transfer the excitation to state $|2,-1\rangle_{F,mF}$. **d**, A typical read-out single-photon profile. The histogram shows the photon-count distribution measured by a single-photon detector during the coincidence measurement. The solid line represents the read-out photon profile used in the theoretical calculations.[]{data-label="fig:Setup"}](\string"Fig1".pdf){width="\columnwidth"} In our experiment, we make use of a versatile setup of cold atoms [@yang2016efficient] to create single photons [@Chen2006; @Matsukevich2006b]. An atomic ensemble is captured and cooled through magneto-optical trapping (MOT). 
By employing the spontaneous Raman scattering process with a $\Lambda$ energy scheme, we can create nonclassical correlations between a scattered photon and a spinwave excitation in a probabilistic way [@Duan2001]. The spinwave excitation can be later retrieved as a second photon on demand. The experimental scheme is depicted in Fig. \[fig:Setup\]. To suppress high-order events, the excitation probability in the write process is typically very low. In order to enhance the photon generation rate, we make use of a dual Raman scattering process. A $\sigma^+$-polarized write-out photon heralds a $|2,-1\rangle_{F,mF}$ collective excitation, while a $\sigma^-$-polarized write-out photon heralds a $|2,+1\rangle_{F,mF}$ collective excitation, in which case we conditionally apply a $\pi$ pulse that transfers the $|2,+1\rangle_{F,mF}$ excitation to a $|2,-1\rangle_{F,mF}$ excitation. The $|2,-1\rangle_{F,mF}$ excitation is later retrieved as a single photon on demand. Polarization multiplexing enables us to double the photon generation rate without increasing the contribution of high-order events.

  $p_e$            0.01       0.02        0.03         0.04        0.06
  ---------------- ---------- ----------- ------------ ----------- -----------
  $g^{(2)}(s_1)$   0.072(7)   0.126(9)    0.201(11)    0.233(11)   0.322(12)
  $g^{(2)}(s_2)$   0.094(9)   0.165(10)   0.222(11)    0.279(11)   0.335(13)
  $g^{(2)}(s_3)$   0.110(9)   0.142(8)    0.220(101)   0.251(10)   0.361(13)

  : Measurement of the second-order auto-correlation $g^{(2)}$.[]{data-label="table1"}

To demonstrate multiphoton boson sampling, we make use of three similar setups to create three single photons as shown in Fig. \[fig:Setup\]. We first measure the single-photon quality. For each setup, we repeat the write process until a write-out photon is detected, and afterwards retrieve the heralded spin-wave excitation as a single photon. We measure the second-order auto-correlation $g^{(2)}$ for the retrieved photon as a function of retrieval time. Within a storage duration of 50 $\mu$s, we find that the parameter $g^{(2)}$ hardly changes for each setup. We also change the excitation probability $p_e$ and measure the parameter $g^{(2)}$ accordingly for each setup, with the results shown in Table \[table1\]. Each value is averaged over the range of $0\sim50$ $\mu$s. The $g^{(2)}$ parameter under the same write-out probability is nearly the same for the three setups. In our experiment, we set the excitation probability to be $p_e=0.04$ as a compromise between the single-photon generation rate and the single-photon quality. It is crucial for a boson sampling experiment that many single photons are released simultaneously. For traditional photon sources like spontaneous parametric down-conversion or spontaneous four wave mixing, this requirement imposes a scalability issue, since heralded photons are generated randomly in time and simultaneous creation is rare. An additional quantum memory may be employed to solve this issue [@Kaneda2015; @Kaneda2017]. More recently, the so-called scattershot multiboson correlation sampling problem was introduced by allowing additional sampling in the photonic inner degrees of freedom (e.g. central frequencies and times) at the interferometer input and output [@laibacher2018toward]. In this case, the classical hardness of approximate boson sampling experiments emerges also for photons emitted at random times or with random colors [@laibacher2018toward]. In our experiment, however, the on-demand character of the photon source directly enables us to create multiple photons in an efficient way. 
If the memory lifetime is long enough, we can simply repeat the write process for each setup until success and simultaneously retrieve three photons when all setups are ready. However, our current setup has a limited lifetime ($\sim 64 {\rm \mu s}$); thus we set a maximal trial number of $m=7$. If fewer than three setups are ready when the maximal trial limit is met, we restart the preparation process; otherwise we retrieve the three photons simultaneously. Such a preparation process enhances the $n$ photon rate by a factor of $\left[ 1-(1-p_e)^m \right]^n/(mp_e^n)$. ![image](\string"Fig2".pdf){width="1.3\columnwidth"} The prepared multiple single photons are coupled into a multiport interferometer, which is constructed using bulk linear optics as shown in Fig. \[fig:Setup\]b. The internal phase is actively stabilized to an adjustable value $\varphi$. Photons at each output mode are detected with a single-photon detector (SPD). All detected events are registered with a multichannel time-to-digital converter (TDC), from which we can analyze multifold temporal correlations. To demonstrate boson sampling with color-different photons, the three photons are blue detuned by $2\pi \times 72.4$ MHz ($s_1$), $2\pi \times 33.0$ MHz ($s_2$), and $2\pi \times 52.4$ MHz ($s_3$) relative to the D1-line transition $\left|F\!=\!1\right>$ $\leftrightarrow$ $\left|F'\!=\!2\right>$ by adjusting the read beam frequency accordingly for each source. We make measurements for a number of different phases $\varphi$, and analyze multiphoton temporal correlations, with results shown in Fig. \[fig2\]a-d. ![**Measured temporal correlation landscapes in a 3-dimensional coordinate system.** **a**, $\varphi$ is set to $\pi /2$. **b**, $\varphi$ is set to $3\pi /2$. Both plots are viewed along the direction $(1, 1, 1)$. Each data point represents a three-photon coincidence event registered at $(t_1,t_2,t_3)$.[]{data-label="fig3"}](\string"Fig3".pdf){width="\columnwidth"} We find that the correlation landscapes have very interesting structures, and change remarkably as we change the phase $\varphi$. In the case of $\varphi=0$, the unitary transformation is $$U_0=\frac{1}{\sqrt{3}} \begin{bmatrix} -1 & (\sqrt{3}-1)/2 & i(\sqrt{3}+1)/2 \\ i & i(\sqrt{3}+1)/2 & (1-\sqrt{3})/2 \\ 1 & -1 & i \end{bmatrix}.$$ The permanent of $U_0$ is 0, which means that interference occurs destructively for all multiphoton paths if the input photons are perfectly identical to each other. Thus, no three-fold coincidence will be detected at the output modes. In our experiment, however, the input photons have different frequencies ($\Delta\omega \geq \delta\omega$, $\delta\omega = 2\pi \times 12.9 \,\rm{MHz}$ is the single-photon linewidth). Nevertheless, the adoption of fast detection enables us to erase the color information, and destructive interference can be recovered if the three photons are detected simultaneously [@tamma2015multiboson]. This is clearly shown by Fig. \[fig2\]a, as the region around $(0,0)$ is rather dim in comparison with the peaks nearby. Moving away from the dip, we observe beating patterns in both the $x \equiv t_1-t_3$ and $y \equiv t_2-t_3$ directions. Based on our Fourier analysis, beating in the $x$ direction has a period of $51.4(11)$ ns, which is mainly due to interference of photon $s_1$ and photon $s_3$. Beating in the $y$ direction has a period of $24.9(3)$ ns, which is mainly due to interference of photon $s_1$ and photon $s_2$. 
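The statements in this paragraph can be checked directly. The short sketch below (our own consistency check, not part of the original analysis) computes the permanent of $U_0$ by brute force, the pairwise beat periods $1/\Delta\nu$ expected from the quoted detunings, and the multiplexing enhancement factor $\left[ 1-(1-p_e)^m \right]^n/(mp_e^n)$ for the quoted parameters $p_e=0.04$, $m=7$, $n=3$.

```python
import numpy as np
from itertools import permutations

def permanent(M):
    """Permanent by direct summation over permutations (fine for 3x3)."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

s3 = np.sqrt(3)
U0 = (1 / s3) * np.array([
    [-1, (s3 - 1) / 2,      1j * (s3 + 1) / 2],
    [1j, 1j * (s3 + 1) / 2, (1 - s3) / 2],
    [1,  -1,                1j]])

print("perm(U0) =", permanent(U0))          # ~0 up to rounding errors

# Pairwise beat periods expected from the quoted detunings (MHz):
nu = {"s1": 72.4, "s2": 33.0, "s3": 52.4}
for a, b in [("s1", "s3"), ("s1", "s2"), ("s2", "s3")]:
    T_ns = 1e3 / abs(nu[a] - nu[b])         # 1/delta_nu in ns
    print(f"beat {a}-{b}: {T_ns:.1f} ns")   # 50.0, 25.4, 51.5 ns

# Multiplexing enhancement of the n-photon rate, formula quoted above:
p_e, m, n = 0.04, 7, 3
enhancement = (1 - (1 - p_e) ** m) ** n / (m * p_e ** n)
print(f"n-photon rate enhancement ~ {enhancement:.0f}x")  # ~34x
```

The first two periods agree with the measured $51.4(11)$ ns and $24.9(3)$ ns values quoted above.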
In the case of $\varphi=\pi/2$, the interferometer acts as a symmetric tritter, which is described by $U_{ds}=e^{i2\pi ds/3}/\sqrt{3}$, where $s=1,2,3$ refers to the modes at the source side and $d=1,2,3$ refers to the modes at the detector side. The multiphoton correlation landscape has a quite different structure, as shown in Fig. \[fig2\]b [@laibacher2017spectrally]. At the center point, it corresponds to the case of identical photons, for which the output of this tritter is either one photon in each mode or three photons in the same mode [@Spagnolo2013]. Away from the center point, the interference period is measured to be $49.7(7)$ ns and $50.3(13)$ ns for the $x$ and $y$ directions respectively, which is mainly due to the equal contribution of all three pairwise interferences of the three input photons [@laibacher2017spectrally]. We also find that the observed correlation landscapes have some symmetries which may reflect symmetries either of the photons or of the linear-optic network [@laibacher2017spectrally]. When $\varphi$ is switched from $0$ to $\pi$ or from $\pi/2$ to $3\pi/2$, it is equivalent to interchanging the mode labels 1 and 2 after $BS_3$. Therefore, the correlation landscape of $\varphi=\pi$ ($3\pi/2$) should look the same as the landscape of $\varphi=0$ ($\pi/2$) if we interchange the axes $x$ and $y$. This is clearly confirmed by our results shown in Fig. \[fig2\]c and Fig. \[fig2\]d, compared with Fig. \[fig2\]a and Fig. \[fig2\]b respectively. Moreover, in the case of $\varphi=\pi /2$ or $3\pi /2$, the network acts as a symmetric tritter, which should give rise to a threefold symmetry in the correlation landscapes [@laibacher2017spectrally]. Therefore, we replot our experimental data in a 3-dimensional coordinate system, as shown in Fig. \[fig3\]. Each data point represents a threefold coincidence event at the time coordinate ($t_1$, $t_2$, $t_3$). We can clearly identify a threefold rotational symmetry around the axis $(1,1,1)$, and three mirror-symmetry planes [@laibacher2017spectrally]. To evaluate how well our experiment demonstrates the boson sampling process, we calculate the corresponding theoretical landscapes and make comparisons. By modeling the temporal wave packet with the function shown in Fig. \[fig:Setup\]d, the calculated theoretical landscapes are shown in Fig. \[fig2\]e-h. The experimental and theoretical landscapes clearly resemble each other very well. To make a quantitative evaluation, we define a fidelity function as [@tillmann2013experimental] $$F=\frac{\sum\nolimits_{i,j}^{N,M} f_{e}\left( x_i,y_j \right)^{1/2} \centerdot f_{t}\left( x_i,y_j \right)^{1/2}}{\left[ \sum\nolimits_{i,j}^{N,M} f_{e}\left( x_i,y_j \right) \right] ^{1/2} \centerdot \left[\sum\nolimits_{i,j}^{N,M} f_{t}\left( x_i,y_j \right) \right] ^{1/2}},$$ where $f_{e}(x,y)$ is the experimentally measured probability distribution function, and $f_{t}(x,y)$ is the theoretical distribution. The fidelity reaches its maximal value of 1 for two identical landscapes. The calculated fidelities for different phases are shown as insets in Fig. \[fig2\], and give an average value of $\overline{F} = 0.936(13)$. We attribute the imperfect fidelity mainly to the limited single-photon quality. In summary, we have demonstrated a time-resolved version of boson sampling with color-different photons. The observed correlation landscapes have very rich structures and show symmetries that are inherently related to symmetries of the linear-optic network. 
In addition, the adoption of memory-based photon sources enables efficient creation of multiphoton states via feedback. Moreover, the method of using color-different photons for boson sampling significantly mitigates the requirement of generating identical photons for many physical systems. By employing a deterministic approach to photon creation and efficient coupling, scalably extending the current experiment to more disparate photons will become possible in the near future, and may lead to quantum supremacy with photons in a conceptually new way. This work also motivates future demonstration of the computational hardness of boson sampling with input photons with random colors [@laibacher2018toward] as well as novel schemes for the characterization of the evolution of arbitrary single-photon states in linear optical networks [@tamma2015multiboson; @laibacher2017spectrally; @zimmermann2017role]. This work was supported by the National Key R&D Program of China (No. 2017YFA0303902), the National Natural Science Foundation of China, and the Chinese Academy of Sciences. V.T. acknowledges partial support from the Army Research Laboratory under Cooperative Agreement Number W911NF-17-2-0179. V.T. also acknowledges useful discussions with Simon Laibacher.
--- author: - | Zhan-jun Zhang$^1$, Zhong-xiao Man$^1$ and Yong Li$^2$\ [$^1$ Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China]{}\ [$^2$ Department of Physics, Central China Normal University, Wuhan 430079, China]{}\ [Email: zhangzj@wipm.ac.cn]{} title: 'Economically Improving Message-Unilaterally-Transmitted Quantum Secure Direct Communication to Realize Two-Way Communication [^1]' --- [*We present a subtle idea to economically improve message-unilaterally-transmitted quantum secure direct communication (QSDC) protocols to realize two-way secure direct communication.*]{}\ [*PACS:*]{} 03.67.Hk, 03.65.Ud\ Suppose Alice and Bob securely share a secret key. When the message sender, say, Alice, wants to send her secret messages to Bob, she can first use the secret key to encrypt her secret messages and then publicly send the encrypted messages to the message receiver, say, Bob, via a classical channel. After receiving the encrypted messages, Bob can use the secret key to decrypt them and securely obtain the secret messages that Alice wants to send him. According to the classical one-time-pad method, if the secret key is used only once, then both the secret messages and the secret key are secure. Quantum key distribution (QKD) is an ingenious application of quantum mechanics, in which two remote legitimate users (Alice and Bob) establish a shared secret key through the transmission of quantum signals. Hence, much attention has been focused on QKD since the pioneering work of Bennett and Brassard published in 1984 \[1\]. To date, many theoretical QKD protocols have been proposed \[2-20\]. Recently, a novel concept, quantum secure direct communication (QSDC), was proposed \[15,16,19\]. In those QSDC protocols \[15,16,19\], secret messages can be transmitted directly without first creating a secret key to encrypt them, and quantum mechanics ensures the security of the transmitted secret messages. Nevertheless, so far all the QSDC protocols \[15,16,19,21\] are message-unilaterally-transmitted communication protocols; that is, two legitimate parties cannot simultaneously transmit their different secret messages to each other (dialogue) with a single set of quantum communication devices. In general, convenient bidirectional simultaneous communication (dialogue) is very useful and usually desired. In fact, the goal of realizing two-way secure direct communication can be achieved by adopting the strategy of using two sets of message-unilaterally-transmitted QSDC devices between the two parties. Later in this short paper, we will show that the above-mentioned strategy for two-way secure direct communication is uneconomical, and we will present a subtle idea to economically improve a message-unilaterally-transmitted QSDC protocol to realize two-way secure direct communication. Suppose that there is a message-unilaterally-transmitted QSDC device between Alice and Bob, so that Alice can transmit her secret messages to Bob via the quantum channel according to the QSDC protocol. The security of Alice’s secret message transmission is ensured by quantum mechanics in the QSDC protocol. After Alice’s secret message transmission, if both Alice and Bob further take the secret messages as a shared secret key, then Bob can use it to encrypt his secret messages and publicly transmit his encrypted messages to Alice via a classical channel. Since Alice knows the so-called secret key, she can extract Bob’s secret messages after receiving Bob’s encrypted messages. 
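As a purely classical toy illustration of this procedure (the quantum QSDC stage is not modelled, and the bit strings are made up), the sketch below reuses the message that Alice has already transmitted securely as a one-time-pad key for Bob’s reply over the public classical channel.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings (one-time pad)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Step 1 (quantum, not modelled here): Alice sends her secret message to
# Bob through a message-unilaterally-transmitted QSDC protocol, so both
# parties now share it.  We simply fabricate it for illustration.
alice_message = secrets.token_bytes(16)

# Step 2 (classical): Bob treats Alice's message as a shared one-time-pad
# key and encrypts his own reply, which he announces publicly.
bob_reply = b"reply from Bob!!"            # 16 bytes, hypothetical content
ciphertext = xor_bytes(bob_reply, alice_message)

# Step 3: Alice, who knows the key, decrypts Bob's reply.
assert xor_bytes(ciphertext, alice_message) == bob_reply
print("two-way exchange completed with a single QSDC transmission")
```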
According to the classical one-time-pad method, during this transmission via a classical channel, both the so-called secret key and Bob’s secret messages are secure. Hence in our improved scheme, all the secret messages can be transmitted securely, either via a quantum channel or via a classical channel. Moreover, the strategy of message authentication can be used to protect the secret messages transmitted either from Alice to Bob via the quantum channel or from Bob to Alice via the classical channel. To summarize, our subtle idea is to let both parties take the secret messages securely transmitted from Alice to Bob via a quantum channel in terms of a message-unilaterally-transmitted QSDC protocol as their shared secret key, and then to let Bob directly and securely communicate with Alice via a public classical channel. We note that a classical channel is also employed in the QSDC protocols \[15,16,19,21\]; in our improved scheme we only use it more frequently. With present-day technologies, provided that a secret key is securely shared by the two parties in advance, the cost of communication between them via a public classical channel is much lower than the cost of QSDC via a quantum channel. Hence, as mentioned before, the strategy of using two sets of message-unilaterally-transmitted QSDC devices between the two parties to realize two-way secure communication is uneconomical. In contrast, our strategy is optimal. Incidentally, in our previous preprints \[22,23\], this subtle idea was also used, although not deliberately. That is, in those works, Bob introduces additional unitary operations and finally publicly announces his measurement outcomes. The essence of his actions is first to use Alice’s secret message as a secret key to encrypt his secret messages and then to communicate with Alice via a public classical channel. According to our above description of our subtle idea, one can see that the additional unitary operations in Refs. 22 and 23 are completely unnecessary. Bob can directly perform a Bell-state measurement and then use the measurement result as a secret key to encrypt his secret messages. Although our subtle idea is simple, it is very important, for it can be applied to any message-unilaterally-transmitted QSDC protocol to realize two-way secure direct communication. So far only a few QSDC protocols \[15,16,19,21\] have been proposed, and they are all message-unilaterally-transmitted protocols. Hence, they can all be improved. This work is supported by the National Natural Science Foundation of China under Grant No. 10304022.\ [**References**]{} \[1\] C. H. Bennett and G. Brassard, in [*Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing, Bangalore, India*]{} (IEEE, New York, 1984), p. 175. \[2\] A. K. Ekert, Phys. Rev. Lett. [**67**]{}, 661 (1991). \[3\] C. H. Bennett, Phys. Rev. Lett. [**68**]{}, 3121 (1992). \[4\] C. H. Bennett, G. Brassard, and N.D. Mermin, Phys. Rev. Lett. [**68**]{}, 557 (1992). \[5\] L. Goldenberg and L. Vaidman, Phys. Rev. Lett. [**75**]{}, 1239 (1995). \[6\] B. Huttner, N. Imoto, N. Gisin, and T. Mor, Phys. Rev. A [**51**]{}, 1863 (1995). \[7\] M. Koashi and N. Imoto, Phys. Rev. Lett. [**79**]{}, 2383 (1997). \[8\] W. Y. Hwang, I. G. Koh, and Y. D. Han, Phys. Lett. A [**244**]{}, 489 (1998). \[9\] P. Xue, C. F. Li, and G. C. Guo, Phys. Rev. A [**65**]{}, 022317 (2002). \[10\] S. J. D. Phoenix, S. M. Barnett, P. D. Townsend, and K. J. Blow, J. Mod. Opt. [**42**]{}, 1155 (1995). 
\[11\] H. Bechmann-Pasquinucci and N. Gisin, Phys. Rev. A [**59**]{}, 4238 (1999). \[12\] A. Cabello, Phys. Rev. A [**61**]{}, 052312 (2000); [**64**]{}, 024301 (2001). \[13\] A. Cabello, Phys. Rev. Lett. [**85**]{}, 5635 (2000). \[14\] G. P. Guo, C. F. Li, B. S. Shi, J. Li, and G. C. Guo, Phys. Rev. A [**64**]{}, 042301 (2001). \[15\] A. Beige, B. G. Englert, C. Kurtsiefer, and H. Weinfurter, Acta Phys. Pol. A [**101**]{}, 357 (2002). \[16\] K. Bostrom and T. Felbinger, Phys. Rev. Lett. [**89**]{}, 187902 (2002). \[17\] G. L. Long and X. S. Liu, Phys. Rev. A [**65**]{}, 032302 (2002). \[18\] F. G. Deng and G. L. Long, Phys. Rev. A [**68**]{}, 042315 (2003). \[19\] F. G. Deng, G. L. Long, and X. S. Liu, Phys. Rev. A [**68**]{}, 042317 (2003). \[20\] N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. [**74**]{}, 145 (2002). \[21\] F. G. Deng and G. L. Long, Phys. Rev. A [**69**]{}, 052319 (2004). \[22\] Z. J. Zhang, e-print quant-ph/0403186. \[23\] Z. J. Zhang and Z. X. Man, e-print quant-ph/0403215. [^1]: Email: zhangzj@wipm.ac.cn.
--- abstract: 'We show that the theory of hyperrings, due to M. Krasner, supplies a perfect framework to understand the algebraic structure of the adèle class space $\H_\K=\A_\K/\K^\times$ of a global field $\K$. After promoting $\F_1$ to a hyperfield $\kras$, we prove that a hyperring of the form $R/G$ (where $R$ is a ring and $G\subset R^\times$ is a subgroup of its multiplicative group) is a hyperring extension of $\kras$ if and only if $G\cup\{0\}$ is a subfield of $R$. This result applies to the adèle class space which thus inherits the structure of a hyperring extension $\H_\K$ of $\kras$. We begin to investigate the content of an algebraic geometry over $\kras$. The category of commutative hyperring extensions of $\kras$ is inclusive of: commutative algebras over fields with semi-linear homomorphisms, abelian groups with injective homomorphisms and a rather exotic land comprising homogeneous non-Desarguesian planes. Finally, we show that for a global field $\K$ of positive characteristic, the groupoid of the prime elements of the hyperring $\H_\K$ is canonically and equivariantly isomorphic to the groupoid of the loops of the maximal abelian cover of the curve associated to the global field $\K$.' address: - | A. Connes: Collège de France\ 3, rue d’Ulm\ Paris, F-75005 France\ I.H.E.S. and Vanderbilt University - | C. Consani: Mathematics Department\ Johns Hopkins University\ Baltimore, MD 21218 USA author: - Alain Connes - Caterina Consani title: The hyperring of adèle classes --- Introduction ============ The goal of this paper is to understand the algebraic structure of the adèle class space $\H_\K=\A_\K/\K^\times$ of a global field $\K$. In our recent work [@announc3], we have shown that the introduction of an elementary theory of algebraic geometry over the absolute point $\Spec\F_1$ reveals the role of the natural monoidal structure of the adèle class space $\A_\K/\K^\times$ of a global field. This structure is used to reformulate, in a more conceptual manner, the spectral realization of zeros of $L$-functions. In the subsequent paper [@jamifine], we have given substantial evidence to the statement that idempotent analysis and tropical geometry determine, through the theory of idempotent semi-rings, a natural framework where to develop mathematics in “characteristic one". A key role in the formulation of these ideas is played by the procedure of de-quantization that requires the replacement of the use of real analysis by its idempotent version, and the implementation of the semifield $\rmax$ in place of the classical $\R_+$. Long ago, M. Krasner devised an analogous procedure that can be performed at a finite place of $\Q$ ([[*cf.*]{} ]{}[@Krasner]). His construction shows how to approximate a local field $\F_q((T))$ of positive characteristic by a system of local fields of characteristic zero and with the same residue field, as the absolute ramification index tends to infinity. Krasner’s method is based on the idea of class field and on the generalization of the classical additive law in a ring by the structure of a hypergroup in the sense of F. Marty [@Marty]. This process produces the notion of a (Krasner) [*hyperring*]{} ([[*cf.*]{} ]{}[@Krasner1]) which fits perfectly with our previous constructions and in particular with the framework of noncommutative geometry. 
In the usual theory of semi-rings, it is not possible to reconcile the characteristic one property stating that $x+x=x$ for all elements $x$ of a semi-ring $R$, with the additive group law requiring that every element in $R$ admits an additive inverse. On the other hand, the existence of an additive inverse plays a crucial role when, for instance, tensor products are involved. The structure of a hyperring makes this compatibility –between characteristic one and existence of additive inverse– possible. Remarkably, the adèle class space $\ads=\A_\K/\K^\times$ of a global field $\K$ turns out to possess the correct hyperring structure that combines the two above properties and in particular one has $ x+x=\{0,x\} $ for all $x\in \ads$. This formula means that $\ads$ is a hyperring over the simplest hyperfield $\kras$ that is defined as the set $\{0,1\}$ endowed with the obvious multiplication and a hyper-addition requiring that $1+1=\{0,1\}$. Moreover, while the quotient of a ring $R$ by a subgroup $G\subset R^\times$ of its multiplicative group is always a hyperring ([[*cf.*]{} ]{}[@Krasner1]), we find that $R/G$ is an extension of $\kras$ exactly when $G\cup \{0\}$ is a [*subfield*]{} of $R$ ([[*cf.*]{} ]{}Proposition \[krasner2\]). We explicitly remark here that the “absolute point" $\Sp \F_1$ should not be confused with $\Sp \kras$, in fact while $\Sp \F_1$ sits under $\Spec\Z$, $\Sp \kras$ is the natural lift of $\Sp\F_1$ above the generic point of $\Spec\Z$. $$\begin{gathered} \label{overall0} \,\hspace{100pt}\raisetag{-47pt} \xymatrix@C=25pt@R=25pt{ \Spec\Z\ar[d] & \Spec\kras\ar[dl]\ar[l]& &\\ \Spec\F_1 & &\\ }\hspace{25pt}\end{gathered}$$ In this paper we show that after suitably extending the classical definition of a $\Z$-scheme, by replacing the category of (commutative) rings with that of hyperrings (as was done [[*e.g.*]{} ]{}in [@Procesi]), the spectrum $\Sp \kras$ plays the role of the “generic point" in algebraic geometry. In fact, in Proposition \[ex2\] we prove that for any scheme $X$ of finite type over $\Z$, there is a canonical identification of sets $$\label{identsets} X\simeq \Hom(\Sp(\kras),X).$$ One should not confuse the content of a geometry over $\Sp\F_1$, that essentially means a theory of (pointed) monoids ([[*cf.*]{} ]{}[@deit] and [@announc3]), with the more refined geometric theory over $\Sp\kras$ that no longer ignores the [*additive*]{} structure. For instance, one finds that the prime spectrum of the [*monoid*]{} $\A_\K/\K^\times$ involves all subsets of the set $\Sigma_\K$ of places of the global field $\K$, while the prime spectrum of the [*hyperring*]{} $\ads$ is made by the subsets of $\Sigma_\K$ with only [*one element*]{}. By restricting this study to the ideals which are [*closed*]{} in the natural topology, one obtains the natural identification $\Sp(\ads)=\Sigma_\K$. The examples of tensor products of hyperrings that we consider in this paper, which correspond geometrically to the fiber product of the geometric spectra, allow us to understand, at a more conceptual level, several fundamental constructions of noncommutative geometry. In particular, this provides a new perspective on the structure of the BC-system [@ccm]. The rule of signs is a basic principle in elementary arithmetic. It is a simple fact that while the sign of the product of two numbers is uniquely determined by their respective signs, the sign of the sum of a positive and a negative number is ambiguous ([[*i.e.*]{} ]{}it can be $+,-,0$). 
As a straightforward encoding of this rule, one can upgrade the monoid $\F_{1^2}$ into a hyperfield with three elements: $\sign=\{-1,0,1\}$. Following this viewpoint, one discovers that the BC-system is directly related to the following hyperring extension of $\sign$ $$\Z_\sign:=\hat\Z\otimes_\Z\sign,$$ which is obtained by implementing the natural sign homomorphism $\Z\to \sign$ and the embedding $\Z\to \hat\Z$ of the relative integers into the profinite completion. By taking the topological structure into account, the spectrum $\Sp(\Z_\sign)$ is isomorphic to $\Sp(\Z)$, but unlike this latter space, $\Sp(\Z_\sign)$ maps naturally to $\Sp\sign$. Incidentally, we remark that the map $\Sp(\Z_\sign)\to \Sp\sign$ should be viewed as a refinement (and a lift) of the obvious map $\Sp(\Z)\to\Sp\F_1$. The process of adjoining the archimedean place is obtained by moving from finite adèles to the full adèles $\A_\Q$ over $\Q$. Following the hyperring structures, one sees that the hyperfield $\kras$ is the quotient of $\sign$ by the subgroup $\{\pm 1\}$. This fact determines a canonical surjection (absolute value) $\pi: \sign \to \kras$ which is used to show that the adèle class space is described by the hyperring $$\H_\Q=\A_\Q\otimes_\Z\kras$$ whose associated spectrum is $\Sp(\H_\Q)=\Sp(\Z)\cup\{ \infty\}=\Sigma_\Q$. In §\[sectproj\], we take the viewpoint of W. Prenowitz [@Prenowitz] and R. Lyndon [@Lyndon] to explain a natural correspondence between $\kras$-vector spaces and projective geometries in which every line has at least $4$ points. By implementing some classical results of incidence geometry mainly due to H. Karzel [@Karzel], we describe the classification of finite hyperfield extensions of $\kras$. This result depends on a conjecture, strongly supported by results of A. Wagner [@Wagner], on the non-existence of finite non-Desarguesian planes with a simply transitive abelian group of collineations. The relation between $\kras$-vector spaces and projective geometries also shows that, in the case of the adèle class space $\H_\Q$, the hyperring structure encodes the [*full*]{} information on the ring structure on the adèles: [[*cf.*]{} ]{}Theorem \[functads\] and Proposition \[Hopf\]. In [@jamifine], we showed that in a field endowed with a given multiplicative structure, the additive structure is encoded by a [*bijection*]{} $s$ of the field satisfying the two requirements that $s(0)=1$ and that $s$ commutes with its conjugates under multiplication by non-zero elements. In the same paper, we also proved that if one replaces the condition for $s$ to be a bijection by that of being a [*retraction*]{} ([[*i.e.*]{} ]{}$s^2 = s\circ s = s$), one obtains instead an idempotent semi-field. Therefore, it is natural to wonder if one can encode, with a similar construction, the additive structures of the hyperfield extensions of $\kras$ and $\sign$ respectively. In §§  \[hyperequ\] and \[hypersign\] of this paper, we show that given a multiplicative structure on a hyperfield, the additive structure is encoded by $(i)$ an [*equivalence*]{} relation commuting with its conjugates, on a hyperfield extension of $\kras$, $(ii)$ a partial [*order*]{} relation commuting with its conjugates, on a hyperfield extension of $\sign$. This reformulation of the additive law in hyperfields shows that these generalized algebraic structures occupy a very natural place among the more classical notions. 
Along the way, we also prove that the second axiom of projective geometry (saying that if a line meets two sides of a triangle not at their intersection then it also meets the third side) is equivalent to the commutativity of the equivalence relations obtained by looking at the space from different points ([[*cf.*]{} ]{}Lemma \[lemlign\]). We also give an example, using the construction of M. Hall [@Hall], of an (infinite) hyperfield extension of $\kras$ whose associated geometry is a non-Desarguesian plane. In the paper we start to investigate the content of an algebraic geometry over $\kras$. The category of commutative hyperring extensions of $\kras$ is inclusive of: algebras over fields with semi-linear homomorphisms, abelian groups with injective homorphisms (as explained in Proposition \[Lyndonlem\]) and a rather exotic land comprising homogeneous non-Desarguesian planes. In §\[functions\], we analyze the notion of algebraic function on $\Spec(\H_\Q)$ defined, as in the classical case, by means of elements of the set $\Hom(\Z[T],\H_\Q)$. We use the natural coproducts $\Delta^+(T)=T\otimes 1+1\otimes T$ and $\Delta^\times(T)=T\otimes T$ on $\Z[T]$ to obtain the elementary operations on functions. When $\K$ is a global field, the set $P(\ads)$ of [*prime*]{} elements of the hyperring $\ads$ inherits a natural structure of groupoid with the product given by multiplication and units the set of places of $\K$. The product of two prime elements is a prime element when the two factors sit over the same place, and over each place $v$ there exists a unique idempotent $p_v\in P(\ads)$ ([[*i.e.*]{} ]{}$p^2_v=p_v$). The idèle class group $C_\K=\ads^\times$ acts by multiplication on $P(\ads)$. When $\K$ is a function field over $\F_q$, we denote by $X$ the non singular projective algebraic curve with function field $\K$ and we let $\pi:X^{\rm ab}\to X$ be the abelian cover associated to a fixed maximal abelian extension $\K^{\rm ab}$ of $\K$. We denote by $\Pi_1^{\rm ab}(X)$ the fundamental groupoid associated to $\pi$ and $\Pi_1^{\rm ab}(X)'\subset \Pi_1^{\rm ab}(X)$ the subgroupoid of loops ([[*i.e.*]{} ]{}of paths whose end points coincide). In the final part of the paper we show (Theorem \[ccm2prop\]) that $\Pi_1^{\rm ab}(X)'$ is [*canonically*]{} isomorphic to the groupoid $P(\ads)$ and that this isomorphism is equivariant for the action of the idèle class group $C_\K=\ads^\times$ on $P(\ads)$ and the action of the abelianized Weil group on $\Pi_1^{\rm ab}(X)'$. When ${\rm char}(\K)=0$, the above geometric interpretation is no longer available. On the other hand, the arithmetic of the hyperring $\ads$ continues to hold and the groupoid $P(\ads)$ appears to be a natural substitute for the above groupoid of loops and it also supports an interpretation of the explicit formulae of Riemann-Weil. Hyperrings and hyperfields {#firsthyper} ========================== In this section we shall see that the natural multiplicative monoidal structure on $\F_1=\{0,1\}$ which ignores addition can be refined, within the category of hyperrings, to become the most basic example of a [*hyperfield*]{} ([[*cf.*]{} ]{}[@Krasner1]). We will refer to it as to the [*Krasner hyperfield*]{} $\kras$. The algebraic spectrum $\Spec\kras$ of this hyperstructure is the most natural lift of $\Spec\F_1$ from under $\Spec\Z$ to a basic structure mapping to $\Spec\Z$. In a hyperfield the additive (hyper)structure is that of a [*canonical*]{} hypergroup ([[*cf.*]{} ]{}[@Marty] and [@Krasner1]). 
We start by reviewing the notion of a canonical hypergroup $H$. For our applications it will be enough to consider this particular class of hypergroups. We denote by $+$ the hyper-composition law in $H$. The novelty is that now the sum $x+y$ of two elements in $H$ is no longer a single element of $H$ but a non-empty subset of $H$. It is customary to define a hyper-operation on $H$ as a map $$+: H\times H \to \mathcal P(H)^*$$ taking values into the set $\cP(H)^*$ of all non-empty subsets of $H$. Thus, $\forall a,b\in H$, $a+b$ is a non-empty subset of $H$, not necessarily a singleton. One uses the notation $\forall A,B\subseteq H,~A+B:=\{\cup (a+b)|a\in A, b\in B\}$. The definition of a canonical hypergroup requires that $H$ has a neutral element $0\in H$ ([[*i.e.*]{} ]{}an additive identity) and that the following axioms apply: $(1)$ $x+y=y+x,\qquad\forall x,y\in H$ $(2)$ $(x+y)+z=x+(y+z),\qquad\forall x,y,z\in H$ $(3)$ $0+x=x= x+0,\qquad \forall x\in H$ $(4)$ $\forall x \in H~ \ \exists!~y(=-x)\in H\quad {\rm s.t.}\quad 0\in x+y$ $(5)$ $x\in y+z~\Longrightarrow~ z\in x-y.$ The uniqueness, in $(4)$, of the symmetric element $y=-x$, for any element $x\in H$, rules out[^1] the trivial choice of taking the addition to be the full set $H$, except for the addition with $0$. Property $(5)$ is usually called [*reversibility*]{}. In this paper we shall always consider canonical hypergroups. Let $(H,+)$ be a (canonical) hypergroup and $x\in H$. The set $$O(x)=\{r\in \Z\mid \exists n\in\Z : 0\in rx+ n(x-x)\}$$ is a subgroup of $\Z$. We say that the [*order*]{} of $x$ is infinite ([[*i.e.*]{} ]{}$o(x) = \infty$) if $O(x)=\{0\}$. If $o(x) \neq \infty$, the smallest positive generator $h$ of $O(x)$ is called the [*principal order*]{} of $x$ ([[*cf.*]{} ]{}[@Corsini] Definition 57). Let $q = \text{min}\{s\in\N|\exists m\neq 0, \,0\in mhx + s(x-x)\}$. The couple $(h,q)$ is then called the order of $x$. The notion of a hyperring ([[*cf.*]{} ]{}[@Krasner], [@Krasner1]) is the natural generalization of the classical notion of a ring, obtained by replacing a classical addition law by a hyperaddition. \[hyperring\] A hyperring $(R,+,\cdot)$ is a non-empty set $R$ endowed with a hyperaddition $+$ and the usual multiplication $\cdot$ satisfying the following properties $(a)$ $(R,+)$ is a canonical hypergroup. $(b)$ $(R,\cdot)$ is a monoid with multiplicative identity $1$ $(c)$ $\forall r,s,t\in R$:  $r(s+t) = rs+rt$ and $(s+t)r = sr+tr$ $(d)$ $\forall r\in R$:  $r\cdot 0=0\cdot r =0$, [[*i.e.*]{} ]{}$0\in R$ is an absorbing element $(e)$ $0\neq 1$. In the original definition of a (Krasner) hyperring, $(R,\cdot)$ is only assumed to be a semi-group satisfying $(d)$ ([[*cf.*]{} ]{}[@Davvaz1] Definition 3.1.1). Let $(R_1,+_1,\cdot_1)$, $(R_2,+_2,\cdot_2)$ be two hyperrings. A map $f: R_1 \to R_2$ is called a homomorphism of hyperrings if the following conditions are satisfied $(1)$ $f(a+_1 b)\subseteq f(a)+_2 f(b),~\forall a,b\in R_1$ $(2)$ $f(a\cdot_1 b) = f(a)\cdot_2 f(b),~\forall a,b\in R_1.$ The map $f$ is said to be an epimorphism if it is a surjective homomorphism such that ([[*cf.*]{} ]{}[@Davvaz] Definition 2.8) $$\label{epi} x+y=\cup \{f(a+b)\mid f(a)=x,\,f(b)=y\}\qqq x,y\in R_2.$$ It is an isomorphism if it is a bijective homomorphism satisfying $f(a+_1 b) = f(a) +_2 f(b)$, $\forall a,b\in R_1$. A hyperring $(R,+,\cdot)$ is called a [*hyperfield*]{} if $(R\setminus\{0\},\cdot)$ is a group. 
\[defnkras\] We denote by $\kras$ the hyperfield $(\{0,1\},+,\cdot)$ with additive neutral element $0$, satisfying the hyper-rule: $1+1=\{0,1\}$ and with the usual multiplication, with identity $1$. We let $\sign$ be the hyperfield $\sign=\{-1,0,1\}$ with the hyper-addition given by the “rule of signs" $$\label{addsign} 1+1=1\,, \ -1-1=-1\,, \ 1-1=-1+1=\{-1,0,1\}$$ and the usual multiplication also given by the rule of multiplication of signs. The hyperfield $\kras$ is the natural extension, in the category of hyperrings, of the commutative (pointed) monoid $\F_1$, [[*i.e.*]{} ]{}$(\kras,\cdot) = \F_1$. We shall refer to $\kras$ as the Krasner hyperfield. Note that the order of the element $1\in \kras$ is the pair $(1,0)$, [[*i.e.*]{} ]{}the principal order is $1$ since $0\in 1+1$ and the secondary order is $0$ for the same reason. In a similar manner one sees that the monoid underlying $\sign$ is $\F_{1^2}$, [[*i.e.*]{} ]{}$(\sign,\cdot) = \F_{1^2}$, where the order of the element $1\in \sign$ is the pair $(1,1)$. The homomorphism absolute value $\pi: \sign\to \kras,~\pi(x)= |x|$ is an epimorphism of hyperrings. To become familiar with the operations in hyperstructures, we prove the following simple results \[simp\] In a hyperring extension $R$ of the Krasner hyperfield $\kras$ one has $x+x=\{0,x\}$ for any $x\in R$ and moreover $$a\in a+b~\Longleftrightarrow~b\in\{0,a\}$$ In particular, there is no hyperfield extension of $\kras$ of cardinality $3$ or $4$. Since $1+1=\{0,1\}$ one gets $x+x=\{0,x\}$ using distributivity. Assume that $a\in a+b$ in $R$. Then since $a+a=\{0,a\}$ one has $-a=a$ so that by the reversibility condition $(5)$ in the definition of a hypergroup, one has $b\in a-a=\{0,a\}$. Conversely, if $b\in\{0,a\}$, it follows immediately (by applying the condition $(4)$ for hypergroups) that $a\in a+b$. If $F$ is a hyperfield extension of $\kras$ of cardinality $3$, then $F$ contains an element $\alpha \notin \{0,1\}$. But then one gets a contradiction since the subset $1+\alpha$ cannot contain $0$ (since $1$ is its own opposite) or $1$ or $\alpha$ (by the first part of this proposition). If $F$ is a hyperfield extension of $\kras$ of cardinality $4$, then let $\xi_j$ be the three non-zero elements of $F$. Then, by applying the first part of this proposition, the sum $\xi_j+\xi_k$, for $j\neq k$ is forced to be the third non-zero element $\xi_\ell$ of $F$. This contradicts associativity of the hyperaddition for $\sum \xi_j$. Note that the above proof only uses the structure of $\kras$-vector space ([[*cf.*]{} ]{}§\[sectproj\]). [The same proof shows that in a hyperring extension $R$ of the hyperfield $\sign$ one has $$\label{notin} a\in a+b~\Longleftrightarrow~b\in\{0,\pm a\}.$$]{} Krasner gave in [@Krasner1] a construction of a hyperring as the quotient of a ring $R$ by a multiplicative subgroup $G$ of the group $R^\times$ of the invertible elements of $R$. This result reads as follows \[krasner1\] Let $R$ be a commutative ring and $G\subset R^\times$ be a subgroup of its multiplicative group. Then the following operations define a hyperring structure on the set $R/G$ of orbits for the action of $G$ on $R$ by multiplication $\bullet$ Hyperaddition $$\label{addr} x+ y:=\left(xG+yG\right)/G \qqq x,y\in R/G$$ $\bullet$ Multiplication $$xG\cdot yG=xyG\qqq x,y\in R/G.$$ Moreover for any $x_i\in R/G$ one has $$\label{longsum} \sum x_i=\left(\sum x_i G\right)/G$$ In particular, one can start with a field $K$ and consider the hyperring $K/K^\times$. 
This way, one obtains a hyperstructure whose underlying set is made by two classes [[*i.e.*]{} ]{}the class of $0$ and that of $1$. If $K$ has more than two elements, $K/K^\times$ coincides with the Krasner hyperfield $\kras$. Next, we investigate in the set-up of Proposition \[krasner1\], under which conditions the hyperring $R/G$ contains the Krasner hyperfield $\kras$ as a sub-hyperfield. \[krasner2\] Let $R$ be a commutative ring and $G\subset R^\times$ be a subgroup of the multiplicative group of units in $R$. Assume that $G\neq \{1\}$. Then, the hyperring $R/G$ contains $\kras$ as a sub-hyperfield if and only if $\{0\}\cup G$ is a subfield of $R$. To verify whether $\kras\subset R/G$, it suffices to compute $1+1$ in $R/G$. By definition, $1+1$ is the union of all classes, under the multiplicative action of $G$, of elements of the form $g_1+g_2$, for $g_j\in G$ ($j=1,2$). Thus, the hyperring $R/G$ contains $\kras$ as a sub-hyperfield if and only if $G+G=\{0\}\cup G$. If this equality holds, then $\{0\}\cup G$ is stable under addition. Moreover $0\in G+G$ so that $g_1=-g_2$ for some $g_j\in G$ and thus $-1=g_1g_2^{-1}\in G$. Thus $\{0\}\cup G$ is an additive subgroup of $R$. In fact, since $R^\times$ is a group, it follows that $G\cup\{0\}$ is a subfield of $R$. Conversely, let $F\subset R$ be a subfield and assume that $F$ is not reduced to the finite field $\F_2$. Then the multiplicative group $G=F^\times$ fulfills $G\neq \{1\}$. Moreover $G+G\subset F$ and $0\in G+G$ as $1-1=0$. Moreover, since $G$ contains at least two distinct elements $x,y$ one has $x-y\neq 0$ and thus $G+G=F$. Thus, in $R/G$ one has $1+1=\{0,1\}$ and thus $\kras\subset R/G$. \[ex5\][ This simple example is an application of the above results and it shows that there exists a hyperfield extension of $\kras$ of cardinality $5$. Let $H$ be the union of $0$ with the powers of $\alpha$, $\alpha^4=1$. It is a set with 5 elements and the table of hyper-addition in $H$ is given by the following matrix $$\left( \begin{array}{ccccc} 0&1&\alpha&\alpha^2&\alpha^3\\ 1 &\{0,1\} & \left\{\alpha ^2,\alpha ^3\right\} & \left\{\alpha ,\alpha ^3\right\} & \left\{\alpha ,\alpha ^2\right\} \\ \alpha& \left\{\alpha ^2,\alpha ^3\right\} & \{0,\alpha \} & \left\{1,\alpha ^3\right\} & \left\{1,\alpha ^2\right\} \\ \alpha^2& \left\{\alpha ,\alpha ^3\right\} & \left\{1,\alpha ^3\right\} & \left\{0,\alpha ^2\right\} & \{1,\alpha \} \\ \alpha^3&\left\{\alpha ,\alpha ^2\right\} & \left\{1,\alpha ^2\right\} & \{1,\alpha \} & \left\{0,\alpha ^3\right\} \end{array} \right)$$ This hyperfield structure is obtained, with $\alpha=1+\sqrt{-1}$, as the quotient of the finite field $\F_9=\F_3(\sqrt{-1})$ by the multiplicative group $\F_3^\times=\{\pm 1\}$. It follows from Proposition \[krasner2\] that $F=\F_9/\F_3^\times$ is a hyperfield extension of $\kras$. Notice that the addition has a very easy description since for any two distinct non-zero elements $x,y$ the sum $x+y$ is the complement of $\{x,y,0\}$ ([[*cf.*]{} ]{}[@Stratigopoulos] and Proposition \[Lyndonlem\] below for a more general construction). ]{} The notions of ideal and prime ideal extend to the hyperring context ([[*cf.*]{} ]{}[[*e.g.*]{} ]{}[@Procesi], [@Davvaz1]) \[primeideal\] A non-empty subset $I$ of a hyperring $R$ is called a hyperideal if $(a)$ $a,b\in I~\Rightarrow~a-b\subseteq I$ $(b)$ $a\in I$, $r\in R~\Rightarrow~r\cdot a\in I.$ The hyperideal $I\subsetneq R$ is called prime if $\forall a,b\in R$ $(c)$ $a\cdot b\in I~\Rightarrow~a\in I$ or $b\in I$. 
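The quotient construction of Proposition \[krasner1\] can be carried out mechanically on Example \[ex5\] above. The following minimal Python sketch (the representation of $\F_9$ as pairs over $\F_3$ with $i^2=-1$ and the labels `a^k` are ad hoc choices of this illustration) computes $\F_9/\F_3^\times$ and prints its hyperaddition table, recovering the matrix displayed above with $\alpha=1+i$.

```python
def mul(x, y):                               # multiplication in F_9 = F_3[i], i^2 = -1
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def cls(x):                                  # the orbit of x under F_3^x = {1, -1}
    a, b = x
    return frozenset({(a, b), ((-a) % 3, (-b) % 3)})

alpha = (1, 1)                               # alpha = 1 + i
pows = [(1, 0)]
for _ in range(3):
    pows.append(mul(pows[-1], alpha))        # 1, alpha, alpha^2, alpha^3

names = {cls((0, 0)): '0'}
names.update({cls(p): 'a^%d' % k if k else '1' for k, p in enumerate(pows)})

def hadd(X, Y):                              # hyperaddition on F_9 / F_3^x
    return {names[cls(((u[0] + v[0]) % 3, (u[1] + v[1]) % 3))] for u in X for v in Y}

classes = [cls((0, 0))] + [cls(p) for p in pows]
for X in classes:
    print(names[X], [sorted(hadd(X, Y)) for Y in classes])
```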
For any hyperring $R$, we denote by $\Spec(R)$ the set of prime ideals of $R$ ([[*cf.*]{} ]{}[@Procesi]). The following proposition shows that the hyperfield $\kras$ plays, among hyperrings, the equivalent role of the monoid $\F_1$ among monoids ([[*cf.*]{} ]{}[@jamifine] Prop. 3.32). \[ex1\]For any hyperring $R$, the map $$\varphi:\Spec(R)\to \Hom(R,\kras)\,, \qquad\varphi(\ffp)=\varphi_\ffp$$ $$\label{phip} \varphi_\ffp(x)=0\qqq x\in \ffp\, , \ \ \varphi_\ffp(x)=1\qqq x\notin \ffp$$ determines a natural bijection of sets. The map $\varphi_\ffp: R \to \kras$ is multiplicative since the complement of a prime ideal $\ffp$ in $R$ is a multiplicative set. It is compatible with the hyperaddition, using reversibility and Definition \[primeideal\] (a). Thus the map $\varphi$ is well-defined. To define the inverse of $\varphi$, one assigns to a homomorphism of hyperrings $\rho\in \Hom (R,\kras)$ its kernel, which is a prime ideal of $R$ that uniquely determines $\rho$. Affine $\Z$-schemes, when viewed as representable functors from the category $\An$ of (commutative) rings to sets, extend canonically to the category of hyperrings as representable functors (they are represented by the same ring). This construction applies in particular to the affine line $\cD=\Sp(\Z[T])$, and one obtains $\cD(R)=\Hom(\Z[T],R)$ for any hyperring $R$. Notice, though, that while for an ordinary ring $R$, $\Hom(\Z[T],R)$ coincides with the set underlying $R$, this fact no longer holds for hyperrings. For instance, by applying Proposition \[ex1\] one sees that $\cD(\kras)=\Spec(\Z[T])$ which is an infinite set unlike the set underlying $\kras$. To describe the elements of the set $\Hom(R,\sign)$, for any ring $R$, we first recall the definition of a symmetric cone in $R$: [[*cf.*]{} ]{}[@reid]. \[symcone\] Let $R$ be a ring. A symmetric cone $P$ in $R$ is a subset $P\subset R$ such that $\bullet$  $0\notin P$, $P+P\subset P$, $PP\subset P$, $\bullet$ $P^c+P^c\subset P^c$ where $P^c$ is the complement of $P$ in $R$, $\bullet$ $a\in P$ and $ab\in P$ imply $b\in P$, $\bullet$ $P-P=R$. The following proposition shows that the notion of a symmetric cone in a ring is equivalent to that of an element of $\Hom(R,\sign)$. \[charhomsign\] $(1)$ A homomorphism from a ring $R$ to the hyperring $\sign$ is determined by its kernel $\ffp\in\Spec(R)$ and a total order on the field of fractions of the integral domain $R/\ffp$. $(2)$ A homomorphism from a ring $R$ to the hyperring $\sign$ is determined by a symmetric cone of $R$ in the sense of Definition \[symcone\]. $(1)$ Let $\rho\in \Hom(R,\sign)$. The kernel of $\rho$ is unchanged by composing $\rho$ with the absolute value map $\pi:\sign\to \kras$, $\pi(x) = |x|$. Thus ${\rm ker}(\rho)$ is a prime ideal $\ffp\subset R$. Moreover the map $\rho$ descends to the quotient $R/\ffp$ which is an integral domain. Let $F$ be the field of fractions of $R/\ffp$. One lets $P\subset F$ be the set of fractions of the form $x=a/b$ where $\rho(a)= \rho(b)\neq 0$. This subset of $F$ is well defined since $a/b=c/d$ means that $ad=bc$ and it follows that $\rho(c)= \rho(d)\neq 0$. One has $\rho(0)=0$, $\rho(1)=1$ and $\rho(-1)=-1$ since $0\in \rho(1)+\rho(-1)$. The subset $P$ is stable under addition since, in the computation of $a/b+c/d=(ad+bc)/(bd)$, one can assume that $\rho(a)= \rho(b)=\rho(c)= \rho(d)=1$, so that $\rho(ad+bc)=\rho(bd)=1$. $P$ is also multiplicative. Moreover for $x\in F, x\neq 0$ one has $\pm x\in P$ for some choice of the sign. 
Thus $F$ is an ordered field and $\rho$ is the composition of the canonical morphism $R\to F$ with the map $F\to F/F_+^\times\sim \sign$. Conversely if one is given an order on the field of fractions of the integral domain $R/\ffp$, one can use the natural identification $F/F_+^\times\sim \sign$ to obtain the morphism $\rho$. $(2)$ follows from $(1)$ and Theorem 2.3 of [@reid]. In fact one can also check directly that given a symmetric cone $P\subset R$, the following formula defines an element $\rho\in\Hom(R,\sign)$: $$\label{defnrhos} \rho(x)=\left\{ \begin{array}{ll} 1, & \forall x\in P\\ -1, & \forall x\in -P \\ 0, & \hbox{otherwise.} \end{array} \right.$$ Moreover, one easily checks that if $\rho\in\Hom(R,\sign)$ then $P=\rho^{-1}(1)$ is a symmetric cone. One can then apply Corollary 3.8 of [@reid] to obtain the following The elements of $\cD(\sign)=\Hom(\Z[T],\sign)$ are described by $$\label{omega} \omega_\lambda(P(T))={\rm Sign}(P(\lambda))\qqq \lambda \in [-\infty,\infty]$$ and, for $\lambda\in \bar\Q\cap \R$, by the two elements $$\label{omegapm} \omega_\lambda^\pm(P(T))=\lim_{\epsilon\to 0+}{\rm Sign}(P(\lambda\pm \epsilon)).$$ This follows from Corollary 3.8 of [@reid] for the total orders and from the first part of Proposition \[charhomsign\] for the symmetric orders. One can extend the above statements from the case of affine schemes to the general case (of non-affine schemes). First of all, we recall from [@Procesi] that to any hyperring $R$ is associated its prime spectrum $\Spec(R)$. This is a topological space endowed with a sheaf of hyperrings. Note that it is not true for general hyperrings $R$ that the canonical map from $R$ to global sections of the structural sheaf on $\Spec R$ is bijective. A [*geometric hyperring space*]{} $(X,\cO_X)$ is a topological space $X$ endowed with a sheaf of hyperrings $\cO_X$ (the structural sheaf). As for geometric $\Z$-schemes ([[*cf.*]{} ]{}[@demgab], Chapter I, § 1 Definition 1.1), one needs to impose the condition that the stalks of the structural sheaf of a geometric hyperring space are [*local*]{} algebraic structures, [[*i.e.*]{} ]{}they have only one maximal ideal. A homomorphism $\rho: R_1\to R_2$ of (local) hyperrings is local if the following property holds $$\label{locality} \rho^{-1}(R_2^\times) = R_1^\times.$$ A morphism $\varphi: X \to Y$ of geometric hyperring spaces is a pair $(\varphi,\varphi^\sharp)$ of a continuous map $\varphi: X\to Y$ of topological spaces and a homomorphism of sheaves of hyperrings $\varphi^\sharp: \mathcal O_Y \to \varphi_*\mathcal O_X$, which satisfy the property of being [*local*]{}, [[*i.e.*]{} ]{}$\forall x\in X$ the homomorphisms connecting the stalks $\varphi^\sharp_x: \cO_{Y,\varphi(x)}\stackrel{}{\to}\cO_{X,x}$ are local ([[*cf.*]{} ]{}). With these notations we obtain the following result \[ex2\] For any $\Z$-scheme $X$, one has a canonical identification of sets $$X\simeq\Hom(\Sp(\kras),X).$$ Moreover, an element of $\Hom(\Sp(\sign),X)$ is completely determined by assigning a point $x\in X$ and a total order of the residue field $\kappa(x)$ at $x$. Since $\kras$ is a hyperfield, $\{0\}\subset \kras$ is the only prime ideal and $\Spec\kras$ consists of a single point $\kappa$. Let $\rho\in \Hom(\Sp(\kras),X)$ be a morphism and $x=\rho(\kappa)\in X$. The morphism $\rho^\#$ is uniquely determined by the local morphism $\rho_x^\#:\cO_{X,x}\to \kras$. Since the ring $\cO_{X,x}$ is local, there exists only one local morphism $\rho_x^\#:\cO_{X,x}\to \kras$. 
Thus the map $\rho\mapsto \rho(\kappa)\in X$ is an injection from $\Hom(\Sp(\kras),X)$ to $X$. The existence of the local morphism $\cO_{X,x}\to \kras$ for any $x\in X$ shows the surjectivity. The same proof applies to describe the elements of $\Hom(\Sp(\sign),X)$ using Lemma \[charhomsign\]. $\kras$-vector spaces and projective geometry {#sectproj} ============================================= Let $R$ be a hyperring containing the Krasner hyperfield $\kras$. In this section we show, following W. Prenowitz [@Prenowitz] and R. Lyndon [@Lyndon] that the additive hyperstructure on $R$ is entirely encoded by a projective geometry $\cP$ such that $\bullet$ The set of points of $\cP$ is $R\backslash 0$ $\bullet$ The line through two distinct points $x,y$ of $\cP$ is given by $$\label{defnline} L(x,y)=(x+y)\cup \{x,y\}.$$ We shortly review the axioms of projective geometry. They are concerned with the properties of a family $\cL$ of subsets $L$ of a set $\cP$. The elements $L\in \cL$ are called lines. These axioms are listed as follows $\P_1$: Two distinct points of $\cP$ determine a unique line $L\in\cL$ [[*i.e.*]{} ]{} $$\forall x\neq y\in \cP \,, \ \exists ! \,L\in \cL\,, \ x\in L\,, \ y\in L.$$ $\P_2$: If a line in $\cL$ meets two sides of a triangle not at their intersection then it also meets the third side, [[*i.e.*]{} ]{} $$\forall x \neq y\in\cP ~\text{and}\ z\notin L(x,y),~\text{one has}$$ $$L(y,z)\cap L(t,u)\neq \emptyset, \ \ \forall t\in L(x,y)\backslash\{x\}\,, \ u\in L(x,z)\backslash\{x\}.$$ $\P_3$: Every line in $\cL$ contains at least three points. We shall consider the following small variant of the axiom $\P_3$ $\P'_3$: Every line in $\cL$ contains at least $4$ points. We use the terminology $\kras$-vector space to refer to a (commutative) hypergroup $E$ with a compatible action of $\kras$. Since $0\in \kras$ acts by the retraction to $\{0\}\subset E$ and $1\in\kras$ acts as the identity on $E$, the $\kras$-vector space structure on $E$ is in fact uniquely prescribed by the hypergroup structure. Thus a hypergroup $E$ is a $\kras$-vector space if and only if it fulfills the rule $$\label{idemcond} x+x=\{0,x\}\qqq x\neq 0.$$ The next result is due essentially to W. Prenowitz [@Prenowitz] and R. Lyndon [@Lyndon] [[*cf.*]{} ]{}also [@Corsini], Chapter I, Theorems 30 and 34. \[propproj\] Let $E$ be a $\kras$-vector space. Let $\cP=E\backslash\{0\}$. Then, there exists a unique geometry having $\cP$ as its set of points and satisfying . This geometry fulfills the three axioms $\P_1,\P_2,\P'_3$ of a projective geometry. Conversely, let $(\cP,\cL)$ be a projective geometry fulfilling the axioms $\P_1,\P_2,\P'_3$. Let $E=\cP\cup \{0\}$ endowed with the hyperaddition having $0$ as neutral element and defined by the rule $$\label{defnaddi} x+y=\left\{ \begin{array}{ll} L(x,y)\backslash\{x,y\}, & \hbox{if } \ x\neq y\\ \{0,x\}, & \hbox{if }\ x=y. \end{array} \right.$$ Then $E$ is a $\kras$-vector space. Before starting the proof of Proposition \[propproj\] we prove the following result \[lemproj\] Let $E$ be a $\kras$-vector space. Then for any two subsets $X,Y\subset E$ one has $$\label{equiinter} X\cap Y \neq \emptyset~ \Longleftrightarrow~ 0\in X+Y.$$ If $x\in X\cap Y$ then $0\in (x+x)\subset X+Y$. Conversely, if $0\in X+Y$, then $0\in x+y$, for some $x\in X$ and $y\in Y$. By reversibility one gets $x\in 0-y=\{y\}$ and $x=y$ so that $X\cap Y \neq \emptyset$. 
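The converse direction of Proposition \[propproj\] can be tested mechanically on a small example. In the following minimal Python sketch (the encoding of $\P^2(\F_3)$ by normalized coordinate triples and the symbol `'O'` for the neutral element are ad hoc choices of this illustration), every line of $\P^2(\F_3)$ has $4$ points, so $\P'_3$ holds, and the law \eqref{defnaddi} is verified by exhaustive search to be associative and reversible.

```python
from itertools import product

q = 3
def normalize(v):                            # canonical representative of a projective point
    s = next(c for c in v if c)              # first non-zero coordinate
    inv = pow(s, q - 2, q)                   # its inverse mod q (q prime)
    return tuple((inv * c) % q for c in v)

points = {normalize(v) for v in product(range(q), repeat=3) if any(v)}

def line(x, y):                              # the projective line through x != y
    return frozenset(normalize(tuple((a * x[i] + b * y[i]) % q for i in range(3)))
                     for a, b in product(range(q), repeat=2) if (a, b) != (0, 0))

ZERO = 'O'
def add(x, y):                               # the rule (defnaddi), with 0 adjoined
    if x == ZERO: return frozenset({y})
    if y == ZERO: return frozenset({x})
    if x == y:    return frozenset({ZERO, x})
    return line(x, y) - {x, y}

E = [ZERO] + sorted(points)
def hsum(A, B):
    return frozenset().union(*(add(a, b) for a in A for b in B))

assoc = all(hsum(add(x, y), {z}) == hsum({x}, add(y, z))
            for x, y, z in product(E, E, E))
rever = all(z in add(x, next(w for w in E if ZERO in add(y, w)))
            for x, y in product(E, E) for z in E if x in add(y, z))
print(len(points), assoc, rever)             # 13 True True
```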
(of the Proposition \[propproj\]) We define $\cL$ as the set of subsets of $\cP=E\backslash 0$ of the form $L(x,y)=(x+y)\cup \{x,y\}$ for some $x\neq y\in \cP$. Let us check that the axiom $\P_1$ holds. We need to show that for $a\neq b\in \cP$, any line $L(x,y)$ containing $a$ and $b$ is equal to $L(a,b)$. We show that if $z\in L(x,y)$ is distinct from $x,y$, then $L(x,z)=L(x,y)$. One has $z\in x+y$ and hence by reversibility $y\in x+z$. Thus $x+y\subset x+x+z=z\cup (x+z)$ and $L(x,y)\subset L(x,z)$. Moreover, since $y\in x+z$ one gets in the same way that $L(x,z)\subset L(x,y)$. This proves that for any two (distinct) points $a,b\in L(x,y)$ one has $L(a,b)=L(x,y)$. Indeed $$a\in L(x,y)\Rightarrow L(x,y)=L(a,x), \ b\in L(x,y)=L(a,x)\Rightarrow L(a,b)=L(a,x)=L(x,y).$$ We now check the axiom $\P_2$. Let $t\in L(x,y)\backslash\{x\}\,, \ u\in L(x,z)\backslash\{x\}$. Then $x\in(y+t)\cap(u+z)$ so that by Lemma \[lemproj\] one has $0\in y+t+u+z$. It follows again from Lemma \[lemproj\] and the commutativity of the sum, that $(y+z)\cap(u+t)\neq \emptyset$ and $L(y,z)\cap L(t,u)\neq \emptyset$. Note that to get $x\in(y+t)\cap(u+z)$ one uses $y\neq t$ and $z\neq u$ but the validity of $\P_2$ is trivial in these cases. Thus one has $\P_2$. By Proposition \[simp\] one has $x\notin (x+y)$ for $x\neq y\in P$ and thus every line contains at least three points so that axiom $\P_3$ holds true. Let us show that in fact one has $\P'_3$. Assume $x+y=\{z\}$. Then $(x+y)+z=\{0,z\}$. Since $0\in x+(y+z)$ one has $x\in y+z$, but then $x\in x+(y+z)=\{0,z\}$ which is a contradiction. Conversely, let $(\cP,\cL)$ be a projective geometry fulfilling axioms $\P_1,\P_2,\P'_3$ and endow $E=\cP\cup \{0\}$ with the hyperaddition as in . This law is associative since when $x,y,z\in \cP$ are not collinear one checks that the sum $x+y+z$ is the plane they generate with the three sides of the triangle deleted. For three distinct points on the same line $L$, their sum is $L\cup\{0\}$ if the cardinality of the line is $>4$ and the complement of the fourth point in $L\cup\{0\}$ if the cardinality of the line is $4$. Let us show that $\forall x \in E \ \exists! y(=-x)\,, \ 0\in x+y$. We can assume $x\neq 0$. One has $0\in x+x$. Moreover for any $y\neq x$ one has $0\notin x+y=L(x,y)\backslash\{x,y\}$. Finally we need to prove the reversibility which takes the form $x\in y+z\Rightarrow z\in x+y$. If $y=0$ or $z=0$, the conclusion is obvious, thus we can assume that $y,z\neq 0$. If $y=z$ then $y+z=\{0,z\}$ and one gets $z\in x+y$. Thus we can assume $y\neq z$. Then $x\in y+z$ means that $x\in L(y,z)\backslash\{y,z\}$ and this implies $z\in L(x,y)\backslash\{x,y\}$. \[finitedim\] [Let $V$ be a $\kras$-vector space. For any finite subset $F=\{x_j\}_{j\in J}\subset \cP=V\setminus 0$, the subset $$E=\{\sum_{j\in J}\lambda_j x_j\mid \lambda_j\in \kras\}$$ is stable under hyperaddition and it follows from the formula $(x+x)=\{0,x\}$ that $E$ coincides with $\displaystyle{\sum_{j\in J}}(x_j+x_j)$. Thus, $W=E\setminus 0$ is a subspace of the geometry $\cP$ [[*i.e.*]{} ]{}a subset of $V\setminus 0$ such that $$\label{subsp} \forall x\neq y\in W\, \ \ L(x,y)\subset W$$ and the restriction to $W=E\setminus 0$ of the geometry of $\cP$ is finite dimensional. We refer to [@Stratigopoulos] for the notion of dimension of a vector space over a hyperfield. 
Here, such dimension is related to the dimension $\dim W$ of the associated projective geometry by the equation $$\label{subsp1} \dim W=\dim_\kras E -1\,.$$ ]{} Next result shows that hyperfield extensions of $\kras$ correspond precisely to the “Zweiseitiger Inzidenszgruppen" (two-sided incidence groups) of [@Ellers]. In particular, the [*commutative*]{} hyperfield extensions of $\kras$ are classified by projective geometries together with a simply transitive action by a commutative subgroup of the collineation group. We first recall the definition of a two-sided incidence group \[Inzidenszgruppen\] Let $G$ be a group which is the set of points of a projective geometry. Then $G$ is called a two-sided incidence group if the left and right translations by $G$ are automorphisms of the geometry. We can now state the precise relation between hyperfield extensions of $\kras$ and two-sided incidence groups ([@Ellers] and [@Ellers1]) whose projective geometry satisfies the axiom $\P'_3$ in place of $\P_3$. \[main\] Let $\H\supset \kras$ be a hyperfield extension of $\kras$. Let $(\cP,\cL)$ be the associated geometry ([[*cf.*]{} ]{}Proposition \[propproj\]). Then, the multiplicative group $\H^\times$ endowed with the geometry $(\cP,\cL)$ is a two-sided incidence group fulfilling $\P'_3$. Conversely, let $G$ be a two-sided incidence group fulfilling $\P'_3$. Then, there exists a unique hyperfield extension $\H\supset \kras$ such that $\H=G\cup \{0\}$. The hyperaddition in $\H$ is defined by the rule $$x+y=L(x,y)\backslash\{x,y\}\quad \text{for any}~x\neq y\in \cP$$ and the multiplication is the group law of $G$, extended by $0\cdot g=g\cdot 0=0$, $\forall g\in G$. For the proof of the first statement it suffices to check that the left and right multiplication by a non-zero element $z\in \H$ is a collineation. This follows from the distributivity property of the hyperaddition which implies that $$\label{added-formula} zL(x,y)=z(x+y)\cup \{zx,zy\}=L(zx,zy)\,.$$ A similar argument shows that the right multiplication is also a collineation. Conversely, let $G$ be a two-sided incidence group fulfilling $\P'_3$. Let $\H=G\cup \{0\}$ and define the hyperaddition as in Proposition \[propproj\]. With this operation, $\H$ is an additive hypergroup. Let the multiplication be the group law of $G$, extended by $0\cdot g=g\cdot 0=0$, $\forall g\in G$. This operation is distributive with respect to the hyperaddition because $G$ acts by collineations. Thus one obtains an hyperfield $\H$. Moreover, by construction, the projective geometry underlying $\H$ is $(\cP,\cL)$. Let $H$ be an abelian group. We define the geometry on $H$ to be that of a single line. By applying Lemma \[main\], we obtain the following result ([[*cf.*]{} ]{}[@Stratigopoulos], Proposition 2) \[Lyndonlem\] Let $H$ be an abelian group of order at least $4$. Then, there exists a unique hyperfield extension $\kras[H]$ of $\kras$ whose underlying monoid is $\F_1[H]$ and whose geometry is that of a single line. The assignment $H\mapsto \kras[H]$ is functorial only for injective homomorphisms of abelian groups and for the canonical surjection $\kras[H]\to\kras$. Let $R=H\cup\{0\}$ viewed as a monoid. The construction of Lemma \[main\] gives the following hyperaddition on $R=\kras[H]$ ([[*cf.*]{} ]{}[@Lyndon]) $$\label{addlyndon} x+y=\left\{ \begin{array}{ll} x, & \hbox{if}\ y=0\\ \{0,x\}, & \hbox{if}\ y=x \\ R\backslash\{0,x,y\}, & \hbox{if}\ \#\{0,x,y\}=3. 
\end{array} \right.$$ One easily checks that this (hyper)operation determines a hypergroup law on $\kras[H]$, provided that the order of $H$ is at least $4$. Moreover, since the left multiplication by a non-zero element is a bijection preserving $0$, one gets the distributivity. Let then $\rho:H_1\to H_2$ be a group homomorphism. If $\rho$ is injective and $x\neq y$ are elements of $H_1$ then, by extending $\rho$ by $\rho(0)=0$, one sees that $\rho(x+y)\subset\rho(x)+\rho(y)$. If $\rho$ is not injective and does not factor through $\kras[H_1]\to\kras\subset \kras[H_2]$, then there exists two elements of $H_1$, $x\neq y$ such that $\rho(x)=\rho(y)\neq 1$. This contradicts the required property $\rho(x+y)\subset\rho(x)+\rho(y)$ of a homomorphism of hyperrings ([[*cf.*]{} ]{}§ \[firsthyper\]) since $\rho(x)+\rho(y)=\{0,\rho(x)\}$ while $1\in x+y$ so that $1=\rho(1)\in\rho(x+y)$. \[monoids\] The association $H\to \kras[H]$ determines a functor from abelian groups (and injective morphisms) to hyperfield extensions of $\kras$. This functor does not extend to a functor from monoids to hyperring extensions of $\kras$ since the distributivity (of left/right multiplication) with respect to the addition fails in general when $H$ is only a monoid. One can show that all commutative hyperring extensions $R$ of $\kras$ such that $\dim_\kras(R)=2$ are of the form $R=\kras[H]^{(j)}$ for some $j\in \{0,1,2\}$ where $H$ is an abelian group of cardinality $>3-j$. Here $\kras[H]^{(0)}=\kras[H]$, $\kras[H]^{(1)}=\kras[H]\cup \{a\}$ with the presentation $$\label{(1)} a^2=0,\ au=ua=a\qqq u\in H$$ while $ \kras[H]^{(2)}=\kras[H]\cup \{e,f\}$ with the presentation ([[*cf.*]{} ]{}[@Stratigopoulos]) $$\label{(2)} e^2=e,\ f^2=f, \ ef=fe=0,\ au=ua=a\qqq u\in H, a\in\{e,f\}\,.$$ Next result is, in view of Lemma \[main\], a restatement of the classification of Desarguesian “Kommutative Inzidenszgruppen" of [@Karzel]. \[thmmain\] Let $\H\supset \kras$ be a commutative hyperfield extension of $\kras$. Assume that the geometry associated to the $\kras$-vector space $\H$ is Desarguesian[^2] and of dimension $\geq 2$. Then, there exists a unique pair $(F,K)$ of a commutative field $F$ and a subfield $K\subset F$ such that $$\label{isomain} \H=F/K^\times.$$ By applying Lemma \[main\] one gets a Desarguesian geometry with a simply transitive action of an abelian group by collineations. It follows from [@Karzel] (§5 Satz 3) that there exists a normal near-field $(F,K)$ such that the commutative incidence group is $F^\times/K^\times$. By [[*op.cit.*]{} ]{}(§7 Satz 7), the near-field $F$ is in fact a commutative field. The uniqueness of this construction follows from [[*op.cit.*]{} ]{}: §5, (5.8). \[ncremark\][By applying the results of H. Wähling ([[*cf.*]{} ]{}[@Wahling]), the above Theorem \[thmmain\] is still valid without the hypothesis of commutativity (for the multiplication) of $\H$. The field $F$ is then a skew field and $K$ is [*central*]{} in $F$. ]{} Theorem \[thmmain\] generalizes to the case of commutative, integral hyperring extensions of $\kras$. \[corthmmain\] Let $\H\supset \kras$ be a commutative hyperring extension of $\kras$. Assume that $\H$ has no zero divisors and that $\dim_\kras\H >3$. Then, there exists a unique pair $(A,K)$ of a commutative integral domain $A$ and a subfield $K\subset A$ such that $$\label{isomain1} \H=A/K^\times.$$ By [@Procesi], Prop. 6 and 7 ([[*cf.*]{} ]{}also [@Davvaz]), $\H$ embeds in its hyperfield of fractions. 
Thus, by applying Theorem \[thmmain\] one obtains the desired result. Finite extensions of $\kras$ ---------------------------- In view of Theorem \[thmmain\], the classification of all finite, [*commutative*]{} hyperfield extensions of $\kras$ reduces to the determination of non-Desarguesian finite projective planes with a simply transitive abelian group $G$ of collineations. \[classcor\] Let $\H\supset \kras$ be a finite commutative hyperfield extension of $\kras$. Then, one of the following cases occurs $(1)$ $\H=\kras[G]$ ([[*cf.*]{} ]{}Proposition \[Lyndonlem\]), for a finite abelian group $G$. $(2)$ There exists a finite field extension $\F_q\subset \F_{q^m}$ of a finite field $\F_q$ such that $\H=\F_{q^m}/\F_q^\times$. $(3)$ There exists a finite, non-Desarguesian projective plane $\cP$ and a simply transitive abelian group $G$ of collineations of $\cP$, such that $G$ is the commutative incidence group associated to $\H$ by Lemma \[main\]. Let $G$ be the incidence group associated to $\H$ by Lemma \[main\]. Then, if the geometry on $G$ consists of a single line, case (1) applies. If the geometry associated to $\H$ is Desarguesian, then by Theorem \[thmmain\] case (2) applies. If neither $(1)$ nor $(2)$ apply, then the geometry of $\H$ is a finite non-Desarguesian plane with a simply transitive abelian group $G$ of collineations. [There are no known examples of finite, commutative hyperfield extensions $\H\supset\kras$ producing projective planes as in case (3) of the above theorem. In fact, there is a conjecture ([[*cf.*]{} ]{}[@Beutel] page 114) based on results of A. Wagner and T. Ostrom ([[*cf.*]{} ]{}[@Beutel] Theorem 2.1.1, Theorem 2.1.2, [@Wagner], [@Wagner1]) stating that such case cannot occur. A recent result of K. Thas and D. Zagier [@Thas] relates the existence of potential counter-examples to Fermat curves and surfaces and number-theoretic exponential sums. More precisely, the existence of a special prime $p=n^2+n+1$ in the sense of [[*op.cit.*]{} ]{}Theorem 3.1 (other than $7$ and $73$) would imply the existence of a non-Desarguesian plane $\Pi=\Pi(\F_p,(\F_p^\times)^n)$ as in case $(3)$ of the above theorem. Note that, by a result of M. Hall [@Hall] there exists [*infinite*]{} non-Desarguesian projective planes with a [*cyclic*]{} simply transitive group of collineations. We shall come back to the corresponding hyperfield extensions of $\kras$ in §\[hyperequ\]. ]{} Morphisms of quotient hyperrings {#endomorphisms} -------------------------------- Let $E,F$ be $\kras$-vector spaces. Let $T:E\to F$ be a homomorphism of hypergroups (respecting the action of $\kras$). The kernel of $T$ $$\Ker\, T=\{\xi\in E\mid T\xi=0\}$$ intersects $\cP_E=E\backslash \{0\}$ as a subspace $N=\Ker\, T\cap\cP_E$ of the geometry $(\cP_E,\cL_E)$. For any $\eta\in \cP_E$, the value of $T(\eta)$ only depends upon the subspace $N(\eta)$ of $\cP_E$ generated by $N$ and $\eta$, since $T(\eta+\xi)\subset T(\eta)+T(\xi)=T(\eta)$ for $\xi\in N$. One obtains in this way a morphism of projective geometries in the sense of [@Faure] from $(\cP_E,\cL_E)$ to $(\cP_F,\cL_F)$. More precisely the restriction of $T$ to the complement of $\Ker\, T$ in $\cP_E$ satisfies the following properties $(M1)$ $N$ is a subspace of $\cP_E$. $(M2)$ $a,b\notin N$, $c\in N$ and $a\in L(b,c)$ imply $T(a)=T(b)$. $(M3)$ $a,b,c\notin N$ and $a\in b\vee c$ imply $T(a)\in T(b)\vee T(c)$. In the last property one sets $x\vee y=L(x,y)$ if $x\neq y$ and $x\vee y=x$ if $x=y$. 
Note that $(M3)$ implies that if $T(b)\neq T(c)$ the map $T$ injects the line $L(b,c)$ in the line $L(T(b),T(c))$. Conversely one checks that any morphism of projective geometries (fulfilling $\P_3'$) in the sense of [@Faure] comes from a unique morphism of the associated $\kras$-vector spaces. A complete description of the non-degenerate[^3] morphisms of Desarguesian geometries in terms of semi-linear maps is also given in [[*op.cit.*]{} ]{}In our context we use it to show the following result \[main0\] Let $A_j$ ($j=1,2$) be a commutative algebra over the field $K_j\neq \F_2$, and let $$\rho\,:\, A_1/K_1^\times\to A_2/K_2^\times$$ be a homomorphism of hyperrings. Assume that the range of $\rho$ is of $\kras$-dimension $>2$, then $\rho$ is induced by a unique ring homomorphism $\tilde \rho:A_1\to A_2$ such that $\alpha=\tilde \rho|_{K_1}$ is a field inclusion $\alpha:K_1\to K_2$. Since $\rho$ is a homomorphism of $\kras$-vector spaces, it defines a morphism of projective geometries in the sense of [@Faure]. Moreover, since $\rho$ is non-degenerate by hypothesis, there exists by [[*op.cit.*]{} ]{}Theorem 5.4.1, ([[*cf.*]{} ]{}also [@Faure1] Theorem 3.1), a semi-linear map $f:A_1\to A_2$ inducing $\rho$. We let $\alpha:K_1\to K_2$ be the corresponding morphism of fields. Moreover, $f$ is uniquely determined up to multiplication by a scalar, and hence it is uniquely fixed by the condition $f(1)=1$ (which is possible since $\rho(1)=1$ by hypothesis). Let us show that, with this normalization, the map $f$ is a homomorphism. First of all, since $\rho$ is a homomorphism one has $$\label{prop} f(xy)\in K_2^\times f(x)f(y)\qqq x,y\in A_1\,.$$ Let us then show that if $\rho(x)\neq 1$ one has $f(xy)=f(x)f(y)$ for all $y\in A_1$. We can assume, using , that $f(x)f(y)\neq 0$ and we let $\lambda_{x,y}\in K_2^\times$ be such that $f(xy)=\lambda_{x,y} f(x)f(y)$. We assume $\lambda_{x,y}\neq 1$ and get a contradiction. Let us show that $$\label{prop1} \xi(s,t)=1+\alpha(s)f(x)+\alpha(t)f(y)\in K_2f(x)f(y)\qqq s,t\in K_1^\times.$$ This follows from which proves the collinearity of the vectors $$f\left((1+sx)(1+ty)\right)=\xi(s,t)+\alpha(st)f(xy)=\xi(s,t)+\alpha(st)\lambda_{x,y} f(x)f(y)$$ $$f(1+sx)f(1+ty)=\xi(s,t)+\alpha(st)f(x)f(y)$$ Thus by the vectors $\xi(s,t)$ are all proportional to a fixed vector. Taking two distinct $t\in K_1^\times$ shows that $f(y)$ is in the linear span of the (independent) vectors $1,f(x)$ [[*i.e.*]{} ]{}$f(y)=a+bf(x)$ for some $a,b\in K_2$. But then taking $t$ with $1+\alpha(t)a \neq 0$ and two distinct $s\in K_1^\times$ contradicts the proportionality since $1$, $f(x)$ are independent, while $$\xi(s,t)=(1+\alpha(t)a)1+(\alpha(s)+\alpha(t)b)f(x)\,.$$ Thus we have shown that if $\rho(x)\neq 1$ one has $f(xy)=f(x)f(y)$ for all $y\in A_1$. Let then $x_0\in A_1$ be such that $\rho(x_0)\neq 1$. One has $f(x_0y)=f(x_0)f(y)$ for all $y\in A_1$. Then for $x\in A_1$ with $\rho(x)=1$ one has $\rho(x+x_0)\neq 1$ and the equality $f((x+x_0)y)=f(x+x_0)f(y)$ for all $y\in A_1$ gives $f(xy)=f(x)f(y)$. \[main1\] Let $A$ and $B$ be commutative algebras over $\Q$ and let $$\rho\,:\, A/\Q^\times\to B/\Q^\times$$ be a homomorphism of hyperrings. Assume that the range of $\rho$ is of $\kras$-dimension $>2$, then $\rho$ is induced by a unique ring homomorphism $\tilde \rho:A\to B$. \[quot\][Let $A$ and $B$ be commutative $\Q$-algebras and let $$\rho\,:\, A \to B/\Q^\times$$ be a homomorphism of hyperrings. One has $\rho(1)=1$ by hypothesis. By induction one gets $\rho(n)\in \{0,1\}$ for $n\in \N$. 
Moreover, since $0=\rho(0)\in \rho(1)+\rho(-1)$, $\rho(-1)$ is the additive inverse of $1$ in $B/\Q^\times$, it follows that $\rho(-1)=1$. By the multiplicativity of $\rho$ one gets $\rho(n)\in \{0,1\}$ for $n\in \Z$. Using the property that $n\cdot 1/n=1$ one obtains $\rho(n)=1$ for $n\in \Z, n\neq 0$. Again by the multiplicativity of $\rho$, it follows that $\rho$ induces a homomorphism $A/\Q^\times \to B/\Q^\times$ and Corollary \[main1\] applies. ]{} \[sub2\][Let $A,B,\rho$ be as in Corollary \[main1\]. Assume that the range $\rho(A)$ of $\rho$ has $\kras$-dimension $\leq 2$. Then, one has $\rho(1)=1\in \rho(A)$, and either $\rho(A)=\kras$ or there exists $\xi \in \rho(A)$, $\xi\notin \kras$ such that $\rho(A)\subset \Q+\Q b\subset B/\Q^\times$ where $b\in B$ is a lift of $\xi$. Since $\rho$ is multiplicative, one has $\xi^2\in \rho(A)$ and $b$ fulfills a quadratic equation $$b^2=\alpha+\beta b\,, \ \alpha,\beta\in \Q.$$ One can reduce to the case when $b$ fulfills the condition $$\label{quadequ} b^2=N\,, \ N\in \Z\,, \ N\ \text{square\, free}.$$ Thus the morphism $\rho\,:\, A/\Q^\times\to B/\Q^\times$ factorizes through the quadratic subalgebra $\Q(\sqrt N):=\Q[T]/(T^2-N)$ $$\label{factorrho} \rho\,:\, A/\Q^\times\to \Q(\sqrt N)/\Q^\times \to B/\Q^\times.$$ Let us consider the case $N=1$. In this case $\Q(\sqrt N)$ is the algebra $B_0=\Q\oplus \Q$ direct sum of two copies of $\Q$. For $n\in \N$, an odd number, the map $\rho_n: B_0\to B_0,~\rho_n(x)=x^n$ is a multiplicative endomorphism of $B_0$. Let $\tilde \P_1=B_0/\Q^\times$ be the quotient hyperring. The corresponding geometry is the projective line $\P^1(\Q)$ and for any $x\neq y\in \tilde \P_1\backslash \{0\}$ one has $$x+y=\tilde \P_1\backslash \{0,x,y\}.$$ Since $\rho_n$ induces an injective self-map of $\tilde \P_1$, one gets that $$\rho_n(x+y)=\rho_n(\tilde \P_1\backslash \{0,x,y\})\subset \tilde \P_1\backslash \{0,\rho_n(x),\rho_n(y)\} =\rho_n(x)+\rho_n(y).$$ Thus $\rho_n: B_0/\Q^\times\to B_0/\Q^\times$ is an example of a morphism of hyperrings which does not lift to a ring homomorphism. The same construction applies when the map $x\mapsto x^n$ is replaced by any injective group homomorphism $\Q^\times\to \Q^\times$. ]{} The equivalence relation on a hyperfield extension of $\kras$ {#hyperequ} ============================================================= In this section we prove that the addition in a hyperfield extension $F$ of the Krasner hyperfield $\kras$ is uniquely determined by an equivalence relation on $F$ whose main property is that to commute with its conjugates by rotations. Commuting relations ------------------- Given two relations $T_j$ $(j=1,2$) on a set $X$, one defines their composition as $$T_1\circ T_2=\{(x,z)\mid \exists y\in X, \ (x,y)\in T_1\,, \ (y,z)\in T_2\}.$$ By definition, an equivalence relation $T$ on a set $X$ fulfills $\Delta\subset T$, where $\Delta=\Delta_X$ denotes the diagonal. Moreover, one has $T^{-1}=T$ where $$T^{-1}=\{(x,y)\mid (y,x)\in T\}$$ and finally $T\circ T=T$. We say that two equivalence relations $T_j$ on a set $X$ commute when any of the following equivalent conditions hold: $\bullet$ $T_1\circ T_2=T_2\circ T_1$ $\bullet$ $T_1\circ T_2$ is the equivalence relation generated by the $T_j$ $\bullet$ $T_1\circ T_2$ is an equivalence relation. 
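The properties of the endomorphism $\rho_n$ of $B_0/\Q^\times$ used above can be probed on a finite sample. The following minimal Python sketch (the helpers `cls`, `rho_raw` and the sample ranges are ad hoc choices of this illustration, and only finitely many classes are inspected) checks that $x\mapsto x^n$ descends to classes modulo $\Q^\times$ and that, for the odd exponent $n=3$, the induced self-map is injective on the sampled part of $\tilde \P_1$.

```python
from fractions import Fraction as F
from itertools import product

def cls(a, b):                         # the class of (a, b) in B_0 = Q + Q modulo Q^x
    if a != 0: return (F(1), b / a)
    if b != 0: return (F(0), F(1))
    return (F(0), F(0))

def rho_raw(n, a, b):                  # x -> x^n computed on a representative
    return cls(a ** n, b ** n)

scalars = [F(1), F(-1), F(2), F(-3, 5)]
pairs = [(F(a), F(b)) for a, b in product(range(-6, 7), repeat=2) if (a, b) != (0, 0)]
n = 3
descends = all(rho_raw(n, q * a, q * b) == rho_raw(n, a, b)
               for q in scalars for a, b in pairs)
injective = len({rho_raw(n, a, b) for a, b in pairs}) == len({cls(a, b) for a, b in pairs})
print(descends, injective)             # True True
```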
Notice that any of the above conditions holds if and only if for any class $C$ of the equivalence relation generated by the $T_j$, the restrictions of the $T_j$ to $C$ are independent in the sense that any class of $T_1|_C$ meets every class of $T_2|_C$. Projective geometry as commuting points of view ----------------------------------------------- Given a point $a\in \cP$ in a projective geometry $(\cP,\cL)$, one gets a natural equivalence relation $R_a$ which partitions the set of points $\cP\backslash\{a\}$ as the lines through $a$. We extend this to an equivalence relation $R_a$, denoted $\sim_a$, on $\cP\cup\{0\}$ such that $0\sim_a a$ and for $x\neq y$ not in $\{0,a\}$ $$\label{aligned} x\sim_a y~\Longleftrightarrow~ a\in L(x,y).$$ We now relate the commutativity of these equivalence relations with the axiom $\P_2$. More precisely, we have the following \[lemlign\] The axiom $\P_2$ of a projective geometry $(\cP,\cL)$ is equivalent to the commutativity of the equivalence relations $R_a$. Let us first assume that the axiom $\P_2$ holds and show that the relations $R_a$’s commute pairwise. Given two points $a\neq b$ in $\cP$, we first determine the equivalence relation $R_{ab}$ generated by $R_a$ and $R_b$. We claim that the equivalence classes for $R_{ab}$ are $\bullet$ The union of $L(a,b)$ with $\{0\}$. $\bullet$ The complement of $L(a,b)$ in any plane containing $L(a,b)$. One checks indeed that these subsets are stable under $R_a$ and $R_b$. Moreover let us show that in each of these subsets, an equivalence class of $R_a$ meets each equivalence class of $R_b$. In the first case, $R_a$ has two classes: $\{0,a\}$ and $L(a,b)\backslash \{a\}$ (similarly for $R_b$), so the result is clear. For the complement of $L(a,b)$ in any plane containing $L(a,b)$, each class of $R_a$ is the complement of $a$ in a line through $a$ and thus meets each class of $R_b$, since coplanar lines meet non-trivially. Thus $R_a$ commutes with $R_b$. Conversely, assume that for all $a\neq b$ the relation $R_a$ commutes with $R_b$. Let then $x,y,z,t,u$ as in the statement of the axiom $\P_2$. One has $t\sim_y x$ and $z\sim_u x$. Thus $z\in R_uR_y(t)$. Then $z\in R_yR_u(t)$ and $ L(y,z)\cap L(t,u)\neq \emptyset$. We can thus reformulate the axioms of projective geometry in terms of a collection of [*commuting points of view*]{}, more precisely: \[propequigeom\] Let $X=\cP\cup \{0\}$ be a pointed set and let $\{R_a; a\in \cP\}$ be a family of equivalence relations on $X$ such that $(1)$ $R_a$ commutes with $R_b$, $\forall~a,b\in \cP$ $(2)$ $\{0,a\}$ is an equivalence class for $R_a$, for all $a\in \cP$ $(3)$ Each equivalence class of $R_a$, other than $\{0,a\}$, contains at least three elements. For $a\neq b\in \cP$ let $L(a,b)$ be the intersection with $\cP$ of the class of $0$ for $R_a\circ R_b$. Define a collection $\cL$ of lines in $\cP$ as the set of all lines $L(a,b)$. Then $(\cP,\cL)$ is a projective geometry fulfilling the axioms $\P_1$, $\P_2$ and $\P'_3$. One has $R_b(0)=\{0,b\}$ and thus the points of $L(a,b)\backslash \{a\}$ are those of $R_a(b)$. The same statement holds after interchanging $a$ and $b$. Let us show that if $c\in L(a,b)$ is distinct from both $a$ and $b$, then $L(a,c)=L(a,b)$. The points of $L(a,c)\backslash \{a\}$ are those of $R_a(c)$ and $c\in R_a(b)$ since $c\in L(a,b)\backslash \{a\}$. By transitivity it follows $R_a(c)=R_a(b)$. Thus $L(a,c)\backslash \{a\}=L(a,b)\backslash \{a\}$ and $L(a,c)=L(a,b)$. Hence, for any two (distinct) points $a,b\in L(x,y)$ one has $L(a,b)=L(x,y)$. 
Thus, if we let the set $\cL$ of lines in $\cP$ be given by all $L(a,b)$ axiom $\P_1$ follows while the condition (3) ensures $\P'_3$. For $x\neq y$ not in $\{0,a\}$, one has that $a\in L(x,y)$ iff $x\in R_a(y)$. Indeed, if $a\in L(x,y)$ then $L(x,y)=L(y,a)$ and $x\in L(y,a)\backslash \{a\}=R_a(y)$. Conversely, if $x\in R_a(y)$, then $x\in L(y,a)$ and $a\in L(x,y)$. Thus, by Lemma \[lemlign\] one gets $\P_2$. The basic equivalence relation on a hyperfield extension of $\kras$ {#subhyperequ} ------------------------------------------------------------------- In the case of a hyperring containing $\kras$, the following statement shows that the equivalence relation associated to the unit $1$ plays a privileged role. \[addequ\] Let $R$ be a hyperring containing $\kras$ as a sub-hyperring. We introduce the multi-valued map $s: R \to R$, $s(a) = a+1$. Then, the following conditions are equivalent. For $x,y\in R$ $(1)$ $x=y$ or $x\in y+1$ $(2)$ $x\cup (x+1)=y\cup (y+1)$ $(3)$ $s^2(x)=s^2(y)$, ($s^2 = s\circ s$). The above equivalent conditions define an equivalence relation $\sim_R$ on $R$. We show that (1) implies (2). Assume $x\in y+1$. Then $x+1\subset y+1+1=y\cup (y+1)$. Thus $x\cup (x+1)\subset y\cup (y+1)$. By reversibility one has $y\in x+1$ and thus $y\cup (y+1)\subset x\cup (x+1)$ so that $x\cup (x+1)= y\cup (y+1)$. Next, we claim that (2) and (3) are equivalent since $s^2(a)=a+1+1=a\cup (a+1)$ for any $a$. Finally (2) implies (1), since if $x\neq y$ and (2) holds one has $x\in y+1$. One knows by Proposition \[simp\] that $a\notin s(a)$ provided that $a\neq 1$. It follows that the map $s$ is in fact completely determined by the equivalence relation $\sim_R$. Thus one obtains \[addequ1\] Let $R$ be a hyperring containing the Krasner hyperfield $\kras$ and let $\sim_R$ be the associated equivalence relation. Then one has $$\label{sxequ} x+1=\{y\sim_R x\,, \ y\neq x\}\qqq x\in R,\ x\neq 1.$$ In particular, when $R$ is a hyperfield its additive hyper-structure is uniquely determined by the equivalence relation $\sim_R$. We now check directly the commutativity of $\sim_R$ with its conjugates under multiplication by any element $a\in R^\times$. \[comrel\] Let $R$ be a hyperring containing the Krasner hyperfield $\kras$ as sub-hyperring and let $\sim_R$ be the corresponding equivalence relation. Then $\sim_R$ commutes with its conjugates under multiplication by any element $a\in R^\times$. Let $T=\sim_R$. One has $T(x)=x+1+1$ for all $x\in R$. It follows that for the conjugate relation $T^a:=aTa^{-1}$ one has $T^a(x)=x+a+a$. Thus $$T\circ T^a(x)=1+1+(a+a+x)=a+a+(1+1+x)=T^a\circ T(x).$$ Thus, one can start with any abelian group $H$ (denoted multiplicatively) and by applying Corollary \[addequ1\], consider on the set $R=H\cup \{0\}$ an equivalence relation $S$ which commutes with its conjugates under rotations. Let us assume that $\{0,1\}$ forms an equivalence class for $S$. In this generality, it is not true that the multivalued map $s: R \to R$ defined by $$\label{sxequbis} s(x)=\{y\in S(x)\,, \ y\neq x\}\qqq x\in R,\ x\neq 1,\qquad s(1)=\{0,1\}$$ commutes with its conjugates under rotations. One can consider, for example, $H=\Z/3\Z$ and on the set $R=H\cup \{0\}$ one can define the equivalence relation $S$ with classes $\{0,1\}$ and $\{j,j^2\}$. 
This relation $S$ commutes with its conjugates under rotations, but one has $$s^j(s(1))=\{j,j^2\}\,, \ \ s(s^j(1))=\{j\}.$$ But the commutativity of $s$ with its conjugates holds provided the equivalence classes for $S$ other than $\{0,1\}$ have cardinality at least three. One in fact obtains the following \[lemadd\] Let $H$ be an abelian group. Let $S$ be an equivalence relation on the set $R=H\cup \{0\}$ such that $\bullet$ $\{0,1\}$ forms an equivalence class for $S$ $\bullet$ Each class of $S$, except $\{0,1\}$, contains at least three elements $\bullet$ The relation $S$ commutes with its conjugates for the action of $H$ by multiplication on the monoid $R$. Then, with $s$ defined by \eqref{sxequbis}, the law $$\label{mainadddefn} x+y:=\begin{cases} y&\text{if}~x=0\\ xs(yx^{-1})&\text{if}~x\neq 0\end{cases}$$ defines a commutative hypergroup structure on $R$. With this hyper-addition the monoid $R$ becomes a commutative hyperfield containing $\kras$. For each $a\in H$ let $S_a$ be the equivalence relation obtained by conjugating $S$ by the multiplication by $a$. Thus $x\sim y\,, \ (S_a)$ means $a^{-1}x\sim a^{-1}y\,, \ (S)$. In particular $\{0,a\}$ is an equivalence class for $S_a$. One checks that all conditions of Proposition \[propequigeom\] are fulfilled and thus one gets a geometry fulfilling axioms $\P_1$, $\P_2$ and $\P'_3$. By construction the abelian group $H$ acts by collineations on this geometry and thus Theorem \[main\] applies. Note that one can give a direct proof of Proposition \[lemadd\]; in fact we shall use that approach to treat a similar case in §\[hypersign\]. [The construction of projective planes from [*difference sets*]{} ([[*cf.*]{} ]{}[@Singer]) is a special case of Proposition \[lemadd\]. Let $H$ be an abelian group, and $\cD\subset H$ be a subset of $H$ such that the following map is bijective $$\cD\times \cD\backslash \Delta\to H\backslash \{1\},\ (x,y)\mapsto xy^{-1}$$ (where $\Delta$ is the diagonal). Then the partition of $H\backslash \{1\}$ into the images of the subsets $(\cD\backslash\{u\})\times \{u\}$, for $u\in \cD$, under the above bijection defines on $R=H\cup \{0\}$ an equivalence relation $S$ which fulfills all conditions of Proposition \[lemadd\]. By [@Hall] Theorem 2.1 one obtains in this manner all cyclic projective planes [[*i.e.*]{} ]{}in the above context all hyperfield extensions of $\kras$ whose multiplicative group is cyclic and whose associated geometry is of dimension $2$. By [@Hall] Theorem 3.1, difference sets $\cD$ exist for the infinite cyclic group $\Z$ and thus provide examples of hyperfield extensions of $\kras$ whose multiplicative group is cyclic and whose associated geometry is non-Desarguesian. ]{} 
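Both the failure of commutation for $H=\Z/3\Z$ and the positive case covered by Proposition \[lemadd\] are easy to reproduce by machine. In the following minimal Python sketch (elements of $H=\Z/n\Z$ are written as exponents $0,\dots,n-1$ of a generator and the zero of $R$ as `'O'`; these conventions are ad hoc), the map $s$ of \eqref{sxequbis} is built from the equivalence relation with classes $\{0,1\}$ and $H\setminus\{1\}$, and its commutation with all conjugates is tested: it fails for $n=3$, recovering the displayed computation, and holds for $n=5$, where the class $H\setminus\{1\}$ has four elements.

```python
from itertools import product

def commutes(n):
    """Elements: 'O' (the zero of R) and exponents 0..n-1 standing for g^0, ..., g^(n-1)."""
    H = list(range(n))
    R = ['O'] + H

    def mul(a, x):                           # multiplication by the unit g^a
        return 'O' if x == 'O' else (a + x) % n

    def s(x):                                # the multivalued map defined by (sxequbis)
        if x == 'O': return frozenset({0})               # the class of 'O' is {'O', g^0}
        if x == 0:   return frozenset({'O', 0})
        return frozenset(H) - {0, x}                     # the class H \ {1}, minus x itself

    def s_conj(a, x):                        # the conjugate s^a(x) = a * s(a^{-1} x)
        return frozenset(mul(a, u) for u in s(mul(-a % n, x)))

    def compose(f, g, x):                    # (f o g)(x), as a union over g(x)
        return frozenset().union(*(f(u) for u in g(x)))

    return all(compose(s, lambda y: s_conj(a, y), x) ==
               compose(lambda y: s_conj(a, y), s, x)
               for a in H for x in R)

print(commutes(3), commutes(5))              # False True
```

For $n=5$ the hyperaddition \eqref{mainadddefn} obtained from this relation is exactly that of $\kras[\Z/5\Z]$ in Proposition \[Lyndonlem\].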
Then, the hyperring $R/G$ contains $\sign$ as a sub-hyperfield if and only if $\{0\}\cup G\cup(-G)$ is an ordered subfield of $R$ with positive part $\{0\}\cup G$. Let $F=\{0\}\cup G\cup(-G)$. If $(F,G)$ is an ordered field, then $F/G=\sign$ and $R/G$ contains $\sign$ as a sub-hyperfield. Conversely one notices that $H=G\cup(-G)$ is a multiplicative subgroup $H\subset R^\times$ and that the hyperring $R/H$ contains $\kras$ as a sub-hyperfield. Thus, by Proposition \[krasner1\], $\{0\}\cup G\cup(-G)$ is a subfield $F$ of $R$. This subfield is ordered by the subset $\{0\}\cup G=F_+$. Indeed, from $1+1=1$ in $\sign$ one gets that $G+G=G$ and for $x,y\in F_+$, both $x+y$ and $xy$ are in $F_+$. The statement follows. \[addequsign\] Let $R$ be a hyperring containing $\sign$ as a sub-hyperring. Then, the following condition defines a partial order relation $\leq_R$ on $R$ $$\label{order} x \leq_R y~ \Longleftrightarrow~y\in x+1\, \ \text{or}\ \ y=x.$$ We show that the relation is transitive. Assume $x\leq_R y$ and $y\leq_R z$. Then unless one has equality one gets $y\in x+1$ and $z\in y+1$ so that $z\in (x+1)+1=x+1$ since $1+1=1$. It remains to show that if $x\leq_R y$ and $y\leq_R x$ then $x=y$. If these conditions hold and $x\neq y$ one gets $x\in y+1\subset (x+1)+1=x+1$. Thus $x\in x+1$ but by the reversibility condition $(5)$ on hypergroups one has $1\in x-x$ but $x-x=\{-x,0,x\}$ and one gets that $x=\pm 1$. Similarly $y=\pm 1$, and since $x\neq y$, one of them say $x$ is equal to $1$ and one cannot have $y\in x+1=1$. \[addequ1sign\] Let $R$ be a hyperring containing $\sign$ as a sub-hyperring and let $\leq_R$ be the corresponding partial order relation. Then $$\label{sxequsign} x+1=\{y\geq_R x\,, \ y\neq x\}\qqq x\in R,\ x\neq \pm 1.$$ When $R$ is a hyperfield, its additive structure is uniquely determined by the partial order relation $\leq_R$. By , if $x\neq \pm 1$ one has $x\notin x+1$ and thus using one gets . This determines the operation $x\mapsto x+1$ for all $x$, including for $x\in\sign\subset R$. When $R$ is a hyperfield this determines the addition. \[noextsign\] Any hyperfield extension of $\sign$ is infinite. Let $F$ be a hyperfield extension of $\sign$, and $x\in F$, $x\notin \sign\subset F$. Then $(x+1)\cap \sign=\emptyset$, since otherwise using reversibility, one would obtains $x\in \sign$. Let $x_1\in x+1$. Then, one has $x<_F x_1$ and iterating this construction one defines a sequence $$x<_F x_1<_F x_2<_F \cdots <_F x_n.$$ The antisymmetry of the partial order relation shows that the $x_k$ are all distinct. \[comrelsign\] Let $R$ be a hyperring containing $\sign$ as a sub-hyperring and let $\leq_R$ be the corresponding partial order relation. Then, $\leq_R$ commutes with its conjugates under multiplication by any element $a\in R^\times$. Let $T=\leq_R$. One has $T(x)=(x+1)\cup x$ for all $x\in R$. It follows that for the conjugate relation $T^a=aTa^{-1}$ one obtains $T^a(x)=(x+a)\cup x$. Thus $$T\circ T^a(x)=(x+a)+1\cup(x+1)\cup(x+a)\cup x=T^a\circ T(x).$$ \[lemaddsign\] Let $H$ be an abelian group and let $1\neq \epsilon\in H$ be an element of order two. 
Let $S$ be a partial order relation on the set $R=H\cup \{0\}$ such that $\bullet$ $S(\epsilon)=\{\epsilon,0,1\}$, $S(0)=\{0,1\}$, $S(1)=1$ and $$\label{rev} x\leq_S y~\Longleftrightarrow~ \epsilon y\leq_S \epsilon x$$ $\bullet$ The map $s$ defined by $s(\epsilon)=\{\epsilon,0,1\}$, $s(0)=1$, $s(1)=1$ and $$\label{ssign} s(x)=\{y\in S(x)\,, \ y\neq x\}\qqq x\in R,\ x\notin \{\epsilon,0,1\}$$ fulfills $s(x)\neq \emptyset$ for all $x$ and commutes with its conjugates for the action of $H$ by multiplication on $R$. Then, the hyperoperation $$\label{mainadddefn1} x+y:=\begin{cases} y&\text{if}~x=0\\ xs(yx^{-1})&\text{if}~x\neq 0\end{cases}$$ defines a commutative hypergroup law on $R$. With this law as addition, the monoid $R$ becomes a commutative hyperfield containing $\sign$. For $x\in R^\times$, let $s^x$ be the conjugate of $s$ by multiplication by $x$, [[*i.e.*]{} ]{}$$s^x(y):=xs(yx^{-1})\qqq y\in R.$$ The commutation $s\circ s^x=s^x\circ s$ gives, when applied to $y=0$ and using $s(0)=1$, and $s^x(0)=x$ $$s(x)=xs(x^{-1}).$$ Assume that $x\neq 0,y\neq 0$, then $$x+y=xs(yx^{-1})=yXs(X^{-1})=ys(X)=y+x\, , \ X=xy^{-1}$$ The same result holds if $x$ or $y$ is $0$ (if they are both zero one gets $0$, otherwise one gets $0+y=y=y+0$ since $s(0)=1$). Moreover, one has the commutation $$s^x\circ s^z=s^z\circ s^x\qqq x,z \in R^\times$$ which shows that, provided both $x$ and $z$ are non-zero $$(x+y)+z=s^z(s^x(y))=s^x(s^z(y))=x+(y+z).$$ If $x=0$ one has $(x+y)+z=y+z=x+(y+z)$, and similarly for $z=0$. Thus the addition is associative. The distributivity follows from the homogeneity of . Next, we show that $\forall x \in R, \ \exists!~ y(=-x)\,, \ 0\in x+y$. Take $y=\epsilon x$ then, provided $x\neq 0$, one has $x+y=xs(\epsilon)=\{\epsilon x ,0,x\}$ so that $0\in x+y$. We show that $y=\epsilon x$ is the unique solution. For $x\neq 0$ and $0\in x+y$ one has $0\in s(yx^{-1})$. Thus it is enough to show that $0\in s(a)$, $a\neq 0$, implies $a=\epsilon$. Indeed, one has $a\leq_S 0$ and thus $0\leq_S \epsilon a$ by , thus $\epsilon a=1$, since $S(0)=\{0,1\}$. Finally one needs to show that $x\in y+z\Rightarrow z\in x+\epsilon y$. One can assume that $y=1$ using distributivity. We thus need to show that $$x\in s(z)\Rightarrow z\in x+\epsilon$$ In fact, it is enough to show that $\epsilon z\geq_S \epsilon x$ and this holds by . [Let $F=U(1)\cup\{0\}$ be the union of the multiplicative group $U(1)$ of complex numbers of modulus one with $\{0\}$. $F$ is, by construction, a multiplicative monoid. For $z,z' \in U(1)$, let $(z,z')\subset U(1)$ be the shortest open interval between $z$ and $z'$. This is well defined if $z'\neq \pm z$. One defines the hyper-addition in $F$ so that $0$ is a neutral element and for $z,z' \in U(1)$ one sets $$\label{uone} z+z'= \left\{ \begin{array}{ll} z, & \hbox{if} \ z=z'\\ \{-z,0,z\}, & \hbox{if}\ z=-z'\\ (z,z'), & \hbox{otherwise.} \end{array} \right.$$ This determines the hyperfield extension of $\sign$: $F=\C/\R_+^\times$. This hyperfield represents the notion of the argument of a complex number. The quotient topology is quasi-compact, and $0$ is a closed point such that $F$ is its only neighborhood. The subset $U(1)\subset F$ is not closed but the induced topology is the usual topology of $U(1)$. ]{} \[orderskew\][In §\[sectproj\] we showed that $\kras$-vector spaces are projective geometries. Similarly, one can interpret $\sign$-vector spaces in terms of [*spherical*]{} geometries. 
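The partial order $x\leq_R y\Leftrightarrow y\in x+1$ or $y=x$ introduced above can be inspected concretely inside the hyperfield $F=\C/\R_+^\times$ of the preceding example. The following minimal Python sketch (only the finitely many directions given by the $12$-th roots of unity are sampled; the membership test implements the “shortest open arc” rule exactly on these points, and all names are ad hoc) verifies antisymmetry and transitivity of $\leq_F$ on this sample.

```python
from itertools import product

N = 12                                     # k stands for exp(2*pi*i*k/N); 'O' stands for 0
pts = ['O'] + list(range(N))

def in_x_plus_1(y, x):                     # is y an element of x + 1 in C/R_+^x ?
    if x == 'O':     return y == 0                      # 0 + 1 = 1
    if x == 0:       return y == 0                      # 1 + 1 = 1
    if x == N // 2:  return y in ('O', 0, N // 2)       # -1 + 1 = {-1, 0, 1}
    if y == 'O':     return False                       # 0 never lies on an open arc
    d = (-x) % N                           # number of steps from x up to 1 = e^0
    if d < N // 2:                         # shortest arc goes upward through x+1, ..., N-1
        return 0 < (y - x) % N < d
    return 0 < (x - y) % N < N - d         # shortest arc goes downward through x-1, ..., 1

leq = {(x, y) for x, y in product(pts, pts) if y == x or in_x_plus_1(y, x)}
antisym = all(x == y or not ((x, y) in leq and (y, x) in leq)
              for x, y in product(pts, pts))
trans = all((x, z) in leq
            for x, y in product(pts, pts) if (x, y) in leq
            for z in pts if (y, z) in leq)
print(antisym, trans)                      # True True
```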
In the Desarguesian case, any such geometry is the quotient $V/H^+$ of a left $H$-vector space $V$ over an ordered skew field $H$ by the [*positive*]{} part $H^+$ of $H$. It is a double cover of the projective space $\P(V)=V/H^\times$. More generally, a $\sign$-vector space $E$ is a double cover of the $\kras$-vector space $E\otimes_\sign\kras$. We shall not pursue further this viewpoint in this paper, but refer to Theorem 28 of Chapter I of [@Corsini] as a starting point. This extended construction is the natural framework for several results proved in this section. ]{} Relation with $\B$ and $\F_1$ ============================= By definition, a map $f:R_1\to R_2$ from a hypersemiring $R_1$ to a hypersemiring $R_2$ is a homomorphism when it is a morphism of multiplicative monoids and it fulfills the inclusion $$\label{homo} f(x+y)\subset f(x)+f(y)\qqq x,y\in R_1.$$ Thus there is no homomorphism of hypersemirings $f:\Z\to\B$ to the semifield $\B=\{0,1\}$ ($1+1=1$ in $\B$, [[*cf.*]{} ]{}[@Lescot]) such that $f(0)=0, \,f(1)=1$. Indeed $f(-1)$ should be an additive inverse of $1$ in $\B$ which is a contradiction. On the other hand, the similar map $\sigma:\Z\to\sign$ does extend to a hyperring homomorphism $$\label{hmorp} \sigma:\Z\to \sign\,, \ \ \sigma(n)={\rm sign}(n)\qqq n\neq 0\,, \ \sigma(0)=0.$$ Such map is in fact the unique element of $\Hom(\Z,\sign)$. Moreover, the identity map ${\rm id}:\B\to\sign$ is a hypersemiring homomorphism since $1+1=1$ in $\sign$. Thus one can identify $B$ as the positive part of $\sign$: $\B=\sign_+$. Notice also that $\kras$ is the quotient of $\sign$ by the subgroup $\{\pm 1\}$. One deduces a canonical epimorphism (absolute value) $\pi:\sign \to \kras$. Thus, by considering the associated geometric spectra (and reversing the arrows), we obtain the following commutative diagram $$\begin{gathered} \label{overall} \,\hspace{100pt}\raisetag{-47pt} \xymatrix@C=25pt@R=25pt{ & \Spec\sign\ar[dl]\ar[d]&\Spec\kras \ar[l]\ar[dl]\\ \Spec\Z\ar[d] & \Spec\B\ar[dl]& &\\ \Spec\F_1 & &\\ }\hspace{25pt}\end{gathered}$$ The BC-system as $\Z_\sign=\hat\Z\otimes_\Z\sign$ ------------------------------------------------- It follows from what has been explained above that $\Spec\sign$ sits over $\Spec\Z$ and that the map from $\Spec\kras$ to the generic point of $\Spec\Z$ factorizes through $\Spec\sign$. To introduce in this set-up an algebraic geometry over $\Spec\sign$, it is natural to try to lift $\Spec\Z$ to an object over $\Spec\sign$. This is achieved by considering the spectrum of the tensor product $\Z_\sign=\hat\Z\otimes_\Z\sign$, using the natural sign homomorphism $\Z\to \sign$ and the embedding of the relative integers in their profinite completion. Notice that, since the non-zero elements of $\sign$ are $\pm 1$, every element of $\hat\Z\otimes_\Z\sign$ belongs to $\hat\Z\otimes_\Z 1$. Thus the hyperring $\Z_\sign$ is, by construction, the quotient of $\hat \Z$ by the equivalence relation $$x\sim y~\Longleftrightarrow~\exists n,m\in \N^\times, \ nx=my.$$ This is precisely the relation that defines the noncommutative space associated to the BC-system. Geometrically, it corresponds to a fibered product given by the commutative diagram $$\begin{gathered} \label{BC} \,\hspace{60pt}\raisetag{-47pt} \xymatrix@C=25pt@R=25pt{ & \Spec\Z_\sign\ar[dl]\ar[d]& \\ \Spec\hat\Z\ar[d] & \Spec\sign\ar[dl]& \\ \Spec\Z & \\ }\hspace{25pt}\end{gathered}$$ Using the morphism $h=\pi\circ\sigma$ of , one can perform the extension of scalars from $\Z$ to $\kras$. 
The relation between $-\otimes_\Z\kras$ and $-\otimes_\Z\sign$ is explained by the following result \[extscalar\] Let $R$ be a (commutative) ring containing $\Q$. Let $R/\Q^\times$ be the hyperring quotient of $R$ by the multiplicative group $\Q^\times$ of $\Q$. Then one has $$\label{extsca} R\otimes_\Z\kras=R/\Q^\times.$$ Let $R/\Q_+^\times$ be the hyperring quotient of $R$ by the positive multiplicative group $\Q_+^\times$. Then one has $$\label{extscasign} R\otimes_\Z\sign=R/\Q_+^\times.$$ Every element of $R\otimes_\Z\kras$ arises from an element of $R$ in the form $a\otimes 1_{\kras}$. Moreover one has, for $n\in \Z$, $n\neq 0$ $$n a\otimes 1_{\kras}=a\otimes h(n){1}_{\kras}=a\otimes 1_{\kras}.$$ This shows that for any non-zero rational number $q\in \Q^\times$ one has $$q a\otimes 1_{\kras}= a\otimes 1_{\kras}.$$ Thus, since $R/\Q^\times$ is a hyperring over $\kras$, by Proposition \[krasner2\] one gets . The proof of the second statement is similar. When $R=\A_{\Q,\,f}$ the ring $\A_{\Q,\,f}=\hat \Z\otimes_\Z\Q$ of finite adèles over $\Q$, Proposition \[extscalar\] yields the hyperring $\Z_\sign$. Taking $R=\A_\Q$, the ring of adèles over $\Q$, and tensoring by $\kras$ one obtains the hyperring $\H_\Q$ of adèle classes over $\Q$ ([[*cf.*]{} ]{}Theorem \[thmfine\] below). At the level of spectra one obtains $$\Spec(\H_\Q) = \Spec(\A_\Q) \times_{\Spec(\Z)} \Spec(\kras)$$ where $\H_\Q$ is the hyperring of adèle classes over $\Q$. When combined with , this construction produces the following (commutative) diagram $$\begin{gathered} \label{overall2} \,\hspace{40pt}\raisetag{-47pt} \xymatrix@C=25pt@R=25pt{ & \Spec\H_\Q\ar[dl]\ar[d]& \\ \Spec\A_\Q \ar[d]& \Spec\kras\ar[dl]\ar[d]& \\ \Spec\Z\ar[d] & \Spec\B\ar[dl]& \\ \Spec\F_1 & \\ }\hspace{25pt}\end{gathered}$$ The profinite completion $\Z\to\hat\Z$ and ideals ------------------------------------------------- Let us consider the (compact) topological ring $R=\hat\Z$. Given a closed ideal $J\subset R$, we define $$\label{quasirad} \sqrt[\infty]{J}=\{x\in R\mid \lim_{n\to\infty}x^n\subset J\}$$ In this definition we are not assuming that $x^n$ converges and we define $\lim_{n\to\infty}x^n$ as the set of limit points of the sequence $x^n$. Thus $x \in \sqrt[\infty]{J}$ means that $x^n\to 0$ in the quotient (compact) topological ring $R/J$. \[basictop\] Let $\ell\in \Sigma_\Q$ be a finite place. $(a)$ For $a=(a_w)\in \hat\Z\sim\prod \Z_p$, the condition $a_\ell=0$ defines a closed ideal $\ffp_\ell\subset \hat\Z$ which is invariant under the equivalence relation induced by the partial action of $\Q^\times$ on $\hat\Z$ by multiplication. $(b)$ The intersection $\Z\cap \sqrt[\infty]{\ffp_\ell}$ is the prime ideal $(\ell)\subset\Z$. $(a)$ The ideal $\ffp_\ell$ is closed in $\hat\Z\sim\prod \Z_p$ by construction. For any prime $\ell$, the ring $\Z_\ell$ contains $\Z$ and has no zero divisor, thus $a_\ell=0 \Leftrightarrow n a_\ell=0$ for any non-zero $n\in \N$. $(b)$ For $a=(a_w)\in \hat\Z\sim\prod \Z_p$, one has $a\in \sqrt[\infty]{\ffp_\ell}$ if and only if the component $a_\ell$ belongs to the maximal ideal $\ell \Z_\ell$. The result follows. 
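Part $(b)$ above admits a simple finite-level illustration. The following minimal Python sketch (the choice $\ell=3$, $k=4$ and the exponent bound are arbitrary) verifies in the finite quotient $\Z/\ell^k\Z$ that $x^n$ vanishes modulo $\ell^k$ for all large $n$ exactly when $\ell$ divides $x$, mirroring the condition that $a_\ell$ belongs to the maximal ideal $\ell\Z_\ell$.

```python
ell, k = 3, 4
mod = ell ** k
def eventually_zero(x):
    # for n large enough, x^n = 0 (mod ell^k) iff ell divides x; n = 10*k suffices here
    return pow(x % mod, 10 * k, mod) == 0
print(all(eventually_zero(x) == (x % ell == 0) for x in range(mod)))   # True
```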
The relations between the various algebraic structures discussed above are summarized by the following diagram, with $\Z_\kras=\Z_\sign\otimes_\sign \kras=\Z_\sign/\{\pm 1\}$, $$\begin{gathered} \label{overall3} \,\hspace{50pt}\raisetag{57pt} \xymatrix@C=25pt@R=25pt{ & \Spec\Z_\sign\ar[dl]\ar[d]&\Spec\Z_\kras\ar[d]\ar[l]\ar[r] &\Spec\H_\Q \ar[dl]&\\ \Spec\hat \Z \ar[d]& \Spec\sign\ar[dl]\ar[d]&\Spec\kras \ar[l]\ar[dl]&&\\ \Spec\Z\ar[d] & \Spec\B\ar[dl]& &&\\ \Spec\F_1 & &&\\ }\hspace{25pt}\end{gathered}$$ Arithmetic of the hyperring $\ads$ of adèle classes {#hyper} =================================================== The quotient construction of Proposition \[krasner2\] applies, in particular, to the ring $R=\A_\K$ of adèles over a global field $\K$, and to the subgroup $\K^\times\subset\A_\K^\times$. One then obtains a new algebraic structure and description of the adèle class space as follows \[thmfine\] Let $\K$ be a global field. The adèle class space $\A_\K/\K^\times$ is a hyperring $\ads$ over $\kras$. By using the unique morphism $\K\to \kras$ for the extension of scalars one has $\ads=\A_\K\otimes_\K\kras$. The fact that $\ads=\A_\K/\K^\times$ is a hyperring follows from the construction of Krasner. This hyperring contains $\kras$ by Proposition \[krasner2\]. The identification with $\A_\K\otimes_\K\kras$ follows as in Proposition \[extscalar\]. This section is devoted to the study of the arithmetic of the hyperring $\H_\K$ of the adèle classes of a global field. In particular we show that, for global fields of positive characteristic, the action of the units $\H_\K^\times$ on the prime elements of $\H_\K$ corresponds to the action of the abelianized Weil group $\cW^{\rm ab}\subset{\rm Gal}(\K^{\rm ab}:\K)$ on the space ${\rm Val}(\K^{\rm ab})$ of valuations of the maximal abelian extension $\K^{\rm ab}$ of $\K$ [[*i.e.*]{} ]{}on the space of the (closed) points of the corresponding projective tower of algebraic curves. More precisely we shall construct a canonical isomorphism of the groupoid of prime elements of $\H_\K$ with the loop groupoid of the above abelian cover. The space $\Spec(\ads)$ of closed prime ideals of $\ads$ -------------------------------------------------------- Given a [*finite*]{} product of fields $R=\prod_{v\in S}\K_v$, an ideal $J$ in the ring $R$ is of the form $$J_Z=\{x=(x_v)\in R\mid x_w=0\qqq w\in Z\}$$ where $Z\subset S$ is a non-empty subset of $S$. To see this fact one notes that if $x\in J$ and the component $x_v$ does not vanish, then the characteristic function $1_v$ (whose all components are zero except at $v$ where the component is $1$) belongs to $J$ since it is a multiple of $x$. By adding all these $1_v$’s, one gets a generator $p=\sum 1_v$ of $J$. Let $\K$ be a global field. We endow the ring $\A_\K$ of adèles with its locally compact topology. For any subset $E\subset \Sigma(\K)$ of the set of places of $\K$, one has the convergence $$\label{converge} \sum_F \, 1_v\to 1_E$$ where $F$ runs through the finite subsets of $E$, and $1_E$ is the characteristic function of $E$. \[propidealsads\] There is a one to one correspondence between subsets $Z\subset \Sigma(\K)$ and closed ideals of $\A_\K$ (for the locally compact topology) given by $$\label{idealsads} Z\mapsto J_Z=\{x=(x_v)\in \A_\K\mid x_w=0\qqq w\in Z\}.$$ First of all we notice that, by construction, $J_Z$ is a closed ideal of $\A_\K$, for any subset $Z\subset \Sigma(\K)$. Let $J$ be a closed ideal of $\A_\K$. 
To define the inverse of the map , let $E\subset \Sigma(\K)$ be the set of places $v$ of $\K$ for which there exists an element of $J$ which does not vanish at $v$. One has $1_v\in J$ for all $v\in E$. Thus, since $J$ is closed one has $1_E\in J$ by . The element $1_E$ is a generator of $J$, since for $j\in J$ all components $j_w$ of $j$ vanish for $w\notin E$, so that $j=j1_E$ is a multiple of $1_E$. By taking $Z=E^c$ to be the complement of $E$ in $\Sigma(\K)$, one has $J=J_Z$. Proposition \[propidealsads\] applies, in particular, in the case $Z = \{w\}$, for $w\in\Sigma(\K)$ and it gives rise to the closed ideal of the hyperring $\H_\K=\A_\K/\K^\times$ $$\label{prideal} \ffp_w=\{x\in \H_\K\,|\, x_w=0\}.$$ Notice that the ideal $\ffp_w$ is well defined since the condition for an adèle to vanish at a place is invariant under multiplication by elements in $\K^\times$. The set $\ffp_w$ is in fact a [*prime ideal*]{} in $\H_\K$ whose complement is the multiplicative subset $$\ffp_w^c=\{x\in \H_\K\,|\, x_w\neq 0\}.$$ \[propidealsads1\] There is a one to one correspondence between the set $\Sigma(\K)$ of places of $\K$ and the set of closed prime ideals of $\H_\K$ (for the quotient topology) given by $$\label{idealsads1} \Sigma(\K)\to \Spec(\H_\K),\quad w\mapsto \ffp_w.$$ The projection $\pi: \A_\K\to \ads$ gives a one to one correspondence for closed prime ideals. Thus, it is enough to prove the statement for the topological ring $\A_\K$. One just needs to show that an ideal of the form $J_Z$ in $\A_\K$ is prime if and only if $Z=\{w\}$ for some place $w\in\Sigma(\K)$. Assume that $Z$ contains two distinct places $w_j$ ($j=1,2$). Then one has $1_{w_j}\notin J_Z$, while the product $1_{w_1}1_{w_2}=0$. Thus $J_Z$ is not a prime ideal of $\A_\K$. Since we have just proved that the $\ffp_w$’s are prime ideals of $\ads$, we thus get the converse. [When viewed as a multiplicative monoid, the adèle class space $\A_\K/\K^\times$ has many more prime ideals than when it is viewed as a hyperring. This is a consequence of the fact that [*any*]{} union of prime ideals in a monoid is still a prime ideal. This statement implies, in particular, that all subsets of the set of places determine a prime ideal. ]{} Functions on $\Spec(\H_\Q)$ {#functions} --------------------------- In algebraic geometry one defines a function on a scheme $X$, viewed as a covariant functor $\underline X:\An\to\Se$, as a morphism of functors $f:\underline X\to \cD$ to the (functor) affine line $\cD=\mathfrak{spec}(\Z[T])$ (whose geometric scheme is $\Spec(\Z[T])$, [[*cf.*]{} ]{}[@announc3]). For $X=\Spec(R)$, where $R$ is an object of $\An$ ([[*i.e.*]{} ]{}a commutative ring with unit), one derives a natural identification of functions on $X$ with elements of the ring $R$ $$\label{ztidd} \Hom_\An(\Z[T],R)\simeq R.$$ In the category of hyperrings, the identification no longer holds in general as easily follows from Proposition \[ex1\]. Indeed, $\kras$ has only two elements while $\Hom_\han(\Z[T],\kras)\simeq \Spec(\Z[T])$ is countably infinite. The next Theorem describes the functions on $\Spec(\ads)$, for $\K=\Q$. \[functads\] Let $\H_\Q$ be the hyperring of adèle classes over $\Q$, and let $\rho\in \Hom_\han(\Z[T],\H_\Q).$ Then, either $\rho=\xi_a$ $$\label{lift} \xi_a(P(T))=P(a)\Q^\times \in \H_\Q\qqq P\in \Z[T]$$ for a unique adèle $a\in \A_\Q$, or $\rho$ factorizes through $\Q[e_Z]/\Q^\times$, where $e_Z$ is the idempotent of $\A_\Q$ associated to a subset $Z\subset \Sigma_\Q$. 
Assume first that the range $\rho(\Z[T])$ is contained in $L\cup \{0\}$ where $L$ is a line of the projective space $\H_\Q\backslash \{0\}$. Let $\pi:\A_\Q\to \H_\Q$ be the projection. The two dimensional subspace $E=\pi^{-1}(L\cup \{0\})$ of the $\Q$-vector space $\A_\Q$ contains $1$ since $\rho(1)=1$. Unless $\rho(\Z[T])=\kras$, the line $L$ is generated by $1$ and $\xi\in\rho(\Z[T])$, $\xi\neq 1$. Let $x\in E$ with $\pi(x)=\xi$. Since $\xi^2\in\rho(\Z[T])$ one has $x^2\in E$ and $x^2=ax + b$ for some $a,b\in\Q$. As in remark \[sub2\], one can assume that $x^2=N$ for a square free integer $N$. But the equation $y^2=N$ has no solution in $\A_\Q$ except for $N=1$. It follows that $\Q[x]\subset \A_\Q$ is a two dimensional subalgebra over $\Q$ of the form $\Q[e_Z]$ where $e_Z$ is the idempotent of $\A_\Q$ associated to a subset $Z\subset \Sigma_\Q$. Thus $\rho$ factorizes through $\Q[e_Z]/\Q^\times$. We can thus assume now that $\rho(\Z[T])\backslash \{0\}$ is not contained in a line $L$ of the projective space $\H_\Q\backslash \{0\}$. The restriction of $\rho$ to $\Z\subset \Z[T]$ is a morphism from $\Z$ to $\kras$ and its kernel is a prime ideal $\ffp\subset \Z$. If $\ffp\neq \{0\}$, one has $\ffp=p\Z$ for a prime number $p$. Then $$\rho\left(\sum (pa_n)T^n\right)=\rho(p)\rho\left(\sum a_nT^n\right)=0\qqq a_k\in \Z$$ and the inclusion $\rho(x+y)\subset\rho(x)+\rho(y)$ shows that $\rho(P(T))$ only depends upon the class of $P(T)$ in $\F_p[T]$. Since $\rho(\F_p^\times)=1$ one gets a morphism[^4], in the sense of [@Faure], from the projective space $(\F_p[T]/\F_p^\times)\backslash \{0\}$ to the projective space $\H_\Q\backslash \{0\}$. Since $\rho(\Z[T])\backslash \{0\}$ is not contained in a line $L$ of the projective space $\H_\Q\backslash \{0\}$, this morphism is non-degenerate. By [[*op.cit.*]{} ]{}Theorem 5.4.1, ([[*cf.*]{} ]{}also [@Faure1] Theorem 3.1) there exists a semi-linear map inducing this morphism but this gives a contradiction since there is no field homomorphism from $\F_p$ to $\Q$. Thus one has $\ffp=\{0\}$ and $\rho(n)=1$ for all $n\in \Z\backslash \{0\}$. One can then extend $\rho$ to a morphism $$\rho'\,:\, \Q[T]\to \H_\Q\,, \ \ \rho'(P(T))=\rho(nP(T))\qqq n \neq 0, \ nP(T)\in \Z[T].$$ By Corollary \[main1\] one then gets a unique ring homomorphism $\tilde\rho: \Q[T]\to \A_\Q$ which lifts $\rho'$. This gives a unique adèle $a\in \A_\Q$ such that holds. The above result shows that there are two different types of “functions" on $\Spec(\H_\Q)$: functions corresponding to adèles (which recover the algebraic information of the ring $\A_\Q$) and functions factorizing through $\Q[e_Z]/\Q^\times$. These latter functions should be thought of as [*“two-valued"*]{} functions, in analogy with the case of continuous functions on a compact space $X$. Indeed, the range of $f\in C(X,\R)$ has two elements if and only if the subalgebra of $C(X,\R)$ generated by $f$ is of the form $\R[e]$ for some idempotent $e\in C(X,\R)$. In the above case of $\Spec(\H_\Q)$ the subset $Z\subset \Sigma_\Q\simeq \Spec(\H_\Q)$ and its complement specify the partition of $\Spec(\H_\Q)$ corresponding to the two values of $\rho$. Once this partition is given, the remaining freedom is in the set $\Hom_\han(\Z[T],(\Q\oplus\Q)/\Q^\times)$. We shall not attempt to describe explicitly this set here, but refer to Remark \[sub2\] to show that it contains many elements. Let $\cH$ be a commutative ring, and let $\Delta:\cH\to \cH\otimes_\Z\cH$ be a coproduct. 
Given two ring homomorphisms $\rho_j: \cH\to R$ ($j=1,2$) to a commutative ring $R$, the composition $\rho=(\rho_1\otimes \rho_2)\circ \Delta$ defines a homomorphism $\rho: \cH\to R$. When $R$ is a hyperring, one introduces the following notion Let $(\cH,\Delta)$ be a commutative ring with a coproduct and let $R$ be a hyperring. Let $\rho_j\in \Hom_\han(\cH,R)$, $j=1,2$. One defines $\rho_1\star_\Delta \rho_2$ to be the set of $\rho\in \Hom_\han(\cH,R)$ such that for any $x\in \cH$ and any decomposition $\Delta(x)=\sum x_{(1)}\otimes x_{(2)}$ one has $$\label{equhopf} \rho(x)\in \sum \rho_1(x_{(1)})\rho_2(x_{(2)}) \,.$$ In general, $\rho_1\star_\Delta \rho_2$ can be empty or it may contain several elements. When $\rho_1\star_\Delta \rho_2=\{\rho\}$ consists of a single element, we simply write $\rho_1\star_\Delta \rho_2= \rho $. When $\cH=\Z[T]$, $\Delta^+(T)=T\otimes 1+1\otimes T$ and $\Delta^\times(T)=T\otimes T$, this construction allows one to recover the algebraic structure of the ring of adèles, in terms of functions on $\Spec(\H_\Q)$ ([[*cf.*]{} ]{}Theorem \[functads\]). \[Hopf\] Let $\rho_j=\xi_{a_j}\in \Hom_\han(\Z[T],\H_\Q)$ ($j=1,2$), be the homomorphisms uniquely associated to $a_j\in \A_\Q$ by . Assume that monomials of degree $\leq 2$ in $a_j$ are linearly independent over $\Q$. Then one has $$\label{coprodfine} \rho_1\star_{\Delta^+} \rho_2=\xi_{a_1+a_2}\,, \ \ \rho_1\star_{\Delta^\times} \rho_2=\xi_{a_1a_2}.$$ Let $\tilde \rho_j\in \Hom(\Z[T],\A_\Q)$, $\tilde \rho_j(P(T))=P(a_j)$ be the lift of $\rho_j$. Then $\rho^+=(\tilde\rho_1\otimes \tilde\rho_2)\circ \Delta^+$ fulfills the equation $$\rho^+(x)=\sum \tilde\rho_1(x_{(1)})\tilde\rho_2(x_{(2)})$$ for any decomposition $\Delta^+(x)=\sum x_{(1)}\otimes x_{(2)}$. Thus, since $\xi_{a_1+a_2}=\pi\circ \rho^+$ (with $\pi:\A_\Q\to \H_\Q$ the projection) one gets, using , that $\xi_{a_1+a_2}\in \rho_1\star_{\Delta^+} \rho_2$. In a similar manner one obtains $\xi_{a_1a_2}\in \rho_1\star_{\Delta^\times} \rho_2$. It remains to show that they are the only solutions. We do it first for $\Delta^\times$. Let $\rho\in \rho_1\star_{\Delta^\times} \rho_2$. Since $\Delta^\times(T)=T\otimes T$, gives $\rho(T)=a_1a_2\in \H_\Q$. Similarly $\rho(T^2)=a_1^2a_2^2\in \H_\Q$. Thus since $1,a_1a_2, a_1^2a_2^2$ are linearly independent over $\Q$, the range of $\rho$ is of $\kras$-dimension $\geq 3$ and by Theorem \[functads\] there exists $a\in \A_\Q$ such that $\rho=\xi_a$. Moreover $a=\lambda a_1a_2$ for some $\lambda\in\Q^\times$ and it remains to show that $\lambda=1$. One has $$\Delta^\times(1+T)=(1-T)\otimes (1- T)+T\otimes 1+1\otimes T$$ Thus shows that $1+a$ belongs to $ (1-a_1)(1-a_2)\Q^\times+ a_1 \Q^\times+a_2\Q^\times. $ But by $\Q$-linear independence the only element of this set which is of the form $1+\lambda a_1 a_2$ is $1+a_1a_2$ which implies that $\lambda=1$ and thus that $\rho=\xi_{a_1a_2}$. Let now $\rho\in \rho_1\star_{\Delta^+} \rho_2$. One has $$\Delta^+(T)=T\otimes 1+1\otimes T=(1+T)\otimes (1+T)-1\otimes 1-T\otimes T$$ which implies by that $\rho(T)=\lambda(a_1+a_2)$ for some $\lambda\in \Q^\times$. Since $\rho(T^2)\in a_1^2\Q^\times+ a_1a_2 \Q^\times+a_2^2\Q^\times$ the range of $\rho$ is of $\kras$-dimension $\geq 3$ and by Theorem \[functads\] there exists $a\in \A_\Q$ such that $\rho=\xi_a$. 
One has $a=\lambda(a_1+a_2)$ and to show that $\lambda=1$ one proceeds as above using $$\Delta^+(1+T)=(1+T)\otimes (1+T)-T\otimes T\,.$$ The groupoid $P(\ads)$ of prime elements of $\ads$ {#space} -------------------------------------------------- The notion of [*principal*]{} prime ideal in a hyperring is related to the following notion of prime element \[prime\] In a hyperring $R$, an element $a\in R$ is said to be [*prime*]{} if the ideal $aR$ is a prime ideal. We let $P(\ads)$ be the set of prime elements of the hyperring $\ads=\A_\K/\K^\times$, for $\K$ a global field. \[classprime\] $1)$ Any principal prime ideal of $\ads$ is equal to $\ffp_w = a\H_\K$ for a place $w\in\Sigma(\K)$ uniquely determined by $a\in\H_\K$. $2)$ The group $C_\K=\A_\K^\times/\K^\times$ acts transitively on the generators of the principal prime ideal $\ffp_w$. $3)$ The isotropy subgroup of any generator of the prime ideal $\ffp_w$ is $\K_w^\times\subset C_\K$. $1)$  Let $\ffp=a\ads$ be a prime principal ideal in $\ads$. We consider the support of $a$ [[*i.e.*]{} ]{}the set $ S=\{v\in \Sigma(\K)\,|\, a_v\neq 0\}$. We shall prove that the characteristic function $1_S$ generates the same ideal as $a$, [[*i.e.*]{} ]{}$a\ads=1_S \ads$, where $1_S\in \ads$ is the class of the adèle $\alpha= (\alpha_v)$, with $\alpha_v=1$ for $v\in S$ and $\alpha_v=0$ otherwise. For each $v\in \Sigma(\K)$, let $\cO^\times_v$ be the multiplicative group $ \cO^\times_v=\{x\in \K_v\,:\,|x|_v=1\}$ of elements in $\K_v$ of norm $1$. When the place $v$ is non-archimedean this is the group of invertible elements of the local ring $\cO_v$. We let $a=(a_v)$ be an adèle in the given class and first show that there exists a finite subset $F\subset \Sigma(\K)$ such that $$\label{contra} a_v\in \cO^\times_v\qqq v\in S \,, \ v\notin F.$$ Otherwise, there would exist an infinite subset $Y\subset S $ such that $$|a_v|_v<1\qqq v\in Y.$$ Let then $Y'$ be an infinite subset of $Y$ whose complement in $Y$ is infinite. Consider the adèles $y,z\in \A_\K$ defined by $$y_v=\left\{ \begin{array}{ll} 1, & v\notin Y' \\ a_v, & v\in Y' \end{array} \right. \,, \ \ z_v=\left\{ \begin{array}{ll} a_v, & v\notin Y' \\ 1, & v\in Y'. \end{array} \right.$$ By construction $yz=a$. The same equality holds in $\ads$. Since the ideal $\ffp=a\ads$ is prime, its complement in $\ads$ is multiplicative and thus $y\in \ffp$ or (and) $z\in \ffp$. However $y\notin a\A_\K$ since $|y_v|_v> |a_v|_v$ on the complement of $Y'$ in $Y$ which is an infinite set of places. Similarly $z\notin a\A_\K$ since $|z_v|_v> |a_v|_v$ on $Y'$. Thus one gets a contradiction and this proves . In fact one may assume, without changing the principal ideal $\ffp=a\ads$, that $$\label{invert} a_v\in \cO^\times_v\qqq v\in S$$ Since the ideal $\ffp=a\ads$ is non-trivial the complement $Z$ of $S $ in $\Sigma(\K)$ is non-empty. Assume that $Z$ contains two places $v_1\neq v_2$. Let $1_{v}$ be the (class of the) adèle all of whose components vanish except at the place $v$ where its component is $1\in\K_v$. Then one has $1_{v_j}\notin \ffp=a\ads$, but the product $1_{v_1}1_{v_2}=0\in \ffp=a\ads$ which contradicts the fact that $\ffp=a\ads$ is prime. This shows that $Z=\{w\}$ for some $w\in\Sigma(\K)$ and that $a\ads=\ffp_w$ using . $2)$  Let $a,b\in \ads$ be two generators of the ideal $\ffp_w$. Let $\alpha,\beta\in \A_\K$ be adèles in the classes of $a$ and $b$ respectively. Then by , the equality $$j_v=\beta_v/\alpha_v\qqq v\neq w\,, \ \ j_w=1$$ defines an idèle such that $j\alpha=\beta$. 
This shows that the group $C_\K$ acts transitively on the generators of $\ffp_w$. $3)$  Let $a\in \ads$ be a generator of the principal ideal $\ffp_w$ and let $\alpha$ be an adèle in the class of $a$. For $g\in C_\K$ the equality $ga=a$ in $\ads$ means that for $j$ an idèle in the class of $g$, there exists $q\in \K^\times$ such that $j\alpha=q\alpha$. In other words one has $q^{-1}j\alpha=\alpha$. Since all components $\alpha_v$ are non-zero except at $v=w$ one thus gets that all components of $q^{-1}j$ are equal to $1$ except at $w$. The component $j_w$ can be arbitrary and thus the isotropy subgroup of any generator of $\ffp_w$ is $\K_w^\times\subset C_\K$. On $P(\ads)$ we define a groupoid law given by multiplication. More precisely, \[groupoid\] Let $\K$ be a global field and $s:P(\ads)\to \Sigma_\K$ the map which associates to a prime element of $\ads$ the principal prime ideal of $\ads$ it generates. Then $P(\ads)$ with range and source maps equal to $s$ and partial product given by the product in the hyperring $\ads$, is a groupoid. Since the source and range maps coincide, one needs simply to show that each fiber $s^{-1}(v)$ is a group. For each place $v\in \Sigma_\K$, there is a unique generator $p_v$ of the prime ideal $\ffp_v$ which fulfills $p_v^2=p_v$. It is given by the class of the characteristic function $1_S$ where $S$ is the complement of $v$ in $\Sigma_\K$. Any other element of $s^{-1}(v)$ is, by Theorem \[classprime\], of the form $\gamma=up_v$ where $u\in C_\K/\K_v^\times$ is uniquely determined. The product in $s^{-1}(v)$ corresponds to the product in the group $C_\K/\K_v^\times$. Note that the product $p_1p_2$ of two prime elements is a prime element only when these factors generate the same ideal. The groupoids $\Pi_1^{\rm ab}(X)'$ and $P(\ads)$ in characteristic $p\neq 0$ ---------------------------------------------------------------------------- Let $\K$ be a global field of characteristic $p> 0$ [[*i.e.*]{} ]{}a function field over a constant field $\F_q\subset \K$. We fix a separable closure $\bar \K$ of $\K$ and let $\K^{\rm ab}\subset \bar \K$ be the maximal abelian extension of $\K$. Let $\bar \F_q\subset \bar \K$ be the algebraic closure of $\F_q$. We denote by $\cW^{\rm ab}\subset {\rm Gal}( \K^{\rm ab}:\K)$ the abelianized Weil group, [[*i.e.*]{} ]{}the subgroup of elements of ${\rm Gal}( \K^{\rm ab}:\K)$ whose restriction to $\bar \F_q$ is an integral power of the Frobenius. Let ${\rm Val}(\K^{\rm ab})$ be the space of all valuations of $\K^{\rm ab}$. By restriction to $\K\subset \K^{\rm ab}$ one obtains a natural map $$\label{mapp} p\;:\; {\rm Val}(\K^{\rm ab})\to \Sigma_\K\,, \ \ p(v)=v|_\K.$$ By construction, the action of ${\rm Gal}( \K^{\rm ab}:\K)$ on ${\rm Val}(\K^{\rm ab})$ preserves the map $p$. \[val\]Let $w\in \Sigma_\K$. $(1)$ The abelianized Weil group $\cW^{\rm ab}$ acts transitively on the fiber $p^{-1}(w)$ of $p$. $(2)$ The isotropy subgroup of an element in the fiber $p^{-1}(w)$ coincides with the abelianized local Weil group $\cW^{\rm ab}_w\subset \cW^{\rm ab}$. This follows from standard results of class field theory but we give the detailed proof for completeness. Let $v\in {\rm Val}(\K^{\rm ab})$ with $p(v)=w$. By construction, the completion $\K^{\rm ab}_v$ of $\K^{\rm ab}$ at $v$ contains the local field $\K_w$, completion of $\K$ at $w$. The subfield $\K_w \vee \K^{\rm ab}$ of $\K^{\rm ab}_v$ generated by $\K^{\rm ab}$ and $\K_w$ coincides with the maximal abelian extension $\K_w^{\rm ab}$ of $\K_w$. 
One has the translation isomorphism ([[*cf.*]{} ]{}[@Bourbaki] Theorem V, A.V.71) $$\label{galiso} {\rm Gal}((\K_w \vee \K^{\rm ab}):\K_w) \cong {\rm Gal}( \K^{\rm ab}:(\K_w\cap \K^{\rm ab}))\subset {\rm Gal}( \K^{\rm ab}:\K)$$ obtained by restricting an automorphism to $\K^{\rm ab}$. The subgroup ${\rm Gal}( \K^{\rm ab}:(\K_w\cap \K^{\rm ab}))\subset {\rm Gal}( \K^{\rm ab}:\K)$ is the isotropy subgroup $\Gamma_v$ of the valuation $v$ [[*i.e.*]{} ]{}an element $g\in {\rm Gal}( \K^{\rm ab}:\K)$ fixes $v$ if and only if $g$ fixes pointwise the subfield $\K_w\cap \K^{\rm ab}\subset \K^{\rm ab}$. Indeed, if $g$ fixes $v$ it extends uniquely by continuity to an automorphism of $\K^{\rm ab}_v$. This automorphism is the identity on $\K$ and hence also on the completion $\K_w$ of $\K$ at $w$ and thus on $\K_w\cap \K^{\rm ab}$. Next, let $g\in {\rm Gal}( \K^{\rm ab}:\K)$ be the identity on $\K_w\cap \K^{\rm ab}$. The fact that $g$ fixes $v$ follows from . Indeed, this shows that any element $g\in {\rm Gal}( \K^{\rm ab}:(\K_w\cap \K^{\rm ab}))$ is the restriction of an automorphism in ${\rm Gal}((\K_w \vee \K^{\rm ab}):\K_w)$ and preserves $v$ since the valuation $w$ of the local field $\K_w$ extends uniquely to finite algebraic extensions of $\K_w$, and thus to $\K_w \vee \K^{\rm ab}$, by uniqueness of the maximal compact subring. $(1)$ Let us check that the abelianized Weil group $\cW^{\rm ab}$ acts transitively on the valuations in the set $p^{-1}(w)$. The Galois group ${\rm Gal}( \K^{\rm ab}:\K)$ acts transitively on $p^{-1}(w)$. Indeed the space of valuations extending $w$ is by construction the projective limit of the finite sets of valuations extending $w$ over finite algebraic extensions of $\K$. The Galois group ${\rm Gal}( \K^{\rm ab}:\K)$ is a compact profinite group which acts transitively on the finite sets of valuations extending $w$ over finite algebraic Galois extensions of $\K$ ([@Rosen], § 9 Proposition 9.2). Thus it acts transitively on the fiber $p^{-1}(w)$. It remains to show that the transitivity of the action continues to hold for $\cW^{\rm ab}\subset {\rm Gal}( \K^{\rm ab}:\K)$. It is enough to show that the orbit $\cW^{\rm ab} v$ of a place $v\in p^{-1}(w)$ is the same as its orbit under ${\rm Gal}( \K^{\rm ab}:\K)$. This is a consequence of the co-compactness of the isotropy subgroup $\Gamma_v\cap \cW^{\rm ab}\subset \cW^{\rm ab}$ but it is worthwhile to describe what happens in more details. In the completion process from $\K$ to $\K_w$, the maximal finite subfield (constant field) passes from $\F_q$ to a finite extension $\F_{q^\ell}$. Let $\K_w^{\rm un}\subset \K_w^{\rm ab}$ be the largest unramified extension of $\K_w$ inside $\K^{\rm ab}$. It is obtained by adjoining to $\K_w$ all roots of unity of order prime to $p$ which are not already in the constant field $F_{q^\ell}$ of $\K_w$. One has ([@Weil] VI, [@Tate] Chapter VII) $${\rm Gal}(\K_w^{\rm ab}:\K_w^{\rm un}) \cong {\rm Gal}( \K^{\rm ab}:(\K_w^{\rm un}\cap \K^{\rm ab})) \subset {\rm Gal}( \K^{\rm ab}:(\K_w\cap \K^{\rm ab}))$$ The extension $\K_w^{\rm un}\cap \K^{\rm ab}$ contains $\bar\F_q\otimes_{\F_q}\K$. 
This determines the following diagram of inclusions of fields $$\begin{gathered} \label{functCFTmap} \,\hspace{25pt}\raisetag{-47pt} \xymatrix@C=25pt@R=25pt{ \K_w^{\rm ab}= \K_w \vee \K^{\rm ab}\subset (\K^{\rm ab})_v & \K^{\rm ab} \ar[l]^-{ }& \\ \K_w^{\rm un} \ar[u]^-{ } & \K_w^{\rm un}\cap \K^{\rm ab}\ar[u]^{ }\ar[l]^{ }&\bar\F_q\otimes_{\F_q}\K\ar[l]^{ }\\ \K_w \ar[u]^-{ } & \K_w\cap \K^{\rm ab}\ar[u]^{ }\ar[l]_{ }&\K\ar[u]^{ }\ar[l]^{ }\nonumber \\ }\hspace{140pt}\end{gathered}$$ The topological generator of ${\rm Gal}(\K_w^{\rm un}:\K_w)$ induces the $\ell$-th power $\sigma^\ell$ of the Frobenius automorphism on $\K'$ and the same holds for the topological generator of ${\rm Gal}((\K_w^{\rm un}\cap \K^{\rm ab}):(\K_w\cap \K^{\rm ab}))$. The abelianized Weil group $\cW^{\rm ab}\subset {\rm Gal}(\K^{\rm ab}:\K)$ is defined by $$\cW^{\rm ab}=\rho^{-1}\{\sigma^\Z\}\,, \ \rho\,:\, {\rm Gal}(\K^{\rm ab}:\K)\to {\rm Gal}(\K':\K)\,, \ \sigma^\Z\subset {\rm Gal}(\K':\K)$$ Thus, the statement that the group $\cW^{\rm ab}\subset {\rm Gal}(\K^{\rm ab}:\K)$ acts transitively on the fiber $p^{-1}(w)$ is equivalent to the fact that the dense subgroup $\Z\subset\hat\Z$ acts transitively in the finite space $\hat\Z/\ell\hat\Z$. $(2)$ follows from $(1)$ and the remarks made at the beginning of the proof. We now implement the geometric language. Given an extension $E$ of $\bar \F_q$ of transcendence degree $1$, it is a well-known fact that the space of valuations of $E$, ${\rm Val}(E)$, coincides with the set of (closed) points of the unique projective nonsingular algebraic curve with function field $E$. Moreover, one also knows ([[*cf.*]{} ]{}[@Hart] Corollary 6.12) that the category of nonsingular projective algebraic curves and dominant morphisms is equivalent to the category of function fields of dimension one over $\bar \F_q$. Given a global field $\K$ of positive characteristic $p>1$ with constant field $\F_q$, one knows that the maximal abelian extension $\K^{\rm ab}$ of $\K$ is an inductive limit of extensions $E$ of $\bar \F_q$ of transcendence degree $1$. Thus the space ${\rm Val}(\K^{\rm ab})$ of valuations of $\K^{\rm ab}$, endowed with the action of the abelianized Weil group $\cW^{\rm ab}\subset{\rm Gal}(\K^{\rm ab}:\K)$, inherits the structure of a projective limit of projective nonsingular curves. This construction determines the maximal abelian cover $\pi:X^{\rm ab}\to X$ of the non singular projective curve $X$ over $\F_q$ with function field $\K$. Let $\pi:\tilde X\to X$ be a Galois covering of $X$ with Galois group $W$. The fundamental groupoid of $\pi$ is by definition the quotient $\Pi_1=(\tilde X\times \tilde X)/W$ of $\tilde X\times \tilde X$ by the diagonal action of $W$ on the self-product. The (canonical) range and source maps: $r$ and $s$ are defined by the two projections $$\label{rs} r(\tilde x,\tilde y)=x\,, \ s(\tilde x,\tilde y)=y.$$ Let us consider the subgroupoid of [*loops*]{} [[*i.e.*]{} ]{}$$\label{sub} \Pi_1'=\{\gamma\in \Pi_1\mid r(\gamma)=s(\gamma)\}.$$ Each fiber of the natural projection $r=s:\Pi_1'\to X$ is a group. Moreover, if $W$ is an abelian group one defines the following natural action of $W$ on $\Pi_1'$ $$\label{actionW} w\cdot (\tilde x,\tilde y)=(w\tilde x,\tilde y)=(\tilde x,w^{-1}\tilde y).$$ We apply these results to the maximal abelian cover $\pi:X^{\rm ab}\to X$ of the non singular projective curve $X$ over $\F_q$ with function field $\K$. We view $X$ as a scheme over $\F_q$. 
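Before stating the comparison result, it is convenient to record the following elementary description of the fibers of $\Pi_1'$, which is used implicitly in the proof below (a routine verification under the transitivity assumption): if $W$ acts transitively on the fiber $\pi^{-1}(x)$ of a point $x\in X$, with isotropy subgroup $W_{\tilde x}\subset W$ at $\tilde x\in\pi^{-1}(x)$, then the map $$W/W_{\tilde x}\to \{\gamma\in \Pi_1'\mid r(\gamma)=x\}\,, \ \ w\mapsto \hbox{class of } (w\tilde x,\tilde x)$$ is a bijection (it is well defined and injective because $W$ is abelian), the composition of loops corresponds to the product in $W/W_{\tilde x}$, and the action of $W$ defined above corresponds to translation. 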
In this case, we let $W=\cW^{\rm ab}\subset{\rm Gal}(\K^{\rm ab}:\K)$ be the abelianized Weil group. We let $\Pi_1^{\rm ab}(X)$ be the fundamental groupoid of this maximal abelian cover and $\Pi_1^{\rm ab}(X)'\subset \Pi_1^{\rm ab}(X)$ the loop groupoid. Since the two projections from $X^{\rm ab}\times X^{\rm ab}$ to $X$ are $W$-invariant, $\Pi_1^{\rm ab}(X)'$ is the quotient of the fibered product $X^{\rm ab}\times_X X^{\rm ab}$ by the diagonal action of $W$. We identify the closed points of $X^{\rm ab}\times_X X^{\rm ab}$ with pairs of valuations of $\K^{\rm ab}$ whose restrictions to $\K$ are the same. We obtain the following refinement of Proposition 8.13 of [@CCM2]. \[ccm2prop\] Let $\K$ be a global field of characteristic $p\neq 0$, and let $X$ be the corresponding non-singular projective algebraic curve over $\F_q$. $\bullet$ The loop groupoid $\Pi_1^{\rm ab}(X)'\subset \Pi_1^{\rm ab}(X)$ is canonically isomorphic to the groupoid $P(\ads)$ of prime elements of the hyperring $\ads=\A_\K/\K^\times$. $\bullet$ The above isomorphism $\Pi_1^{\rm ab}(X)'\simeq P(\ads)$ is equivariant for the action of $W$ on $\Pi_1^{\rm ab}(X)'$ and the action of the units $\ads^\times=C_\K$ on prime elements by multiplication. Under the class-field theory isomorphism $W=\cW^{\rm ab}\sim C_\K$, the local Weil group at a place $w\in\Sigma_\K$ corresponds to the subgroup $\K_w^\times\subset C_\K$. By applying Proposition \[val\], this shows that given two elements $v_j\in{\rm Val}(\K^{\rm ab})$ above the same place $w\in \Sigma_\K$, there exists a unique element $\gamma(v_1,v_2)\in C_\K/\K_w^\times$ such that (under the class field theory isomorphism) $$\label{move} \gamma(v_1,v_2)(v_2)=v_1.$$ For a place $v\in \Sigma_\K$ we let $p_v\in P(\ads)$ be the unique idempotent element ([[*i.e.*]{} ]{}$p^2_v=p_v$) which generates the ideal $\ffp_v$. We define the map ([[*cf.*]{} ]{}) $$\label{groupoidmap} \varphi: \Pi_1^{\rm ab}(X)'\to P(\ads)\,, \ \varphi(v_1,v_2)=\gamma(v_1,v_2)p_w \qqq v_j\in p^{-1}(w).$$ The map $\varphi$ is well defined since by Theorem \[classprime\] the isotropy subgroup of points above $w$ in $P(\ads)$ is $\K_w^\times$ and one has $\gamma(uv_1,uv_2)=\gamma(v_1,v_2)$ for all $u\in \cW^{\rm ab}\sim C_\K$. One also checks the equivariance $$\label{equiv} \varphi(u\cdot \alpha)=u\varphi(\alpha) \qqq u \in \cW^{\rm ab}\sim C_\K.$$ Finally, the equality $$\label{complaw} \gamma(v_1,v_2)\gamma(v_2,v_3)=\gamma(v_1,v_3)$$ together with $ap_vbp_v=abp_v$ show that the map $\varphi$ is a morphism of groupoids which is also bijective over each place in $\Sigma_\K$, by Proposition \[val\] and Theorem \[classprime\]. Thus $\varphi$ is an isomorphism. [99]{} A. Beutelspacher, [*Projective planes*]{}. Handbook of incidence geometry, 107–136, North-Holland, Amsterdam, 1995. N. Bourbaki [*Algebra II. Chapters 4–7*]{}. Translated from the 1981 French edition by P. M. Cohn and J. Howie. Reprint of the 1990 English edition. Elements of Mathematics (Berlin). Springer-Verlag, Berlin, 2003. J. W. S. Cassels, A. Fröhlich [*Algebraic number theory*]{}, Academic Press London (1967). A. Connes, C. Consani [*On the notion of geometry over $\F_1$*]{}, to appear in Journal of Algebraic Geometry; arXiv08092926v2 \[mathAG\]. A. Connes, C. Consani [*Schemes over $\F_1$ and zeta functions*]{}, to appear in Compositio Mathematica; arXiv:0903.2024v3 \[mathAG,NT\] A. Connes, C. Consani [*Characteristic $1$, entropy and the absolute point*]{}; arXiv:0911.3537v1 \[mathAG\] A. Connes, C. Consani, M. 
Marcolli, [*The Weil proof and the geometry of the adeles class space*]{}, to appear in “Algebra, Arithmetic and Geometry – Manin Festschrift”, Progress in Mathematics, Birkhäuser (2008); arXiv:math/0703392. A. Connes, C. Consani, M. Marcolli, [*Fun with $\F_1$*]{}, Journal of Number Theory 129 (2009) 1532–1561. P. Corsini, V. Leoreanu-Fotea, [*Applications of hyperstructure theory*]{}. Advances in Mathematics (Dordrecht), 5. Kluwer Academic Publishers, Dordrecht, 2003. B. Davvaz, A. Salasi, [*A realization of hyperrings*]{}. (English summary) Comm. Algebra 34 (2006), no. 12, 4389–4400. B. Davvaz, V. Leoreanu-Fotea, [*Hyperring theory and applications*]{}. International Academic Press 2007. A. Deitmar [*Schemes over F1*]{}, in Number Fields and Function Fields – Two Parallel Worlds. Ed. by G. van der Geer, B. Moonen, R. Schoof. Progr. in Math., vol. 239, 2005. M. Demazure, P. Gabriel [*Groupes algébriques*]{}, Masson & CIE, Éditeur Paris 1970. E. Ellers, H. Karzel, [*Involutorische Geometrien*]{}. (German) Abh. Math. Sem. Univ. Hamburg 25 1961 93–104. E. Ellers, H. Karzel, [*Involutory incidence spaces*]{}. J. Geometry 1 (1971) 117–126. C. A. Faure, A. Frölicher, [*Morphisms of projective geometries and semilinear maps*]{}. Geom. Dedicata 53 (1994), no. 3, 237–262. C. A. Faure, [*An elementary proof of the Fundamental Theorem of Projective Geometry*]{}. Geom. Dedicata 90 (2002) 145–151. M. Hall, [*Cyclic projective planes*]{}. Duke Math. J. 14, (1947), 1079–1090. R. Hartshorne [*Algebraic Geometry*]{}, Graduate Texts in Mathematics 52, Springer-Verlag, New York Heidelberg Berlin 1977. M. Hestenes, [*Singer groups*]{}. Canad. J. Math. 22 1970 492–513. M. Kapranov and A. Smirnov, [*Cohomology determinants and reciprocity laws*]{} Prepublication. H. Karzel, [*Bericht über projektive Inzidenzgruppen*]{}. (German) Jber. Deutsch. Math.-Verein. 67 1964/1965 Abt. 1, 58–92. H. Karzel, [*Ebene Inzidenzgruppen*]{}. (German) Arch. Math. (Basel) 15 1964 10–17. H. Karzel, [*Normale Fastkörper mit kommutativer Inzidenzgruppe*]{}. (German) Abh. Math. Sem. Univ. Hamburg 28 1965 124–132. H. Karzel and G. Kist, [*Some applications of nearfields*]{}. Proceedings of the Edinburgh Math. Soc. 23 (1980) 129–139. C. Kohls, J. Reid, [*Orders on commutative rings*]{}. Duke Math. J. 33 1966 657–666. M. Krasner, [*Approximation des corps valués complets de caractéristique $p\not=0$ par ceux de caractéristique $0$*]{}, (French) 1957 Colloque d’algèbre supérieure, tenu à Bruxelles du 19 au 22 décembre 1956 pp. 129–206 Centre Belge de Recherches Mathématiques Établissements Ceuterick, Louvain; Librairie Gauthier-Villars, Paris. M. Krasner, [*A class of hyperrings and hyperfields*]{}. Internat. J. Math. Math. Sci. 6 (1983), no. 2, 307–311. N. Kurokawa, H. Ochiai, A. Wakayama, [*Absolute Derivations and Zeta Functions*]{} Documenta Math. Extra Volume: Kazuya Kato's Fiftieth Birthday (2003) 565–584. P. Lescot [*Algèbre absolue*]{} arXiv:0911.1989. J. Lopez Pena, O. Lorscheid [*Mapping $\F_1$-land: An overview of geometries over the field with one element*]{} arXiv:0909.0069. R. C. Lyndon, [*Relation algebras and projective geometries*]{}. Michigan Math. J. 8 (1961) 21–28. Y. I. Manin, [*Lectures on zeta functions and motives (according to Deninger and Kurokawa)*]{} Columbia University Number-Theory Seminar (1992), Astérisque No. 228 (1995), 4, 121–163. F. Marty, [*Sur une généralisation de la notion de groupe*]{} in Huitième Congrès des Mathématiciens, Stockholm 1934, 45–59. W. 
Prenowitz [*Projective Geometries as Multigroups*]{} American Journal of Mathematics, Vol. 65, No. 2 (1943), pp. 235–256 R. Procesi–Ciampi, R. Rota, [*The hyperring spectrum*]{}. Riv. Mat. Pura Appl. No. 1 (1987), 71–80. M. Rosen, [*Number theory in function fields*]{}, Graduate Texts in Mathematics 210, Springer, New-York (2002). J. Singer [*A Theorem in Finite Projective Geometry and Some Applications to Number Theory*]{}, Transactions of the American Mathematical Society, Vol. 43, No. 3 (May, 1938), pp. 377-385 C. Soulé, [*Les variétés sur le corps à un élément*]{}. Mosc. Math. J. 4 (2004), no. 1, 217–244. D. Stratigopoulos, [*Sur les hypercorps et les hyperanneaux*]{} (French) \[On hyperfields and hyperrings\] Algebraic hyperstructures and applications (Xanthi, 1990), 33–53, World Sci. Publ., Teaneck, NJ, 1991. D. Stratigopoulos, [*Hyperanneaux non commutatifs: Hyperanneaux, hypercorps, hypermodules, hyperespaces vectoriels et leurs propriétés élémentaires*]{}. (French) C. R. Acad. Sci. Paris Sér. A-B 269 (1969) A489–A492. K. Thas, D. Zagier, [*Finite projective planes, Fermat curves, and Gaussian periods*]{}. J. Eur. Math. Soc. (JEMS) 10 (2008), no. 1, 173–190. B. Töen, M. Vaquié [*Au dessous de $\text{Spec}(\Z)$*]{} (preprint) arXiv0509684v4 to appear in K-theory. A. Wagner, [*On perspectivities of finite projective planes*]{}. Math. Z 71 (1959) 113–123 A. Wagner, [*On collineation groups of projective spaces. I.*]{} Math. Z. 76 1961 411–426 H. Wahling, [*Darstellung zweiseitiger Inzidenzgruppen durch Divisionsalgebren*]{}. (German) Abh. Math. Sem. Univ. Hamburg 30 (1967) 220–240. A. Weil, [*Sur la théorie du corps de classes*]{} J. math. Soc. Japan, t. 3, 1951, p. 1-35. [^1]: as soon as $H$ has more than two elements [^2]: This is automatic if the $\dim_\kras\H$ is $>3$ [^3]: a morphism is non-degenerate when its range is not contained in a line [^4]: Note that this holds even for $p=2$ even though $\F_2[T]/\F_2^\times$ is not a $\kras$-vector space.
{ "pile_set_name": "ArXiv" }
--- author: - | $^1$, Marco Regis$^{2}$, Paolo Marchegiani$^{1}$, Geoff Beck$^{1}$, Rainer Beck$^{3}$, Hannes Zechlin$^{2}$, Andrei Lobanov$^{3}$, Dieter Horns$^{4}$\ $^1$School of Physics, University of the Witwatersrand, Johannesburg, South Africa\ $^2$Dipartimento di Fisica, Università degli Studi di Torino and INFN-Sezione di Torino,\ via P. Giuria, 1, 10125 Torino, Italy\ $^3$Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany\ $^4$Institut für Experimentalphysik, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany\ \ E-mail: title: Probing the nature of Dark Matter with the SKA --- Unveiling the nature of Dark Matter =================================== The Square Kilometre Array (SKA) is the most ambitious radio telescope ever planned, and it is a unique multi-disciplinary experiment. Even though the SKA, in its original conception, has been dedicated to constraining the fundamental physics of dark energy, gravitation and magnetism, much more scientific investigation could be done with its configuration: the exploration of the nature of Dark Matter is one of the most important additional scientific themes. Among the viable candidates for a cosmologically relevant DM species, the leading one is the lightest particle of the minimal supersymmetric extension of the Standard Model (MSSM, Jungman et al. 1996), plausibly the neutralino $\chi$, with a mass $M_{\chi}$ in the range between a few GeV and several TeV. Information on the nature and physical properties of the neutralino DM can be obtained by studying the astrophysical signals of its interaction/annihilation in the halos of cosmic structures. These signals (see Colafrancesco et al. 2006, 2007 for details) involve, in the case of a $\chi$ DM, emission over a wide range of frequencies, from radio to $\gamma$-rays (see Fig. \[fig:multiflux\] for a DM spectral energy distribution (SED) in a typical dwarf galaxy). Neutral pions produced in $\chi \chi$ annihilation decay promptly via $\pi^0 \to \gamma \gamma$ and generate most of the continuum photon spectrum at energies $E \gtrsim 1$ GeV. Secondary electrons are produced through various prompt generation mechanisms and by the decay of charged pions, $\pi^{\pm}\to \mu^{\pm} + \nu_{\mu}(\bar{\nu}_{\mu})$, with $\mu^{\pm}\to e^{\pm} + \bar{\nu}_{\mu}(\nu_{\mu}) + \nu_e (\bar{\nu}_e)$. The different composition of the $\chi\chi$ annihilation final state will in general affect the form of the electron spectrum. The time evolution of the secondary electron spectrum is described by the transport equation: $$\frac{\partial n_e}{\partial t} = \nabla \left[ D \nabla n_e\right] + \frac{\partial}{\partial E} \left[ b_e(E) n_e \right]+ Q_e(E,r)\;, \label{diffeq}$$ where $Q_e(E,r)$ is the $e^{\pm}$ source spectrum, $n_e(E,r)$ is the $e^{\pm}$ equilibrium spectrum (at each fixed time) and $b_e\equiv -dE/dt$ is the $e^{\pm}$ energy loss per unit time, $b_e =b_{ICS} + b_{synch} + b_{brem} + b_{Coul}$ (see Colafrancesco et al. 2006 for details). The diffusion coefficient $D$ in eq.(\[diffeq\]) sets the amount of spatial diffusion for the secondary electrons: it turns out that diffusion can be neglected in galaxy clusters while it is relevant on galactic and sub-galactic scales (see discussion in Colafrancesco et al. 2006, 2007). 
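When spatial diffusion can be neglected (the cluster-scale regime mentioned above), the quasi-stationary limit of eq.(\[diffeq\]) reduces to $n_e(E,r)=b_e(E)^{-1}\int_E^{M_\chi} Q_e(E',r)\,dE'$. The snippet below is a minimal numerical sketch of this solution, not the full computation of Colafrancesco et al. (2006): the injection spectrum and the loss coefficients are rough placeholders chosen only to make the example self-contained.

```python
import numpy as np

M_CHI = 60.0  # illustrative WIMP mass [GeV], as in the benchmark used in the text

def source_Q(E):
    """Schematic e^+- injection spectrum per annihilation dN/dE [GeV^-1].
    Placeholder: soft power law with a cut-off at M_chi (realistic b-bbar or
    tau+tau- spectra would be taken from dedicated particle-physics tables)."""
    E = np.asarray(E, dtype=float)
    return np.where(E < M_CHI, E ** -1.5 * (1.0 - E / M_CHI), 0.0)

def b_loss(E, B_muG=5.0):
    """Schematic energy-loss rate b_e(E) [GeV/s]: synchrotron + ICS on the CMB,
    both scaling as E^2; Coulomb and bremsstrahlung terms are omitted here."""
    return (2.5e-18 * B_muG ** 2 + 2.5e-17) * E ** 2

def n_equilibrium(E_grid, B_muG=5.0):
    """Quasi-stationary spectrum with diffusion neglected:
    n_e(E) = (1 / b_e(E)) * Integral_E^{M_chi} Q_e(E') dE'."""
    n_e = np.zeros_like(E_grid)
    for i, E in enumerate(E_grid):
        Ep = np.linspace(E, M_CHI, 400)
        n_e[i] = np.trapz(source_Q(Ep), Ep) / b_loss(E, B_muG)
    return n_e

if __name__ == "__main__":
    E = np.logspace(-1, np.log10(M_CHI), 50)  # 0.1 GeV ... M_chi
    print(n_equilibrium(E)[:5])
```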
Under the assumption that the population of high-energy $e^{\pm}$ can be described by a quasi-stationary ($\partial n_e / \partial t \approx 0$) transport equation, the secondary electron spectrum $n_e (E,r)$ reaches its equilibrium configuration mainly due to synchrotron and ICS losses at energies $E \gtrsim 150$ MeV and due to Coulomb losses at lower energies. Secondary electrons eventually produce radiation by synchrotron emission in the magnetized atmosphere of cosmic structures, bremsstrahlung with ambient protons and ions, and ICS of CMB (and other background) photons (and hence an SZ effect, Colafrancesco 2004). These secondary particles also heat the ambient gas by Coulomb collisions. A large amount of efforts have been put in the search for DM indirect signals at $\gamma$-ray energies looking predominantly for two key spectral features: the $\pi^0 \to \gamma \gamma$ decay spectral bump , and the direct $\chi \chi \to \gamma \gamma$ annihilation line emission, with results that are not conclusive yet (e.g., Daylan et al. 2014, Weniger 2012, Doro et al. 2014). The non-detection of signals related to DM annihilation/decay from various astrophysical targets (including observations of dwarf spheroidal galaxies, the Galactic Center, galaxy clusters, the diffuse gamma-ray background emission) is interpreted in terms of constraints on the (self-)annihilation cross-section (or decay time) of the DM particle candidate. Assuming, for instance, a canonical WIMP of $M_{\chi}=100$GeV annihilating to $b$-quarks, stacked observations of dwarf spheroidal galaxies with Fermi-LAT put a constraint of $\langle \sigma V \rangle < 2\times 10^{-25}\,\mathrm{cm}^3\,\mathrm{s}^{-1}$ (95% c.l.) on the thermally-averaged self-annihilation cross-section of DM particles (Ackerman et al. 2014). Hopes of discovering annihilating WIMPs in $\gamma$-rays are relegated to Fermi-LAT successors and the forthcoming CTA experiment (Doro et al. 2013).\ There are, however, also good hopes to obtain relevant information on the nature of DM from radio observations of DM halos on large scales, i.e. from dwarf galaxies to clusters of galaxies. Radio emission signals from Dark Matter annihilation ==================================================== Observations of radio halos produced by DM annihilations are, in principle, very effective in constraining the neutralino mass and composition (see, e.g., Colafrancesco and Mele 2001, Colafrancesco et al. 2006, 2007), under the hypothesis that DM annihilation provides an observable contribution to the radio-halo flux. ![Flux densities for dwarf galaxies ($M = 10^{7}$ M$_{\odot}$), galaxies ($M = 10^{12}$ M$_{\odot}$), and galaxy clusters ($M = 10^{15}$ M$_{\odot}$). The halo profile is NFW, $\langle B \rangle = 5$ $\mu$G, and the annihilation cross section is fixed to the value $\langle \sigma V \rangle \approx 3 \times 10^{-27}$ cm $^3$ s$^{-1}$. Black lines are for dwarf galaxies, red lines for galaxies and green for clusters. Left: WIMP mass is 60 GeV, solid lines are for composition $b\overline{b}$ and dashed lines for $\tau^+\tau^-$. Right: Composition is $b\overline{b}$, solid lines are for WIMP mass $60$ GeV and dashed lines for $500$ GeV. Dash-dotted lines are SKA sensitivity limits for integration times of 30, 240 and 1000 hours (Dewdney et al. 2012). The flux is calculated within the virial radius (from Colafrancesco et al. 
2014).[]{data-label="fig:spec_sig"}](spectral_sig.eps "fig:") ![](spectral_sig2.eps "fig:") The wide range of frequencies probed by the SKA and the variety of achievable observational targets (and in turn of magnetic fields) will allow testing the non-thermal electron spectrum from about 1 GeV to a few hundred GeV, which is the most relevant range for the WIMP search. Figure \[fig:spec\_sig\] displays the predicted spectral differences between various annihilation channels and WIMP masses: note that these differences manifest mainly in low-$\nu$ slope variation for differing annihilation channels and in the high-$\nu$ spectral flattening/steepening for larger/smaller masses. The SKA sensitivity curves for SKA1-LOW and SKA1-MID are taken from Dewdney et al. (2012).\ The surface brightness produced by DM-induced synchrotron emission is heavily affected by diffusion in small scale structures, e.g., dwarf and standard galaxies, while it is less important in large structures, e.g., galaxy clusters (see Colafrancesco et al. 2006-2007 for a detailed discussion). Polarization from DM-induced radio emission is expected at very low fractional levels due to the fact that the DM spatial and velocity distributions are nearly homogeneous and that DM annihilation is mediated by secondary particle production. Therefore, a low polarization level of detectable radio signals in the directions of DM halos would be consistent with the DM origin of such radio emission. Residual high-polarization signals could hence be attributed to astrophysical sources in the direction or within the DM halos, and one could use these signals to infer properties of the magnetic field in these structures (see R. Beck et al. 2015, and F. Govoni et al. 2015). Cosmological evolution of Dark Matter radio emission ---------------------------------------------------- Figure \[fig:evol\_bb60\] displays the evolution of radio emission from DM halos of mass $10^{7}$ M$_{\odot}$, $10^{12}$ M$_{\odot}$, and $10^{15}$ M$_{\odot}$ for a constant magnetic field of $5$ $\mu$G, in accordance with arguments made in Colafrancesco et al. (2014). The annihilation channel is $b\overline{b}$, the mass of the neutralino is 60 GeV and a DM annihilation cross-section $\langle \sigma V \rangle = 3 \times 10^{-27}$ cm$^3$ s$^{-1}$ was adopted. Emission from dwarf galaxy halos ($M \approx 10^{7}$ M$_{\odot}$) is just below the SKA detection threshold for this value of $\langle \sigma V \rangle$ but would be visible at redshift $z \leq 0.01$ for the assumed DM annihilation cross-section. 
This justifies the search for DM-induced radio signals mainly in dwarf galaxies of the local environment, at distances $\simlt 3$ Mpc ($z \simlt 0.0007$). Emission from galactic DM halos ($M \approx 10^{12}$ M$_{\odot}$) is detectable by the SKA out to $z \approx 0.8$ even with the reference value of $\langle \sigma V \rangle$, and can provide a non-detection upper-bound on $\langle \sigma V \rangle$ an order of magnitude below the assumed value even at such high redshifts. Emission from galaxy cluster halos can provide similar constraints but out to higher redshifts $z \simlt 3$. These objects thus offer the option of deep-field observations that can scan a larger fraction of the DM parameter space than the best current data.\ Figure \[fig:evol\_bb60\] also shows that the effects of diffusion are far less significant when observing higher-$z$ objects, again simplifying the modelling and analysis of the SKA observations. Optimal DM laboratories ----------------------- In order to identify the optimal DM laboratories for radio observations we scan a parameter space extending from dwarf galaxies to galaxy clusters over a wide redshift range $z \approx 0-5$. The choice to examine the halos of both large and small structures is crucial, as dwarf spheroidal galaxies are well known to be highly DM dominated but produce faint emissions, while larger structures, though not immaculate test-beds for DM emission, provide substantially stronger fluxes. This indicates that a survey of DM halos with different mass is essential to identify the best detection prospects for future radio telescopes like the SKA. ![Exclusion plot in redshift versus halo mass based on projected SKA (at 1 GHz in Band 1) sensitivity data for the reference value of $\langle \sigma V \rangle = 3 \times 10^{-27}$ cm$^3$ s$^{-1}$ with 30-hour integration time (black lines) and 1000-hour integration time (green lines). $\langle B \rangle = 5$ $\mu$G was adopted. Solid lines are the $1\sigma$ sensitivity exclusion, dashed lines that of $2\sigma$ and dotted lines correspond to $3\sigma$. The yellow dash-dotted line corresponds to 30 hours of integration and 1$\sigma$ confidence with $\langle \sigma V \rangle = 3 \times 10^{-30}$ cm$^3$ s$^{-1}$. An annihilation channel $b\overline{b}$ is assumed with a neutralino mass of 60 GeV. Representative objects with known DM mass are shown for illustrative purposes of DM radio signal detection. The dSph group contains the galaxies: Draco, Sculptor, Fornax, Carina and Sextans. Unlabelled galaxies are: NGC3917, NGC3949 and NGC4010. For very local objects the redshift is estimated from the average distance data. From Colafrancesco et al. 2014.[]{data-label="fig:loop"}](loop_mh_z.eps) Figure \[fig:loop\] shows the redshift-mass exclusion plot obtained by using the SKA sensitivity bound for SKA1 LOW and SKA1 MID (at 1 GHz in Band 1). For each DM halo we obtain the DM halo mass and redshift combination that produces the minimal SKA-detectable fluxes. DM-dominated objects lying above the black and green curves cannot be detected with the SKA1 at the given confidence threshold for $\langle \sigma V \rangle = 3 \times 10^{-27}$ cm$^3$ s$^{-1}$. Objects below a curve are visible to SKA1, and the further below the curve they lie the greater the region of the cross-section parameter space we can explore through the observation of the object. For reference, the yellow dash-dotted line displays the curve given for $\langle \sigma V \rangle = 3 \times 10^{-30}$ cm$^3$ s$^{-1}$ and $1\sigma$ confidence level. 
A few representative known objects (irrespective of their location in the sky) with good estimates of the DM mass are plotted in the $M_{DM}-z$ plane for the sake of illustration of the DM search potential with the SKA.\ **Dwarf galaxies**, given their extreme proximity, provide an excellent test-bed for DM radio probes, granting access to a parameter space that extends even below the value $\langle \sigma V \rangle = 3 \times 10^{-30}$ cm$^3$ s$^{-1}$. Additionally, their large mass-to-light ratios and absence of strong star formation and diffuse non-thermal emission make them very clean sources for radio DM searches.\ **Galaxies** can be probed to significantly larger redshifts than the dwarf galaxies due to their larger DM mass, and those located in the redshift range $0.5 \simlt z \simlt 1.0$ provide stronger constraints. However, an optimized DM search should be confined to galaxies with little background radio noise, making low star-formation-rate galaxies good candidates. High-$z$ galaxies also come with the advantage of observing more primitive structures with fewer sources of baryonic radio emission.\ **Clusters of galaxies** provide extremely good candidates in cases, such as the Bullet cluster, where the dark and baryonic matter are spatially separated. Our recent analysis of the ATCA observation of the Bullet cluster (Colafrancesco and Marchegiani 2014) indicates that deeper radio observations (possible with the SKA) will indeed be able to separate the DM-induced signal from the CR-induced one and hence offer the possibility to investigate the nature of DM particles using the technique proposed here. More generally, the large predicted radio fluxes due to DM annihilation in clusters indicate that DM-induced radio emission can be observed out to large redshifts $z \approx 2$, again with the advantage of fewer sources of baryonic radio emission. Disentangling magnetic fields and Dark Matter --------------------------------------------- Studying the magnetic properties of DM halos is crucial to disentangle the DM particle density from the magnetic field energy density contributing to the expected synchrotron radio emission from DM annihilation. The SKA is the most promising experiment to determine the magnetic field structure in extragalactic sources (see Johnston-Hollitt et al. 2015), and will have the potential of measuring RMs toward a large number of sources, allowing a detailed description of the strength, structure, and spatial distribution of magnetic fields in dSph galaxies, galaxies (see Beck et al. 2015) and galaxy clusters (see Govoni et al. 2015). We stress that these measurements of the magnetic field can be obtained by the SKA simultaneously, for the first time, with the constraints on DM nature from the expected radio emission. The impact of SKA on the search for the DM nature ================================================= Deep observations of radio emission in DM halos are not yet available, and this limits the capabilities of the current radio experiments to set relevant constraints on DM models. We have already explored a project (Regis et al. 2014a,b,c) dedicated to the WIMP search making use of radio interferometers, which could be considered as a pilot experiment for the next generation high-sensitivity and high-resolution radio telescope arrays like the SKA. 
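To make the degeneracy discussed in the previous subsection explicit: for a power-law equilibrium spectrum $n_e\propto E^{-s}$ the synchrotron flux scales roughly as $S_\nu\propto \langle \sigma V \rangle \, \langle B \rangle^{(s+1)/2}\,\nu^{-(s-1)/2}$, so the radio data alone constrain only a combination of $\langle \sigma V \rangle$ and $\langle B \rangle$. The short sketch below uses this schematic scaling (not the full calculation, and neglecting the $B$-dependence hidden in the energy losses) to show how different $(\langle \sigma V \rangle,\langle B \rangle)$ pairs yield the same flux, which is why the independent RM-based determination of the magnetic field mentioned above is essential.

```python
def relative_flux(sigma_v, B_muG, s=3.0, nu_GHz=1.0):
    """Schematic scaling of the DM-induced synchrotron flux (arbitrary units):
    S_nu ~ <sigma v> * B^((s+1)/2) * nu^(-(s-1)/2) for n_e ~ E^-s.
    A full treatment would also propagate the B-dependence of the losses."""
    return sigma_v * B_muG ** ((s + 1.0) / 2.0) * nu_GHz ** (-(s - 1.0) / 2.0)

# A factor-10 smaller cross section is compensated, for s = 3, by a field
# stronger by 10**(2/(s+1)) ~ 3.2: the radio flux alone cannot tell them apart.
ref = relative_flux(sigma_v=3e-27, B_muG=5.0)
alt = relative_flux(sigma_v=3e-28, B_muG=5.0 * 10 ** 0.5)
print(alt / ref)  # ~1: same flux, very different <sigma v>
```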
For the particle DM search we are interested in, the use of multiple array detectors having synthesized beams of $\sim$ arcmin size has a number of advantages with respect to single-dish observations. First, the large collecting area allows for an increase in the sensitivity over that of a single-dish telescope. The best beam choice for the detection of a diffuse emission requires a large synthesized beam (in order to maximize the integrated flux), but still smaller than the source itself to be able to resolve it. A good angular resolution is also crucial in order to distinguish between a possible non-thermal astrophysical emission and the DM-induced signal, which clearly becomes very hard if the DM halo is not well-resolved. The possibility of simultaneously detecting small scale sources with the long-baselines of the array allows one to overcome the confusion limit. In the case of arcmin beams, the confusion level can be easily reached with observations lasting for a few tens of minutes, even by current telescopes. A source subtraction is thus a mandatory and crucial step of the analysis. Finally, single dish telescopes face the additional complication related to Galactic foreground contaminations, which are instead subdominant for the angular scales typically probed by telescope arrays at GHz frequency. The limits derived from ATCA observations of 6 dSphs (Regis et al. 2014c) on the WIMP annihilation/decay rate as a function of the mass for different final states of annihilation/decay are already comparable to the best limits obtained with $\gamma$-ray observations and are much more constraining than those obtained in the X-ray band or with previous radio observations (Spekkens et al. 2013, Natarayan et al. 2013). In this context, the SKA will have the possibility to explore DM models with cross-section values well below the one required by the DM relic abundance (see Fig.6 in Regis et al. 2014c). The SKA1-MID Band 1 (350-1050 MHz) will probably be the most promising frequency range for the majority of WIMP models. The full SKA-2 phase will bring another factor $\sim 10 \times$ increase in sensitivity and an extended frequency range up to at least 25 GHz. Typical values of the SKA sensitivity ($A_{eff} /T_{sys} = 2 \times 10^4$ m$^2$/K) and bandwidth ($300$ MHz at GHz frequency) provide rms flux values of $\approx 30$ nJy for 10 hours of integration time. This is a gain in sensitivity of about a factor $10^3$ with respect to the most recent ATCA observations (Regis et al. 2014a). A further improvement by a factor of 2-3 can be confidently foreseen due to the larger number of accessible dSph satellites from the southern hemisphere. The SKA will also have the unique advantage of being able to determine the dSph magnetic field (via FR measurements and possibly also polarization), provided its strength is around the $\mu$G level (as expected from star formation rate arguments, Regis et al. 2014c). This will make the predictions for the expected DM signal much more robust and obtainable with a single experimental configuration. The prospects of detection/constraints of the WIMP particle properties with the SKA will therefore progressively close in on the full parameter space, even in a pessimistic sensitivity case, and up to $\sim$ TeV WIMP masses, irrespective of astrophysical assumptions. The SKA will also allow us to investigate the possibility that point sources detected in the proximity of the dSph optical center might be associated with the emission from a cuspy DM profile. 
Such a point-source interpretation is likely only in the “loss at injection” scenario, while spatial diffusion should in any case flatten the e$^{\pm}$ distribution, making the source extended rather than point-like. The investigation of these sources with the SKA will deserve particular attention, since we have already found that the WIMP scenario can fit the point-like emission with annihilation rates consistent with existing bounds (Regis et al. 2014c).

![The $\langle \sigma V\rangle$ upper limits from 30 hours of SKA integration time for $z = 0.01$ at 300 MHz (top) and 1 GHz (bottom) as the neutralino mass $M_{\chi}$ is varied, with the annihilation channel $b\overline{b}$ in solid lines and $\tau\bar{\tau}$ in dashed lines. A value $\langle B \rangle =5$ $\mu$G was adopted. Black lines correspond to halos with mass $10^{15}$ M$_{\odot}$, red lines to $10^{12}$ M$_{\odot}$ and green lines to $10^{7}$ M$_{\odot}$ (from Colafrancesco et al. 2014).[]{data-label="fig:sigv_z0"}](sigv_z0_mx_300MHz.eps "fig:") ![The $\langle \sigma V\rangle$ upper limits from 30 hours of SKA integration time for $z = 0.01$ at 300 MHz (top) and 1 GHz (bottom) as the neutralino mass $M_{\chi}$ is varied, with the annihilation channel $b\overline{b}$ in solid lines and $\tau\bar{\tau}$ in dashed lines. A value $\langle B \rangle =5$ $\mu$G was adopted. Black lines correspond to halos with mass $10^{15}$ M$_{\odot}$, red lines to $10^{12}$ M$_{\odot}$ and green lines to $10^{7}$ M$_{\odot}$ (from Colafrancesco et al. 2014).[]{data-label="fig:sigv_z0"}](sigv_z0_mx_1GHz.eps "fig:")

The SKA1-MID bands from Band 1 (350-1050 MHz) to Band 4 (2.8-5.18 GHz) are important to probe the DM-induced synchrotron spectral curvature at low-$\nu$ (sensitive to the DM composition) and at high-$\nu$ (sensitive to the DM particle mass), and the implementation of Band 5 (4.6-13.8 GHz) will bring further potential to assess the DM-induced radio spectrum. As we can see from Fig.\[fig:evol\_bb60\], the best frequency range to detect these radio emissions is around 1 GHz. Thus, the upper frequency region of SKA1-MID Band 1 provides the strongest probe of the cross-section parameter space, due to an optimal combination of the SKA sensitivity within this band and the relatively strong fluxes at these frequencies. This frequency band will also allow for an optimal description of the magnetic field (see Johnston-Hollitt et al. 2015). There are two main caveats in the forecasts for DM detection in the radio frequency band. The first stems from the fact that, for an extended radio emission, the confusion issue becomes increasingly severe as one tries to probe fainter and fainter fluxes. Thus, the source subtraction procedure becomes crucial and this can affect the estimated sensitivities. The impact of this effect on the actual sensitivity is hardly predictable at the present time, especially for the SKA, since it will depend on the properties of the detected sources, the efficiency of deconvolution algorithms, and the accuracy of the telescope beam shape.\
The second caveat is that, by bringing down the observational threshold, one can possibly start to probe the very low levels of non-thermal emission associated with the tiny rate of star formation in dSphs, or in galaxies and galaxy clusters. The DM contribution should then be disentangled from such an astrophysical background.
The superior angular resolution of the SKA will allow for the precise mapping of emissions, putatively either DM or baryonically induced, and will enable their correlation with the stellar or DM profiles (obtained via optical and/or kinematic measurements). Early DM science can be done with a small sample of local dSphs, a small sample of nearby galaxies with good DM density profile reconstruction, and the Bullet cluster, observing with a somewhat larger beam ($\approx 7^{{\prime}{\prime}} -10^{{\prime}{\prime}}$). These objects have already been studied in the radio band with similar objectives (e.g., DM limits) and therefore provide the best science cases to prove the capabilities of the SKA1 for the study of the nature of DM with radio observations. The implementation of the SKA1-MID Band 5 (4.6-13.8 GHz) will increase the ability to detect the expected high-frequency spectral curvature of DM-induced radio emission, to place constraints on the DM cross-section, and to differentiate between different annihilation channel spectra. The 10$\times$ increased sensitivity of SKA2-MID compared to SKA1-MID will allow us either to increase the angular resolution by a factor $\approx 20$ or to increase the sensitivity to DM-induced radio signals by a factor $\approx10$. The possibility of extending the frequency coverage of the SKA in its Phase-2 realization up to $\sim 25$ GHz will allow us to detect the expected high-$\nu$ spectral cut-off of DM-induced radio emission, and thus to set accurate constraints on the DM particle mass.

Conclusion
==========

The SKA has the potential to unveil the elusive nature of Dark Matter. Its ability to resolve the intrinsic degeneracy (between magnetic field properties and particle distribution) of the synchrotron emission expected from secondary particles produced in DM annihilation (decay) will allow such a discovery to be unbiased and limited only by the sensitivity to the DM particle mass and annihilation cross-section (decay rate). The unprecedented sensitivity of the SKA to the DM fundamental properties will place this instrument in a leading position for unveiling the nature of the dark sector of the universe.\
The information provided by the SKA can be complemented with analogous studies in other spectral bands, which will be able to probe the ICS signal of DM-produced secondary electrons (spanning from microwaves to hard X-rays and $\gamma$-rays) and the distinctive presence of the $\pi^0 \to \gamma \gamma$ emission bump in the $\gamma$-rays (see Fig. \[fig:multiflux\]). The next decade will offer excellent multi-frequency opportunities in this respect with the advent of Millimetron, the largest space-borne single-dish mm astronomy satellite operating in the $10^2-10^3$ GHz range (optimal to probe the DM-induced SZ effect), the Astro-H mission operating in the hard X-ray frequency range (with the highest expected sensitivity to probe the high-energy tail of the DM-induced ICS emission), and the CTA with unprecedented sensitivity in the energy range between a few tens of GeV and hundreds of TeV. [^1]

[99]{} Ackermann, M., et al., 2014, Phys. Rev. D, 89, 042001 Beck, R., Bomans, D., Colafrancesco, S., et al. 2015, in *Advancing Astrophysics with the Square Kilometre Array*, PoS(AASKA14)94 Borriello, E. et al. 2010, ApJ, 709, L32 Burns, J.O. et al., 1995, ApJ, 446, 583 Colafrancesco, S., 2004, A&A, 422, L23 Colafrancesco, S.
2010, invited lecture at the 4th Gamow International Conference on Astrophysics & Cosmology After Gamow (9th Gamow Summer School), AIPC, 1206, 5C Colafrancesco, S. & Mele, B. 2001, ApJ, 562, 24 Colafrancesco, S., Profumo, S. and Ullio, P. 2006, A&A, 455, 21 Colafrancesco, S., Profumo, S. and Ullio, P. 2007, PhRvD, 75, 3513 Colafrancesco, S. et al. 2011, A&A, 527, 80 Colafrancesco, S., and Marchegiani, P, 2014, A&A in press Colafrancesco, S., Marchegiani, P. and Beck, G., 2014, \[2014arXiv1409.4691C\] Daylan, T. et al., 2014, \[arXiv:1402.6703\] Deiss, B.M., Reich, W., Lesch, H. & Wielebinski, R. 1997, A&A, 321, 55 Dewdney, P., Turner, W., Millenaar, R., McCool, R., Lazio, J. & Cornwell, T., 2012, SKA baseline design document, [*http://www.skatelescope.org/wp-content/uploads/2012/07/SKA-TEL-SKO-DD-001-1\_BaselineDesign1.pdf*]{} Doro, M., 2014, Nuclear Instruments and Methods in Physics Research A,742, 99 Doro, M., et al., 2013, Astroparticle Physics, 43, 189 Govoni, F., Murgia, M., Feretti, L., et al., 2006, A&A, 460, 425 Govoni, F. et al. 2015, in *Advancing Astrophysics with the Square Kilometre Array*, PoS(AASKA14)105 Johnston-Hollitt, M. et al. 2015, in *Advancing Astrophysics with the Square Kilometre Array*, PoS(AASKA14)092 Jungman, G., Kamionkowki, M. & Griest, K. 1996, Phys.rep, 267, 195 Moran, E.C. et al. 2014, \[arXiv:1408.4451\] Natarajan, A. et al., 2013, Phys. Rev. D 88 083535 \[arXiv:1308.4979 \[astro-ph.CO\]\]. Planck Collaboration, 2011, A&A, 536, A7 Regis, M., Richter, L., Colafrancesco, S., Massardi, M., de Blok, W. J. G., Profumo, S., Orford, N. 2014a, \[arXiv:1407.5479\] Regis, M., Richter, L., Colafrancesco, S., Profumo, S., de Blok, W. J. G., Massardi, M. 2014b, \[arXiv:1407.5482\] Regis, M., Colafrancesco, S., Profumo, S., de Blok, W. J. G., Massardi, M., Richter, L. 2014c, JCAP, 10, 016R \[arXiv:1407.4948\] Spekkens, K., Mason, B.S., Aguirre, J.E. and Nhan, B., 2013, ApJ, 773, 61 \[arXiv:1301.5306 \[astro-ph.CO\]\]. Weniger, C., 2012, JCAP, 1208, 007 [^1]: S.C., P.M. and G.B. acknowledge support by the South African Research Chairs Initiative of the Department of Science and Technology and National Research Foundation and by the Square Kilometre Array (SKA).
--- abstract: 'We investigate the asymptotic structure of (possibly type ${\rm III}$) crossed product von Neumann algebras $M = B \rtimes \Gamma$ arising from arbitrary actions $\Gamma \curvearrowright B$ of bi-exact discrete groups (e.g. free groups) on amenable von Neumann algebras. We prove a spectral gap rigidity result for the central sequence algebra $N'' \cap M^\omega$ of any nonamenable von Neumann subalgebra with normal expectation $N \subset M$. We use this result to show that for any strongly ergodic essentially free nonsingular action $\Gamma \curvearrowright (X, \mu)$ of any bi-exact countable discrete group on a standard probability space, the corresponding group measure space factor ${\mathord{\text{\rm L}}}^\infty(X) \rtimes \Gamma$ has no nontrivial central sequence. Using recent results of Boutonnet–Ioana–Salehi Golsefidy [@BISG15], we construct, for every $0 < \lambda \leq 1$, a type ${\rm III_\lambda}$ strongly ergodic essentially free nonsingular action ${\mathbf{F}}_\infty \curvearrowright (X_\lambda, \mu_\lambda)$ of the free group $\mathbf F_\infty$ on a standard probability space so that the corresponding group measure space type ${\rm III_\lambda}$ factor ${\mathord{\text{\rm L}}}^\infty(X_\lambda, \mu_\lambda) \rtimes {\mathbf{F}}_\infty$ has no nontrivial central sequence by our main result. In particular, we obtain the first examples of group measure space type ${\rm III}$ factors with no nontrivial central sequence.' address: - 'Laboratoire de Mathématiques d’Orsay, Université Paris-Sud, CNRS, Université Paris-Saclay, 91405 Orsay, France' - 'RIMS, Kyoto University, 606-8502 Kyoto, Japan' author: - Cyril Houdayer - Yusuke Isono title: 'Bi-exact groups, strongly ergodic actions and group measure space type III factors with no central sequence' --- [^1] [^2] Introduction and statement of the main results ============================================== The [*group measure space construction*]{} of Murray and von Neumann [@MvN43] associates to any ergodic (essentially) free nonsingular action $\Gamma \curvearrowright (X, \mu)$ of a countable discrete group on a standard probability space a factor denoted by ${\mathord{\text{\rm L}}}^\infty(X) \rtimes \Gamma$. A fundamental question in operator algebras is how much information does the group measure space factor ${\mathord{\text{\rm L}}}^\infty(X) \rtimes \Gamma$ retain from the group action $\Gamma \curvearrowright (X, \mu)$? This question has attracted a lot of attention during the last 15 years and several important developments regarding the structure and the rigidity of group measure space factors have been made possible thanks to Popa’s [*deformation/rigidity*]{} theory [@Po06a]. We refer the reader to [@Ga10; @Va10; @Io12b] for recent surveys on this topic. One of the questions we address in this paper is the following general problem: Under which assumptions on the countable discrete group $\Gamma$ and the ergodic free nonsingular action $\Gamma \curvearrowright (X, \mu)$, the group measure space factor ${\mathord{\text{\rm L}}}^\infty(X) \rtimes \Gamma$ is full? Recall from [@Co74] that a factor $M$ with separable predual is [*full*]{} if its asymptotic centralizer $M_\omega$ is trivial for some (or any) nonprincipal ultrafilter $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$. 
By [@AH12 Theorem 5.2], a factor $M$ with separable predual is full if and only if its central sequence algebra $M' \cap M^\omega$ is trivial for some (or any) nonprincipal ultrafilter $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$ (see Section \[preliminaries\] for further details). If the group measure space factor ${\mathord{\text{\rm L}}}^\infty(X) \rtimes \Gamma$ is full then the free nonsingular action $\Gamma \curvearrowright (X, \mu)$ is necessarily [*strongly ergodic*]{}, that is, any $\Gamma$-asymptotically invariant sequence of measurable subsets of $X$ is trivial. The converse is not true in general, as demonstrated in the celebrated example by Connes and Jones [@CJ81]. Indeed, they exhibited an example of a strongly ergodic free probability measure preserving (pmp) action such that the associated group measure space ${\rm II_1}$ factor is [*McDuff*]{}, that is, tensorially absorbs the hyperfinite ${\rm II_1}$ factor of Murray and von Neumann. The general problem mentioned above nevertheless has a satisfactory answer in the case when the action $\Gamma \curvearrowright (X, \mu)$ is [*pmp*]{}. Indeed, it was shown by Choda in [@Ch81] that when the countable discrete group $\Gamma$ is not [*inner amenable*]{} and the free pmp action $\Gamma \curvearrowright (X, \mu)$ is strongly ergodic, then the group measure space ${\rm II_1}$ factor ${\mathord{\text{\rm L}}}^\infty(X) \rtimes \Gamma$ is full. The facts that the group $\Gamma$ is not inner amenable and the action $\Gamma \curvearrowright (X, \mu)$ is pmp imply that all the central sequences in ${\mathord{\text{\rm L}}}^\infty(X) \rtimes \Gamma$ must asymptotically lie in ${\mathord{\text{\rm L}}}^\infty(X)$. It follows immediately that ${\mathord{\text{\rm L}}}^\infty(X) \rtimes \Gamma$ is full if the action is strongly ergodic. In the above reasoning, the assumption that the action $\Gamma \curvearrowright (X, \mu)$ is pmp is crucial since nonamenable (and in particular non-inner amenable) groups always admit an amenable (in the sense of Zimmer [@Zi84 Definition 4.3.1]) type ${\rm III}$ ergodic nonsingular action, namely the Poisson boundary action. Very little is known about the general problem mentioned above when the action $\Gamma \curvearrowright (X, \mu)$ is no longer pmp and is more generally nonsingular (possibly of type ${\rm III}$). In this paper, we investigate the asymptotic structure of (possibly type ${\rm III}$) group measure space factors ${\mathord{\text{\rm L}}}^\infty(X) \rtimes \Gamma$ and more generally of (possibly type ${\rm III}$) crossed product von Neumann algebras $B \rtimes \Gamma$ arising from arbitrary actions $\Gamma \curvearrowright B$ of bi-exact discrete groups on amenable von Neumann algebras. The class of [*bi-exact*]{} discrete groups was introduced by Ozawa in [@Oz04] (see also [@BO08 Chapter 15]) and includes amenable groups, free groups, Gromov word-hyperbolic groups and discrete subgroups of connected simple Lie groups of real rank one. We refer the reader to Section \[preliminaries\] for a precise definition. Any bi-exact discrete group is either amenable or non-inner amenable [@Oz04]. Ozawa’s celebrated result [@Oz03] asserts that bi-exact discrete groups $\Gamma$ give rise to [*solid*]{} group von Neumann algebras ${\mathord{\text{\rm L}}}(\Gamma)$, that is, for any diffuse von Neumann algebra $A \subset {\mathord{\text{\rm L}}}(\Gamma)$, the relative commutant $A' \cap {\mathord{\text{\rm L}}}(\Gamma)$ is amenable.
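For orientation, we recall the well-known argument (only sketched here, and not needed in the sequel) showing that solidity rules out nontrivial tensor product decompositions of nonamenable factors: if a solid factor $M$ decomposes as
$$M = Q_1 {\mathbin{\overline{\otimes}}} Q_2 \quad \text{with } Q_1, Q_2 \text{ diffuse},$$
then $Q_2 \subset Q_1' \cap M$ is amenable (being a von Neumann subalgebra with expectation of an amenable von Neumann algebra) and, by symmetry, so is $Q_1$; hence $M$ itself is amenable. In other words, a nonamenable solid factor is prime.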
Moreover, any solid ${\rm II_1}$ factor is either amenable or full [@Oz03 Proposition 7]. Recall that an inclusion of von Neumann algebras $N \subset M$ is [*with expectation*]{} if there exists a faithful normal conditional expectation ${\mathord{\text{\rm E}}}_N : M \to N$. Our first main result is a [*spectral gap rigidity*]{} result inside crossed product von Neumann algebras $M = B \rtimes \Gamma$ arising from arbitrary actions $\Gamma \curvearrowright B$ of bi-exact discrete groups on amenable $\sigma$-finite von Neumann algebras. More precisely, we prove that for any von Neumann subalgebra with expectation $N \subset M$, either $N$ has a nonzero amenable direct summand or the central sequence algebra $N' \cap M^\omega$ lies in the smaller algebra $B^\omega \rtimes \Gamma$. Our Theorem \[thmA\] can be regarded as an analogue of the spectral gap rigidity results discovered by Peterson in [@Pe06 Theorem 4.3] and Popa in [@Po06b Theorem 1.5] and [@Po06c Lemma 2.2]. \[thmA\] Let $\Gamma$ be any bi-exact discrete group, $B$ any amenable $\sigma$-finite von Neumann algebra and $\Gamma \curvearrowright B$ any action. Denote by $M:=B\rtimes \Gamma$ the corresponding crossed product von Neumann algebra. Let $p\in M$ be any nonzero projection and $N\subset pMp$ any von Neumann subalgebra with expectation. Let $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$ be any nonprincipal ultrafilter. Then at least one of the following conditions holds true: - The von Neumann algebra $N$ has a nonzero amenable direct summand. - We have $N'\cap pM^\omega p \subset p(B^\omega \rtimes \Gamma) p$. In this case, we further obtain $A \preceq_{B^\omega \rtimes \Gamma} B^\omega$ for any finite von Neumann subalgebra with expectation $A\subset N'\cap pM^\omega p$. We refer the reader to Section \[preliminaries\] for ultraproduct von Neumann algebras and Popa’s intertwining techniques inside arbitrary von Neumann algebras. The proof of Theorem \[thmA\] given in Section \[section-thmA\] (see Theorems \[theorem for thmA 1\] and \[theorem for thmA 2\]) uses a combination of Ozawa’s ${\mathord{\text{\rm C}}}^*$-algebraic techniques [@Oz03; @Oz04; @Is12], ultraproduct von Neumann algebraic techniques [@Oc85; @AH12] and the recent generalization of Popa’s intertwining-by-bimodules to the framework of type ${\rm III}$ von Neumann algebras developed by the authors in [@HI15]. The interesting feature of the proof of Theorem \[thmA\] is that it does [*not*]{} rely on Connes–Tomita–Takesaki modular theory. Indeed, unlike other instances of Popa’s spectral gap rigidity results in the literature which typically rely on using amenable traces and hence require the ambient von Neumann algebra to be (semi)finite, we use instead unital completely positive (ucp) maps and exploit Ozawa’s ${\mathord{\text{\rm C}}}^*$-algebraic techniques [@Oz03; @Oz04] to prove the existence of norm one projections. The main advantage of this approach is that it allows us to work directly inside the (possibly type ${\rm III}$) crossed product von Neumann algebra $M = B \rtimes \Gamma$ without appealing to the continuous core decomposition. In this respect, our approach is similar to the one we developed in our previous paper [@HI15]. We refer the reader to [@HR14; @HU15; @HV12; @Is12; @Is13] for other structural/rigidity results for type ${\rm III}$ factors involving the continuous core decomposition. 
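For orientation, we also record the special case of Theorem \[thmA\] obtained by taking $p = 1$ and $N = M$ (this is merely a restatement of the theorem, not an additional result): if $M = B \rtimes \Gamma$ has no nonzero amenable direct summand, then
$$M' \cap M^\omega \subset B^\omega \rtimes \Gamma \quad \text{and} \quad A \preceq_{B^\omega \rtimes \Gamma} B^\omega$$
for every finite von Neumann subalgebra with expectation $A \subset M' \cap M^\omega$. It is essentially in this form that the theorem enters the proof of Theorem \[thmC\] below.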
Following [@HR14; @Oz04], we say that a von Neumann algebra $M$ is $\omega$-[*semisolid*]{} if for any von Neumann subalgebra $N\subset M$ with expectation such that the relative commutant $N'\cap M^\omega$ has no type $\rm I$ direct summand, we have that $N$ is amenable. The next corollary strengthens the indecomposability properties of crossed product von Neumann algebras $B \rtimes \Gamma$ arising from arbitrary actions $\Gamma \curvearrowright B$ of bi-exact discrete groups on [*abelian*]{} von Neumann algebras (see [@Oz04; @HV12; @Is12] for previous results). \[corB\] Let $\Gamma$ be any bi-exact discrete group, $B$ any abelian $\sigma$-finite von Neumann algebra and $\Gamma \curvearrowright B$ any action. Let $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$ be any nonprincipal ultrafilter. Then the crossed product von Neumann algebra $B\rtimes \Gamma$ is $\omega$-semisolid. In particular, if $B\rtimes \Gamma$ is a nonamenable factor, then $B\rtimes \Gamma$ is [*prime*]{}, that is, $B\rtimes \Gamma$ cannot be written as a tensor product $Q_1 {\mathbin{\overline{\otimes}}}Q_2$ of diffuse factors. Our second main result, Theorem \[thmC\] below, is an answer to the general problem mentioned earlier in the case when the acting group is bi-exact. Indeed, using Theorem \[thmA\] in the case when the action $\Gamma \curvearrowright B$ arises from a strongly ergodic free nonsingular action $\Gamma \curvearrowright (X, \mu)$ of a bi-exact countable discrete group on a standard probability space, we show that the group measure space factor ${\mathord{\text{\rm L}}}^\infty(X) \rtimes \Gamma$ is full. \[thmC\] Let $\Gamma$ be any bi-exact countable discrete group and $\Gamma \curvearrowright (X,\mu)$ any strongly ergodic free nonsingular action on a standard probability space. Then the group measure space factor ${\mathord{\text{\rm L}}}^\infty(X)\rtimes \Gamma$ is full. The proof of Theorem \[thmC\] uses a combination of Theorem \[thmA\] and the useful Lemma \[lem-strong-ergodicity\] below which proves the existence of a nontrivial centralizing sequence $(u_n)_n$ in every nonfull factor $M = {\mathord{\text{\rm L}}}(\mathcal R)$ arising from a strongly ergodic nonsingular equivalence relation $\mathcal R$ defined on a standard probability space such that $(u_n)_n$ “does not embed" into the Cartan subalgebra ${\mathord{\text{\rm L}}}^\infty(X)$. Our Lemma 5.1 is a nonsingular generalization of a recent result of Hoff (see the first part of the proof of [@Ho15 Proposition C]). In view of Choda’s result [@Ch81], we do not know whether Theorem \[thmC\] holds true more generally for (arbitrary strongly ergodic free nonsingular actions of) arbitrary non-inner amenable groups instead of bi-exact groups. We point out that Ozawa recently showed in [@Oz16] that Theorem \[thmC\] holds true for arbitrary strongly ergodic free nonsingular actions of ${\mathord{\text{\rm SL}}}_3({\mathbf{Z}})$, which is not bi-exact by [@Sa09]. We finally exploit recent results of Boutonnet–Ioana–Salehi Golsefidy [@BISG15] to construct, for every $0 < \lambda \leq 1$, examples of type ${\rm III_\lambda}$ strongly ergodic free nonsingular actions of a free group on a standard probability space. It is shown in [@BISG15 Theorem A] that for any (not necessarily compact) connected simple Lie group and any countable dense subgroup $\Lambda < G$ with “algebraic entries" (e.g. 
$(\Lambda < G) = ({\mathord{\text{\rm SL}}}_n({\mathbf{Q}}) < {\mathord{\text{\rm SL}}}_n({\mathbf{R}}))$ for $n \geq 2$), the left translation action $\Lambda \curvearrowright G$ is strongly ergodic. By taking a suitable non-unimodular closed subgroup $P < G$, the quotient action $\Lambda \curvearrowright G/P$ is still strongly ergodic and of type ${\rm III}$. The quotient action $\Lambda \curvearrowright G/P$ need not be essentially free in general. However, using a “direct product" construction similar to the one used in [@HV12 Corollary B], we can then construct strongly ergodic essentially free nonsingular actions of free groups and we obtain the following corollary. \[corD\] For every $0 < \lambda \leq 1$, there exists a strongly ergodic free nonsingular action ${\mathbf{F}}_\infty \curvearrowright (X_\lambda, \mu_\lambda)$ of type ${\rm III_\lambda}$ so that the group measure space factor ${\mathord{\text{\rm L}}}^\infty(X_\lambda, \mu_\lambda) \rtimes {\mathbf{F}}_\infty$ is of type ${\rm III_\lambda}$ and is full. Moreover, there exists a strongly ergodic free nonsingular action ${\mathbf{F}}_\infty \curvearrowright (X_\infty, \mu_\infty)$ of type ${\rm II_\infty}$ so that the group measure space factor ${\mathord{\text{\rm L}}}^\infty(X_\infty, \mu_\infty) \rtimes {\mathbf{F}}_\infty$ is of type ${\rm II_\infty}$ and is full. The first examples of full factors of type ${\rm III}$ were discovered by Connes in [@Co74]. He showed that the factors $$M_{n,k, \varphi} = \left( \overline{\bigotimes}_{{\mathbf{F}}_n} (\mathbf M_k({\mathbf{C}}), \varphi) \right) \rtimes {\mathbf{F}}_n$$ arising from Connes–Størmer Bernoulli shifts of free groups ${\mathbf{F}}_n \curvearrowright \overline{\bigotimes}_{{\mathbf{F}}_n} (\mathbf M_k({\mathbf{C}}), \varphi)$ are full if $n, k \geq 2$ and of type ${\rm III}$ if $\varphi$ is not tracial. Observe that the factors $M_{n,k, \varphi}$ possess $\overline{\bigotimes}_{{\mathbf{F}}_n} {\mathbf{C}}^k$ as a Cartan subalgebra. This implies that the underlying ergodic nonsingular equivalence relation is strongly ergodic [@FM75]. However, Connes–Størmer Bernoulli crossed products need not be $\ast$-isomorphic to group measure space factors. In this respect, Corollary \[corD\] above provides the first class of group measure space type ${\rm III}$ factors with no nontrivial central sequence. We finally point out that the group measure space type ${\rm III}$ factors in Corollary \[corD\] possess a unique Cartan subalgebra, up to unitary conjugacy, by [@HV12 Theorem A] (see [@PV11] for the trace preserving case). Moreover, Corollary \[corD\] provides new examples of group measure type ${\rm III}$ factors with a unique Cartan subalgebra, up to unitary conjugacy. Indeed, the examples of ergodic free nonsingular actions considered in [@HV12 Corollary B] are not strongly ergodic since they have an amenable action as a quotient. In particular, the group measure space type ${\rm III}$ factors in [@HV12 Corollary B] are not full while the ones in Corollary \[corD\] are full. Acknowledgments {#acknowledgments .unnumbered} --------------- It is our pleasure to thank Adrian Ioana, Dimitri Shlyakhtenko, Yoshimichi Ueda and Stefaan Vaes for their valuable comments. Preliminaries ============= For any von Neumann algebra $M$, we will denote by $\mathcal Z(M)$ the centre of $M$, by $\mathcal U(M)$ the group of unitaries in $M$ and by $(M, {\mathord{\text{\rm L}}}^2(M), J^M, \mathfrak P^M)$ a standard form for $M$. 
We will say that an inclusion of von Neumann algebras $P \subset 1_P M 1_P$ is [*with expectation*]{} if there exists a faithful normal conditional expectation ${\mathord{\text{\rm E}}}_P : 1_P M 1_P \to P$. Crossed product von Neumann algebras {#crossed-product-von-neumann-algebras .unnumbered} ------------------------------------ We will use the following terminology and notation regarding crossed product von Neumann algebras. Let $\Gamma$ be any discrete group, $B$ any $\sigma$-finite von Neumann algebra and $\Gamma \curvearrowright B$ any action. Denote by $M:= B\rtimes \Gamma$ the corresponding [*crossed product*]{} von Neumann algebra and by ${\mathord{\text{\rm E}}}_{B} : M \to B$ the canonical faithful normal conditional expectation given by ${\mathord{\text{\rm E}}}_B(b \lambda_g) = \delta_{g, e} b$ for all $g \in \Gamma$ and all $b \in B$. Fix a standard form $(B, {\mathord{\text{\rm L}}}^2(B), J^B, \mathfrak P^B)$ for $B$. Denote by $u : \Gamma \to \mathcal U({\mathord{\text{\rm L}}}^2(B))$ the canonical unitary representation implementing the action $\Gamma \curvearrowright B$. A standard form $(M, {\mathord{\text{\rm L}}}^2(M), J^M, \mathfrak P^M)$ for $M$ is given by ${\mathord{\text{\rm L}}}^2(M) = {\mathord{\text{\rm L}}}^2(B) \otimes \ell^2(\Gamma)$ and $$J^M (\xi \otimes \delta_g) = u_g^* J^B \xi \otimes \delta_{g^{-1}} \quad \text{for all } \xi \in {\mathord{\text{\rm L}}}^2(B) \text{ and all } g \in \Gamma.$$ The Jones projection $e_B : {\mathord{\text{\rm L}}}^2(M) \to {\mathord{\text{\rm L}}}^2(B)$ is then simply given by $e_B = 1 \otimes P_{{\mathbf{C}}\delta_e}$ where $P_{{\mathbf{C}}\delta_e} : \ell^2(\Gamma) \to {\mathbf{C}}\delta_e$ is the orthogonal projection onto ${\mathbf{C}}\delta_e$. For crossed product von Neumann algebras $M = B \rtimes \Gamma$, we will always use such a standard form $(M, {\mathord{\text{\rm L}}}^2(M), J^M, \mathfrak P^M)$ as defined above. Ultraproduct von Neumann algebras {#ultraproduct-von-neumann-algebras .unnumbered} --------------------------------- Let $M$ be any $\sigma$-finite von Neumann algebra and $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$ any nonprincipal ultrafilter. Define $$\begin{aligned} \mathcal I_\omega(M) &= \left\{ (x_n)_n \in \ell^\infty(M) \mid x_n \to 0\ \ast\text{-strongly as } n \to \omega \right\} \\ \mathcal M^\omega(M) &= \left \{ (x_n)_n \in \ell^\infty(M) \mid (x_n)_n \, \mathcal I_\omega(M) \subset \mathcal I_\omega(M) \text{ and } \mathcal I_\omega(M) \, (x_n)_n \subset \mathcal I_\omega(M)\right\}.\end{aligned}$$ The [*multiplier algebra*]{} $\mathcal M^\omega(M)$ is a C$^*$-algebra and $\mathcal I_\omega(M) \subset \mathcal M^\omega(M)$ is a norm closed two-sided ideal. Following [@Oc85 §5.1], we define the [*ultraproduct von Neumann algebra*]{} $M^\omega$ by $M^\omega := \mathcal M^\omega(M) / \mathcal I_\omega(M)$, which is indeed known to be a von Neumann algebra. We denote the image of $(x_n)_n \in \mathcal M^\omega(M)$ by $(x_n)^\omega \in M^\omega$. For every $x \in M$, the constant sequence $(x)_n$ lies in the multiplier algebra $\mathcal M^\omega(M)$. We will then identify $M$ with $(M + \mathcal I_\omega(M))/ \mathcal I_\omega(M)$ and regard $M \subset M^\omega$ as a von Neumann subalgebra. The map ${\mathord{\text{\rm E}}}_\omega : M^\omega \to M : (x_n)^\omega \mapsto \sigma \text{-weak} \lim_{n \to \omega} x_n$ is a faithful normal conditional expectation. 
For every faithful state $\varphi \in M_\ast$, the formula $\varphi^\omega := \varphi \circ {\mathord{\text{\rm E}}}_\omega$ defines a faithful normal state on $M^\omega$. Observe that $\varphi^\omega((x_n)^\omega) = \lim_{n \to \omega} \varphi(x_n)$ for all $(x_n)^\omega \in M^\omega$. Following [@Co74 §2], we define $$\mathcal M_\omega(M) := \left\{ (x_n)_n \in \ell^\infty(M) \mid \lim_{n \to \omega} \|x_n \varphi - \varphi x_n\| = 0, \forall \varphi \in M_\ast \right\}.$$ We have $\mathcal I_\omega (M) \subset \mathcal M_\omega(M) \subset \mathcal M^\omega(M)$. The [*asymptotic centralizer*]{} is defined by $M_\omega := \mathcal M_\omega(M)/\mathcal I_\omega(M)$. We have $M_\omega \subset M^\omega$. Moreover, by [@Co74 Proposition 2.8] (see also [@AH12 Proposition 4.35]), we have $M_\omega = M' \cap (M^\omega)^{\varphi^\omega}$ for every faithful state $\varphi \in M_\ast$. Let $Q \subset M$ be any von Neumann subalgebra with faithful normal conditional expectation ${\mathord{\text{\rm E}}}_Q : M \to Q$. Choose a faithful state $\varphi \in M_\ast$ in such a way that $\varphi = \varphi \circ {\mathord{\text{\rm E}}}_Q$. We have $\ell^\infty(Q) \subset \ell^\infty(M)$, $\mathcal I_\omega(Q) \subset \mathcal I_\omega(M)$ and $\mathcal M^\omega(Q) \subset \mathcal M^\omega(M)$. We will then identify $Q^\omega = \mathcal M^\omega(Q) / \mathcal I_\omega(Q)$ with $(\mathcal M^\omega(Q) + \mathcal I_\omega(M)) / \mathcal I_\omega(M)$ and be able to regard $Q^\omega \subset M^\omega$ as a von Neumann subalgebra. Observe that the norm $\|\cdot\|_{(\varphi |_Q)^\omega}$ on $Q^\omega$ is the restriction of the norm $\|\cdot\|_{\varphi^\omega}$ to $Q^\omega$. Observe moreover that $({\mathord{\text{\rm E}}}_Q(x_n))_n \in \mathcal I_\omega(Q)$ for all $(x_n)_n \in \mathcal I_\omega(M)$ and $({\mathord{\text{\rm E}}}_Q(x_n))_n \in \mathcal M^\omega(Q)$ for all $(x_n)_n \in \mathcal M^\omega(M)$. Therefore, the mapping ${\mathord{\text{\rm E}}}_{Q^\omega} : M^\omega \to Q^\omega : (x_n)^\omega \mapsto ({\mathord{\text{\rm E}}}_Q(x_n))^\omega$ is a well-defined conditional expectation satisfying $\varphi^\omega \circ {\mathord{\text{\rm E}}}_{Q^\omega} = \varphi^\omega$. Hence, ${\mathord{\text{\rm E}}}_{Q^\omega} : M^\omega \to Q^\omega$ is a faithful normal conditional expectation. For more on ultraproduct von Neumann algebras, we refer the reader to [@AH12; @Oc85]. We record the following observation that will be used throughout. Let $\Gamma$ be any discrete group, $B$ any $\sigma$-finite von Neumann algebra and $\Gamma \curvearrowright B$ any action. Put $M := B \rtimes \Gamma$ and denote by ${\mathord{\text{\rm E}}}_B : M \to B$ the canonical faithful normal conditional expectation. Choose any faithful state $\varphi \in M_\ast$ such that $\varphi \circ {\mathord{\text{\rm E}}}_B = \varphi$. Then the von Neumann subalgebra $B^\omega \vee M \subset M^\omega$ is globally invariant under the modular automorphism group $\sigma^{\varphi^\omega}$ and hence is with expectation. Observe that we have $B^\omega \vee M = B^\omega \rtimes \Gamma$ canonically. Therefore the von Neumann subalgebra $B^\omega \rtimes \Gamma \subset M^\omega$ is with expectation. Denote by ${\mathord{\text{\rm E}}}_{B^\omega} : M^\omega \to B^\omega$ and by ${\mathord{\text{\rm E}}}_{B^\omega \rtimes \Gamma} : M^\omega \to B^\omega \rtimes \Gamma$ the unique $\varphi^\omega$-preserving conditional expectations. 
By uniqueness of the $\varphi^\omega$-preserving conditional expectation ${\mathord{\text{\rm E}}}_{B^\omega} : M^\omega \to B^\omega$, we have ${\mathord{\text{\rm E}}}_{B^\omega} \circ {\mathord{\text{\rm E}}}_{B^\omega \rtimes \Gamma} = {\mathord{\text{\rm E}}}_{B^\omega}$. We thank Hiroshi Ando for pointing out to us the following well-known result. \[lem-ultraproduct\] Let $M$ be any $\sigma$-finite von Neumann algebra and $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$ any nonprincipal ultrafilter. For any $u \in \mathcal U(M^\omega)$, there exists a sequence $(u_n)_n \in \mathcal M^\omega(M)$ such that $u = (u_n)^\omega$ and $u_n \in \mathcal U(M)$ for every $n \in {\mathbf{N}}$. Denote by $f : \mathbf T \to (- \pi, \pi]$ the unique Borel function such that $\exp({\rm i} f(z)) = z$ for all $z \in \mathbf T$. Let $u \in \mathcal U(M^\omega)$ and put $h = f(u) \in M^\omega$. Then $h^* = h$ and $\exp({\rm i} h) = u$. Write $h = (h_n)^\omega$ for some $(h_n)_n \in \mathcal M^\omega(M)$. Since $h^* = h$, up to replacing each $h_n$ by $\frac12(h_n +h_n^*)$, we may assume that $h_n^* = h_n$ for every $n \in {\mathbf{N}}$. Put $u_n = \exp({\rm i} h_n) \in \mathcal U(M)$ for every $n \in {\mathbf{N}}$. Since $[-\pi, \pi] \to \mathbf T : t \mapsto \exp({\rm i} t)$ is a continuous function, it is a uniform limit of polynomial functions by Stone–Weierstrass theorem. It follows that $(u_n)_n = (\exp({\rm i} h_n))_n = \exp({\rm i} (h_n)_n) \in \mathcal M^\omega(M)$ and $u = \exp({\rm i} h) = \exp({\rm i} (h_n)^\omega) = (\exp({\rm i} h_n))^\omega = (u_n)^\omega$. We next recall the construction of the Groh–Raynaud ultraproduct. For any Hilbert space $H$, define the [*ultraproduct Hilbert space*]{} $H_\omega$ as the completion/separation of $\ell^\infty(H)$ with respect to the semi-inner product given by $\langle (\xi_n)_n, (\eta_n)_n \rangle := \lim_{n\to \omega} \langle \xi_n, \eta_n \rangle_H$ for all $(\xi_n)_n, (\eta_n)_n \in \ell^\infty(H)$. We denote the image of $(\xi_n)_n\in \ell^\infty(H)$ by $(\xi_n)_\omega \in H_\omega$. Let $M\subset {\mathbf{B}}(H)$ be any von Neumann algebra. We define the unital $\ast$-representation $\pi^\omega : \ell^\infty(M) \to {\mathbf{B}}(H_\omega)$ by $$\pi^\omega((x_n)_n) (\xi_n)_\omega = ( x_n \xi_n)_\omega \quad \text{for all } (x_n)_n\in \ell^\infty({\mathbf{B}}(H)) \text{ and all } (\xi_n)_n \in \ell^\infty(H).$$ Let $(M, {\mathord{\text{\rm L}}}^2(M), J^M, \mathfrak P^M)$ be a standard form for $M$. The [*Groh–Raynaud ultraproduct*]{} $N := \prod^\omega M$ is the von Neumann algebra generated by $\pi^\omega(\ell^\infty(M))$. It is known that the inclusion $N \subset {\mathbf{B}}({\mathord{\text{\rm L}}}^2(M)_\omega)$ is in standard form with modular conjugation given by $J^N (\xi_n)_\omega := (J^M\xi_n)_\omega$ for all $(\xi_n)_\omega\in {\mathord{\text{\rm L}}}^2(M)_\omega$ (see [@Ra99 Corollary 3.9] and [@AH12 Theorem 3.18]). By [@AH12 Theorem 3.7], the Ocneanu ultraproduct is $\ast$-isomorphic to a corner of the Groh–Raynaud ultraproduct. More precisely, for any faithful state $\varphi \in M_\ast$, denote by $\xi_\varphi \in \mathfrak P^M$ the canonical representing vector. Then the isometry given by $$w_\varphi : {\mathord{\text{\rm L}}}^2(M^\omega) \to {\mathord{\text{\rm L}}}^2(M)_\omega : (x_n)^\omega \xi_{\varphi^\omega} \mapsto (x_n\xi_\varphi)_\omega$$ satisfies $w_\varphi^* N w_\varphi = M^\omega$. 
Define the ultraproduct state $\varphi_\omega = \langle \, \cdot \, (\xi_\varphi)_\omega, (\xi_\varphi)_\omega\rangle \in N_\ast$ and denote by $p \in N$ the support projection of $\varphi_\omega \in N_\ast$. We have $w_\varphi w_\varphi^* = pJ^N p J^N$. Then the condition $w_\varphi^* N w_\varphi = M^\omega$ implies that $pJ^NpJ^N \, N \, p J^NpJ^N\cong pNp\cong M^\omega$ so that the standard representation of $M^\omega$ is given by ${\mathord{\text{\rm L}}}^2(M^\omega) = pJ^N pJ^N \, {\mathord{\text{\rm L}}}^2(M)_\omega$ with modular conjugation $J^{M^\omega} = p J^N p$. \[lemma for AO in ultraproduct\] Let $B\subset M$ be any inclusion of $\sigma$-finite von Neumann algebras with faithful normal conditional expectation ${\mathord{\text{\rm E}}}_B : M \to B$. Denote by $e_B : {\mathord{\text{\rm L}}}^2(M) \to {\mathord{\text{\rm L}}}^2(B)$ the corresponding Jones projection. Denote by $N := \prod^\omega M$ the Groh–Raynaud ultraproduct. Let $\varphi \in M_\ast$ be any faithful state such that $\varphi\circ {\mathord{\text{\rm E}}}_B=\varphi$ and denote by $p \in N$ the support projection of $\varphi_\omega \in N_\ast$. Then $\pi^\omega( (e_B)_n )$ commutes with $p$ and $J^N$. Since $e_B$ commutes with $J^M$, $\pi^\omega( (e_B)_n )$ commutes with $J^N$. Denote by $\xi_\varphi \in \mathfrak P^M$ the canonical vector representing $\varphi \in M_\ast$. Since $e_B M e_B = B e_B$ and $e_B \xi_\varphi = \xi_\varphi$, we have $$\begin{aligned} \pi^\omega( (e_B)_n )J^N \pi^\omega(\ell^\infty(M))(\xi_\varphi)_\omega &=&J^N \pi^\omega( (e_B)_n ) \pi^\omega(\ell^\infty(M))\pi^\omega( (e_B)_n ) (\xi_\varphi)_\omega \\ &=& J^N \pi^\omega(\ell^\infty(B))\pi^\omega( (e_B)_n ) (\xi_\varphi)_\omega \\ &\subset& J^N \pi^\omega(\ell^\infty(M)) (\xi_\varphi)_\omega.\end{aligned}$$ Since $p$ is the projection onto the closure of $J^N \pi^\omega(\ell^\infty(M)) (\xi_\varphi)_\omega$, we obtain that $\pi^\omega( (e_B)_n )$ commutes with $p$.

Equivalence relations and von Neumann algebras {#equivalence-relations-and-von-neumann-algebras .unnumbered}
----------------------------------------------

Let $(X, \mu)$ be any standard probability space. A [*nonsingular equivalence relation*]{} $\mathcal R$ [*defined on*]{} $(X, \mu)$ is an equivalence relation $\mathcal R \subset X \times X$ which satisfies the following three conditions:

- $\mathcal R \subset X \times X$ is a Borel subset,

- $\mathcal R$ has countable classes and

- for every $\varphi \in [\mathcal R]$, we have $[\varphi_\ast \mu] = [\mu]$

where $[\mathcal R]$ denotes the [*full group*]{} of $\mathcal R$ consisting of all the Borel automorphisms $\varphi : X \to X$ such that ${\mathord{\text{\rm gr}}}(\varphi) \subset \mathcal R$. Following [@FM75], to any nonsingular equivalence relation $\mathcal R$ defined on a standard probability space $(X, \mu)$, one can associate a von Neumann algebra $M = {\mathord{\text{\rm L}}}(\mathcal R)$ which contains $A = {\mathord{\text{\rm L}}}^\infty(X)$ as a [*Cartan subalgebra*]{}, that is, $A \subset M$ is maximal abelian with expectation and the group of normalizing unitaries $\mathcal N_M(A) = \{u \in \mathcal U(M) : u A u^* = A\}$ generates $M$ as a von Neumann algebra.
When $\Gamma \curvearrowright (X, \mu)$ is a nonsingular Borel action of a countable discrete group on a standard probability space, we will denote by $\mathcal R(\Gamma \curvearrowright X)$ the nonsingular [*orbit equivalence relation*]{} defined by $$\mathcal R(\Gamma \curvearrowright X) = \{(x, \gamma x) \mid \gamma \in \Gamma, x \in X\}.$$ When the action $\Gamma \curvearrowright (X, \mu)$ is moreover [*essentially free*]{}, there is a canonical isomorphism of pairs of von Neumann algebras $$\left( {\mathord{\text{\rm L}}}^\infty(X) \subset {\mathord{\text{\rm L}}}(\mathcal R(\Gamma \curvearrowright X)) \right) \cong \left({\mathord{\text{\rm L}}}^\infty(X) \subset {\mathord{\text{\rm L}}}^\infty(X) \rtimes \Gamma \right).$$ For more information on nonsingular equivalence relations and their von Neumann algebras, we refer the reader to [@FM75]. Strongly ergodic actions and full factors {#strongly-ergodic-actions-and-full-factors .unnumbered} ----------------------------------------- We first recall the concept of [*strong ergodicity*]{} for group actions and equivalence relations. Let $(X, \mu)$ be any standard probability space. - Let $\Gamma$ be any countable discrete group and $\Gamma \curvearrowright (X, \mu)$ any ergodic nonsingular action. The action $\Gamma \curvearrowright (X, \mu)$ is said to be [*strongly ergodic*]{} if for any sequence $(C_n)_n$ of measurable subsets of $X$ such that $\lim_n \mu(\gamma C_n \triangle C_n) = 0$ for all $\gamma \in \Gamma$, we have $\lim_n \mu(C_n)(1 - \mu(C_n)) = 0$. - Let $\mathcal R$ be any ergodic nonsingular equivalence relation defined on $(X, \mu)$. The equivalence relation $\mathcal R$ is said to be [*strongly ergodic*]{} if for any sequence $(C_n)_n$ of measurable subsets of $X$ such that $\lim_n \mu(g C_n \triangle C_n) = 0$ for all $g \in [\mathcal R]$, we have $\lim_n \mu(C_n)(1 - \mu(C_n)) = 0$. Put $A = {\mathord{\text{\rm L}}}^\infty(X)$ and fix any nonprincipal ultrafilter $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$. Then the ergodic nonsingular action $\Gamma \curvearrowright (X, \mu)$ is strongly ergodic if and only if the ultraproduct action $\Gamma \curvearrowright A^\omega$ defined by $\gamma \cdot (a_n)^\omega = (\gamma \cdot a_n)^\omega$ is ergodic, that is, $(A^\omega)^\Gamma = {\mathbf{C}}1$. Likewise, the ergodic nonsingular equivalence relation $\mathcal R$ defined on $(X, \mu)$ is strongly ergodic if and only if ${\mathord{\text{\rm L}}}(\mathcal R)' \cap A^\omega = {\mathbf{C}}1$. We also have that the nonsingular action $\Gamma \curvearrowright (X, \mu)$ is strongly ergodic if and only if the nonsingular orbit equivalence relation $\mathcal R(\Gamma \curvearrowright X)$ is strongly ergodic. Following [@Co74], we say that a factor $M$ with separable predual is [*full*]{} if $M_\omega = {\mathbf{C}}1$ for some (or any) nonprincipal ultrafilter $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$. By [@AH12 Theorem 5.2], $M$ is full if and only if $M' \cap M^\omega = {\mathbf{C}}1$ for some (or any) nonprincipal ultrafilter $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$. Observe that for any ergodic nonsingular equivalence relation $\mathcal R$ defined on a standard probability space $(X, \mu)$, if ${\mathord{\text{\rm L}}}(\mathcal R)$ is full then $\mathcal R$ is strongly ergodic. Connes proved in [@Co74 Theorem 2.12] that factors of type ${\rm III_0}$ are never full. 
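To illustrate one implication in the ultraproduct reformulation of strong ergodicity recalled above, we include a routine sketch (the converse implication is similar in spirit but requires a standard level-set argument). Assume that ${\mathord{\text{\rm L}}}(\mathcal R)' \cap A^\omega = {\mathbf{C}}1$ and let $(C_n)_n$ be measurable subsets of $X$ such that $\lim_n \mu(g C_n \triangle C_n) = 0$ for all $g \in [\mathcal R]$. Since $A$ is abelian, the bounded sequence $(1_{C_n})_n$ defines a projection $p := (1_{C_n})^\omega \in A^\omega$ and, denoting by $u_g \in {\mathord{\text{\rm L}}}(\mathcal R)$ the canonical unitary implementing $g \in [\mathcal R]$, we have
$$u_g \, p \, u_g^* = (1_{g C_n})^\omega = (1_{C_n})^\omega = p \quad \text{since} \quad \|1_{g C_n} - 1_{C_n}\|_{{\mathord{\text{\rm L}}}^2(X, \mu)}^2 = \mu(g C_n \triangle C_n) \longrightarrow 0.$$
Since $A$ and the unitaries $u_g$, $g \in [\mathcal R]$, generate ${\mathord{\text{\rm L}}}(\mathcal R)$, we obtain $p \in {\mathord{\text{\rm L}}}(\mathcal R)' \cap A^\omega = {\mathbf{C}}1$, hence $p \in \{0, 1\}$ and $\lim_{n \to \omega} \mu(C_n)(1 - \mu(C_n)) = 0$. As $\omega$ is arbitrary, the sequence $(C_n)_n$ is trivial and $\mathcal R$ is strongly ergodic.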
Ueda showed in [@Ue00 Corollary 11] that ergodic nonsingular equivalence relations of type ${\rm III_0}$ are never strongly ergodic. We give a short proof of Ueda’s result. We refer to [@Co72] for the type classification of factors. Let $\mathcal R$ be any ergodic nonsingular equivalence relation defined on a standard probability space $(X, \mu)$. If $\mathcal R$ is of type ${\rm III_0}$, then $\mathcal R$ is not strongly ergodic. Put $A = {\mathord{\text{\rm L}}}^\infty(X)$ and $M = {\mathord{\text{\rm L}}}(\mathcal R)$. Assume that $\mathcal R$ is of type ${\rm III_0}$. Then $M$ is of type ${\rm III_0}$. Fix a nonprincipal ultrafilter $\omega$ on ${\mathbf{N}}$. Then $\mathcal Z(M^\omega) \neq {\mathbf{C}}1$ by [@AH12 Theorem 6.18]. Since $A^\omega$ is maximal abelian in $M^\omega$ by [@Po95 Theorem A.1.2], we have $${\mathbf{C}}1 \neq \mathcal Z(M^\omega) = (M^\omega)' \cap M^\omega =(M^\omega)' \cap A^\omega \subset M' \cap A^\omega.$$ Since $M' \cap A^\omega \neq {\mathbf{C}}1$, $\mathcal R$ is not strongly ergodic. Popa’s intertwining-by-bimodules {#popas-intertwining-by-bimodules .unnumbered} -------------------------------- In this subsection, we briefly recall Popa’s intertwining-by-bimodules [@Po01; @Po03]. In the present paper, we will need a generalization of Popa’s intertwining-by-bimodules to the framework of type ${\rm III}$ von Neumann algebras developed by the authors in [@HI15]. We will use the following terminology (see [@HI15 Definition 4.1]). \[definition intertwining\] Let $M$ be any $\sigma$-finite von Neumann algebra, $1_A$ and $1_B$ any nonzero projections in $M$, $A\subset 1_AM1_A$ and $B\subset 1_BM1_B$ any von Neumann subalgebras with faithful normal conditional expectations ${\mathord{\text{\rm E}}}_A : 1_A M 1_A \to A$ and ${\mathord{\text{\rm E}}}_B : 1_B M 1_B \to B$ respectively. We say that $A$ [*embeds with expectation into*]{} $B$ [*inside*]{} $M$ and write $A \preceq_M B$ if there exist projections $e \in A$ and $f \in B$, a nonzero partial isometry $v \in eMf$ and a unital normal $\ast$-homomorphism $\theta : eAe \to fBf$ such that the inclusion $\theta(eAe) \subset fBf$ is with expectation and $av = v \theta(a)$ for all $a \in eAe$. The main characterization of intertwining subalgebras we will use in this paper is the following result proven in [@HI15 Theorem 4.3]. \[intertwining for type III\] Keep the same notation as in Definition \[definition intertwining\] and moreover assume that $A$ is finite. Then the following conditions are equivalent. 1. $A \preceq_M B$. 2. There exists no net $(w_i)_{i \in I}$ of unitaries in $\mathcal U(A)$ such that ${\mathord{\text{\rm E}}}_{B}(b^*w_i a)\rightarrow 0$ in the $\sigma$-strong topology for all $a,b\in 1_AM1_B$. Bi-exactness and Ozawa’s condition (AO) {#AO} ======================================= Bi-exactness for discrete groups {#bi-exactness-for-discrete-groups .unnumbered} -------------------------------- Recall from [@Oz03] that a von Neumann algebra $M\subset {\mathbf{B}}(H)$ satisfies [*condition*]{} (AO) if there exist unital $\sigma$-weakly dense C$^{\ast}$-subalgebras $A\subset M$ and $B\subset M'$ such that $A$ is locally reflexive and the map $$\nu : A{\otimes_{\text{\rm alg}}}B\longrightarrow {\mathbf{B}}(H)/{\mathbf{K}}(H) : a\otimes b\mapsto ab+{\mathbf{K}}(H)$$ is continuous with respect to the minimal tensor norm. Recall that $A$ is [*locally reflexive*]{} or equivalently has [*property*]{} $C''$ (see e.g. 
[@BO08 Section 9]) if for any C$^{\ast}$-algebra $C$, the inclusion map $A^{\ast\ast}{\otimes_{\text{\rm alg}}}C\hookrightarrow (A{\otimes_{\text{\rm min}}}C)^{\ast\ast}$ is continuous with respect to the minimal tensor norm. In this case, any $\ast$-homomorphism $\pi\colon A{\otimes_{\text{\rm min}}}C \to {\mathbf{B}}(K)$ has an extension $\widetilde \pi : A^{**}{\otimes_{\text{\rm min}}}C \to {\mathbf{B}}(K)$ which is normal on $A^{**}\otimes {\mathbf{C}}1$ (since $\pi$ always has a canonical extension on $(A{\otimes_{\text{\rm min}}}C)^{**}$). We next recall the notion of [*bi-exactness*]{} for discrete groups which was introduced by Ozawa in [@Oz04] (using the terminology [*class*]{} $\mathcal S$) and intensively studied in [@BO08 Chapter 15]. Our definition is different from the original one, but it is equivalent to it and it is moreover adapted to the framework of discrete quantum groups [@Is13 Definition 3.2]. Let $\Gamma$ be any discrete group. We say that $\Gamma$ is [*bi-exact*]{} if there exists a $(\Gamma \times \Gamma)$-globally invariant unital ${\mathord{\text{\rm C}}}^*$-subalgebra $\mathcal{B} \subset \ell^{\infty}(\Gamma)$ such that the following two conditions are satisfied: - The algebra $\mathcal{B}$ contains $c_0(\Gamma)$ so that the quotient $\mathcal{B}_\infty:=\mathcal{B}/c_0(\Gamma)$ is well-defined. - The left translation action $\Gamma \curvearrowright \ell^\infty(\Gamma)$ induces an amenable action $\Gamma \curvearrowright \mathcal{B}_\infty$ and the right translation action $\ell^\infty(\Gamma) \curvearrowleft \Gamma$ induces the trivial action on $\mathcal{B}_\infty$. The class of bi-exact discrete groups includes amenable groups, free groups [@AO74], discrete subgroups of simple connected Lie groups of real rank one [@Sk88] and Gromov word-hyperbolic groups [@Oz03]. Observe that for any bi-exact discrete group $\Gamma$, the group von Neumann algebra ${\mathord{\text{\rm L}}}(\Gamma)$ satisfies condition (AO). We refer the reader to [@BO08 Chapter 15] for more information on bi-exact discrete groups. Ozawa’s condition (AO) in crossed product von Neumann algebras {#ozawas-condition-ao-in-crossed-product-von-neumann-algebras .unnumbered} -------------------------------------------------------------- In this subsection, we prove a relative version of Ozawa’s condition (AO) in the framework of crossed product von Neumann algebras. This result will be used in the proof of Theorem \[thmA\]. Let $\Gamma$ be any discrete group, $B \subset \mathcal B$ any inclusion of $\sigma$-finite von Neumann algebras and $\Gamma \curvearrowright \mathcal B$ any action that leaves globally invariant the subalgebra $B$. Denote by $M:= B\rtimes \Gamma$ and $\mathcal M = \mathcal B \rtimes \Gamma$ the corresponding crossed product von Neumann algebras, by ${\mathord{\text{\rm E}}}_{\mathcal B} : \mathcal M \to \mathcal B$ the canonical faithful normal conditional expectation and by $e_{\mathcal B} : {\mathord{\text{\rm L}}}^2(\mathcal M) \to {\mathord{\text{\rm L}}}^2(\mathcal B)$ the corresponding Jones projection. We use the notation and terminology of Section \[preliminaries\] for the standard forms $(\mathcal B, {\mathord{\text{\rm L}}}^2(\mathcal B), J^{\mathcal B}, \mathfrak P^{\mathcal B})$ of $\mathcal B$ and $(\mathcal M, {\mathord{\text{\rm L}}}^2(\mathcal M), J^{\mathcal M}, \mathfrak P^{\mathcal M})$ of $\mathcal M = \mathcal B \rtimes \Gamma$. 
We define a nondegenerate (and possibly nonunital) ${\mathord{\text{\rm C}}}^*$-algebra and its multiplier ${\mathord{\text{\rm C}}}^*$-algebra inside ${\mathbf{B}}({\mathord{\text{\rm L}}}^2(\mathcal M))$ by $$\begin{aligned} \mathcal{K_{\mathcal B}} &:= \mathrm{C}^* \left\{aJ^{\mathcal M}x J^{\mathcal M} e_{\mathcal B} b J^{\mathcal M}yJ^{\mathcal M}\mid a,b,x,y\in B\rtimes_{\rm red}\Gamma \right\} \subset {\mathbf{B}}({\mathord{\text{\rm L}}}^2(\mathcal M)) \\ \mathfrak{M}(\mathcal K_{\mathcal B})&:= \left\{ T \in {\mathbf{B}}({\mathord{\text{\rm L}}}^2(\mathcal M)) \mid T\mathcal K_{\mathcal B}\subset \mathcal K_{\mathcal B} \text{ and } \mathcal K_{\mathcal B} T \subset \mathcal K_{\mathcal B} \right\}.\end{aligned}$$ where $\mathrm{C}^* \left\{ \mathcal Y \right \} \subset \mathbf B({\mathord{\text{\rm L}}}^2(\mathcal M))$ denotes the ${\mathord{\text{\rm C}}}^*$-subalgebra of $\mathbf B({\mathord{\text{\rm L}}}^2(\mathcal M))$ generated by the subset $\mathcal Y \subset \mathbf B({\mathord{\text{\rm L}}}^2(\mathcal M))$. We record the following elementary lemma. \[lemma AO3\] We have $$\mathcal{K}_\mathcal{B} \subset {\mathbf{B}}({\mathord{\text{\rm L}}}^2(\mathcal{B})) {\otimes_{\text{\rm min}}}{\mathbf{K}}(\ell^2(\Gamma)).$$ Denote by $\sigma : \Gamma \curvearrowright \mathcal B$ the action and by $u : \Gamma \to \mathcal U({\mathord{\text{\rm L}}}^2(\mathcal B))$ the canonical unitary representation implementing the action $\sigma$. Recall that ${\mathord{\text{\rm L}}}^2(\mathcal M) = {\mathord{\text{\rm L}}}^2(\mathcal B) \otimes \ell^2(\Gamma)$. Regard $\mathcal M = \mathcal B\rtimes \Gamma$ as generated by $\pi_\sigma(b) = \sum_{h \in \Gamma}\sigma_{h^{-1}}(b) \otimes P_{{\mathbf{C}}\delta_h}$ for $b \in \mathcal B$ and $1 \otimes \lambda_g$ for $g \in \Gamma$ where $P_{{\mathbf{C}}\delta_h} : \ell^2(\Gamma) \to {\mathbf{C}}\delta_h$ is the orthogonal projection onto ${\mathbf{C}}\delta_h$. We have $J^{\mathcal M} (1\otimes \lambda_g) J^{\mathcal M} = u_g \otimes \rho_g $ for all $g\in \Gamma$. Let $\mathcal C\subset {\mathbf{B}}({\mathord{\text{\rm L}}}^2(\mathcal B))$ be the ${\mathord{\text{\rm C}}}^*$-algebra generated by $B$, $J^{\mathcal B} B J^{\mathcal B}$ and $u_g$ for all $g\in \Gamma$. We will show that $\mathcal{K}=\mathcal C{\otimes_{\text{\rm min}}}{\mathbf{K}}(\ell^2(\Gamma))$. Recall that $e_{\mathcal B}=1\otimes P_{{\mathbf{C}}\delta_e}$. For all $g,h\in \Gamma$, denote by $e_{g,h} : {\mathbf{C}}\delta_h \to {\mathbf{C}}\delta_g$ the partial isometry sending $\delta_h$ onto $\delta_g$. For all $a,b\in B$ and all $g,h,s,t\in \Gamma$, we have $$\begin{aligned} \pi_\sigma(a) (J^{\mathcal B} b J^{\mathcal B}\otimes 1) e_{\mathcal B} &= e_{\mathcal B} \pi_\sigma(a)(J^{\mathcal B} b J^{\mathcal B}\otimes 1) = aJ^{\mathcal B} b J^{\mathcal B} \otimes P_{{\mathbf{C}}\delta_e} \\ (1\otimes \lambda_g) (u_s\otimes \rho_s) e_{\mathcal B} (1\otimes \lambda_h)(u_t\otimes \rho_t) &= u_{st} \otimes \lambda_g\rho_{s}P_{{\mathbf{C}}\delta_e} \lambda_h\rho_{t}=u_{st}\otimes e_{gs^{-1}, h^{-1}t}.\end{aligned}$$ We then have $$\begin{aligned} \mathcal K_{\mathcal B}&=\mathrm{C}^* \left\{aJ^{\mathcal M}x J^{\mathcal M} e_{\mathcal B} b J^{\mathcal M}yJ^{\mathcal M}\mid a,b,x,y\in B\rtimes_{\rm red}\Gamma \right\} \\ &=\mathrm{C}^* \left\{aJ^{\mathcal B}b J^{\mathcal B} u_g \otimes e_{s,t} \mid a,b\in B , g,s,t\in \Gamma \right\}\\ &=\mathcal C\otimes_{\rm min}{\mathbf{K}}(\ell^2(\Gamma)).\end{aligned}$$ This finishes the proof of Lemma \[lemma AO3\]. 
Consider now the following unital $\ast$-homomorphism: $$\nu_{\mathcal B} : (B\rtimes_{\rm red}\Gamma) \otimes_{\rm alg} J^{\mathcal M}(B\rtimes_{\rm red} \Gamma)J^{\mathcal M} \rightarrow \mathfrak{M}(\mathcal{K}_{\mathcal B})/\mathcal{K}_{\mathcal B} : a\otimes J^{\mathcal M}b J^{\mathcal M} \mapsto a \, J^{\mathcal M}bJ^{\mathcal M}+\mathcal{K}_{\mathcal B}.$$ Ozawa proved in [@Oz04 Proposition 4.2] that when $\Gamma$ is bi-exact and $B = \mathcal B$ is finite and amenable, the map $\nu_{\mathcal B}$ is continuous with respect to the minimal tensor norm. This is nothing but a relative version of the condition (AO) in the framework of crossed product von Neumann algebras. Observe that when $B= \mathcal B = {\mathbf{C}}1$, continuity of $\nu_{\mathcal B}$ with respect to the minimal tensor norm implies that ${\mathord{\text{\rm L}}}(\Gamma)$ satisfies condition (AO). Since $\mathcal{K}_{\mathcal{B}}$ is the smallest ${\mathord{\text{\rm C}}}^*$-algebra containing $1\otimes c_0(\Gamma)$ and such that its multiplier algebra contains $B\rtimes_{\rm red}\Gamma$ and $J^{\mathcal M}(B\rtimes_{\rm red} \Gamma)J^{\mathcal M}$, we can easily generalize [@Oz04 Proposition 4.2] as follows. \[AO in crossed product\] Keep the same setting as above and assume that $\Gamma$ is bi-exact and $B$ is amenable. Then the map $$\nu_{\mathcal B} : (B\rtimes_{\rm red}\Gamma) \otimes_{\rm alg} J^{\mathcal M}(B\rtimes_{\rm red} \Gamma)J^{\mathcal M} \rightarrow \mathfrak{M}(\mathcal{K}_{\mathcal B})/\mathcal{K}_{\mathcal B} : \ a\otimes J^{\mathcal M}b J^{\mathcal M} \mapsto a \, J^{\mathcal M}bJ^{\mathcal M}+\mathcal{K}_{\mathcal B}$$ is well-defined and continuous with respect to the minimal tensor norm. As in the proof of Lemma \[lemma AO3\], regard $\mathcal M = \mathcal B \rtimes \Gamma$ as generated by $\pi_\sigma(\mathcal B)$ and $(1 \otimes \lambda)(\Gamma)$. Since $B$ is amenable (i.e. semidiscrete), the map $$\pi_\sigma(B) {\otimes_{\text{\rm alg}}}J^{\mathcal M} \pi_\sigma(B) J^{\mathcal M} \to \mathbf B({\mathord{\text{\rm L}}}^2(\mathcal M)) : \pi_\sigma(a)\otimes J^{\mathcal M} \pi_\sigma(b) J^{\mathcal M} \mapsto \pi_\sigma(a) \, J^{\mathcal M} \pi_\sigma(b) J^{\mathcal M}$$ is continuous with respect to the minimal tensor norm. Then the proof of [@Oz04 Proposition 4.2] applies [*mutatis mutandis*]{} to show that the map $\nu_{\mathcal B}$ is continuous with respect to the minimal tensor norm. We will apply Proposition \[AO in crossed product\] in Theorem \[theorem for thmA 1\] (in the case when $\mathcal B = B$) and in Theorem \[theorem for thmA 2\] (in the general case). Ozawa’s condition (AO) in ultraproduct von Neumann algebras {#ozawas-condition-ao-in-ultraproduct-von-neumann-algebras .unnumbered} ----------------------------------------------------------- In this subsection, we prove a version of Ozawa’s condition (AO) in the framework of ultraproduct von Neumann algebras. Although we will not use this result in this paper, we nevertheless mention it since we believe it is interesting in its own right. We keep the same notation as in the previous subsection and we moreover assume that $B = \mathcal B$. Let $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$ be any nonprincipal ultrafilter. 
Denote by $(M^\omega, {\mathord{\text{\rm L}}}^2(M^\omega), J^{M^\omega}, \mathfrak P^{M^\omega})$ a standard form for $M^\omega$ and by $e_{B^\omega} : {\mathord{\text{\rm L}}}^2(M^\omega) \to {\mathord{\text{\rm L}}}^2(B^\omega)$ the Jones projection corresponding to the inclusion $B^\omega\subset M^\omega$. We define a (possibly degenerate and nonunital) C$^*$-subalgebra $\mathcal{K}_\omega$ and its multiplier algebra $\mathfrak{M}(\mathcal{K}_\omega)$ inside ${\mathbf{B}}({\mathord{\text{\rm L}}}^2(M^\omega))$ by $$\begin{aligned} \mathcal{K}_\omega &:= \mathrm{C}^* \left\{aJ^{M^\omega}xJ^{M^\omega} e_{B^\omega} b J^{M^\omega}yJ^{M^\omega} \mid a,b,x,y\in B\rtimes_{\rm red}\Gamma \right \} \subset {\mathbf{B}}({\mathord{\text{\rm L}}}^2(M^\omega)) \\ \mathfrak{M}(\mathcal{K}_\omega) &:= \left \{ T \in {\mathbf{B}}({\mathord{\text{\rm L}}}^2(M^\omega))\mid T \mathcal{K}_\omega \subset \mathcal{K}_\omega \text{ and } \mathcal{K}_\omega T \subset \mathcal{K}_\omega \right\}.\end{aligned}$$ Recall from Proposition \[AO in crossed product\] (in the case when $B=\mathcal{B}$ with $\mathcal{K}:=\mathcal{K}_{\mathcal{B}}$ which is exactly [@Oz04 Proposition 4.2]) that the map $$\nu \colon (B\rtimes_{\rm red}\Gamma) \otimes_{\rm alg} J^M(B\rtimes_{\rm red} \Gamma)J^M \rightarrow \mathfrak{M}(\mathcal{K})/\mathcal{K} : a\otimes J^M b J^M \mapsto a \, J^MbJ^M+\mathcal{K}$$ is continuous with respect to the minimal tensor norm. We now state a version of Ozawa’s condition (AO) in the ultraproduct representation ${\mathord{\text{\rm L}}}^2(M^\omega)$. \[AO in ultraproduct\] Keep the same setting as above and assume that $\Gamma$ is bi-exact and $B$ is amenable. Then the map $$\nu_\omega : (B\rtimes_{\rm red}\Gamma) \otimes_{\rm alg} J^{M^\omega}(B\rtimes_{\rm red} \Gamma)J^{M^\omega} \rightarrow \mathfrak{M}(\mathcal{K}_\omega)/\mathcal{K}_\omega : a\otimes J^{M^\omega}bJ^{M^\omega} \mapsto a \, J^{M^\omega} bJ^{M^\omega}+\mathcal{K}_\omega$$ is well-defined and continuous with respect to the minimal tensor norm. Put $$\begin{aligned} C &:=\mathrm{C}^* \left\{ B{\rtimes_{\text{\rm red}}}\Gamma, J^M(B{\rtimes_{\text{\rm red}}}\Gamma)J^M \right\} \subset {\mathbf{B}}({\mathord{\text{\rm L}}}^2(M))\\ C_\omega &:=\mathrm{C}^* \left\{ B{\rtimes_{\text{\rm red}}}\Gamma, J^{M^\omega}(B{\rtimes_{\text{\rm red}}}\Gamma)J^{M^\omega} \right\} \subset {\mathbf{B}}({\mathord{\text{\rm L}}}^2(M^\omega)).\end{aligned}$$ Observe that $C + \mathcal{K}$ (resp. $C_\omega + \mathcal{K}_\omega$) is a C$^*$-algebra since it is the sum of a ${\mathord{\text{\rm C}}}^*$-subalgebra and an ideal in $\mathfrak{M}(\mathcal{K})$ (resp. $\mathfrak{M}(\mathcal{K}_\omega)$). There is a $\ast$-homomorphism $\theta : C + \mathcal{K} \to {\mathbf{B}}({\mathord{\text{\rm L}}}^2(M^\omega))$ such that $\theta(x)=x$ and $\theta(J^M y J^M)= J^{M^\omega}y J^{M^\omega}$ for all $x,y\in B{\rtimes_{\text{\rm red}}}\Gamma$ and $\theta(e_B)=e_{B^\omega}$. Indeed, fix any faithful state $\varphi \in M_\ast$ such that $\varphi \circ {\mathord{\text{\rm E}}}_B = \varphi$ and denote by $p$ the support projection in $N =\prod^\omega M$ of the ultraproduct state $\varphi_\omega \in N_\ast$. By Lemma \[lemma for AO in ultraproduct\], $\pi^\omega((e_B)_n)$ commutes with $p$ and $J^N$ and hence $\pi^\omega((e_B)_n)$ commutes with $\widetilde{p}:= pJ^N p J^N$. Since $\widetilde{p}$ commutes with $\pi^\omega(M)$ and $\pi^\omega(J^M M J^M) = J^N \pi^\omega(M) J^N$, $\widetilde{p}$ commutes with $\pi^\omega(C + \mathcal{K})$. 
Recall that $\widetilde p N \widetilde p \cong p N p \cong M^\omega$ and $\widetilde p {\mathord{\text{\rm L}}}^2(M)_\omega = {\mathord{\text{\rm L}}}^2(M^\omega)$. Then the $\ast$-homomorphism $$\theta : C + \mathcal{K} \to {\mathbf{B}}({\mathord{\text{\rm L}}}^2(M^\omega)) : T \mapsto \widetilde p \pi^\omega(T) \widetilde p$$ satisfies all the conditions of the Claim. Since $\theta(C)=C_\omega$ and $\theta(\mathcal{K}) = \mathcal{K}_\omega$, $\theta$ induces a $\ast$-homomorphism $$\widetilde{\theta} : (C + \mathcal{K})/\mathcal{K} \to (C_\omega + \mathcal{K}_\omega)/\mathcal{K}_\omega \subset \mathfrak{M}(\mathcal{K}_\omega)/\mathcal{K}_\omega.$$ Denote by $\iota : (B\rtimes_{\rm red}\Gamma) \otimes_{\min} J^{M^\omega}(B\rtimes_{\rm red} \Gamma)J^{M^\omega} \to (B\rtimes_{\rm red}\Gamma) \otimes_{\min} J^{M}(B\rtimes_{\rm red} \Gamma)J^{M}$ the tautological $\ast$-isomorphism. Then the composition map $$\nu_\omega = \widetilde{\theta}\circ \nu \circ \iota : (B\rtimes_{\rm red}\Gamma) \otimes_{{\mathord{\text{\rm alg}}}} J^{M^\omega}(B\rtimes_{\rm red} \Gamma)J^{M^\omega} \rightarrow \mathfrak{M}(\mathcal{K}_\omega)/\mathcal{K}_\omega$$ is continuous with respect to the minimal tensor norm. Weak exactness for ${\mathord{\text{\rm C}}}^*$-algebras {#weak-exactness-for-mathordtextrm-c-algebras .unnumbered} -------------------------------------------------------- To obtain structural results for von Neumann algebras $M$ satisfying Ozawa’s (relative) condition (AO) [@Oz03; @Oz04], it is usually necessary to impose [*local reflexivity*]{} or [*exactness*]{} of the given unital $\sigma$-weakly dense ${\mathord{\text{\rm C}}}^*$-algebra in $M$. Observe that in the setting of Proposition \[AO in crossed product\], the reduced crossed product ${\mathord{\text{\rm C}}}^*$-algebra $B{\rtimes_{\text{\rm red}}}\Gamma$ need not be locally reflexive since it contains the von Neumann algebra $B$. To avoid this difficulty, Ozawa assumed in [@Oz04 Theorem 4.6] that $\Gamma$ is exact and $B$ is [*abelian*]{} so that $B{\rtimes_{\text{\rm red}}}\Gamma$ is exact and hence locally reflexive. In [@Is12], the second named author introduced a notion of [*weak exactness*]{} for ${\mathord{\text{\rm C}}}^*$-algebras and could settle this problem. Namely, he generalized [@Oz04 Theorem 4.6] under the assumptions that $\Gamma$ is exact and $B$ is [*amenable*]{} (not necessarily abelian). The main idea behind this generalization was to use [*some*]{} exactness (or equivalently property $C'$) of the opposite algebra $(B{\rtimes_{\text{\rm red}}}\Gamma)^{\rm op}$, instead of local reflexivity of $B{\rtimes_{\text{\rm red}}}\Gamma$. In the present paper, to study more general cases, we will make use of this notion of weak exactness for ${\mathord{\text{\rm C}}}^*$-algebras. Recall from [@Is12 Theorem 3.1.3(1)(ii)] that for an inclusion of a unital ${\mathord{\text{\rm C}}}^*$-algebra $A\subset M$ in a von Neumann algebra $M$, we say that *$A$ is weakly exact in $M$* if for any unital ${\mathord{\text{\rm C}}}^*$-algebra $C$, any $\ast$-homomorphism $\pi\colon A{\otimes_{\text{\rm min}}}C \to {\mathbf{B}}(K)$ which is $\sigma$-weakly continuous on $A\otimes {\mathbf{C}}1$ has an extension $\widetilde \pi : A{\otimes_{\text{\rm min}}}C^{**} \to {\mathbf{B}}(K)$ which is normal on ${\mathbf{C}}1 \otimes C^{**}$. In the case when $A=M$, we simply say that $M$ is weakly exact. Here we recall the following fundamental fact. 
\[weakly exact for crossed product\] Let $\Gamma$ be any exact discrete group, $B$ any $\sigma$-finite amenable (and hence weakly exact) von Neumann algebra and $\Gamma \curvearrowright B$ any action. Then the reduced crossed product ${\mathord{\text{\rm C}}}^*$-algebra $B{\rtimes_{\text{\rm red}}}\Gamma$ is weakly exact in $B \rtimes \Gamma$. If moreover $\Gamma$ is countable and $B$ has separable predual, then $B\rtimes \Gamma$ is weakly exact. Using this property, we prove an important lemma, which is a variant of [[@Oz03 Lemma 5]]{} (see also [@BO08 Proposition 15.1.6] and [@Is12 Lemma 5.1.1]). The proof is essentially the same as the one of [@BO08 Proposition 15.1.6] but we nevertheless include it for the reader’s convenience. \[lemma for AO\] Let $M\subset \mathcal{M}$ be any inclusion of $\sigma$-finite von Neumann algebras with expectation and $(\mathcal{M}, {\mathord{\text{\rm L}}}^2(\mathcal{M}), J^{\mathcal M}, \mathfrak P^{\mathcal M})$ a standard form for $\mathcal M$. Let $C\subset M$ be any unital $\sigma$-weakly dense ${\mathord{\text{\rm C}}}^*$-subalgebra, $p\in \mathcal{M}$ any nonzero projection and $\varphi : M \to p\mathcal{M}p$ any normal ucp map. We will use the identification $p \mathbf B({\mathord{\text{\rm L}}}^2(\mathcal M))p = \mathbf B(p {\mathord{\text{\rm L}}}^2(\mathcal M))$. Assume that the following two conditions hold: - The map $$\Phi : C {\otimes_{\text{\rm alg}}}J^{\mathcal M}CJ^{\mathcal M} \to {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M})) : \sum_{i=1}^n x_i \otimes J^{\mathcal M}y_iJ^{\mathcal M} \mapsto \sum_{i=1}^n \varphi(x_i) \, J^{\mathcal M}y_iJ^{\mathcal M}p$$ is continuous with respect to the minimal tensor norm. - The ${\mathord{\text{\rm C}}}^*$-algebra $C$ is locally reflexive or $C$ is weakly exact in $M$. Then the ucp map $\varphi : M \to p\mathcal{M}p$ has a ucp extension $\widetilde{\varphi}: {\mathbf{B}}({\mathord{\text{\rm L}}}^2(\mathcal{M})) \to(J^{\mathcal M}CJ^{\mathcal M}p)'\cap {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M}))$. To simplify the notation, we will write $J = J^{\mathcal M}$. Observe that $\Phi = \nu \circ (\varphi \otimes {\text{\rm id}}_{JCJ})$ where $\nu : p\mathcal{M}p {\otimes_{\text{\rm alg}}}JCJ \to \mathbf B(p {\mathord{\text{\rm L}}}^2(\mathcal M))$ is the multiplication map. We first prove the following result. The ucp map $\Phi : C \otimes_{\min} JCJ \to {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M}))$ can be extended to a ucp map $\widetilde \Phi : M \otimes_{\min} JCJ \to {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M}))$ which is normal on $M \otimes {\mathbf{C}}1$. In particular, we have $\widetilde \Phi(x \otimes 1) = \varphi(x)$ for all $x \in M$. Indeed, let $(\pi, V, K)$ be a [*minimal*]{} Stinespring dilation for $\Phi : C \otimes_{\min} JCJ \to {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M}))$, that is, $\pi : C{\otimes_{\text{\rm min}}}JCJ\to {\mathbf{B}}(K)$ is a unital $\ast$-representation and $V : p{\mathord{\text{\rm L}}}^2(\mathcal{M})\to K$ is an isometry such that the subspace $\pi(C \otimes_{\min} JCJ)V p {\mathord{\text{\rm L}}}^2(\mathcal M)$ is dense in $K$ and $\Phi(x) = V^* \pi(x) V$ for all $x \in C \otimes_{\min} JCJ$. By minimality of $(\pi, V, K)$ and since $\Phi$ is $\sigma$-weakly continuous on $C \otimes {\mathbf{C}}1$ (resp. ${\mathbf{C}}1 \otimes JCJ$), we have that $\pi$ is also $\sigma$-weakly continuous on $C \otimes {\mathbf{C}}1$ (resp. ${\mathbf{C}}1 \otimes JCJ$). 
Indeed, it suffices to notice that for all $c_1, c_2 \in C$, all $x, y \in C \otimes_{\min} JCJ$ and all $\xi , \eta \in p {\mathord{\text{\rm L}}}^2(\mathcal M)$, we have $$\begin{aligned} \langle \pi (c_1 \otimes Jc_2J) \, \pi(x)V \xi, \pi(y)V\eta\rangle_K &= \langle V^* \pi(y^*(c_1 \otimes Jc_2J)x) V \xi , \eta\rangle_K \\ &= \langle \Phi(y^*(c_1 \otimes Jc_2J)x) \xi, \eta\rangle_K.\end{aligned}$$ Since $C$ is assumed to be locally reflexive or weakly exact in $M$ (which is equivalent to saying that $JCJ$ is weakly exact in $JMJ$), the unital $\ast$-homomorphism $\pi : C{\otimes_{\text{\rm min}}}JCJ\to {\mathbf{B}}(K)$ always has an extension $\widetilde{\pi} : C^{**}{\otimes_{\text{\rm min}}}JCJ \to \mathbf B(K)$ which is normal on $C^{**} \otimes {\mathbf{C}}1$. Observe that we do not need $\sigma$-weak continuity on ${\mathbf{C}}1\otimes JCJ$ when $C$ is locally reflexive. Let $z\in C^{**}$ be the central projection such that $zC^{**} = M$ canonically and let $z_i\in C$ be a bounded net converging to $z$ in the $\sigma$-weak topology in $C^{**}$. Observe that $z_i \to 1_M$ $\sigma$-weakly in $M$ and recall that $\widetilde{\pi}$ (resp. $\pi$) is $\sigma$-weakly continuous on $C^{**}\otimes {\mathbf{C}}1 $ (resp. $C\otimes {\mathbf{C}}1 \subset M\otimes {\mathbf{C}}1$). We then have that $$\widetilde{\pi}(z\otimes 1) = \lim_i \pi(z_i \otimes 1) = \pi(1_M \otimes 1)=1$$ and hence $\widetilde{\pi}((1-z)\otimes 1)=0$. Since $\widetilde{\pi}$ is a $\ast$-homomorphism, it satisfies $\widetilde{\pi}((z\otimes 1) x )= \widetilde{\pi}(x)$ for all $x\in C^{**}{\otimes_{\text{\rm min}}}JCJ$. In particular, we have $\widetilde \pi ((z \otimes 1) \, \cdot \,)|_{C \otimes_{\min} JCJ} = \pi$. Using moreover the identification $M \otimes_{\min} JCJ = z C^{**} \otimes_{\min} JCJ$, we obtain that the unital $\ast$-homomorphism $\widetilde \pi((z \otimes 1) \, \cdot \, ) : M \otimes_{\min} JCJ \to \mathbf B(K)$ is an extension of $\pi : C \otimes_{\min} JCJ \to \mathbf B(K)$ which is normal on $M \otimes {\mathbf{C}}1$. Therefore the ucp map $\widetilde \Phi = {\mathord{\text{\rm Ad}}}(V^*) \circ \widetilde \pi((z \otimes 1) \, \cdot \, ) : M \otimes_{\min} JCJ \to \mathbf B(p {\mathord{\text{\rm L}}}^2(\mathcal M))$ is an extension of $\Phi : C \otimes_{\min} JCJ \to {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M}))$ which is normal on $M \otimes {\mathbf{C}}1$. In particular, we have $\widetilde \Phi(x \otimes 1) = \varphi(x)$ for all $x \in M$. We next apply Arveson’s extension theorem to the ucp map $\widetilde \Phi : M \otimes_{\min} JCJ \to {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M}))$ and we obtain a ucp extension map that we still denote by $\widetilde \Phi : {\mathbf{B}}({\mathord{\text{\rm L}}}^2(\mathcal{M})){\otimes_{\text{\rm min}}}JCJ \to {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M}))$. Since $\widetilde \Phi |_{{\mathbf{C}}1\otimes JCJ} : {\mathbf{C}}1 \otimes JCJ \to \mathbf B(p {\mathord{\text{\rm L}}}^2(\mathcal M)) : 1 \otimes JxJ \mapsto JxJp$ is a unital $\ast$-homomorphism, ${\mathbf{C}}1\otimes JCJ$ is contained in the multiplicative domain of $\widetilde \Phi$ (see e.g. [@BO08 Section 1.5]). 
Therefore, for all $u\in \mathcal{U}(C)$ and all $x\in {\mathbf{B}}({\mathord{\text{\rm L}}}^2(\mathcal{M}))$, we have $$\widetilde \Phi(x\otimes 1) \, JuJ p = \widetilde \Phi(x\otimes 1)\widetilde \Phi(1\otimes JuJ)= \widetilde \Phi(x\otimes JuJ)=\widetilde \Phi(1\otimes JuJ)\widetilde \Phi(x\otimes 1)=JuJp \, \widetilde \Phi(x\otimes 1)$$ and hence $\widetilde \Phi({\mathbf{B}}({\mathord{\text{\rm L}}}^2(\mathcal{M}))\otimes 1)\subset (JCJp)' \cap {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M}))$. Thus, $\widetilde{\varphi}:=\widetilde \Phi(\, \cdot \, \otimes 1) : {\mathbf{B}}({\mathord{\text{\rm L}}}^2(\mathcal{M})) \to(JCJp)'\cap {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M}))$ is the desired ucp extension map. Proofs of Theorem \[thmA\] and Corollary \[corB\] {#section-thmA} ================================================= We first prove two intermediate results, namely Theorems \[theorem for thmA 1\] and \[theorem for thmA 2\], from which we will deduce Theorem \[thmA\]. While these two results are independent from each other, their proofs are in fact very similar and use Ozawa’s condition (AO) for crossed product von Neumann algebras from Section \[AO\]. Theorem \[theorem for thmA 1\] below is a spectral gap rigidity result for subalgebras with expectation $N \subset M$ of crossed product von Neumann algebras $M = B \rtimes \Gamma$ arising from arbitrary actions of bi-exact discrete groups on amenable von Neumann algebras. \[theorem for thmA 1\] Let $\Gamma$ be any bi-exact discrete group, $B$ any amenable $\sigma$-finite von Neumann algebra and $\Gamma \curvearrowright B$ any action. Put $M:=B\rtimes \Gamma$. Let $p\in M$ be any nonzero projection and $N\subset pMp$ any von Neumann subalgebra with expectation. Let $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$ be any nonprincipal ultrafilter. Then at least one of the following conditions holds true. - The von Neumann algebra $N$ has a nonzero amenable direct summand. - We have $N'\cap pM^\omega p \subset p (B^\omega \rtimes \Gamma) p$. Assume that $N' \cap pM^\omega p \not \subset p(B^\omega \rtimes \Gamma)p$. Let $Y \in N' \cap pM^\omega p$ be such that $Y \notin p(B^\omega \rtimes \Gamma)p$. Up to replacing $Y$ by $Y - {\mathord{\text{\rm E}}}_{B^\omega \rtimes \Gamma}(Y) \neq 0$ which still lies in $N' \cap pM^\omega p$, we may assume that $Y \in N' \cap p M^\omega p$, $Y \neq 0$ and ${\mathord{\text{\rm E}}}_{B^\omega \rtimes \Gamma}(Y) = 0$. Put $y = {\mathord{\text{\rm E}}}_\omega(Y^*Y) \in (N' \cap pMp)^+$. Define the nonzero spectral projection $p_0 := \mathbf 1_{[\frac12\|y\|_\infty, \|y\|_\infty]}(y) \in N' \cap pMp$ and put $c := (yp_0)^{-1/2} \in N' \cap pMp$. We have $${\mathord{\text{\rm E}}}_\omega ((Yc)^*(Yc)) = {\mathord{\text{\rm E}}}_\omega(c \, Y^*Y \, c) = c \, {\mathord{\text{\rm E}}}_{\omega}(Y^*Y) \, c = c \, y \, c = p_0.$$ Up to replacing $Y$ by $Yc$ which still lies in $N' \cap pM^\omega p$, we may assume that $Y \in N' \cap p M^\omega p$, $Y \neq 0$, ${\mathord{\text{\rm E}}}_{B^\omega \rtimes \Gamma}(Y) = 0$ and ${\mathord{\text{\rm E}}}_\omega(Y^*Y) = p_0$. Write $Y = (y_n)^\omega$ for some $(y_n)_n \in \mathcal M^\omega(M)$. Observe that $\sigma\text{-weak} \lim_{n \to \omega} y_n^* y_n = {\mathord{\text{\rm E}}}_\omega(Y^*Y) = p_0$. Denote by $(M, {\mathord{\text{\rm L}}}^2(M), J^M, \mathfrak P^M)$ a standard form for $M = B \rtimes \Gamma$ as in Section \[preliminaries\]. To further simplify the notation, we will write $J = J^M$ and $\mathfrak P = \mathfrak P^M$. 
Define the cp map $$\Psi : \mathbf B({\mathord{\text{\rm L}}}^2(M)) \to \mathbf B({\mathord{\text{\rm L}}}^2(M)) : T \mapsto \sigma\text{-weak} \lim_{n \to \omega} y_n^* T y_n.$$ Observe that $\Psi(1) = p_0$. Since $\Psi$ is a cp map and $\Psi(1) = p_0$ is a projection, we have $\Psi(T) = \Psi(1) \Psi(T) \Psi(1) = p_0 \Psi(T) p_0$ for every $T \in \mathbf B({\mathord{\text{\rm L}}}^2(M))$ and hence $\Psi(\mathbf B({\mathord{\text{\rm L}}}^2(M))) \subset \mathbf B(p_0{\mathord{\text{\rm L}}}^2(M))$ using the identification $p_0 \mathbf B({\mathord{\text{\rm L}}}^2(M)) p_0 = \mathbf B(p_0 {\mathord{\text{\rm L}}}^2(M))$. We will then regard $\Psi : \mathbf B({\mathord{\text{\rm L}}}^2(M)) \to \mathbf B(p_0{\mathord{\text{\rm L}}}^2(M))$ as a ucp map. Observe that $\Psi(x) = {\mathord{\text{\rm E}}}_\omega(Y^*x Y)$ for all $x \in M$ and hence $\Psi(M) \subset p_0Mp_0$ and $\Psi |_M$ is normal. Moreover, observe that $\Psi |_N : N \to \mathbf B(p_0{\mathord{\text{\rm L}}}^2(M)) : x \mapsto xp_0$ and $\Psi |_{JMJ} : {JMJ} \to \mathbf B(p_0{\mathord{\text{\rm L}}}^2(M)) : JxJ \mapsto JxJ p_0$ are unital $\ast$-homomorphisms. We will denote by $\psi := \Psi |_M : M \to p_0Mp_0 : x \mapsto \Psi(x)$. Let $\mathcal K_B$ as in Proposition \[AO in crossed product\] (for $\mathcal B = B$). For all $a, b \in M$, we have $${\mathord{\text{\rm E}}}_{B^\omega}(b^* Y a) = {\mathord{\text{\rm E}}}_{B^\omega}( {\mathord{\text{\rm E}}}_{B^\omega \rtimes \Gamma}(b^* Y a)) = {\mathord{\text{\rm E}}}_{B^\omega}(b^* {\mathord{\text{\rm E}}}_{B^\omega \rtimes \Gamma}( Y) a) = 0.$$ Since $0 = {\mathord{\text{\rm E}}}_{B^\omega}(b^* Y a) = ({\mathord{\text{\rm E}}}_B(b^* y_n a))^\omega$, we obtain that ${\mathord{\text{\rm E}}}_B(b^* y_n a) \to 0$ $\sigma$-strongly as $n \to \omega$. Choose any cyclic unit vector $\xi \in \mathfrak P$ such that $e_B \xi = \xi$. For all $a, b, c, d \in M$, we have $$\begin{aligned} \left|\langle \Psi(a e_B b) \, c\xi , d\xi \rangle_{{\mathord{\text{\rm L}}}^2(M)} \right| &= \lim_{n \to \omega} \left|\langle y_n^* a e_B b y_n \, c \xi, d\xi \rangle_{{\mathord{\text{\rm L}}}^2(M)} \right| \\ &= \lim_{n \to \omega} \left| \langle e_B b y_n c \xi, e_B a^*y_nd\xi \rangle_{{\mathord{\text{\rm L}}}^2(M)} \right| \\ &= \lim_{n \to \omega}\left| \langle e_B \, b y_n c \, e_B \xi, e_B \, a^*y_nd \, e_B\xi \rangle_{{\mathord{\text{\rm L}}}^2(M)} \right| \\ &= \lim_{n \to \omega} \left| \langle {\mathord{\text{\rm E}}}_B( b y_n c) e_B\xi, {\mathord{\text{\rm E}}}_B( a^*y_nd) e_B\xi \rangle_{{\mathord{\text{\rm L}}}^2(M)} \right| \\ &= \lim_{n \to \omega} \left| \langle {\mathord{\text{\rm E}}}_B( b y_n c) \xi, {\mathord{\text{\rm E}}}_B( a^*y_nd) \xi \rangle_{{\mathord{\text{\rm L}}}^2(M)} \right| \\ & \leq \lim_{n \to \omega} \| {\mathord{\text{\rm E}}}_B( b y_n c) \xi \|_{{\mathord{\text{\rm L}}}^2(M)} \|{\mathord{\text{\rm E}}}_B( a^*y_nd) \xi\|_{{\mathord{\text{\rm L}}}^2(M)} \\ &= 0.\end{aligned}$$ This implies that $\Psi(a e_B b) = 0$. By construction, we have $M = B \rtimes \Gamma$ and hence $e_B$ corresponds to the projection $1 \otimes P_{{\mathbf{C}}\delta_e}$. Taking $a = \lambda_g$ and $b = \lambda_h$ for $g, h \in \Gamma$, we then obtain $\Psi \left({\mathbf{C}}1 \otimes \mathbf K(\ell^2(\Gamma)) \right) = 0$. Since $\Psi$ is a ucp map, we obtain $\Psi \left(\mathbf B({\mathord{\text{\rm L}}}^2(B)) \otimes_{\min} \mathbf K(\ell^2(\Gamma)) \right) = 0$ and hence $\Psi(\mathcal K_B) = 0$ using Lemma \[lemma AO3\]. 
Define the ucp map $\overline \Psi : \mathfrak M(\mathcal{K}_B)/\mathcal{K}_B \to \mathbf B(p_0{\mathord{\text{\rm L}}}^2(M)) : a + \mathcal K_B \mapsto \Psi(a)$. Using Proposition \[AO in crossed product\] in the case when $B = \mathcal B$, we may then define the ucp composition map $$\Phi = \overline \Psi \circ \nu : (B {\rtimes_{\text{\rm red}}}\Gamma) \otimes_{\min} J(B {\rtimes_{\text{\rm red}}}\Gamma)J \to \mathbf B(p_0{\mathord{\text{\rm L}}}^2(M)) : a \otimes JbJ \mapsto \Psi(a \, JbJ).$$ Since $\Psi |_{JMJ}$ is a unital $\ast$-homomorphism and since $\psi = \Psi |_M$ by definition, we have $\Phi(a \otimes JbJ) = \Psi(a \, JbJ) = \Psi(a) \, \Psi(JbJ) = \psi(a) \, JbJp_0$ for all $a, b \in B {\rtimes_{\text{\rm red}}}\Gamma$. Since $$(J(B {\rtimes_{\text{\rm red}}}\Gamma)J p_0)' \cap \mathbf B(p_0{\mathord{\text{\rm L}}}^2(M)) = p_0 (JMJ)' p_0 \cap \mathbf B(p_0{\mathord{\text{\rm L}}}^2(M)) = p_0Mp_0,$$ Proposition \[weakly exact for crossed product\] and Lemma \[lemma for AO\] imply that the normal ucp map $\psi : M \to p_0Mp_0$ has a ucp extension $\widetilde \psi : \mathbf B({\mathord{\text{\rm L}}}^2(M)) \to p_0Mp_0$. Observe that $Np_0 \subset p_0 M p_0$ is still with expectation by [@HU15 Proposition 2.2]. Denote by ${\mathord{\text{\rm E}}}_{Np_0} : p_0 Mp_0 \to Np_0$ a faithful normal conditional expectation. Define the unital $\ast$-homomorphism $\iota : N \to Np_0 : x \mapsto xp_0$ and denote by $z \in \mathcal Z(N)$ the unique central projection such that $\ker(\iota) = N z^\perp$. Then $Nz \cong Np_0$ and $Nzp_0 = Np_0$. Define the ucp map $\Theta = \iota^{-1} \circ {\mathord{\text{\rm E}}}_{Np_0} \circ \widetilde \psi (z \cdot z) : \mathbf B(z{\mathord{\text{\rm L}}}^2(M)) \to Nz$. Since $\Theta |_{Nz} = {\text{\rm id}}_{Nz}$, $\Theta$ is a norm one projection and hence $Nz$ is amenable. We have therefore proved that if $N' \cap pM^\omega p \not\subset p(B^\omega \rtimes \Gamma)p$, then $N$ has a nonzero amenable direct summand. \[theorem for thmA 2\] Let $\Gamma$ be any bi-exact discrete group, $B\subset \mathcal{B}$ any inclusion of $\sigma$-finite von Neumann algebras with expectation and $\Gamma \curvearrowright \mathcal B$ any action that leaves the subalgebra $B$ globally invariant. Assume moreover that $B$ is amenable. Put $M:=B\rtimes \Gamma\subset \mathcal{B}\rtimes \Gamma=:\mathcal{M}$. Let $p\in M$ be any nonzero projection and $N\subset pMp$ any von Neumann subalgebra with expectation. Then at least one of the following conditions holds true. - The von Neumann algebra $N$ is amenable. - We have $A\preceq_{\mathcal{M}} \mathcal{B}$ for any finite von Neumann subalgebra $A\subset N'\cap p\mathcal{M} p$ with expectation. Since the proof is very similar to the one of Theorem \[theorem for thmA 1\], we will simply sketch it and point out the necessary changes compared to Theorem \[theorem for thmA 1\]. Denote by $(\mathcal M, {\mathord{\text{\rm L}}}^2(\mathcal M), J^{\mathcal M}, \mathfrak P^{\mathcal M})$ a standard form for $\mathcal M = \mathcal B \rtimes \Gamma$ as in Section \[preliminaries\]. Suppose that there exists a finite von Neumann subalgebra $A\subset N'\cap p\mathcal{M}p$ with expectation such that $A\not\preceq_{\mathcal{M}} \mathcal{B}$. Observe that since $N'\cap p\mathcal{M}p \subset p\mathcal Mp$ is with expectation, so is $A\subset p\mathcal{M}p$. We will show $N$ is amenable. 
We will use the identifications $p\mathbf B({\mathord{\text{\rm L}}}^2(M))p = \mathbf B(p {\mathord{\text{\rm L}}}^2(M))$ and $p\mathbf B({\mathord{\text{\rm L}}}^2(\mathcal M))p = \mathbf B(p {\mathord{\text{\rm L}}}^2(\mathcal M))$. Take a net of unitaries $(u_i)_{i\in I}$ in $\mathcal{U}(A)$ as in Theorem \[intertwining for type III\]($\rm ii$) such that ${\mathord{\text{\rm E}}}_\mathcal{B}(b^* u_i a) \to 0$ $\sigma$-strongly for any $a,b \in \mathcal{M}$. Fix a cofinal ultrafilter $\mathcal{U}$ on the directed set $I$ and define the ucp map $$\Psi : {\mathbf{B}}({\mathord{\text{\rm L}}}^2(\mathcal{M})) \to {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M})): T \mapsto \sigma\text{-weak } \lim_{i\to \mathcal{U}} u_i^* T u_i.$$ Observe that $\Psi|_{\mathcal M} : \mathcal{M}\to p\mathcal{M}p$ is normal. Indeed, since $A \subset p\mathcal{M}p$ is finite and with expectation and since $\mathcal M$ is $\sigma$-finite, there exists a faithful state $\varphi \in (p \mathcal M p)_\ast$ such that $A\subset (p \mathcal{M} p)^\varphi$. Since $u_i \in \mathcal U(A)$ for all $i \in I$, this implies that $(\varphi \circ \Psi)(p x p) = \varphi(x)$ for all $x \in p \mathcal M p$. Since $\varphi$ is faithful and normal and since $\Psi = \Psi (p \cdot p)$, it follows that $\Psi|_{\mathcal M} : \mathcal M \to p\mathcal Mp$ is indeed normal. Moreover, we have $\Psi (x) = x$ for all $x \in N$. Let $\mathcal{K}_{\mathcal{B}}$ be as in Proposition \[AO in crossed product\]. By a reasoning entirely similar to the one of the proof of Theorem \[theorem for thmA 1\], we have $\Psi(\mathcal{K}_{\mathcal{B}}) = 0$. Define the ucp map $\overline \Psi : \mathfrak M(\mathcal K_{\mathcal B})/\mathcal K_{\mathcal B} \to \mathbf B(p{\mathord{\text{\rm L}}}^2(\mathcal M)) : a + \mathcal K_{\mathcal B} \mapsto \Psi(a)$. Using Proposition \[AO in crossed product\], we may then define the ucp composition map $$\overline \Psi \circ \nu_{\mathcal B} : (B {\rtimes_{\text{\rm red}}}\Gamma) \otimes_{\min} J^{\mathcal M}(B {\rtimes_{\text{\rm red}}}\Gamma)J^{\mathcal M} \to \mathbf B(p{\mathord{\text{\rm L}}}^2(\mathcal M)) : a \otimes J^{\mathcal M}bJ^{\mathcal M} \mapsto \Psi(a \, J^{\mathcal M}bJ^{\mathcal M}).$$ Proposition \[weakly exact for crossed product\] and Lemma \[lemma for AO\] imply that the normal ucp map $\psi := \Psi |_M : M \to p\mathcal Mp$ has a ucp extension $\widetilde \psi : \mathbf B({\mathord{\text{\rm L}}}^2(\mathcal M)) \to (J^{\mathcal M}(B{\rtimes_{\text{\rm red}}}\Gamma)J^{\mathcal M}p)'\cap {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M}))$. Denote by $e_M : {\mathord{\text{\rm L}}}^2(\mathcal M) \to {\mathord{\text{\rm L}}}^2(M)$ the Jones projection corresponding to the inclusion $M \subset \mathcal M$. We then have the identifications $e_M {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal M)) e_M = {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(M))$ and $e_M J^{\mathcal M} e_M = J^M$. Then we have $$e_M \left( (J^{\mathcal M}(B{\rtimes_{\text{\rm red}}}\Gamma)J^{\mathcal M}p)'\cap {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2(\mathcal{M})) \right) e_M = (J^M(B{\rtimes_{\text{\rm red}}}\Gamma)J^Mp)'\cap {\mathbf{B}}(p{\mathord{\text{\rm L}}}^2({M})) = pMp$$ and hence the ucp map $\widetilde \Psi := e_M \, \widetilde \psi(\, \cdot \,) \, e_M : {\mathbf{B}}({\mathord{\text{\rm L}}}^2(\mathcal M)) \to pMp$ takes indeed values in $pMp$. Moreover, we have $\widetilde \Psi(x) = x$ for all $x \in N$. 
If we denote by ${\mathord{\text{\rm E}}}_N : p M p \to N$ a faithful normal conditional expectation, the ucp map $\Theta = {\mathord{\text{\rm E}}}_N \circ \widetilde \Psi(p \cdot p) : \mathbf B(p {\mathord{\text{\rm L}}}^2(\mathcal M)) \to N$ is a norm one projection. Therefore, $N$ is amenable. Suppose that $N$ has no amenable direct summand. Then by Theorem \[theorem for thmA 1\], we have $N'\cap pM^\omega p \subset p(B^\omega \rtimes \Gamma) p$. We then apply Theorem \[theorem for thmA 2\] to $N$ in the case when $\mathcal{B}:=B^\omega$ and we obtain $A\preceq_{B^\omega \rtimes \Gamma} B^\omega$ for any finite von Neumann subalgebra $A \subset N'\cap pM^\omega p$ with expectation. Let $N\subset M$ be any von Neumann subalgebra with expectation such that $N' \cap M^\omega$ has no type ${\rm I}$ direct summand. Denote by $p \in \mathcal Z(N)$ the unique central projection such that $Np$ has no amenable direct summand and $N(1 - p)$ is amenable. Assume by contradiction that $p \neq 0$. Then $(Np)' \cap p M^\omega p \subset p(B^\omega \rtimes \Gamma)p$ by Theorem \[thmA\] and $(Np)' \cap p M^\omega p = p(N' \cap M^\omega)p$ has no type ${\rm I}$ direct summand. By [@CS78 Corollary 8] (see also [@HS90 Theorem 11.1]), $(Np)'\cap pM^\omega p$ contains a copy of the hyperfinite $\rm II_1$ factor $R$ with expectation. We then have $R\preceq_{B^\omega \rtimes \Gamma} B^\omega$ by Theorem \[thmA\]. Since $R$ is of type ${\rm II_1}$ and $B^\omega$ is abelian and hence of type ${\rm I}$, we obtain a contradiction. Therefore, $p = 0$ and $N$ is amenable. Proof of Theorem \[thmC\] ========================= We start by proving a useful lemma which can be regarded as a generalization of the first part of the proof of [@Ho15 Proposition C]. \[lem-strong-ergodicity\] Let $\mathcal R$ be any strongly ergodic nonsingular equivalence relation defined on a standard probability space $(X, \mu)$. Put $A = {\mathord{\text{\rm L}}}^\infty(X)$ and $M = {\mathord{\text{\rm L}}}(\mathcal R)$. Denote by ${\mathord{\text{\rm E}}}_A : M \to A$ the unique faithful normal conditional expectation. Fix any faithful state $\tau \in A_\ast$ and put $\varphi = \tau \circ {\mathord{\text{\rm E}}}_A \in M_\ast$. If $M$ is not full, then there exists a sequence of unitaries $u_n \in \mathcal U(M)$ such that the following conditions hold: - $\lim_n \|u_n \varphi - \varphi u_n\| = 0$, - $\lim_n \|x u_n - u_n x\|_\varphi = 0$ for all $x \in M$ and - $\lim_n \|{\mathord{\text{\rm E}}}_A(x u_n y)\|_\varphi = 0$ for all $x, y \in M$. Assume that $M$ is not full. Then $M' \cap (M^\omega)^{\varphi^\omega}$ is diffuse by [@HR14 Corollary 2.6] for any nonprincipal ultrafilter $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$. Then a combination of the first part of the proof of [@HR14 Theorem A] and Lemma \[lem-ultraproduct\] shows that there exists a sequence of unitaries $u_n \in \mathcal U(M)$ such that the following conditions hold: - $\lim_n \|u_n \varphi - \varphi u_n\| = 0$, - $\lim_n \|x u_n - u_n x\|_\varphi = 0$ for all $x \in M$ and - $u_n \to 0$ $\sigma$-weakly as $n \to \infty$. It remains to prove that Conditions $\rm(i), \rm(ii), \rm(iii)$ imply that $\lim_n \|{\mathord{\text{\rm E}}}_A(x u_n y)\|_\varphi = 0$ for all $x, y \in M$. The rest of the proof is entirely analogous to the first part of the proof of [@Ho15 Proposition C] and we only give the details for the sake of completeness. 
Observe that for every nonprincipal ultrafilter $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$, Condition $\rm(i)$ implies that $(u_n)_n \in \mathcal M^\omega(M)$ and Conditions $\rm(ii)$ and $\rm(iii)$ imply that $(u_n)^\omega \in M' \cap (M^\omega)^{\varphi^\omega}$ and $\varphi^\omega((u_n)^\omega) = 0$. We start by proving the following claim. We have $\lim_n \|{\mathord{\text{\rm E}}}_A(u_n)\|_\varphi = 0$ Let $g \in [\mathcal R]$ be any element and denote by $u_g \in \mathcal U({\mathord{\text{\rm L}}}(\mathcal R))$ the corresponding unitary element. Since $u_g {\mathord{\text{\rm E}}}_A(u_n) u_g^* = {\mathord{\text{\rm E}}}_A(u_g u_n u_g^*)$, we have $$\begin{aligned} \|{\mathord{\text{\rm E}}}_A(u_n) u_g^* - u_g^* {\mathord{\text{\rm E}}}_A(u_n)\|_\varphi &= \|u_g {\mathord{\text{\rm E}}}_A(u_n) u_g^* - {\mathord{\text{\rm E}}}_A(u_n)\|_\varphi \\ &= \|{\mathord{\text{\rm E}}}_A(u_g u_n u_g^* - u_n)\|_\varphi \\ &\leq \|u_g u_n u_g^* - u_n\|_\varphi \\ &= \| u_n u_g^* - u_g^* u_n\|_\varphi \to 0 \quad \text{as} \quad n \to \infty.\end{aligned}$$ Define $\mathcal E= {\mathord{\text{\rm span}}}\left\{a u_g \mid a \in A, g \in [\mathcal R] \right\}$ and observe that $\mathcal E$ is a unital $\sigma$-strongly dense $\ast$-subalgebra of $M$. The above calculation implies that $\lim_n \|x {\mathord{\text{\rm E}}}_A(u_n) - {\mathord{\text{\rm E}}}_A(u_n) x\|_\varphi = 0$ for every $x \in \mathcal E$. Let $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$ be any nonprincipal ultrafilter. For every $x \in \mathcal E$, we have $\|x {\mathord{\text{\rm E}}}_{A^\omega}((u_n)^\omega) - {\mathord{\text{\rm E}}}_{A^\omega}((u_n)^\omega) x\|_{\varphi^\omega} = \lim_{n \to \omega} \|x {\mathord{\text{\rm E}}}_A(u_n) - {\mathord{\text{\rm E}}}_A(u_n) x\|_\varphi = 0$ and hence we have $x {\mathord{\text{\rm E}}}_{A^\omega}((u_n)^\omega) = {\mathord{\text{\rm E}}}_{A^\omega}((u_n)^\omega) x$. Since $\mathcal E$ is $\sigma$-strongly dense in $M$, this further implies that $x {\mathord{\text{\rm E}}}_{A^\omega}((u_n)^\omega) = {\mathord{\text{\rm E}}}_{A^\omega}((u_n)^\omega) x$ for every $x \in M$. Since $\mathcal R$ is strongly ergodic, this implies that ${\mathord{\text{\rm E}}}_{A^\omega}((u_n)^\omega) = \varphi^\omega ((u_n)^\omega) 1 = 0$ and hence $\lim_{n \to \omega} \|{\mathord{\text{\rm E}}}_A(u_n)\|_\varphi = \|{\mathord{\text{\rm E}}}_{A^\omega}((u_n)^\omega)\|_{\varphi^\omega} = 0$. Since this is true for every $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$, we finally obtain that $\lim_n \|{\mathord{\text{\rm E}}}_A(u_n)\|_\varphi = 0$. We can now finish the proof of Lemma \[lem-strong-ergodicity\]. Let $g \in [\mathcal R]$ be any element such that $g^2 = 1$. Put $X_g = \{s \in X \mid g \cdot s = s\}$ and observe that $u_g = u_g^*$, $z_g := {\mathord{\text{\rm E}}}_A(u_g) = \mathbf 1_{X_g}$ and $u_g^* z_g = z_g u_g^* = z_g$. Since $A$ is abelian and hence tracial, a combination of the fact that $u_g^* z_g \in A$ and the Claim implies that $$\|{\mathord{\text{\rm E}}}_A(u_n u_g^*) z_g \|_\varphi = \|{\mathord{\text{\rm E}}}_A(u_n \, u_g^* z_g) \|_\varphi = \|{\mathord{\text{\rm E}}}_A(u_n) \, u_g^* z_g \|_\varphi \leq \|{\mathord{\text{\rm E}}}_A(u_n) \|_\varphi \to 0 \quad \text{as} \quad n \to \infty.$$ Denote by $\mathcal J$ the nonempty directed set (for the inclusion) of all the families of projections $(z_i)_{i \in I}$ in $A$ such that $z_i \leq 1 - z_g$, $z_i \perp z_j$ for all $i \neq j \in I$ and $z_i \perp u_g z_j u_g^*$ for all $i, j \in I$. 
By Zorn’s lemma, let $(z_i)_{i \in I}$ be a maximal element in $\mathcal J$. Put $z = \sum_{i \in I} z_i$ and assume by contradiction that $z + u_g z u_g^* \neq 1 - z_g$. Put $e = 1 - z_g - z - u_gz u_g^* \neq 0$. Since $g^2=1$, we have $u_geu_g^*=e$. Since $e\leq 1-z_g=\mathbf 1_{\{s \in X \mid g\cdot s\neq s\}}$, we can find $0 \neq z' \leq e$ such that $z' \perp u_g z' u_g^*$. Then the family $((z_i)_{i \in I}, z')$ is in $\mathcal J$ and this contradicts the maximality of the family $(z_i)_{i \in I}$ in $\mathcal J$. Therefore, we have $z + u_g z u_g^* = 1 - z_g$. A calculation entirely analogous to [@Ho15 Proposition C, Equation (6.6)] shows that $$\begin{aligned} \|{\mathord{\text{\rm E}}}_A(u_n u_g^*)(1 - z_g)\|_\varphi^2 &= \|{\mathord{\text{\rm E}}}_A(u_n u_g^*)(z + u_g z u_g^*)\|_\varphi^2 \\ &= \|{\mathord{\text{\rm E}}}_A(u_n u_g^*)z\|_\varphi^2 + \|{\mathord{\text{\rm E}}}_A(u_n u_g^*)u_g z u_g^*\|_\varphi^2 \\ &= \|{\mathord{\text{\rm E}}}_A(u_n u_g^*)(z - u_g z u_g^*)\|_\varphi^2 \\ &= \|{\mathord{\text{\rm E}}}_A((z u_n - u_n z) u_g^*)\|_\varphi^2.\end{aligned}$$ Since $zu_n - u_n z \to 0$ $\sigma$-strongly as $n \to \infty$, we also have that ${\mathord{\text{\rm E}}}_A((z u_n - u_n z) u_g^*) \to 0$ $\sigma$-strongly as $n \to \infty$. The above calculation implies that $\lim_n \|{\mathord{\text{\rm E}}}_A(u_n u_g^*)(1 - z_g)\|_\varphi = 0$. This further implies that $$\limsup_n \|{\mathord{\text{\rm E}}}_A(u_n u_g^*) \|_\varphi^2 = \limsup_n \left(\|{\mathord{\text{\rm E}}}_A(u_n u_g^*)z_g \|_\varphi^2 + \|{\mathord{\text{\rm E}}}_A(u_n u_g^*)(1 - z_g) \|_\varphi^2\right) = 0.$$ Define $\mathcal F = {\mathord{\text{\rm span}}}\{ au_g \mid a \in A, g \in [\mathcal R], g^2 = 1\}$. By the proof of [@FM75 Theorem 1], it follows that $\mathcal F$ is a $\sigma$-strongly dense linear $\ast$-subspace of $M$. The previous reasoning shows that $\lim_n \|{\mathord{\text{\rm E}}}_A(u_n x)\|_\varphi = 0$ for every $x \in \mathcal F$. Let $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$ be any nonprincipal ultrafilter. For every $x \in \mathcal F$, we have $\|{\mathord{\text{\rm E}}}_{A^\omega}((u_n)^\omega x)\|_{\varphi^\omega} = \lim_{n \to \omega} \|{\mathord{\text{\rm E}}}_A(u_n x)\|_\varphi = 0$ and hence ${\mathord{\text{\rm E}}}_{A^\omega}((u_n)^\omega x) = 0$. Since $\mathcal F$ is $\sigma$-strongly dense in $M$, this further implies that ${\mathord{\text{\rm E}}}_{A^\omega}((u_n)^\omega x) = 0$ for every $x \in M$. Using Condition $(\rm ii)$, we also have ${\mathord{\text{\rm E}}}_{A^\omega}(x (u_n)^\omega y) = {\mathord{\text{\rm E}}}_{A^\omega}((u_n)^\omega xy) = 0$ for every $x, y \in M$. This implies that $\lim_{n \to \omega} \|{\mathord{\text{\rm E}}}_A(x u_n y)\|_\varphi = \|{\mathord{\text{\rm E}}}_{A^\omega}(x (u_n)^\omega y)\|_{\varphi^\omega} = 0$. Since this is true for every $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$, we finally obtain that $\lim_n \|{\mathord{\text{\rm E}}}_A(x u_n y)\|_\varphi = 0$ for all $x, y \in M$. Simply write $B = {\mathord{\text{\rm L}}}^\infty(X)$ and $M = B \rtimes \Gamma$. Assume by contradiction that $M$ is not full. Fix any nonprincipal ultrafilter $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$. Since $\Gamma \curvearrowright (X, \mu)$ is strongly ergodic, Lemma \[lem-strong-ergodicity\] shows that there exists $u \in \mathcal U(M_\omega)$ such that ${\mathord{\text{\rm E}}}_{B^\omega}(u \lambda_s) = 0$ for every $s \in \Gamma$. 
Then for every $s \in \Gamma$, we have $${\mathord{\text{\rm E}}}_{B^\omega}({\mathord{\text{\rm E}}}_{B^\omega \rtimes \Gamma}(u) \lambda_s) = {\mathord{\text{\rm E}}}_{B^\omega}({\mathord{\text{\rm E}}}_{B^\omega \rtimes \Gamma}(u \lambda_s)) = {\mathord{\text{\rm E}}}_{B^\omega}(u \lambda_s) = 0.$$ This implies that ${\mathord{\text{\rm E}}}_{B^\omega \rtimes \Gamma}(u) = 0$. Since $M$ is a nonamenable factor, Theorem \[thmA\] shows that $M_\omega \subset M' \cap M^\omega \subset B^\omega \rtimes \Gamma$ and hence $u \in \mathcal U(B^\omega \rtimes \Gamma)$. We then have $u = {\mathord{\text{\rm E}}}_{B^\omega \rtimes \Gamma}(u) = 0$. This is a contradiction. Group measure space type ${\rm III}$ factors with no central sequence ===================================================================== Let $G$ be any locally compact second countable group. Let $G \curvearrowright (X, \mu)$ and $G \curvearrowright (Y, \nu)$ be any nonsingular Borel actions on standard probability spaces. We say that - $G \curvearrowright (Y, \nu)$ is a [*measurable quotient*]{} of $G \curvearrowright (X, \mu)$ if, after discarding null $G$-invariant Borel subsets, there exists a $G$-equivariant Borel quotient map $q : X \to Y$ such that $[q_\ast \mu] = [\nu]$. - $G \curvearrowright (Y, \nu)$ is [*measurably conjugate*]{} to $G \curvearrowright (X, \mu)$ if, after discarding null $G$-invariant Borel subsets, there exists a $G$-equivariant Borel isomorphism $\theta : X \to Y$ such that $[\theta_\ast \mu] = [\nu]$. Let $G$ be any locally compact second countable group and $H < G$ any closed subgroup. Endowed with the quotient topology, $G/H$ is a continuous $G$-space, that is, the action $G \curvearrowright G/H$ defined by $(g, hH) \mapsto ghH$ is continuous. The quotient space $G/H$ carries, up to equivalence, a unique $G$-quasi-invariant regular Borel probability measure $\nu \in {\mathord{\text{\rm Prob}}}(G/H)$. Any such $G$-quasi-invariant regular Borel probability measure is associated with a rho-function for the pair $(G, H)$ (see e.g. [@BdlHV08 Appendix B]). The action $G \curvearrowright G/H$ is indeed a measurable quotient of the translation action $G \curvearrowright G$ (see [@BdlHV08 Theorem B.1.4]). Let $G$ be any noncompact connected simple Lie group and $P < G$ any minimal parabolic subgroup (e.g. $G = {\mathord{\text{\rm SL}}}_n({\mathbf{R}})$ and $P = $ subgroup of upper triangular matrices, for $n \geq 2$). Fix a $G$-quasi-invariant Borel regular probability measure $\nu \in {\mathord{\text{\rm Prob}}}(G/P)$. Denote by $\Delta_P : P \to {\mathbf{R}}^*_+$ the modular homomorphism and observe that $\Delta_P(P) = {\mathbf{R}}^*_+$ (this follows from [@BdlHV08 Proposition B.1.6 (ii)] and [@Zi84 Proposition 4.3.2]). Put $L = \ker(\Delta_P)$. The Radon-Nikodym cocycle associated with the action $G \curvearrowright G/P$ is the map defined by $$\Omega : G \times G/P \to {\mathbf{R}}: (g, hP) \mapsto \log \left(\frac{{\rm d} g_\ast\nu}{{\rm d}\nu}(hP)\right).$$ Observe that $\Omega : G \times G/P \to {\mathbf{R}}$ is a continuous map by [@BdlHV08 Theorem B.1.4]. The Maharam extension $G \curvearrowright G/P \times {\mathbf{R}}$ is the continuous action defined by $$g \cdot (hP, t) = (ghP, t + \Omega(g, hP)).$$ By [@BdlHV08 Lemma B.1.3], we have $\Omega(g, P) = (\log \circ \Delta_P)(g)$ for every $g \in P$. Since moreover $(\log \circ \Delta_P)(P) = {\mathbf{R}}$, the Maharam extension $G \curvearrowright G/P \times {\mathbf{R}}$ is transitive and the stabilizer of the point $(P, 0)$ is equal to $L$. 
The mapping $$\theta : G/L \to G/P \times {\mathbf{R}}: gL \mapsto (gP, \Omega(g, P))$$ is a well-defined $G$-equivariant homeomorphism that yields a measurable conjugacy between the action $G \curvearrowright G/L$ and the Maharam extension $G \curvearrowright G/P \times {\mathbf{R}}$. Therefore, we have proved the following useful fact. \[proposition-maharam\] The Maharam extension of $G \curvearrowright G/P$ is measurably conjugate to $G \curvearrowright G/L$. From now on, fix $n \geq 2$, $G = {\mathord{\text{\rm SL}}}_n({\mathbf{R}})$ and $\Lambda = {\mathord{\text{\rm SL}}}_n({\mathbf{Q}})$ and denote by $P < G$ the minimal parabolic subgroup of upper triangular matrices. By [@BISG15 Theorem A and Proposition 7.4], the translation action $\Lambda \curvearrowright G$ is strongly ergodic and so is the nonsingular action $\Lambda \curvearrowright G/P$ (recall that strong ergodicity is stable under taking measurable quotients). Fix a surjective group homomorphism $\pi : {\mathbf{F}}_\infty \to \Lambda$ such that $\ker(\pi) < {\mathbf{F}}_\infty$ is a nonamenable subgroup. For simplicity, write $\Gamma = {\mathbf{F}}_\infty$. Put $X = [0, 1]^\Gamma$ and $\mu = {\mathord{\text{\rm Leb}}}^{\otimes \Gamma}$ and consider the Bernoulli shift action $\Gamma \curvearrowright X$ defined by $\gamma \cdot (x_{\gamma'})_{\gamma' \in \Gamma} = (x_{\gamma^{-1}\gamma'})_{\gamma' \in \Gamma}$. Observe that $\Gamma \curvearrowright X$ preserves the Borel probability measure $\mu$ and is essentially free and strongly ergodic. Since $\ker(\pi)$ is nonamenable, the restricted action $\ker(\pi) \curvearrowright X$ is also strongly ergodic. Since $\ker(\pi) < {\mathbf{F}}_\infty$ is a nonamenable free subgroup and hence not inner amenable, the crossed product ${\rm II_1}$ factor ${\mathord{\text{\rm L}}}^\infty(X) \rtimes \ker(\pi)$ is full by [@Ch81]. Define the action $\Gamma \curvearrowright X \times G/P$ by $$\gamma \cdot (x, hP) = (\gamma x, \pi(\gamma) hP).$$ Observe that $\Gamma \curvearrowright X \times G/P$ quasi-preserves the product measure $\mu \otimes \nu$ and is essentially free. \[thm-examples\] Keep the same notation as above. The following assertions hold true: - The nonsingular action $\Gamma \curvearrowright X \times G/P$ is essentially free and strongly ergodic and its Maharam extension $\Gamma \curvearrowright X \times G/P \times {\mathbf{R}}$ is also essentially free and strongly ergodic. - The group measure space factor $M = {\mathord{\text{\rm L}}}^\infty(X \times G/P) \rtimes \Gamma$ is a full type ${\rm III_1}$ factor and its continuous core ${\mathord{\text{\rm c}}}(M)$ is a full type ${\rm II_\infty}$ factor. $\rm(i)$ As we already pointed out, the nonsingular action $\Gamma \curvearrowright X \times G/P$ is essentially free and so is its Maharam extension $\Gamma \curvearrowright X \times G/P \times {\mathbf{R}}$. We next prove that the nonsingular action $\Gamma \curvearrowright X \times G$ defined by $\gamma \cdot (x, h) = (\gamma x, \pi(\gamma) h)$ is strongly ergodic. Put $A = {\mathord{\text{\rm L}}}^\infty(X)$ and $B = {\mathord{\text{\rm L}}}^\infty(G)$ so that ${\mathord{\text{\rm L}}}^\infty(X \times G) = A {\mathbin{\overline{\otimes}}}B$. Write $N = (A {\mathbin{\overline{\otimes}}}B) \rtimes \Gamma$. Fix a nonprincipal ultrafilter $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$. We need to show that $N' \cap (A {\mathbin{\overline{\otimes}}}B)^\omega = {\mathbf{C}}1$. 
Observe that $(A {\mathbin{\overline{\otimes}}}B) \rtimes \ker(\pi) = (A \rtimes \ker(\pi)) {\mathbin{\overline{\otimes}}}B$ and $N' \cap (A {\mathbin{\overline{\otimes}}}B)^\omega \subset (A \rtimes \ker(\pi))' \cap ((A \rtimes \ker(\pi)) {\mathbin{\overline{\otimes}}}B)^\omega$. Since $A \rtimes \ker(\pi)$ is a full type ${\rm II_1}$ factor, [@Co75 Theorem 2.1] implies that $(A \rtimes \ker(\pi))' \cap ((A \rtimes \ker(\pi)) {\mathbin{\overline{\otimes}}}B)^\omega = B^\omega$ and hence $N' \cap (A {\mathbin{\overline{\otimes}}}B)^\omega = N' \cap B^\omega = (B^\omega)^{\Lambda}$. By [@BISG15 Theorem A], the nonsingular action $\Lambda \curvearrowright G$ is strongly ergodic, that is, $(B^\omega)^{\Lambda} = {\mathbf{C}}1$. This implies that $N' \cap (A {\mathbin{\overline{\otimes}}}B)^\omega = {\mathbf{C}}1$ and hence the nonsingular action $\Gamma \curvearrowright X \times G$ is strongly ergodic. Since the nonsingular action $\Gamma \curvearrowright X \times G/P$ is a quotient of the strongly ergodic nonsingular action $\Gamma \curvearrowright X \times G$, it follows that $\Gamma \curvearrowright X \times G/P$ is also strongly ergodic. Consider the Maharam extension $\Lambda \curvearrowright G/P \times {\mathbf{R}}$ of the nonsingular action $\Lambda \curvearrowright G/P$. By Proposition \[proposition-maharam\], the Maharam extension $\Lambda \curvearrowright G/P \times {\mathbf{R}}$ is measurably conjugate to the nonsingular action $\Lambda \curvearrowright G/L$ where $L = \ker(\Delta_P)$ and $\Delta_P : P \to {\mathbf{R}}^*_+$ is the modular homomorphism. Since $\Gamma \curvearrowright X$ is pmp, the action $\Gamma \curvearrowright X \times G/L$ defined by $\gamma \cdot (x, hL) = (\gamma x, \pi(\gamma)hL)$ can be identified with the Maharam extension of the nonsingular action $\Gamma \curvearrowright X \times G/P$. Since the nonsingular action $\Gamma \curvearrowright X \times G/L$ is a quotient of the strongly ergodic nonsingular action $\Gamma \curvearrowright X \times G$, it follows that $\Gamma \curvearrowright X \times G/L$ is also strongly ergodic. Therefore, the Maharam extension $\Gamma \curvearrowright X \times G/P \times {\mathbf{R}}$ of the nonsingular action $\Gamma \curvearrowright X \times G/P$ is strongly ergodic. $\rm(ii)$ This is a consequence of Theorem \[thmC\]. Keep the same notation as above. Fix $0 < \lambda < 1$, put $T = \frac{2 \pi}{|\log \lambda|}$ and identify $\mathbf T = {\mathbf{R}}/ (T {\mathbf{Z}})$. Define the nonsingular action $\Gamma \curvearrowright X \times G/P \times \mathbf T$ by $$\gamma \cdot (x, hP, t + T {\mathbf{Z}}) = (\gamma x, \pi(\gamma) hP, t + \Omega(\pi(\gamma), hP) + T {\mathbf{Z}}).$$ Observe that the nonsingular action $\Gamma \curvearrowright X \times G/P \times \mathbf T$ is a measurable quotient of the nonsingular action $\Gamma \curvearrowright X \times G/P \times \mathbf R$ and hence is strongly ergodic by Theorem \[thm-examples\]$\rm(i)$. Moreover, we have a canonical identification $${\mathord{\text{\rm L}}}^\infty(X \times G/P \times \mathbf T) \rtimes \Gamma = M \rtimes_{\sigma^\varphi_T} {\mathbf{Z}}.$$ It follows that ${\mathord{\text{\rm L}}}^\infty(X \times G/P \times \mathbf T) \rtimes \Gamma$ is a type ${\rm III_\lambda}$ factor by [@Co85 Lemma 1]. Observe that ${\mathord{\text{\rm L}}}^\infty(X \times G/P \times \mathbf T) \rtimes \Gamma$ is full by Theorem \[thmC\]. 
Alternatively, since ${\mathord{\text{\rm c}}}(M)$ is full by Theorem \[thm-examples\]$(\rm ii)$, ${\mathord{\text{\rm L}}}^\infty(X \times G/P \times \mathbf T) \rtimes \Gamma = M \rtimes_{\sigma^\varphi_T} {\mathbf{Z}}$ is full by [@TU14 Lemma 6]. This is a consequence of Theorem \[thm-examples\] and the above construction. Further remarks =============== In [@HR14], the first named author and Raum investigated the asymptotic structure of Shlyakhtenko’s free Araki–Woods factors [@Sh96]. Among other things, they proved in [@HR14 Theorem A] that any diffuse von Neumann algebra $M$ with separable predual satisfying Ozawa’s condition (AO) is $\omega$-[*solid*]{}, that is, for any von Neumann subalgebra with expectation $N \subset M$ such that the relative commutant $N' \cap M^\omega$ is diffuse, we have that $N$ is amenable. The proof was based on a combination of Ozawa’s C$^*$-algebraic techniques and an analysis of the relative commutant $N' \cap M^\omega$ and its centralizer [@HR14 Theorem 2.3] (see [@Io12a Lemma 2.7] for the tracial case). In this subsection, we observe that $\omega$-solidity can be easily obtained using the same proof as the one of Theorem \[theorem for thmA 1\] without relying on the analysis of the relative commutant $N' \cap M^\omega$ from [@HR14 Theorem 2.3]. We moreover remove the separability assumption of the predual. Let $M$ be any diffuse $\sigma$-finite von Neumann algebra satisfying Ozawa’s condition $\rm (AO)$. Let $p\in M$ be any nonzero projection and $N\subset pMp$ any von Neumann subalgebra with expectation. Then at least one of the following conditions holds true: - The von Neumann algebra $N$ has a nonzero amenable direct summand. - We have $N'\cap pM^\omega p \subset pMp$. In that case, $N'\cap pM^\omega p = N' \cap pMp$ is moreover discrete. Suppose that $N$ has no amenable direct summand. Then the exact same argument as in the proof of Theorem \[theorem for thmA 1\] using Ozawa’s condition (AO) in lieu of Proposition \[AO in crossed product\] shows that $N'\cap pM^\omega p\subset pMp $ and hence $N'\cap pM^\omega p= N'\cap pMp $. Since $M$ is diffuse and solid [@Oz03 Theorem 6] (see also [@VV05 Theorem 2.5]), it follows that $p M p$ is also diffuse and solid. Since $N \subset pMp$ has no amenable direct summand, it follows that $N' \cap pMp$ is necessarily discrete. In view of Proposition \[AO in ultraproduct\], we finally observe the following condition (AO) in the ultraproduct representation. Let $M$ be any $\sigma$-finite von Neumann algebra and $\omega \in \beta({\mathbf{N}}) \setminus {\mathbf{N}}$ any nonprincipal ultrafilter. Denote by $(M, {\mathord{\text{\rm L}}}^2(M), J^M, \mathfrak P^M)$ (resp. $(M^\omega, {\mathord{\text{\rm L}}}^2(M^\omega), J^{M^\omega}, \mathfrak P^{M^\omega})$) a standard form for $M$ (resp. $M^\omega$). Assume there are unital ${\mathord{\text{\rm C}}}^*$-subalgebras $A, B\subset M$ such that the map $$\nu : A {\otimes_{\text{\rm alg}}}J^MBJ^M \to {\mathbf{B}}({\mathord{\text{\rm L}}}^2(M))/{\mathbf{K}}({\mathord{\text{\rm L}}}^2(M)) : a \otimes J^MbJ^M \mapsto a \, J^MbJ^M + {\mathbf{K}}({\mathord{\text{\rm L}}}^2(M))$$ is continuous with respect to the minimal tensor norm. 
Then the map $$\nu_\omega : A \otimes_{\rm alg} J^{M^\omega}B J^{M^\omega} \rightarrow {\mathbf{B}}({\mathord{\text{\rm L}}}^2(M^\omega))/{\mathbf{K}}({\mathord{\text{\rm L}}}^2(M^\omega)) : a \otimes J^{M^\omega} b J^{M^\omega} \mapsto a \, J^{M^\omega}bJ^{M^\omega} + {\mathbf{K}}({\mathord{\text{\rm L}}}^2(M^\omega))$$ is continuous with respect to the minimal tensor norm. The proof is a variation of the one of Proposition \[AO in ultraproduct\]. Put $$\begin{aligned} C &:={\mathord{\text{\rm C}}}^* \left\{ M, J^MM J^M \right\} \subset {\mathbf{B}}({\mathord{\text{\rm L}}}^2(M)) \\ C_\omega &:={\mathord{\text{\rm C}}}^* \left\{ M, J^{M^\omega}M J^{M^\omega} \right\} \subset {\mathbf{B}}({\mathord{\text{\rm L}}}^2(M^\omega)).\end{aligned}$$ Observe that $C + \mathbf K({\mathord{\text{\rm L}}}^2(M))$ (resp. $C_\omega + \mathbf K({\mathord{\text{\rm L}}}^2(M^\omega))$) is a ${\mathord{\text{\rm C}}}^*$-subalgebra of $\mathbf B({\mathord{\text{\rm L}}}^2(M))$ (resp. $\mathbf B({\mathord{\text{\rm L}}}^2(M^\omega))$). Fix a faithful state $\varphi \in M_\ast$. Denote by $e : {\mathord{\text{\rm L}}}^2(M) \to {\mathbf{C}}\xi_{\varphi}$ and $f : {\mathord{\text{\rm L}}}^2(M^\omega) \to {\mathbf{C}}\xi_{\varphi^\omega}$ the corresponding orthogonal projections. Observe that $\mathbf K({\mathord{\text{\rm L}}}^2(M))$ is the norm closure in $\mathbf B({\mathord{\text{\rm L}}}^2(M))$ of $M e M$. Denote by $N := \prod^\omega M$ the Groh–Raynaud ultraproduct and by $p \in N$ the support projection of the ultraproduct state $\varphi_\omega \in N_\ast$. There is a $\ast$-homomorphism $\theta : C + \mathbf K({\mathord{\text{\rm L}}}^2(M)) \to \mathbf B({\mathord{\text{\rm L}}}^2(M^\omega))$ such that $\theta(x) = x$ and $\theta(J^M y J^M) = J^{M^\omega} y J^{M^\omega}$ for all $x, y \in M$ and $\theta(e) = f$. Keep the same notation as in the proof of the Claim of Proposition \[AO in ultraproduct\]. By Lemma \[lemma for AO in ultraproduct\], $\pi^\omega((e)_n)$ commutes with $p$ and $J^N$ and hence $\pi^\omega((e)_n)$ commutes with $\widetilde p := p J^N p J^N$. Since $\widetilde p$ commutes with $\pi^\omega(M)$ and $\pi^\omega(J^M M J^M) = J^N \pi^\omega(M) J^N$, $\widetilde p$ commutes with $\pi^\omega(C + \mathbf K({\mathord{\text{\rm L}}}^2(M)))$. Recall that $\widetilde p N \widetilde p \cong p N p \cong M^\omega$ and $\widetilde p {\mathord{\text{\rm L}}}^2(M)_\omega = {\mathord{\text{\rm L}}}^2(M^\omega)$. Then the $\ast$-homomorphism $$\theta : C + \mathbf K({\mathord{\text{\rm L}}}^2(M)) \to \mathbf B({\mathord{\text{\rm L}}}^2(M^\omega)) : T \mapsto \widetilde p \pi^\omega(T) \widetilde p$$ satisfies all the conditions of the Claim. Since $\theta(C)=C_\omega$ and $\theta(\mathbf K({\mathord{\text{\rm L}}}^2(M))) \subset \mathbf K({\mathord{\text{\rm L}}}^2(M^\omega))$, $\theta$ induces a $\ast$-homomorphism $$\widetilde{\theta} : \left( C + \mathbf K({\mathord{\text{\rm L}}}^2(M)) \right)/\mathbf K({\mathord{\text{\rm L}}}^2(M)) \to \left( C_\omega + \mathbf K({\mathord{\text{\rm L}}}^2(M^\omega)) \right)/\mathbf K({\mathord{\text{\rm L}}}^2(M^\omega)).$$ Denote by $\iota : A \otimes_{\min} J^{M^\omega}B J^{M^\omega} \to A \otimes_{\min} J^{M}BJ^{M}$ the tautological $\ast$-isomorphism. 
Then the composition map $$\nu_\omega = \widetilde{\theta}\circ \nu \circ \iota : A \otimes_{{\mathord{\text{\rm alg}}}} J^{M^\omega}BJ^{M^\omega} \rightarrow {\mathbf{B}}({\mathord{\text{\rm L}}}^2(M^\omega))/{\mathbf{K}}({\mathord{\text{\rm L}}}^2(M^\omega))$$ is continuous with respect to the minimal tensor norm. [BdlHV08]{} , [*On a tensor product ${\mathord{\text{\rm C}}}^*$-algebra associated with the free group on two generators.*]{} J. Math. Soc. Japan [**27**]{} (1975), 589–599. , [*Ultraproducts of von Neumann algebras.*]{} J. Funct. Anal. [**266**]{} (2014), 6842–6913. , [*Kazhdan’s property (T).*]{} New Mathematical Monographs, [**11**]{}. Cambridge University Press, Cambridge, 2008. xiv+472 pp. , [*Local spectral gap in simple Lie groups and applications.*]{} [arXiv:1503.06473]{} , [*Pointwise ergodic theorems beyond amenable groups.*]{} Ergodic Theory Dynam. Systems [**33**]{} (2013), 777–820. , [*C$^*$-algebras and finite-dimensional approximations.*]{} Graduate Studies in Mathematics, [**88**]{}. American Mathematical Society, Providence, RI, 2008. , [*Inner amenability and fullness.*]{} Proc. Amer. Math. Soc. [**86**]{} (1982), 663–666. , [*Une classification des facteurs de type ${\rm III}$.*]{} Ann. Sci. École Norm. Sup. [**6**]{} (1973), 133–252. , [*Almost periodic states and factors of type ${\rm III_1}$.*]{} J. Funct. Anal. [**16**]{} (1974), 415–445. , [*Classification of injective factors. Cases ${\rm II_1}$, ${\rm II_\infty}$, ${\rm III_\lambda}$, $\lambda \neq 1$.*]{} Ann. of Math. [**74**]{} (1976), 73–115. , [*Factors of type ${\rm III_1}$, property $L'_\lambda$ and closure of inner automorphisms.*]{} J. Operator Theory [**14**]{} (1985), 189–211. , [*A ${\rm II_1}$ factor with two non-conjugate Cartan subalgebras.*]{} Bull. Amer. Math. Soc. [**6**]{} (1982), 211–212. , [*Homogeneity of the state space of factors of type $\rm III_1$.*]{} J. Funct. Anal. [**28**]{} (1978), 187–196. , [*Ergodic equivalence relations, cohomology, and von Neumann algebras. ${\rm I}$, ${\rm II}$.*]{} Trans. Amer. Math. Soc. [**234**]{} (1977), 289–324, 325–359. , [*Orbit equivalence and measured group theory.*]{} Proceedings of the International Congress of Mathematicians (Hyderabad, 2010), Vol. III, Hindustan Book Agency (2010), 1501–1527. , [*Equivalence of normal states on von Neumann algebras and the flow of weights.*]{} Adv. Math. [**83**]{} (1990), 180–262. , [*Von Neumann algebras of equivalence relations with nontrivial one-cohomology.*]{} J. Funct. Anal. [**270**]{} (2016), 1501–1536. , [*Unique prime factorization and bicentralizer problem for a class of type ${\rm III}$ factors.*]{} [arXiv:1503.01388]{} , [*Asymptotic structure of free Araki-Woods factors.*]{} Math. Ann. [**363**]{} (2015), 237–267. , [*Rigidity of free product von Neumann algebras.*]{} [arXiv:1507.02157]{} , [*Type ${\rm III}$ factors with unique Cartan decomposition.*]{} J. Math. Pures Appl. [**100**]{} (2013), 564–590. , [*Cartan subalgebras of amalgamated free product ${\rm II_1}$ factors.*]{} Ann. Sci. École Norm. Sup. [**48**]{} (2015), 71–130. , [*Classification and rigidity for von Neumann algebras.*]{} Proceedings of the 6th European Congress of Mathematics (Krakow, 2012), European Mathematical Society Publishing House , [*Weak exactness for ${\mathord{\text{\rm C}}}^*$-algebras and application to condition $\rm (AO)$*]{}. J. Funct. Anal. [**264**]{} (2013), 964–998. , [*On bi-exactness of discrete quantum groups.*]{} Int. Math. Res. Not. Volume 2014, Article ID rnu043. 
, [*Rings of operators.*]{} ${\rm IV}$. Ann. of Math. [**44**]{} (1943), 716–808. , [*Actions of discrete amenable groups on von Neumann algebras.*]{} Lecture Notes in Mathematics, [**1138**]{}. Springer-Verlag, Berlin, 1985. iv+115 pp. , [*Solid von Neumann algebras.*]{} Acta Math. [**192**]{} (2004), 111–117. , [*A Kurosh type theorem for type $\rm II_1$ factors.*]{} Int. Math. Res. Not. (2006), Art. ID 97560, 21 pp. , [*A remark on fullness of some group measure space von Neumann algebras.*]{} [arXiv:1602.02654]{} , [*${\mathord{\text{\rm L}}}^2$-rigidity in von Neumann algebras.*]{} Invent. Math. [**175**]{} (2009), 417–433. , [*Classification of subfactors and their endomorphisms.*]{} CBMS Regional Conference Series in Mathematics, [**86**]{}. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1995. x+110 pp. , [*On a class of type $\rm II_1$ factors with Betti numbers invariants.*]{} Ann. of Math. [**163**]{} (2006), 809–899. , [*Strong rigidity of $\rm II_1$ factors arising from malleable actions of w-rigid groups $\rm I$.*]{} Invent. Math. [**165**]{} (2006), 369–408. , [*Deformation and rigidity for group actions and von Neumann algebras.*]{} Proceedings of the International Congress of Mathematicians (Madrid, 2006), Vol. I, European Mathematical Society Publishing House, 2007, p. 445–477. , [*On the superrigidity of malleable actions with spectral gap.*]{} J. Amer. Math. Soc. [**21**]{} (2008), 981–1000. , [*On Ozawa’s property for free group factors.*]{} Int. Math. Res. Not. IMRN [**2007**]{}, no. 11, Art. ID rnm036, 10 pp. , [*Unique Cartan decomposition for ${\rm II_1}$ factors arising from arbitrary actions of free groups.*]{} Acta Math. [**212**]{} (2014), 141–198. , [*On ultrapowers of non commutative L$_p$-spaces.*]{} J. Operator Theory [**48**]{} (2002), 41–68. , [*Measure equivalence rigidity and bi-exactness of groups.*]{} J. Funct. Anal. [**257**]{} (2009), 3167–3202. , [*Free quasi-free states.*]{} Pacific J. Math. [**177**]{} (1997), 329–368. , [*Une notion de nucléarité en K-théorie (d’après J. Cuntz).*]{} K-Theory [**1**]{} (1988), 549–573. Encyclopaedia of Mathematical Sciences, [**125**]{}. Operator Algebras and Non-commutative Geometry, 6. Springer-Verlag, Berlin, 2003. xxii+518 pp. , [*A characterization of fullness of continuous cores of type ${\rm III_1}$ free product factors.*]{} To appear in Kyoto J. Math. [arXiv:1412.2418]{} , [*Fullness, Connes’ $\chi$-groups, and ultra-products of amalgamated free products over Cartan subalgebras.*]{} Trans. Amer. Math. Soc. [**355**]{} (2003), 349–371. , [*Rigidity for von Neumann algebras and their invariants.*]{} Proceedings of the International Congress of Mathematicians (Hyderabad, 2010), Vol. III, Hindustan Book Agency, 2010, 1624–1650. , [*The boundary of universal discrete quantum groups, exactness, and factoriality.*]{} Duke Math. J. [**140**]{} (2007), 35–84. , [*Ergodic theory and semisimple groups.*]{} Monographs in Mathematics, [**81**]{}. Birkhäuser Verlag, Basel, 1984. x+209 pp. [^1]: CH is supported by ERC Starting Grant GAN 637601 [^2]: YI is supported by JSPS Research Fellowship
--- abstract: | This paper considers inference in a partially identified moment (in)equality model with many moment inequalities. We propose a novel two-step inference procedure that combines the methods proposed by [@chernozhukov/chetverikov/kato:2014c] (CCK, hereafter) with a first-step moment inequality selection based on the Lasso. Our method controls size uniformly, both in underlying parameter and data distribution. Also, the power of our method compares favorably with that of the corresponding two-step method in CCK for large parts of the parameter space, both in theory and in simulations. Finally, our Lasso-based first step is straightforward to implement. *Keywords and phrases*: Many moment inequalities, self-normalizing sum, multiplier bootstrap, empirical bootstrap, Lasso, inequality selection. *JEL classification*: C13, C23, C26. author: - | Federico A. Bugni\ Department of Economics\ Duke University - | Mehmet Caner\ Department of Economics\ Ohio State University - | Anders Bredahl Kock\ Department of Economics & Business\ Aarhus University - | [Soumendra Lahiri]{}\ Department of Statistics\ North Carolina State University bibliography: - 'BIBLIOGRAPHY.bib' title: ' Inference in partially identified models with many moment inequalities using Lasso[^1] ' --- Introduction ============ This paper contributes to the growing literature on inference in partially identified econometric models defined by many unconditional moment (in)equalities, i.e., inequalities and equalities. Consider an economic model with a parameter $\theta $ belonging to a parameter space $\Theta $, whose main prediction is that the true value of $\theta $, denoted by $ \theta _{0}$, satisfies a collection of moment (in)equalities. This model is partially identified, i.e., the restrictions of the model do not necessarily restrict $\theta _{0}$ to a single value, but rather they constrain it to belong to a certain set, called the identified set. The literature on partially identified models discusses several examples of economic models that satisfy this structure, such as selection problems, missing data, or multiplicity of equilibria (see, e.g., [@manski:1995] and [@tamer:2003]). The first contributions in the literature on partially identified moment (in)equalities focus on the case in which there is a fixed and finite number of moment (in)equalities, both unconditionally[^2] and conditionally[^3]. In practice, however, there are many relevant econometric models that produce a large set of moment conditions (even infinitely many). As several references in the literature point out (e.g. @menzel:2009 [@menzel:2014]), the associated inference problems cannot be properly addressed by an asymptotic framework with a fixed number of moment (in)equalities.[^4] To address this issue, [@chernozhukov/chetverikov/kato:2014c] (hereafter referred to as CCK) obtain inference results in a partially identified model with *many* moment (in)equalities.[^5] According to this asymptotic framework, the number of moment (in)equalities, denoted by $p$, is allowed to be larger than the sample size $n$. In fact, the asymptotic framework allows $p$ to be an increasing function of $n$ and even to grow at certain exponential rates. Furthermore, CCK allow their moment (in)equalities to be “unstructured”, in the sense that they do not impose restrictions on the correlation structure of the sample moment conditions.[^6] For these reasons, CCK represents a significant advancement relative to the previous literature on inference in moment (in)equalities.
This paper builds on the inference method proposed in . Their goal is to test whether a collection of $p$ moment inequalities simultaneously holds or not. In order to implement their test they propose a test statistic based on the maximum of $p$ Studentized statistics and several methods to compute the critical values. Their critical values may include a first stage inequality selection procedure with the objective of detecting slack moment inequalities, thus increasing the statistical power. According to their simulation results, including a first stage can result in significant power gains. Our contribution is to propose a new inference method based on the combination of two ideas. On the one hand, our test statistic and critical values are based on those proposed by . On the other hand, we propose a new first stage selection procedure based on the Lasso. The Lasso was first proposed in the seminal contribution by [@tibshirani:1996] as a regularization technique in the linear regression model. Since then, this method has found wide use as a dimension reduction technique in large dimensional models with strong theoretical underpinnings.[^7] It is precisely these powerful shrinkage properties that serve as motivation to consider the Lasso as a procedure to separate out and select binding moment inequalities from the non-binding ones. Our Lasso first step inequality selection can be combined with any of the second step inference procedures in : self-normalization, multiplier bootstrap, or empirical bootstrap. The present paper considers using the Lasso to select moments in a partially identified moment (in)equality model. In the context of point identified problems, there is an existing literature that proposes the Lasso to address estimation and moment selection in GMM settings. In particular, [@caner:2009] introduce Lasso type GMM-Bridge estimators to estimate structural parameters in a general model. The problem of selection of moment in GMM is studied in [@liao:2013] and [@cheng/liao:2015]. In addition, [@caner/zhang:2014] and [@caner/han/lee:2016] find a method to estimate parameters in GMM with diverging number of moments/parameters, and selecting valid moments among many valid or invalid moments respectively. In addition, [@fan/liao/yao:2015] consider the problem of inference in high dimensional models with sparse alternatives. Finally, [@caner/fan:2015] propose a hybrid two-step estimation procedure based on Generalized Empirical Likelihood, where instruments are chosen in a first-stage using an adaptive Lasso procedure. We obtain the following results for our two-step Lasso inference methods. First, we provide conditions under which our methods are uniformly valid, both in the underlying parameter $\theta $ and the distribution of the data. According to the literature in moment (in)equalities, obtaining uniformly valid asymptotic results is important to guarantee that the asymptotic analysis provides an accurate approximation to finite sample results.[^8] Second, by virtue of results in , all of our proposed tests are asymptotically optimal in a minimax sense. Third, we compare the power of our methods to the corresponding one in , both in theory and in simulations. Since our two-step procedure and the corresponding one in share the second step, our power comparison is a comparison of the Lasso-based first-step vis-à-vis the ones in . On the theory front, we obtain a region of underlying parameters under which the power of our method dominates that of . 
We also conduct extensive simulations to explore the practical consequences of our theoretical findings. Our simulations indicate that a Lasso-based first step is usually as powerful as the one in , and can sometimes be more powerful. Fourth, we show that our Lasso-based first step is straightforward to implement. The remainder of the paper is organized as follows. Section \[sec:Setup\] describes the inference problem and introduces our assumptions. Section \[sec:Lasso\] introduces the Lasso as a method to distinguish binding moment inequalities from non-binding ones and Section \[sec:Inference\] considers inference methods that use the Lasso as a first step. Section \[sec:Power\] compares the power properties of inference methods based on the Lasso with the ones in the literature. Section \[sec:MonteCarlos\] provides evidence of the finite sample performance using Monte Carlo simulations. Section \[sec:Conclusions\] concludes. Proofs of the main results and several intermediate results are reported in the appendix. Throughout the paper, we use the following notation. For any set $S$, $|S|$ denotes its cardinality, and for any vector $x \in \mathbb{R}^{d}$, $||x||_{1}\equiv \sum_{i=1}^{d}|x_i|$. Setup {#sec:Setup} ===== For each $\theta \in \Theta $, let $X(\theta):\Omega \to \mathbb{R}^{k}$ be a $k$-dimensional random variable with distribution $P(\theta )$ and mean $\mu (\theta )\equiv E_{P(\theta )}[X(\theta)]\in \mathbb{R} ^{k}$. Let $\mu _{j}(\theta )$ denote the $j$th component of $\mu(\theta)$ so that $\mu (\theta )=\{\mu _{j}(\theta )\}_{j\leq k}$. The main tenet of the econometric model is that the true parameter value $\theta _{0}$ satisfies the following collection of $p$ moment inequalities and $v\equiv k-p$ moment equalities: $$\begin{aligned} \mu _{j}(\theta _{0}) &\leq &0\text{ for }j=1,\ldots ,p, \notag \\ \mu _{j}(\theta _{0}) &=&0\text{ for }j=p+1,\ldots ,k. \label{eq:MI}\end{aligned}$$ As in , we are implicitly allowing the collection $P$ of distributions of $X(\theta)$ and the number of moment (in)equalities, $k = p+v$ to depend on $n$. In particular, we are primarily interested in the case in which $p=p_{n}\to \infty$ and $v=v_{n}\to \infty$ as $n \to \infty$, but the subscripts will be omitted to keep the notation simple. In particular, $p$ and $v$ can be much larger than the sample size and increase at rates made precise in Section \[sec:Assumptions\]. We allow the econometric model to be partially identified, i.e., the moment (in)equalities in Eq.  do not necessarily restrict $\theta _{0}$ to a single value, but rather they constrain it to belong to the identified set, denoted by $\Theta _{I}(P)$. By definition, the identified set is as follows: $$\Theta _{I}(P)~\equiv ~\cbr[4]{ \theta \in \Theta :\cbr[4]{ \begin{array}{l} \mu _{j}(\theta )\leq 0\text{ for }j=1,\ldots ,p, \\ \mu _{j}(\theta )=0\text{ for }j=p+1,\ldots ,k. \end{array}} }. \label{eq:IdSet}$$ Our goal is to test whether a particular parameter value $\theta \in \Theta $ is a possible candidate for the true parameter value $\theta _{0} \in \Theta_{I}(P)$. In other words, we are interested in testing: $$H_{0}:\theta _{0}=\theta~~ \text{ vs. }~~H_{1}:\theta _{0}\not=\theta . \label{eq:HypTest}$$ By definition, the identified set is composed of all parameters that are observationally equivalent to the true parameter value $\theta _{0}$, i.e., every parameter value in $\Theta _{I}(P)$ is a candidate for $\theta _{0}$. 
In this sense, $\theta=\theta _{0}$ is observationally equivalent to $\theta \in \Theta _{I}(P)$ and so the hypothesis test in Eq.  can be equivalently reexpressed as: $$\begin{aligned} &\quad H_{0}:\theta \in \Theta _{I}(P)\quad \text{vs.}\quad H_{1}:\theta \not\in \Theta _{I}(P), \notag \\[.05in] \text{i.e.,}\quad H_{0} &:\cbr[4]{\begin{array}{l} \mu _{j}(\theta )\leq 0\text{ for all }j=1,\ldots ,p, \\ \mu _{j}(\theta )=0\text{ for all }j=p+1,\ldots ,k. \end{array}}\quad \text{vs.}\quad H_{1} : \mbox{``not $H_0$''}. \label{eq:HypTest2}\end{aligned}$$ In this paper, we propose a procedure to implement the hypothesis test in Eq.  (or, equivalently, Eq. ) with a given significance level $\alpha \in (0,1)$ based on a random sample of $ X(\theta)\sim P(\theta)$, denoted by $X^{n}(\theta) \equiv \{X_{i}(\theta)\}_{i\leq n}$. The inference procedure will reject the null hypothesis whenever a certain test statistic $ T_{n}(\theta )$ exceeds a critical value $c_{n}(\alpha ,\theta )$, i.e., $$\phi _{n}(\alpha ,\theta )~\equiv ~ 1[T_{n}(\theta )>c_{n}(\alpha ,\theta )], \label{eq:HT}$$ where $1[\cdot]$ denotes the indicator function. By the duality between hypothesis tests and confidence sets, a confidence set for $\theta _{0}$ can be constructed by collecting all parameter values for which the inference procedure is not rejected, i.e., $$C_{n}(1-\alpha )~\equiv ~\{\theta \in \Theta :T_{n}(\theta ) \leq c_{n}(\alpha ,\theta )\}. \label{eq:CS}$$ Our formal results will have the following structure. Let $\mathcal{P}$ denote a set of probability distributions. We will show that for all $P \in \mathcal{P}$ and under $H_0$, $$P\del[1]{T_{n}(\theta ) > c_{n}(\alpha ,\theta )}~~\leq ~~ \alpha + o(1). \label{eq:CSvalid2}$$ Moreover, the convergence in Eq.  will be shown to occur uniformly over both $P \in \mathcal{P}$ and $\theta\in \Theta$. This uniform size control result in Eq.  has important consequences regarding our inference problem. First, this result immediately implies that the hypothesis test procedure in Eq.  uniformly controls asymptotic size i.e., for all $\theta \in \Theta$ and under $H_0:\theta_0 = \theta$, $$\underset{n\to \infty }{\lim \sup }~\sup_{P\in \mathcal{P} }~E[\phi _{n}(\alpha ,\theta )]~~\leq ~~ \alpha. \label{eq:HTvalid}$$ Second, the result also implies that the confidence set in Eq.  is asymptotically uniformly valid, i.e., $$\underset{n\to \infty }{\lim \inf }~\inf_{P\in \mathcal{P} }~\inf_{\theta \in \Theta _{I}(P)}~P\del[1]{\theta\in C_{n}(1-\alpha )}~~\geq ~~1-\alpha. \label{eq:CSvalid}$$ The rest of the section is organized as follows. Section \[sec:Assumptions\] specifies the assumptions on the probability space $\mathcal{P}$ that are required for our analysis. All the inference methods described in this paper share the test statistic $ T_{n}(\theta )$ and differ only in the critical value $c_{n}(\alpha ,\theta )$. The common test statistic is introduced and described in Section \[sec:TestStat\]. Assumptions {#sec:Assumptions} ----------- The collection of distributions $P\equiv \{P(\theta ):\theta \in \Theta \}$ are assumed to satisfy the following assumptions. \[ass:Basic\] For every $\theta \in \Theta$, let ${X}^{n}(\theta) \equiv \{X_{i}(\theta)\}_{i\leq n}$ be i.i.d. $k$-dimensional random vectors distributed according to $P(\theta)$. Further, let $E_{P(\theta)}[X_{1j}(\theta)]\equiv \mu_{j}(\theta) $ and $Var_{P(\theta)}[X_{1j}(\theta)] \equiv \sigma^2_{j}(\theta) >0$, where $X_{ij}(\theta)$ denotes the $j$ component of $X_i(\theta)$. 
\[ass:Rates\] For some $\delta \in (0,1]$, $ \max_{j=1,\dots ,k} \sup_{\theta \in \Theta } ( E_{P(\theta)}[|X_{1j}(\theta)|^{2+\delta }]) ^{1/( 2+\delta ) } \equiv M_{n,2+\delta }<\infty $ and $M_{n,2+\delta }^{2+\delta }( \ln (2k-p) ) ^{( 2+\delta) /2}n^{-\delta /2}\to 0$. \[ass:Rates2\] For some $c\in (0,1)$, $( n^{-( 1-c) /2}\ln (2k-p)+n^{-3/2}( \ln (2k-p)) ^{2}) B_{n}^{2}\to 0$, where $\sup_{\theta \in \Theta}( E_{P(\theta)}[\max_{j=1,\dots,k}|Z_{1j}(\theta)|^{4 }]) ^{1/4 }\equiv B_{n} <\infty $ and $Z_{ij}(\theta) \equiv (X_{ij}(\theta) - \mu_{j}(\theta))/ \sigma_{j}(\theta)$. \[ass:Rates3\] For some $c\in (0,1/2)$ and $C>0$, $\max \{M_{n,3}^{3},M_{n,4}^{2},B_{n}\}^{2}\ln ( (2k-p)n) ^{7/2}\leq Cn^{1/2-c}$, where $M_{n,2+\delta }$ and $B_{n}$ are as in Assumptions \[ass:Rates\]-\[ass:Rates2\]. We now briefly describe these assumptions. Assumption \[ass:Basic\] is standard in microeconometric applications. Assumption \[ass:Rates\] has two parts. The first part requires that $X_{ij}(\theta)$ has finite $(2+\delta)$-moments for all $ j=1,\dots,k$. The second part limits the rate of growth of $M_{n,2+\delta }$ and the number of moment (in)equalities. Notice that $M_{n,2+\delta }$ is a function of the sample size because $ \max_{j=1,\dots ,k} \sup_{\theta \in \Theta } ( E_{P(\theta)}[|X_{1j}(\theta)|^{2+\delta }]) ^{1/( 2+\delta ) }$ is function of $P$ and $k=v+p$, both of which could depend on $n$. Also, notice that $2k-p = 2v+p$, i.e., the total number of moment inequalities $p$ plus twice the number of moment equalities $v$, all of which could depend on $n$. Assumption \[ass:Rates2\] could be interpreted in a similar fashion as Assumption \[ass:Rates\], except that it refers to the standardized random variable $Z_{ij}(\theta)\equiv (X_{ij}(\theta) - \mu_{j}(\theta))/ \sigma_{j}(\theta)$. Assumption \[ass:Rates3\] is a technical assumption that is used to control the size of the bootstrap test in .[^9] Test statistic {#sec:TestStat} -------------- Throughout the paper, we consider the following test statistic: $$T_{n}(\theta) ~\equiv~ \max \cbr[3]{\max_{j=1,\dots ,p}\frac{\sqrt{n}\hat{\mu}_{j}(\theta)}{\hat{ \sigma}_{j}(\theta)},\max_{s=p+1,\dots ,k}\frac{\sqrt{n}\left\vert \hat{\mu} _{s}(\theta)\right\vert }{\hat{\sigma}_{s}(\theta)}}, \label{eq:TestStat}$$ where, for $j=1,\dots ,k$, $ \hat{\mu}_{j}(\theta) \equiv \frac{1}{n}\sum_{i=1}^{n}X_{ij}(\theta)$ and $\hat{\sigma}_{j}^{2}(\theta) \equiv \frac{1}{n}\sum_{i=1}^{n}\left( X_{ij}(\theta)-\hat{\mu} _{j}(\theta)\right) ^{2} $. Note that Eq.  is not properly defined if $\hat{\sigma}_{j}^{2}(\theta)=0$ for some $j=1,\dots,k$ and, in such cases, we use the convention that $C/0 \equiv \infty \times 1[C>0] -\infty \times 1[C<0] $. The test statistic is identical to that in with the exception that we allow for the presence of moment equalities. By definition, large values of $T_n(\theta)$ are an indication that $H_0: \theta = \theta_0$ is likely to be violated, leading to the hypothesis test in Eq. . The remainder of the paper considers several procedures to construct critical values that can be associated to this test statistic. Lasso as a first step moment selection procedure {#sec:Lasso} ================================================ In order to propose a critical value for our test statistic $T_{n}(\theta)$, we need to approximate its distribution under the null hypothesis. According to the econometric model in Eq. , the true parameter satisfies $p$ moment inequalities and $v$ moment equalities. 
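Before turning to the selection of binding inequalities, note that the test statistic in Eq.  itself is straightforward to compute. The following minimal sketch (in Python/NumPy; the function name and the array layout are our own choices, not part of the formal framework) evaluates $T_{n}(\theta)$ from an $n\times k$ array of moment functions whose first $p$ columns correspond to the inequalities.

```python
import numpy as np

def test_statistic(X, p):
    # X: (n, k) array of moment functions at the hypothesized theta;
    # columns 0..p-1 are inequalities, columns p..k-1 are equalities.
    n, k = X.shape
    mu_hat = X.mean(axis=0)
    sigma_hat = X.std(axis=0)              # 1/n normalization, as in the text
    t = np.sqrt(n) * mu_hat / sigma_hat    # studentized sample moments
    return np.concatenate([t[:p], np.abs(t[p:])]).max()

# Illustration on artificial data whose population means are all zero:
rng = np.random.default_rng(0)
T = test_statistic(rng.standard_normal((400, 150)), p=100)
```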
By definition, the moment equalities are always binding under the null hypothesis. On the other hand, the moment inequalities may or may not be binding, and a successful approximation of the asymptotic distribution depends on being able to distinguish between these two cases. Incorporating this information into the hypothesis testing problem is one of the key issues in the literature on inference in partially identified moment (in)equality models. In their seminal contribution, CCK is the first paper in the literature to conduct inference in a partially identified model with many unstructured moment inequalities. Their paper proposes several procedures to select binding moment inequalities from non-binding ones based on three approximation methods: self-normalization (SN), multiplier bootstrap (MB), and empirical bootstrap (EB). Our relative contribution is to propose a novel approximation method based on the Lasso. By definition, the Lasso penalizes parameter values by their $\ell_{1}$-norm, with the ability to produce parameter estimates that are exactly equal to zero. This powerful shrinkage property is precisely what motivates us to consider the Lasso as a first step moment selection procedure in a model with many moment (in)equalities. As we will soon show, the Lasso is an excellent method to detect binding moment inequalities from non-binding ones, and this information can be successfully incorporated into an inference procedure for many moment (in)equalities. For every $\theta \in \Theta$, let $J(\theta)$ denote the true set of binding moment inequalities, i.e., $J(\theta)~\equiv~ \{j=1,\dots ,p~:~\mu _{j}(\theta)\geq 0\}$. Let $\mu_{I}(\theta) \equiv \{\mu_j(\theta)\}_{j=1}^{p}$ denote the moment vector for the moment inequalities and let $\hat{\mu}_{I}(\theta) \equiv \{\hat{\mu}_j(\theta)\}_{j=1}^{p}$ denote its sample analogue. In order to detect binding moment inequalities, we consider the weighted Lasso estimator of $\mu_{I}(\theta)$, given by: $$\hat{\mu}_{L}(\theta) ~\equiv~ \underset{t\in \mathbb{R} ^{p}}{\arg \min } \cbr[3]{ \left( \hat{\mu}_{I}(\theta) -t\right)^{\prime }\hat{W}(\theta)\left( \hat{\mu}_{I}(\theta)-t\right) + \lambda _{n}\left\Vert \hat{W}(\theta)^{1/2}t\right\Vert _{1}}, \label{eq:Lasso0}$$ where $\lambda _{n}$ is a positive penalization sequence that controls the amount of regularization and $\hat{W}(\theta)$ is a positive definite weighting matrix. To simplify the computation of the Lasso estimator, we impose $\hat{W}(\theta) \equiv diag\{ 1/\hat{\sigma}_{j}(\theta)^{2}\} _{j=1}^{p}$. As a consequence, Eq.  becomes: $$\hat{\mu}_{L}(\theta) ~=~ \cbr[3]{ \underset{m\in \mathbb{R} }{\arg \min }\left\{ \left( {\hat{ \mu}_{j}(\theta)}-m\right) ^{2}+\lambda _{n}\hat{\sigma}_{j}(\theta)|m|\right\}} _{j=1}^{p}.\label{eq:Lasso1}$$ Notice that instead of using the Lasso in one $p$-dimensional model, we use it in $p$ one-dimensional models. As we shall see later, $\hat{\mu}_{L}(\theta)$ in Eq.  is closely linked to the soft-thresholded least squares estimator, which implies that its computation is straightforward. The Lasso estimator $\hat{\mu}_{L}(\theta)$ implies a Lasso-based estimator of $J(\theta)$, given by: $$\hat{J}_{L}(\theta) ~\equiv~ \{j=1,\dots ,p:\hat{\mu}_{j,L}(\theta)/\hat{\sigma}_{j}(\theta) ~\geq~ -\lambda _{n}\}. \label{eq:BindingLasso}$$ In order to implement this procedure, we need to choose the sequence $\lambda_n$, which determines the degree of regularization imposed by the Lasso.
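Because Eq.  decouples into $p$ univariate problems, each coordinate of $\hat{\mu}_{L}(\theta)$ is a soft-thresholded sample mean, and the selection set in Eq.  follows immediately. The sketch below makes this concrete (a minimal NumPy sketch; the function and argument names are ours, and $\lambda_n$ is taken as given here, its choice being discussed next).

```python
import numpy as np

def lasso_selection(X_ineq, lam):
    # X_ineq: (n, p) array with the p moment-inequality columns; lam = lambda_n.
    mu_hat = X_ineq.mean(axis=0)
    sigma_hat = X_ineq.std(axis=0)
    # Each problem min_m (mu_hat_j - m)^2 + lam * sigma_hat_j * |m| is solved
    # by soft-thresholding mu_hat_j at lam * sigma_hat_j / 2.
    mu_lasso = np.sign(mu_hat) * np.maximum(np.abs(mu_hat) - lam * sigma_hat / 2.0, 0.0)
    J_L = np.flatnonzero(mu_lasso / sigma_hat >= -lam)   # estimated binding set
    return mu_lasso, J_L
```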
A higher value of $\lambda_{n}$ will produce a larger number of moment inequalities considered to be binding, resulting in a lower rejection rate. In consequence, this is a critical choice for our inference methodology. According to our theoretical results, a suitable choice of $\lambda_n$ is given by: $$\lambda _{n}=(4/3+\varepsilon) n^{-1/2}\left( {M}_{n,2+\delta }^{2}n^{-\delta /(2+\delta )}-n^{-1}\right) ^{-1/2} \label{eq:LambdaConcrete}$$ for any arbitrary $\varepsilon>0$. Assumption \[ass:Rates\] implies that $\lambda _{n}$ in Eq.  satisfies $\lambda_n\to 0$. Notice that Eq.  is infeasible as it depends on the unknown expression ${M}_{n,2+\delta }$. In practice, one can replace this unknown expression with its sample analogue: $$\hat{M}^{2}_{n,2+\delta } = \max_{j=1,\dots ,k} ~\sup_{\theta \in \Theta } ~ \left(n^{-1} \sum\nolimits_{i=1}^{n}|X_{ij}(\theta)|^{2+\delta }\right) ^{2/( 2+\delta ) }.$$ In principle, a more rigorous choice of $\lambda_{n}$ can be implemented via a modified BIC method designed for a divergent number of parameters as in [@wang/li/leng:2009] or [@caner/han/lee:2016].[^10] As explained earlier, our Lasso procedure is used as a first step in order to detect binding moment inequalities from non-binding ones. The following result formally establishes that our Lasso procedure includes all binding ones with a probability that approaches one, uniformly. \[lem:LassoNoOverFit\] Assume Assumptions \[ass:Basic\]-\[ass:Rates2\], and let $\lambda_n$ be as in Eq. . Then, $$\begin{aligned} P[J(\theta)\subseteq \hat{J}_{L}(\theta)] ~\geq~ 1-2p\exp \Big( - \frac{n^{\delta /(2+\delta )}}{2M_{n,2+\delta }^{2}}\Big) \left[ 1+K\Big( \frac{M_{n,2+\delta }}{n^{\delta /(2(2+\delta ))}} +1\Big) ^{2+\delta }\right] -\tilde{K} n^{-c} ~=~ 1 + o(1),\end{aligned}$$ where $K,\tilde{K}$ are universal constants and the convergence is uniform in all parameters $\theta \in \Theta$ and distributions $P$ that satisfy the assumptions in the statement. Thus far, our Lasso estimator of the binding constraints in Eq.  has been defined in terms of the solution of the $p$-dimensional minimization problem in Eq. . We conclude the subsection by providing an equivalent closed form solution for this set. \[lem:LassoClosedForm\] Eq.  can be equivalently reexpressed as follows: $$\hat{J}_{L}(\theta) ~=~ \{j=1,\dots ,p: {\hat{\mu}_{j}(\theta)}/{\hat{\sigma}_{j}(\theta)}\geq -{3}\lambda_n/{2}\}. \label{eq:BindingLasso2}$$ Lemma \[lem:LassoClosedForm\] is a very important computational aspect of our methodology. This result reveals that $\hat{J}_{L}(\theta) $ can be computed by comparing standardized sample averages with a modified threshold of $-{3}\lambda_n/{2}$. In other words, our Lasso-based first stage can be implemented without the need to solve the $p$-dimensional minimization problem in Eq. . Inference methods with Lasso first step {#sec:Inference} ======================================= In the remainder of the paper we show how to conduct inference in our partially identified many moment (in)equality model by combining the Lasso-based first step in Section \[sec:Lasso\] with a second step based on the inference methods proposed by CCK. In particular, Section \[sec:SN\] combines our Lasso-based first step with their self-normalization approximation, while Section \[sec:Boot\] combines it with their bootstrap approximations.
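As a hedged illustration of the two computational points above, namely the plug-in version of Eq.  and the closed-form selection rule of Lemma \[lem:LassoClosedForm\], the following sketch can be used in place of solving Eq.  directly (names are ours; the supremum over $\theta$ in $\hat{M}_{n,2+\delta}$ is evaluated at the single hypothesized $\theta$, and the term inside the square root is assumed positive). By Lemma \[lem:LassoClosedForm\], it returns the same set as the `lasso_selection` sketch above.

```python
import numpy as np

def lambda_hat(X, delta=1.0, eps=2.0/3.0):
    # Feasible analogue of the penalty formula above: M_{n,2+delta} is replaced
    # by its sample counterpart at the hypothesized theta; delta, eps are user choices.
    n = X.shape[0]
    M_hat = (np.abs(X) ** (2.0 + delta)).mean(axis=0).max() ** (1.0 / (2.0 + delta))
    return (4.0 / 3.0 + eps) * n ** (-0.5) * (M_hat ** 2 * n ** (-delta / (2.0 + delta)) - 1.0 / n) ** (-0.5)

def lasso_selection_closed_form(X_ineq, lam):
    # Closed-form rule: keep inequality j whenever mu_hat_j / sigma_hat_j >= -3*lam/2.
    mu_hat = X_ineq.mean(axis=0)
    sigma_hat = X_ineq.std(axis=0)
    return np.flatnonzero(mu_hat / sigma_hat >= -1.5 * lam)
```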
Self-normalization approximation {#sec:SN} -------------------------------- Before describing our self-normalization (SN) approximation with a Lasso first stage, we first describe the “plain vanilla” SN approximation without first stage moment selection. Our treatment extends the SN method proposed by CCK to the presence of moment equalities. As a preliminary step, we now define the SN approximation to the $(1-\alpha)$-quantile of $T_{n}(\theta)$ in a hypothetical moment (in)equality model composed of $|J|$ moment inequalities and $k-p$ moment equalities, given by: $$c_{n}^{SN}(|J|,\alpha)\equiv \left\{ \begin{array}{ll} 0 & \text{if }2(k-p)+|J|=0, \\ \tfrac{\Phi ^{-1}\left( 1-\alpha /(2(k-p)+|J|)\right) }{\sqrt{1-\left( \Phi ^{-1}\left( 1-\alpha /(2(k-p)+|J|)\right) \right) ^{2}/n}} & \text{if } 2(k-p)+|J|>0. \end{array} \right. \label{eq:CVSN_oracle}$$ Lemma \[lem:SNSize\] in the appendix shows that $c_{n}^{SN}(|J|,\alpha)$ provides asymptotic uniform size control in a hypothetical moment (in)equality model with $|J|$ moment inequalities and $k-p$ moment equalities under Assumptions \[ass:Basic\]-\[ass:Rates\]. The main difference between this result and CCK (Theorem 4.1) is that we allow for the presence of moment equalities. Since our moment (in)equality model has $|J|=p$ moment inequalities and $k-p$ moment equalities, we can define the regular (i.e. one-step) SN approximation method by using $|J|=p$ in Eq. , i.e., $$c_{n}^{SN,1S}(\alpha )\equiv c_{n}^{SN}(p,\alpha )=\tfrac{\Phi ^{-1}\left( 1-\alpha /(2k-p)\right) }{\sqrt{1-\left( \Phi ^{-1}\left( 1-\alpha /(2k-p)\right) \right) ^{2}/n}}. $$ The following result is a corollary of Lemma \[lem:SNSize\]. \[thm:SN1Scorollary\] Assume Assumptions \[ass:Basic\]-\[ass:Rates\], $\alpha \in (0,0.5)$, and that $H_{0}$ holds. Then, $$P\del[1]{T_n(\theta) > c_{n}^{SN,1S}(\alpha )} ~\leq~ \alpha + \alpha Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta } \Big(1+\Phi ^{-1}\left( 1-\alpha /(2k-p)\right) \Big)^{2+\delta } = \alpha + o(1),$$ where $K$ is a universal constant and the convergence is uniform in all parameters $\theta \in \Theta$ and distributions $P$ that satisfy the assumptions in the statement. By definition, this SN approximation considers all moment inequalities in the model as binding. A more powerful test can be constructed by using the data to reveal which moment inequalities are slack. In particular, CCK propose a two-step SN procedure which combines a first-step moment inequality selection based on SN methods with the second-step SN critical value in Theorem \[thm:SN1Scorollary\]. If we adapt their procedure to the presence of moment equalities, this would be given by: $$\begin{aligned} c^{SN,2S}(\theta,\alpha) ~&\equiv~ c_{n}^{SN}(|\hat{J}_{SN}(\theta)|,\alpha - 2 \beta_{n} ) \label{eq:SN2S-CV}\end{aligned}$$ with: $$\begin{aligned} \hat{J}_{SN}(\theta) ~&\equiv ~ \Big\{j\in \{1,\dots,p\}~: ~\sqrt{n}\hat{\mu}_j(\theta)/\hat{\sigma}_j(\theta)>-2c^{SN,1S}(\beta_n)\Big\}, \end{aligned}$$ where $\{\beta_n\}_{n\ge 1}$ is an arbitrary sequence of constants in $(0,\alpha/3)$. By extending arguments in CCK to include moment equalities, one can show that inference based on the critical value $c^{SN,2S}(\theta,\alpha)$ in Eq.  is asymptotically valid in a uniform sense. In this paper, we propose an alternative SN procedure by using our Lasso-based first step.
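For concreteness, the map $(|J|,\alpha)\mapsto c_{n}^{SN}(|J|,\alpha)$ in Eq.  can be coded in a few lines; the one-step critical value sets $|J|=p$, and the two-step Lasso variant introduced next simply plugs in $|\hat{J}_{L}(\theta)|$ instead. The sketch below uses our own naming and assumes $1-\alpha/(2(k-p)+|J|)$ is not so close to one, relative to $n$, that the denominator becomes imaginary.

```python
import numpy as np
from scipy.stats import norm

def c_sn(n, k, p, J_size, alpha):
    # Self-normalized critical value for J_size retained inequalities and
    # k - p equalities (each equality counts twice in the Bonferroni factor).
    m = 2 * (k - p) + J_size
    if m == 0:
        return 0.0
    q = norm.ppf(1.0 - alpha / m)
    return q / np.sqrt(1.0 - q ** 2 / n)

# One-step version: treat all p inequalities as binding.
# c_one_step = c_sn(n=400, k=150, p=100, J_size=100, alpha=0.05)
```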
In particular, we define the following two-step Lasso SN critical value: $$\begin{aligned} c_{n}^{SN,L}(\theta, \alpha )~\equiv~ c_{n}^{SN}(|\hat{J}_{L}(\theta)|,\alpha) , \label{eq:cn-SNLasso}\end{aligned}$$ where $\hat{J}_{L}(\theta)$ is as in Eq. . The following result shows that an inference method based on our two-step Lasso SN critical value is asymptotically valid in a uniform sense. \[thm:Lasso2SSize\] Assume Assumptions \[ass:Basic\]-\[ass:Rates2\], $\alpha \in (0,0.5)$, and that $H_{0}$ holds, and let $\lambda_n$ be as in Eq. . Then, $$\begin{aligned} &P\del[1]{T_{n}(\theta)>c_{n}^{SN,L}(\theta,\alpha)}\\ &\leq \alpha +\sbr[4]{\begin{array}{c} \alpha Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta }(1+\Phi ^{-1}\left( 1-\alpha /(2k-p)\right) )^{2+\delta }+ \\ 4p\exp\del[2]{-2^{-1}n^{\delta /(2+\delta )}M_{n,2+\delta }^{-2}}\sbr[2]{1+K\left( n^{-\delta /(2(2+\delta ))} M_{n,2+\delta }+1\right) ^{2+\delta }} +2\tilde{K}n^{-c} \end{array}}\\ &=\alpha +o(1),\end{aligned}$$ where $K,\tilde{K}$ are universal constants and the convergence is uniform in all parameters $\theta \in \Theta$ and distributions $P$ that satisfy the assumptions in the statement. We now compare our two-step SN Lasso method with the SN methods in CCK. Since all inference methods share the test statistic, the only difference lies in the critical values. While the one-step SN critical value considers all $p$ moment inequalities as binding, our two-step SN Lasso critical value considers only $|\hat{J}_{L}(\theta)|$ moment inequalities as binding. Since $|\hat{J}_{L}(\theta)|\leq p$ and $c_{n}^{SN}(|J|,\alpha )$ is weakly increasing in $|J|$ (see Lemma \[lem:CSNincreasing\] in the appendix), our two-step SN method results in a weakly larger rejection probability for all sample sizes. In contrast, the comparison between $c_{n}^{SN,L}(\theta,\alpha)$ and $c_{n}^{SN,2S}(\theta,\alpha)$ is not straightforward as these differ in two aspects. First, the set of binding constraints $\hat{J}_{SN}(\theta)$ according to SN differs from the set of binding constraints $\hat{J}_{L}(\theta)$ according to the Lasso. Second, the quantiles of the critical values are different: the two-step SN method in Eq.  considers the $\alpha - 2\beta_{n}$ quantile while the Lasso-based method considers the usual $\alpha$ quantile. As a result of these differences, the comparison of these critical values is ambiguous and so is the resulting power comparison. This topic will be discussed in further detail in Section \[sec:Power\]. Bootstrap methods {#sec:Boot} ----------------- CCK also propose two bootstrap approximation methods: multiplier bootstrap (MB) and empirical bootstrap (EB). Relative to the SN approximation, bootstrap methods have the advantage of taking into account the dependence between the coordinates of $\{\sqrt{n}\hat{\mu}_{j}({\theta})/\hat{\sigma}_{j}(\theta)\}_{j=1}^{p}$ involved in the definition of the test statistic $T_{n}(\theta)$. As in the previous subsection, we first define the bootstrap approximation to the $(1-\alpha)$-quantile of $T_n(\theta)$ in a hypothetical moment (in)equality model composed of moment inequalities indexed by the set $J$ and the $k-p$ moment equalities. The corresponding MB and EB approximations are denoted by $c_{n}^{MB}(\theta, J, \alpha)$ and $c_{n}^{EB}(\theta, J, \alpha)$, respectively, and are computed as follows. \[alg:MB\]**Multiplier bootstrap (MB)** 1. Generate i.i.d. standard normal random variables $\{\epsilon _{i}\}_{i = 1 }^{n}$, independent of the data $X^{n}(\theta)$. 2.
Construct the multiplier bootstrap test statistic: $$W_{n}^{MB}(\theta, J)=\max \cbr[4]{\max_{j\in J}\frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \epsilon _{i}( X_{ij}(\theta)-\hat{\mu}_{j}(\theta)) }{\hat{\sigma}_{j}(\theta)} ,\max_{s=p+1,\dots ,k}\frac{\frac{1}{\sqrt{n}}|\sum_{i=1}^{n} \epsilon _{i}( X_{is}(\theta)-\hat{\mu}_{s}(\theta)) |}{\hat{\sigma}_{s}(\theta)}}.$$ 3. Calculate $c_{n}^{MB}(\theta, J, \alpha)$ as the conditional $(1-\alpha )$-quantile of $W_{n}^{MB}(\theta, J)$ (given $X^{n}(\theta)$). \[alg:EB\]**Empirical bootstrap (EB)** 1. Generate a bootstrap sample $\{ X ^{*}_{i}(\theta)\}_{i = 1 }^{n}$ from the data, i.e., an i.i.d. draw from the empirical distribution of $X^{n}(\theta)$. 2. Construct the empirical bootstrap test statistic: $$W_{n}^{EB}(\theta, J)=\max \cbr[4]{ \max_{j\in J}\frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} ( X^{*}_{ij}(\theta)-\hat{\mu}_{j}(\theta)) }{\hat{\sigma}_{j}(\theta)} ,\max_{s=p+1,\dots ,k}\frac{\frac{1}{\sqrt{n}}|\sum_{i=1}^{n} ( X^{*}_{is}(\theta)-\hat{\mu}_{s}(\theta)) |}{\hat{\sigma}_{s}(\theta)}}.$$ 3. Calculate $c_{n}^{EB}(\theta, J, \alpha)$ as the conditional $(1-\alpha )$-quantile of $W_{n}^{EB}(\theta, J)$ (given $X^{n}(\theta)$). All the results in the remainder of the section will apply to both versions of the bootstrap, and under the same assumptions. For this reason, we can use $c_{n}^{B}(\theta, J, \alpha)$ to denote the bootstrap critical value where $B \in \{MB,EB\}$ represents either MB or EB. Lemma \[lem:BootSize\] in the appendix shows that $c_{n}^{B}(\theta, J, \alpha)$ for $B \in \{MB,EB\}$ provides asymptotic uniform size control in a hypothetical moment (in)equality model composed of moment inequalities indexed by the set $J$ and the $k-p$ moment equalities under Assumptions \[ass:Basic\] and \[ass:Rates3\]. As in Section \[sec:SN\], the main difference between this result and (Theorem 4.3) is that we allow for the presence of the moment equalities. Since our moment (in)equality model has $|J|=p$ moment inequalities and $k-p$ moment equalities, we can define the regular (i.e. one-step) MB or EB approximation method by using $|J|=p$ in Algorithm \[alg:MB\] or \[alg:EB\], respectively, i.e., $$c_{n}^{B,1S}(\theta, \alpha )\equiv c_{n}^{B}(\theta ,\{1,\dots,p\},\alpha), $$ where $c_{n}^{B}(\theta,J,\alpha )$ is as in Algorithm \[alg:MB\] if $B=MB$ or Algorithm \[alg:EB\] if $B=EB$. The following result is a corollary of Lemma \[lem:BootSize\]. \[thm:B1Scorollary\] Assume Assumptions \[ass:Basic\], \[ass:Rates3\], $\alpha \in (0,0.5)$, and that $H_{0}$ holds. Then, $$P\del[1]{T_{n}(\theta)>c_{n}^{B,1S}(\theta, \alpha ) }~\le~ \alpha +\tilde{C}n^{-\tilde{c}},$$ where $\tilde{c},\tilde{C}>0$ are positive constants that only depend on the constants $c,C$ in Assumption \[ass:Rates3\]. Furthermore, if $\mu _{j}(\theta)=0$ for all $j=1,\dots ,p$, then $$\vert P\del[1]{ T_{n}(\theta)>c_{n}^{B,1S}(\theta, \alpha )} -\alpha \vert \leq \tilde{C}n^{-\tilde{c}}.$$ Finally, the proposed bounds are uniform in all parameters $\theta \in \Theta$ and distributions $P$ that satisfy the assumptions in the statement. As in the SN approximation method, the regular (one-step) bootstrap approximation considers all moment inequalities in the model as binding. A more powerful bootstrap-based test can be constructed using the data to reveal which moment inequalities are slack. 
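Before turning to the inequality selection step, note that Algorithms \[alg:MB\] and \[alg:EB\] are simple to implement. The sketch below (our own naming and vectorization choices; it assumes that at least one (in)equality enters the maximum) returns the conditional $(1-\alpha)$-quantile $c_{n}^{B}(\theta,J,\alpha)$ for either bootstrap, given the data matrix, the index set $J$ of retained inequalities, and the number of bootstrap draws.

```python
import numpy as np

def c_bootstrap(X, p, J, alpha, B=1000, method="MB", rng=None):
    # X: (n, k) moment functions; first p columns are inequalities; J indexes
    # the retained inequalities; the k - p equalities always enter via |.|.
    rng = np.random.default_rng() if rng is None else rng
    n, k = X.shape
    sigma_hat = X.std(axis=0)
    Xc = X - X.mean(axis=0)                       # centered moments
    J = np.asarray(J, dtype=int)
    W = np.empty(B)
    for b in range(B):
        if method == "MB":                        # multiplier bootstrap
            Z = rng.standard_normal(n) @ Xc / np.sqrt(n)
        else:                                     # empirical bootstrap
            Z = Xc[rng.integers(0, n, size=n)].sum(axis=0) / np.sqrt(n)
        t = Z / sigma_hat
        W[b] = np.concatenate([t[J], np.abs(t[p:])]).max()
    return np.quantile(W, 1.0 - alpha)
```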
However, unlike in the SN approximation method, Theorem \[thm:B1Scorollary\] shows that the size of the test using the bootstrap critical values converges to $\alpha$ when all the moment inequalities are binding. This difference comes from the fact that the bootstrap can better approximate the correlation structure in the moment inequalities, which is not taken into account by the SN approximation. As we will see in simulations, this translates into power gains in favor of the bootstrap. CCK propose a two-step bootstrap procedure, combining a first-step moment inequality selection based on the bootstrap with the second-step bootstrap critical value in Theorem \[thm:B1Scorollary\].[^11] If we adapt their procedure to the presence of moment equalities, this would be given by: $$\begin{aligned} c^{B,2S}(\theta,\alpha) ~\equiv ~c_{n}^{B}(\theta, \hat{J}_{B}(\theta), \alpha - 2 \beta_{n}) \label{eq:B2S-CV}\end{aligned}$$ with: $$\begin{aligned} \hat{J}_{B}(\theta) ~\equiv ~ \{j\in \{1,\dots,p\}~:~\sqrt{n}\hat{\mu}_j(\theta)/\hat{\sigma}_j(\theta) >-2c_{n}^{B,1S}(\theta ,\beta_n)\}, \end{aligned}$$ where $\{\beta_n\}_{n\ge 1}$ is an arbitrary sequence of constants in $(0,\alpha/2)$. Again, by extending arguments in CCK to the presence of moment equalities, one can show that an inference method based on the critical value $c^{B,2S}(\theta,\alpha)$ in Eq.  is asymptotically valid in a uniform sense. This paper proposes an alternative bootstrap procedure by using our Lasso-based first step. For $B \in \{MB,EB\}$, define the following two-step Lasso bootstrap critical value: $$c_{n}^{B,L}(\theta, \alpha )~\equiv~ c_{n}^{B}(\theta, \hat{J}_{L}(\theta), \alpha) , \label{eq:cn-BLasso}$$ where $\hat{J}_{L}(\theta)$ is as in Eq. , and $c_{n}^{B}(\theta,J,\alpha )$ is as in Algorithm \[alg:MB\] if $B=MB$ or Algorithm \[alg:EB\] if $B=EB$. The following result shows that an inference method based on our two-step Lasso bootstrap critical value is asymptotically valid in a uniform sense. \[lem:Lasso2SBootstrap\] Assume Assumptions \[ass:Basic\], \[ass:Rates\], \[ass:Rates2\], \[ass:Rates3\], $\alpha \in (0,0.5)$, and that $H_{0}$ holds, and let $\lambda_n$ be as in Eq. . Then, for $B\in \{MB,EB\}$, $$\begin{aligned} &&P \del[1]{ T_n(\theta) > c_n^{B,L} (\theta , \alpha)} \\ &&\leq \alpha + \tilde{C} n^{-\tilde{c}} + C n^{-c} + 2\tilde{K} n^{-c}+ 4p \exp\del[2]{-2^{-1} n^{\delta/(2+\delta)}/ M_{n,2+\delta}^2} \sbr[2]{ 1+ K (M_{n,2+\delta}/n^{\delta/(2(2+\delta))} + 1 )^{2+ \delta}} \\ && = \alpha + o(1),\end{aligned}$$ where $\tilde{c},\tilde{C}>0$ are positive constants that only depend on the constants $c,C$ in Assumption \[ass:Rates3\], $K,\tilde{K}$ are universal constants, and the convergence is uniform in all parameters $\theta \in \Theta$ and distributions $P$ that satisfy the assumptions in the statement. Furthermore, if $\mu_j(\theta) =0$ for all $1 \le j \le p$ and $$\tilde{K} n^{-c}+ 2p \exp\del[2]{-2^{-1} n^{\delta/(2+\delta)}/ M_{n,2+\delta}^2} \sbr[2]{ 1+ K (M_{n,2+\delta}/n^{\delta/(2(2+\delta))} + 1 )^{2+ \delta}} \leq \tilde{C} n^{-\tilde{c}},\label{eq:RestrictionOnParams}$$ then, $$|P \del[1]{T_n(\theta) > c_n^{B,L} (\theta , \alpha)} - \alpha |\leq 3 \tilde{C} n^{- \tilde{c}} + C n^{-c} = o(1),$$ where all constants are as defined earlier and the convergence is uniform in all parameters $\theta \in \Theta$ and distributions $P$ that satisfy the assumptions in the statement.
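Putting the pieces together, the two-step Lasso bootstrap test $\phi_n(\alpha,\theta)=1[T_{n}(\theta)>c_{n}^{B,L}(\theta,\alpha)]$ amounts to a few calls to the sketches given earlier. The following sketch reuses those functions (`test_statistic`, `lambda_hat`, `lasso_selection_closed_form`, and `c_bootstrap`); names and default values are ours, and the penalty default corresponds to $\delta=1$ and $\varepsilon=2/3$.

```python
def two_step_lasso_test(X, p, alpha=0.05, delta=1.0, eps=2.0/3.0,
                        method="MB", B=1000, rng=None):
    # Lasso first step, bootstrap second step (MB or EB).
    lam = lambda_hat(X, delta=delta, eps=eps)          # feasible lambda_n
    J_L = lasso_selection_closed_form(X[:, :p], lam)   # first-step selection
    T = test_statistic(X, p)                           # test statistic T_n
    c = c_bootstrap(X, p, J_L, alpha, B=B, method=method, rng=rng)
    return {"T_n": T, "critical_value": c, "reject": bool(T > c)}
```

A confidence set as in Eq.  is then obtained by collecting, over a grid on $\Theta$, the parameter values for which `reject` is `False`.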
By repeating arguments at the end of Section \[sec:SN\], it follows that our two-step bootstrap method results in a larger rejection probability than the one-step bootstrap method for all sample sizes.[^12] Also, the comparison between $c_{n}^{B,L}(\theta,\alpha)$ and $c_{n}^{B,2S}(\theta,\alpha)$ is not straightforward as these differ in the same two aspects described in Section \[sec:SN\]. This comparison will be the topic of the next section. Power comparison {#sec:Power} ================ CCK show that all of their inference methods satisfy uniform asymptotic size control under appropriate assumptions. Theorems \[thm:Lasso2SSize\] and \[lem:Lasso2SBootstrap\] show that our Lasso-based two-step inference methods also satisfy uniform asymptotic size control under similar assumptions. Given these results, the natural next step is to compare these inference methods in terms of criteria related to power. One possible such criterion is minimax optimality, i.e., the ability of a test to reject departures from the null hypothesis at the fastest possible rate (without losing uniform size control). CCK show that all their proposed inference methods are asymptotically optimal in a minimax sense, even in the absence of any inequality selection (i.e. defined as in Theorems \[thm:SN1Scorollary\] and \[thm:B1Scorollary\] in the presence of moment equalities). Since our Lasso-based inequality selection can only reduce the number of binding moment inequalities (thus increasing rejection), we can also conclude that all of our two-step Lasso-based inference methods (SN, MB, and EB) are also asymptotically optimal in a minimax sense. In other words, minimax optimality is a desirable property that is satisfied by all tests under consideration and, thus, cannot be used as a criterion to distinguish between them. We therefore proceed to compare our Lasso-based inference procedures with those proposed by CCK in terms of rejection rates. Since all inference methods share the test statistic $T_{n}(\theta)$, the power comparison depends exclusively on the critical values. Comparison with one-step methods -------------------------------- As pointed out in previous sections, our Lasso-based two-step inference methods will always be more powerful than the corresponding one-step analogue, i.e., $$\begin{aligned} P \del[1]{T_n(\theta) > c_{n}^{SN,L}(\theta,\alpha)} &\geq& P \del[1]{ T_n(\theta) > c_{n}^{SN,1S}(\alpha)} \\ P \del[1]{ T_n(\theta) > c_{n}^{B,L}(\theta,\alpha)} &\geq& P \del[1]{T_n(\theta) > c_{n}^{B,1S}(\theta,\alpha)}~~\forall B \in \{MB,EB\},\end{aligned}$$ for all $\theta \in \Theta$ and $n \in \mathbb{N}$. This is a direct consequence of the fact that one-step critical values are based on considering all moment inequalities as binding, while the Lasso-based first-step will restrict attention to the subset of them that are sufficiently close to binding, i.e., $\hat{J}_{L}(\theta) \subseteq \{1,\dots,p\}$. Comparison with two-step methods -------------------------------- The comparison between our two-step Lasso procedures and the two-step methods in CCK is not straightforward for two reasons. First, the set of binding inequalities selected by the Lasso might differ from the one selected by the other methods. Second, our Lasso-based methods consider the usual $\alpha$ quantile while the other two-step methods consider the $\alpha - 2\beta_{n}$ quantile for a sequence of positive constants $\{ \beta_{n} \}_{ n\geq 1}$.
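The first of these two differences can be inspected directly on the data: both first-step selections are simple threshold rules on studentized sample means. The following sketch (our own naming; it covers the case $k=p$ with inequalities only, as in the next subsection) computes $\hat{J}_{L}(\theta)$ and $\hat{J}_{SN}(\theta)$ side by side so that their sizes can be compared.

```python
import numpy as np
from scipy.stats import norm

def first_step_sets(X_ineq, lam, beta_n):
    # Only inequalities (k = p). J_L uses the threshold -3*lam/2 from the
    # closed-form lemma; J_SN uses -2 * c_n^{SN,1S}(beta_n), the SN-based rule.
    n, p = X_ineq.shape
    mu_hat = X_ineq.mean(axis=0)
    sigma_hat = X_ineq.std(axis=0)
    q = norm.ppf(1.0 - beta_n / p)
    c_sn_1s = q / np.sqrt(1.0 - q ** 2 / n)
    J_SN = np.flatnonzero(np.sqrt(n) * mu_hat / sigma_hat > -2.0 * c_sn_1s)
    J_L = np.flatnonzero(mu_hat / sigma_hat >= -1.5 * lam)
    return J_L, J_SN
```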
To simplify the discussion, we focus exclusively on the case where the moment (in)equality model is only composed of inequalities, i.e., $k=p$, which is precisely the setup in . This is done for simplicity of exposition, the introduction of moment equalities would not qualitatively change the conclusions that follow. We begin by comparing the two-step SN method with the two-step Lasso SN method. For all $\theta \in \Theta$ and $n \in \mathbb{N}$, our two-step Lasso SN method will have more power than the two-step SN method if and only if $ c_{n}^{SN,L}(\theta,\alpha)~\leq~ c_{n}^{SN,2S}(\alpha)$. By inspecting the formulas in , this occurs if and only if: $$|\hat{J}_L(\theta)|~\leq~ \frac{\alpha}{\alpha-2\beta_n}|\hat{J}_{SN}(\theta)|, \label{eq:PowerAdvSN2}$$ where, by definition, $\{ \beta_{n} \}_{ n\geq 1}$ satisfies $\beta_n\leq \alpha/3$. We provide sufficient conditions for Eq.  in the following result. \[thm:powercompSN\] For all $\theta \in \Theta$ and $n \in \mathbb{N}$, $$\hat{J}_L(\theta)~~\subseteq~~ \hat{J}_{SN}(\theta) \label{eq:JComparisonSN}$$ implies $$P \del[1]{ T_n(\theta) > c_{n}^{SN,L}(\theta,\alpha)} ~~\geq~~ P \del[1]{T_n(\theta) > c_{n}^{SN,2S}(\alpha)}. \label{eq:PowerComparisonSN}$$ In turn, Eq.  occurs under any of the following circumstances: $$\begin{aligned} &\frac{4}{3}c_{n}^{SN}( \beta _{n})\geq\sqrt{n}\lambda _{n}, ~~~ \mbox{or,} \label{eq:highlevel} \\ \beta_n\leq 0.1,~~ M_{n,2+\delta}^2n^{2/(2+\delta)}\geq 2, ~~\mbox{and}~~ & \ln\del[2]{\frac{p}{2\beta_n\sqrt{2\pi}}} \geq \frac{9}{8}(\frac{4}{3}+\varepsilon)^2n^{\delta/(2+\delta)}M_{n,2+\delta}^{-2},\label{eq:lowlevel}\end{aligned}$$ where $\varepsilon >0$ is as in Eq. . Theorem \[thm:powercompSN\] provides two sufficient conditions under which our two-step Lasso SN method will have greater or equal power than the two-step SN method in . The power difference is a direct consequence of Eq. , i.e., our Lasso-based first step inequality selection procedure chooses a subset of the inequalities in the SN-based first step. The first sufficient condition, Eq. , is sharper than the second one, Eq. , but the second one is of lower level and, thus, easier to interpret and understand. Eq.  is composed of three statements and only the third one could be considered restrictive. The first one, $\beta \leq 10\%$, is non-restrictive as require that $\beta_n\leq \alpha/3$ and the significance level $\alpha$ is typically less than $30\%$. The second, $M_{n,2+\delta}^2n^{2/(2+\delta)}\geq 2$, is also non-restrictive since $M_{n,2+\delta}^2$ is a non-decreasing sequence of positive constants and $n^{2/(2+\delta)}\to\infty$. In principle, Theorem \[thm:powercompSN\] allows for the possibility of the inequality in Eq.  being an equality. However, in cases in which the Lasso-based first step selects a strict subset of the moment inequalities chosen by the SN method (i.e. the inclusion in Eq.  is strict), the inequality in Eq.  can be strict. In fact, the inequality in Eq.  can be strict even in cases in which the Lasso-based and SN-based first step agree on the set of binding moment inequalities. The intuition for this is that our Lasso-based method considers the usual $\alpha$-quantile while the other two-step methods consider the $(\alpha - 2\beta_{n})$-quantile for the sequence of positive constants $\{ \beta_{n} \}_{ n\geq 1}$. 
This slight difference always plays in favor of the Lasso-based first step having more power.[^13] The relevance of Theorem \[thm:powercompSN\] depends on the generality of the sufficient conditions in Eq.  and . Figure \[fig:SN\] provides heat maps that indicate combinations of values of $M_{n,2+\delta}$ and $p$ under which Eqs.  and are satisfied. The graphs clearly show these conditions are satisfied for a large portion of the parameter space. In fact, the region in which Eq.  fails to hold is barely visible. In addition, the graph also confirms that Eq.  applies more generally than Eq. . ![Consider a moment inequality model with $n=400$, $\beta_n=0.1\%$, $C=2$, $M=M_{n,2+\delta} \in [0,10]$, and $k=p\in \{1,\dots,1000\}$. The left (right) panel shows in red the configurations $(p,M)$ that do not satisfy Eq.  (Eq. , respectively).[]{data-label="fig:SN"}](C41_highlevel.png "fig:"){width="8cm" height="8cm"} ![Consider a moment inequality model with $n=400$, $\beta_n=0.1\%$, $C=2$, $M=M_{n,2+\delta} \in [0,10]$, and $k=p\in \{1,\dots,1000\}$. The left (right) panel shows in red the configurations $(p,M)$ that do not satisfy Eq.  (Eq. , respectively).[]{data-label="fig:SN"}](C41_closedform.png "fig:"){width="8cm" height="8cm"} \[rem:FiniteSN\] Notice that the power comparison in Theorem \[thm:powercompSN\] is a finite sample result. In other words, under any of the sufficient conditions Theorem in \[thm:powercompSN\], the rejection of the null hypothesis by an inference method with SN-based first step implies the same outcome for the corresponding inference method with Lasso-based first step. Expressed in terms of confidence sets, the confidence set with our Lasso first step will be a subset of the corresponding confidence set with a SN first step. To conclude the section, we now compare the power of the two-step bootstrap procedures. \[thm:power\_compB\] Assume Assumption \[ass:Rates3\] and let $B \in \{MB,EB\}$. For all $\theta \in \Theta$ and $n \in \mathbb{N}$, $$\begin{aligned} \hat{J}_L(\theta) ~\subseteq~ \hat{J}_{B}(\theta) \label{eq:JComparisonB} \end{aligned}$$ implies $$\begin{aligned} P (T_n(\theta) > c_{n}^{B,2S}(\alpha)) ~\leq~ P (T_n(\theta) > c_{n}^{B,L}(\theta,\alpha)). \label{eq:PowerComparisonBpre} \end{aligned}$$ Eq.  occurs with probability approaching one, i.e., $$P\del[1]{\hat{J}_L(\theta)\subseteq \hat{J}_{B}(\theta)} ~\ge~ 1-{C}n^{-{c}} \label{eq:JComparisonBstock}$$ under the following sufficient conditions: $M_{n,2+\delta}^2n^{2/(2+\delta)}\geq 2$, $\beta_n\geq {C}n^{-{c}}$ for some $C,c>0$, and any one of the following conditions: $$\begin{aligned} 1-\Phi \left( \frac{3}{2^{3/2}}(\frac{4}{3}+\varepsilon )n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1}\right) & \geq 3\beta _{n}, \qquad \mbox{or,} \label{eq:2Ssuff} \\ \sqrt{(1-\rho(\theta) )\log (p)/2}-\sqrt{2\log (1/[1-3\beta _{n}])} & \geq \frac{3}{ 2^{3/2}}(\frac{4}{3}+\varepsilon )n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1}, \label{eq:2Ssuff2}\end{aligned}$$ where $\rho(\theta) \equiv \max_{j_1\neq j_2}corr[X_{j_1}(\theta),X_{j_2}(\theta)]$.\ Under any of the sufficient conditions in part 2, $$P \del[1]{T_n(\theta) > c_{n}^{B,2S}(\alpha)} ~\leq~ P\del[1]{T_n(\theta) > c_{n}^{B,L}(\theta,\alpha)} + {C}n^{-{c}} \label{eq:PowerComparisonB}$$ Theorem \[thm:power\_compB\] provides sufficient conditions under which any power advantage of the two-step bootstrap method in relative to our two-step bootstrap Lasso vanishes as the sample size diverges to infinity. Specifically, Eq.  
indicates that, under any of the sufficient conditions, this power advantage does not exceed $\tilde{C}n^{-\tilde{c}}$. As in the SN approximation, this relative power difference is a direct consequence of Eq. , i.e., our Lasso-based first step inequality selection procedure chooses a subset of the inequalities in the bootstrap-based first step. The relevance of the result in Theorem \[thm:power\_compB\] depends on the generality of the sufficient condition. This condition has three parts. The first part, i.e., $M_{n,2+\delta}^2n^{2/(2+\delta)}\geq 2$, was already argued to be non-restrictive since $M_{n,2+\delta}^2$ is a non-decreasing sequence of positive constants and $n^{2/(2+\delta)}\to\infty$. The second part, i.e., $\beta_n\geq{C}n^{-{c}}$ is also considered mild as $\{\beta_n\}_{n\ge 1}$ is a sequence of positive constants and $ {C}n^{-{c}}$ converges to zero. The third part is Eq.  or and we deem it to be the more restrictive condition of the three. In the case of the latter, this condition can be understood as imposing an upper bound on the maximal pairwise correlation within the moment inequalities of the model. Monte Carlo simulations {#sec:MonteCarlos} ======================= We now use Monte Carlo simulations to investigate the finite sample properties of our tests and to compare them to those proposed by . Our simulation setup follows closely the moment inequality model considered in their Monte Carlo simulation section. For a hypothetical fixed parameter value $\theta \in \Theta$, we generate data according to the following equation: $$\begin{aligned} X_{i}(\theta)~=~\mu(\theta) + A'\epsilon_{i}~~~~~i=1,\dots,n = 400,\end{aligned}$$ where $\Sigma(\theta)=A'A$, $\epsilon_i=(\epsilon_{i,1},\dots, \epsilon_{i,p})$, and $p \in \{ 200,500,1000\}$. We simulate $\{\epsilon_i\}_{i=1}^{n}$ to be i.i.d. with $E[\epsilon_i]= {\bf 0}_{p}$ and $Var[\epsilon_i]=\mathbf{I}_{p\times p}$, and so $\{X_{i}(\theta)\}_{i=1}^{n}$ are i.i.d. with $E[X_{i}(\theta)]=\mu(\theta)$ and $Var[X_{i}(\theta)]=\Sigma(\theta)$. This model satisfies the moment (in)equality model in Eq.  if and only if $\mu(\theta)\leq {\bf 0}_{p}$. In this context, we are interested in implementing the hypothesis test in Eqs.  (or, equivalently, Eq. ) with a significance level of $\alpha = 5\%$. We simulate $\epsilon_i =(\epsilon_{i,1},\dots, \epsilon_{i,p})$ to be i.i.d. according to two distributions: (i) $\epsilon_{i,j}$ follows a $t$-distribution with four degrees of freedom divided by $\sqrt{2}$, i.e., $ \epsilon_{i,j} \sim t_4/\sqrt{2}$ and (ii) $\epsilon_{i,j} \sim U(-\sqrt{3},\sqrt{3})$. Note that both of these choices satisfy $E[\epsilon_i]= {\bf 0}_{p}$ and $Var[\epsilon_i]=\mathbf{I}_{p\times p}$. Since $(\epsilon_{i,1},\dots, \epsilon_{i,p})$ are i.i.d., the correlation structure across moment inequalities depends entirely on $\Sigma(\theta)$, for which we consider two possibilities: (i) $\Sigma(\theta)_{[j,k]}=1[j=k]+\rho \cdot 1[j\neq k]$ and (ii) a Toeplitz structure, i.e., $\Sigma(\theta)_{[j,k]}=\rho^{|j-k|}$ with $\rho \in \{ 0,0.5,0.9\}$. We repeat all experiments $2,000$ times. The description of the model is completed by specifying $\mu(\theta)$, given in Table \[tab:ParameterChoices\]. We consider ten different specifications of $\mu(\theta)$ which, in combination with the rest of the parameters, results in fourteen simulation designs. Our first eight simulation designs correspond exactly to those in , half of which satisfy the null hypothesis and half of which do not. 
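For completeness, one draw from this data generating process can be produced as follows (a sketch under our own naming conventions; `mu` defaults to the boundary case $\mu(\theta)=\mathbf{0}_p$, and any square root of $\Sigma(\theta)$ may be used in place of the Cholesky factor):

```python
import numpy as np

def simulate_design(n=400, p=500, rho=0.5, dist="t4", toeplitz=False,
                    mu=None, rng=None):
    # X_i = mu + A' eps_i with Var(X_i) = Sigma and i.i.d. standardized errors.
    rng = np.random.default_rng() if rng is None else rng
    mu = np.zeros(p) if mu is None else np.asarray(mu)
    if toeplitz:
        Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    else:
        Sigma = np.full((p, p), rho) + (1.0 - rho) * np.eye(p)
    A = np.linalg.cholesky(Sigma)                      # one square root of Sigma
    if dist == "t4":
        eps = rng.standard_t(df=4, size=(n, p)) / np.sqrt(2.0)   # variance one
    else:
        eps = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(n, p))
    return mu + eps @ A.T                              # rows are X_i(theta)'
```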
We complement these simulations with six designs that do not satisfy the null hypothesis. The additional designs are constructed so that the moment inequalities that agree with the null hypothesis are only slightly or moderately negative.[^14] As the slackness of these inequalities decreases, it becomes harder for two-step inference methods to correctly classify the non-binding moment conditions as such. As a consequence, these new designs will help us understand which two-step inference procedures are better at detecting slack moment inequalities. We implement all the inference methods described in Table \[tab:InfMethods\]. These include all of the procedures described in previous sections as well as some additional “hybrid” methods (i.e. MB-H and EB-H). The bootstrap-based methods are implemented with $B=1,000$ bootstrap replications. Finally, for our Lasso-based first step, we use: $$\begin{aligned} {\lambda}_n~=~C\cdot n^{-1/2}\del[1]{\hat{M}_{n,3}^2n^{-1/3}-n^{-1}}^{-1/2}, \label{eq:LambdaMCs}\end{aligned}$$ with $C\in\{2,4,6\}$ and $\hat{M}_{n,3}\equiv \max_{j = 1,\dots,p} (n^{-1}\sum\nolimits_{i=1}^n|X_{ij}(\theta)|^3)^{1/3}$. This corresponds to the empirical analogue of Eq.  when $\delta=1$ and $\varepsilon\in \{2/3, 8/3, 14/3\}$. We shall begin by considering the simulation designs in CCK as reported in Tables \[tab1\]-\[tab8\]. The first four tables are concerned with finite sample size control. The general finding is that all tests under consideration are very rarely over-sized. The maximal size observed for our procedures is 7.15% (e.g. EB Lasso in Designs 3-4, $p=1,000$, $\rho=0$, and uniform errors) while the corresponding number for CCK is 7.25% (e.g. EB-1S in Designs 3-4, $p=1,000$, $\rho=0$, and uniform errors). Some procedures, such as SN-1S, can be heavily under-sized. Our simulations reveal that, in order to achieve empirical rejection rates close to $\alpha = 5\%$ under the null hypothesis, one needs to use a two-step inference procedure with a bootstrap-based second step (either MB or EB). Before turning to the individual setups for power comparison, let us remark that a first step based on our Lasso procedure compares favorably with a first step based on SN. For example, SN-Lasso with $C=2$ has power greater than or equal to that of SN-2S with $\beta_n=0.1\%$. While the differences may often be small, this finding is in line with the power comparison in Section \[sec:Power\]. Tables \[tab5\]-\[tab8\] contain the designs used by CCK to gauge the power of their tests. Tables \[tab5\] and \[tab6\] consider the case where all moment inequalities are violated. Since none of the moment conditions are slack, there is no room for power gains based on a first-step inequality selection procedure. In this sense, it is not surprising that the first step choice makes no difference in these designs. For example, the power of SN-Lasso is identical to that of SN-1S, while the power of SN-2S is also close to that of SN-1S. However, SN-2S has lower power than SN-1S for some values of $\beta_{n}$ while the power of SN Lasso appears to be invariant to the choice of $C$. The latter is in accordance with our previous findings. The bootstrap still improves power for high values of $\rho$. Next, we consider Tables \[tab7\] and \[tab8\]. In this setting, $90\%$ of the moment conditions have $\mu_j(\theta)=-0.75$ and our results seem to suggest that this value is relatively far away from being binding.
We deduce this from the fact that all first-step selection methods agree on the set of binding moment conditions, producing very similar power results. Table \[tab15\] shows the percentage of moment inequalities retained by each of the first-step procedures in Design 8. When the error terms are $t$-distributed, all first-step procedures retain around $10\%$ of the inequalities, which is also the fraction that is truly binding (and, in this case, violated). Thus, all two-step inference procedures are reasonably powerful. When the error terms are uniformly distributed, all first-step procedures have an equal tendency to aggressively remove slack inequalities. However, we have seen from the size comparisons that this does not seem to result in oversized tests. Finally, we notice that the power of our procedures hardly varies with the choice of $C$. The overall message of the simulation results in Designs 1-8 is that our Lasso-based procedures are comparable in terms of size and power to the ones proposed by CCK. Tables \[tab9\]-\[tab14\] present simulation results for Designs 9-14. These correspond to modifications of the setup in Design 8 in which we progressively decrease the degree of slackness of the non-binding moment inequalities from $-0.75$ to values between $-0.6$ and $-0.1$. Tables \[tab9\]-\[tab10\] show results for Designs 9 and 10. As in the case of Design 8, the degree of slackness of the non-binding moment inequalities is still large enough so that it can be correctly detected by all first-step selection methods. As Table \[tab11\] shows, this pattern changes in Design 11. In this case, the MB Lasso with $C=2$ has a rejection rate that is at least 20 percentage points higher than the most powerful procedure in CCK. For example, with $t$-distributed errors, $p=1,000$, and $\rho=0$, our MB Lasso with $C=2$ has a rejection rate of 71.40% whereas the MB-2S with $\beta_n = 0.01\%$ has a rejection rate of 20.55%. Table \[tab16\] holds the key to these power differences. Ideally, a powerful procedure should retain only the $10\%$ of the moment inequalities that are binding (in this case, violated). The Lasso-based selection indeed often retains close to $10\%$ of the inequalities for $C \in \{2,4\}$. On the other hand, SN-based selection can sometimes retain more than $90\%$ of the inequalities (e.g. see $t$-distributed errors, $p=1,000$, and $\rho=0$). The power advantage in favor of the Lasso-based first step is also present in Design 12 as shown in Table \[tab12\]. In this case, the MB Lasso with $C=2$ has a rejection rate which is at least $15$ percentage points higher than the most powerful procedure in CCK. For $t$-distributed errors, the MB Lasso always has a rejection rate that is at least $20$ percentage points higher than its competitors and sometimes more than $50$ percentage points (e.g. $p=1,000$ and $\rho=0$). As in the previous design, this power gain mainly comes from the Lasso being better at removing the slack moment conditions. Table \[tab13\] shows the results for Design 13. For $t$-distributed errors, the MB Lasso with $C=2$ has a higher rejection rate than the most powerful procedure of CCK (which is often MB-1S) by at least $5$ percentage points. Sometimes the difference is larger than $45$ percentage points (e.g. see $p=1,000$ and $\rho=0$). For uniformly distributed errors, there seems to be no significant difference between our procedures and the ones in CCK; all of them have relatively low power. Design 14 is our last experiment and it is shown in Table \[tab14\].
In Design 14, the degree of slackness of the non-binding moment inequalities is so small that it cannot be detected by any of the first-step selection methods. As a consequence, there is very little difference among the various inference procedures, and all of them exhibit relatively low power. The overall message from Tables \[tab9\]-\[tab14\] is that our Lasso-based inference procedures can have higher power than those in when the slack moment inequalities are difficult to distinguish from zero. Conclusions {#sec:Conclusions} =========== This paper considers the problem of inference in a partially identified moment (in)equality model with possibly many moment inequalities. Our contribution is to propose a novel two-step inference method based on the combination of two ideas. On the one hand, our test statistic and critical values are based on those proposed by . On the other hand, we propose a new first-step selection procedure based on the Lasso. Our two-step inference method can be used to conduct hypothesis tests and to construct confidence sets for the true parameter value. Our inference method has very desirable properties. First, under reasonable conditions, it is uniformly valid, both in the underlying parameter $\theta$ and in the distribution of the data. Second, by virtue of results in , our test is asymptotically optimal in a minimax sense. Third, the power of our method compares favorably with that of the corresponding two-step method in , both in theory and in simulations. On the theory front, we provide sufficient conditions under which the power of our method dominates. These can sometimes represent a significant part of the parameter space. Our simulations indicate that our inference methods are usually as powerful as the corresponding ones in , and can sometimes be more powerful. Fourth, our Lasso-based first step is straightforward to implement. Appendix ======== Throughout the appendix, we omit the dependence of all expressions on $\theta$. Furthermore, LHS and RHS abbreviate “left hand side” and “right hand side”, respectively. Auxiliary results ----------------- \[lem:SampleBound\] Assume Assumptions \[ass:Basic\]-\[ass:Rates\]. Then, for any $\gamma $ s.t. $\sqrt{n}\gamma /\sqrt{1+\gamma ^{2}}\in [ 0,n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1}]$, $$P\del[2]{\max_{j=1,\dots ,p}\vert \hat{\mu}_{j}-\mu _{j}\vert / \hat{\sigma} _{j}>\gamma} \leq 2p(1-\Phi (\sqrt{n}\gamma /\sqrt{ 1+\gamma ^{2}}))[ 1+Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta }( 1+ \sqrt{n}\gamma /\sqrt{ 1+\gamma ^{2}}) ^{2+\delta }] , \label{eq:Lemma1Eq1}$$ where $K$ is a universal constant. For any $i=1,\dots ,n$ and $j=1,\dots ,p$, let $Z_{ij}\equiv ( X_{ij}-\mu _{j})/\sigma _{j}$ and $U_{j}\equiv \sqrt{n} \sum_{i=1}^{n} (Z_{ij}/n)/\sqrt{ \sum_{i=1}^{n} (Z_{ij}^{2}/n)}$. We divide the rest of the proof into three steps. By definition, $\sqrt{n}(\hat{\mu}_{j}-\mu _{j})/\hat{\sigma}_{j}=U_{j}/\sqrt{ 1-U_{j}^{2}/n}$ and so $$\sqrt{n}|\hat{\mu}_{j}-\mu _{j}|/\hat{\sigma}_{j}=|U_{j}|/\sqrt{ 1-|U_{j}|^{2}/n}.\label{eq:reexpressionU}$$ Since the RHS of Eq.  is increasing in $|U_{j}|$, it follows that: $$\begin{aligned} \cbr[2]{\max_{j=1,\dots ,p}|\hat{\mu}_{j}-\mu _{j}|/\hat{\sigma}_{j}>\gamma } =\cbr[2]{\max_{1\leq j\leq p}|U_{j}|/\sqrt{1-|U_{j}|^{2}/n}>\sqrt{n}\gamma } \subseteq \cbr[2]{\max_{1\leq j\leq p}|U_{j}|\geq \sqrt{n}\gamma /\sqrt{1+\gamma ^{2}}}. 
\label{eq:SNbound0}\end{aligned}$$ For every $j=1,\dots ,p,$ $\{Z_{ij}\}_{i=1}^{n}$ is a sequence of independent random variables with $E[Z_{ij}]=0$, $E[Z_{ij}^{2}]=1$, and $ E[|Z_{ij}|^{2+\delta }]\leq M_{n,2+\delta }^{2+\delta }<\infty $. If we let $ S_{nj}=\sum_{i=1}^{n}Z_{ij}$, $V_{nj}^{2}=\sum_{i=1}^{n}Z_{ij}^{2}$, and $ 0<D_{nj}=[n^{-1}\sum_{i=1}^{n}E[|Z_{ij}|^{2+\delta }]]^{1/(2+\delta )}\leq M_{n,2+\delta }<\infty $, then (Lemma A.1) implies that for all $t\in [ 0,n^{\delta /(2(2+\delta ))}D_{nj}^{-1}]$, $$\envert[2]{\frac{P(S_{nj}/V_{nj}\geq t)}{1-\Phi (t)}-1}\leq Kn^{-\delta /2}D_{nj}^{2+\delta }(1+t)^{2+\delta }, \label{eq:SNbound1}$$ where $K$ is a universal constant. By using that $U_{j} = S_{nj}/V_{nj}$, $D_{nj}\leq M_{n,2+\delta }$, and applying Eq.  to $t=\sqrt{n}\gamma /\sqrt{1+\gamma ^{2}}$, it follows that for any $\gamma $ s.t. $\sqrt{n}\gamma /\sqrt{1+\gamma ^{2}}\in [ 0,n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1}]$, $$\begin{aligned} \envert[1]{P\del[2]{U_{j}\geq \sqrt{n}\gamma /\sqrt{1+\gamma ^{2}}}-(1-\Phi (\sqrt{n}\gamma / \sqrt{1+\gamma ^{2}}))} \leq Kn^{-\delta /2}D_{nj}^{2+\delta }\del[1]{1-\Phi (\sqrt{ n }\gamma /\sqrt{1+\gamma ^{2}})}\del[1]{1+\sqrt{n}\gamma /\sqrt{1+\gamma ^{2}} }^{2+\delta }.\end{aligned}$$ Thus, for any $\gamma $ s.t. $\sqrt{n}\gamma /\sqrt{1+\gamma ^{2}}\in [ 0,n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1}]$, $$\begin{aligned} \sum_{j=1}^{p}P\del[2]{U_{j}\geq \sqrt{n}\gamma /\sqrt{1+\gamma ^{2}}}\leq p\del[1]{1-\Phi (\sqrt{n}\gamma /\sqrt{1+\gamma ^{2}})}\sbr[1]{1+Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta }(1+\sqrt{n}\gamma /\sqrt{1+\gamma ^{2}})^{2+\delta }}. \label{eq:SNbound2}\end{aligned}$$ By applying the same argument for $-Z_{ij}$ instead of $Z_{ij}$, it follows that for any $\gamma $ s.t. $\sqrt{n}\gamma /\sqrt{1+\gamma ^{2}}\in [ 0,n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1}]$, $$\begin{aligned} \sum_{j=1}^{p}P\del[2]{-U_{j}\geq \sqrt{n}\gamma /\sqrt{1+\gamma ^{2}}}\leq p\del[1]{1-\Phi (\sqrt{n}\gamma /\sqrt{1+\gamma ^{2}})}\sbr[1]{1+Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta }(1+\sqrt{n}\gamma /\sqrt{1+\gamma ^{2}})^{2+\delta }}. \label{eq:SNbound3}\end{aligned}$$ Consider the following argument. $$\begin{aligned} P\del[2]{\max_{j=1,\dots ,p}|\hat{\mu}_{j}-\mu _{j}|/\hat{\sigma}_{j}>\gamma } &\leq &P\del[2]{\max_{1\leq j\leq p}|U_{j}|\geq \sqrt{n}\gamma /\sqrt{1+\gamma ^{2}} } \\ &\leq &\sum_{j=1}^{p}P\del[2]{|U_{j}|\geq \sqrt{n}\gamma /\sqrt{1+\gamma ^{2}}} \\ &\leq &\sum_{j=1}^{p}P\del[2]{U_{j}\geq \sqrt{n}\gamma /\sqrt{1+\gamma ^{2}} }+\sum_{j=1}^{p}P\sbr[2]{-U_{j}\geq \sqrt{n}\gamma /\sqrt{1+\gamma ^{2}}} \\ &\leq &2p\del[2]{1-\Phi (\sqrt{n}\gamma /\sqrt{1+\gamma ^{2}})}\sbr[2]{1+Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta }\del[1]{1+\sqrt{n}\gamma /\sqrt{1+\gamma ^{2}} }^{2+\delta }},\end{aligned}$$ where the first inequality follows from Eq. , the second inequality follows from Bonferroni bound, and the fourth inequality follows from Eqs.  and . \[lem:SampleBound2\] Assume Assumptions \[ass:Basic\]-\[ass:Rates\] and let $\{\gamma _{n}\}_{n\geq 1}\subseteq \mathbb{R}$ satisfy $\gamma _{n}\geq \gamma _{n}^{\ast }$ for all $n$ sufficiently large, where $$\begin{aligned} \gamma _{n}^{\ast } \equiv n^{-1/2}( M_{n,2+\delta }^{2}n^{-\delta /(2+\delta )}-n^{-1}) ^{-1/2} =( nM_{n,2+\delta }^{2+\delta }) ^{-1/(2+\delta )}( 1-( nM_{n,2+\delta }^{2+\delta }) ^{-2/(2+\delta )}) ^{-1/2}\to 0. 
\label{Eq:LambdaStar}\end{aligned}$$ Then, $$\begin{aligned} P\del[2]{ \max_{j=1,\dots ,p}\vert \hat{\mu}_{j}-\mu _{j}\vert / \hat{\sigma} _{j}>\gamma _{n}} \leq 2p\exp \del[1]{ -2^{-1}n^{\delta /(2+\delta )}/M_{n,2+\delta }^{2}} \sbr[1]{ 1+K( M_{n,2+\delta }/n^{\delta /(2(2+\delta ))}+1) ^{2+\delta }} \to 0. \label{eq:SampleBoundEq}\end{aligned}$$ First, note that the convergence to zero in Eq.  follows from $ nM_{n,2+\delta }^{2+\delta }\to \infty $. Since $\gamma _{n}\geq \gamma _{n}^{\ast }$, Eq. holds if we show: $$\begin{aligned} P\del[2]{ \max_{j=1,\dots ,p}\vert \hat{\mu}_{j}-\mu _{j}\vert / \hat{\sigma} _{j}>\gamma _{n}^*} \leq 2p\exp \del[1]{ -2^{-1}n^{\delta /(2+\delta )}/M_{n,2+\delta }^{2}} \sbr[1]{ 1+K( M_{n,2+\delta }/n^{\delta /(2(2+\delta ))}+1) ^{2+\delta }} \to 0. \label{eq:SampleBoundEq2}\end{aligned}$$ As we show next, Eq.  follows from using Lemma \[lem:SampleBound\] with $\gamma =\gamma _{n}^{\ast }$. This choice of $\gamma$ implies $\sqrt{n}\gamma _{n}^{\ast }/\sqrt{1+(\gamma _{n}^{\ast })^{2} }=n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1}$ making $\gamma =\gamma_n^*$ a valid choice in Lemma \[lem:SampleBound\]. Then, Lemma \[lem:SampleBound\] with $\gamma =\gamma _{n}^{\ast }$ implies that: $$\begin{aligned} P\del[2]{\max_{j=1,\dots ,p}\vert \hat{\mu}_{j}-\mu _{j}\vert / \hat{\sigma} _{j}>\gamma _{n}^{\ast }} &\leq& 2p\del[1]{1-\Phi (n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1})}\sbr[1]{ 1+Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta }( 1+n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1}) ^{2+\delta }} \\ & \leq& 2p\exp \del[1]{-2^{-1}n^{\delta /(2+\delta )}/M_{n,2+\delta }^{2}}\sbr[1]{ 1+K( n^{-\delta /(2(2+\delta ))}M_{n,2+\delta }+1) ^{2+\delta } },\end{aligned}$$ where we have used that $1-\Phi(t)\leq e^{-t^2/2}$. We now show that the RHS of the above display converges to zero by Assumption \[ass:Rates\]. First, notice that $ M_{n,2+\delta }^{(2+\delta )}(\ln ( 2k-p) )^{(2+\delta )/2}n^{-\delta /2}\to 0$. Next, $( 2k-p) >1$ implies that $M_{n,2+\delta }^{(2+\delta )}n^{-\delta /2}\to 0$ and, in turn, this implies that $n^{-\delta /(2(2+\delta ))}M_{n,2+\delta }\to 0$. Furthermore, notice that $M_{n,2+\delta }^{(2+\delta )}(\ln ( 2k-p) )^{(2+\delta )/2}n^{-\delta /2}\to 0$, $M_{n,2+\delta }^{(2+\delta )}(\ln ( 2k-p) )^{(2+\delta )/2}n^{-\delta /2}>0$, and $( 2k-p) \geq p$ implies that $n^{\delta /(2+\delta )}(M_{n,2+\delta }^{2}\ln p)^{-1}\to \infty $. This implies that: $$p\exp \del[1]{-2^{-1}n^{\delta /(2+\delta )}/M_{n,2+\delta }^{2}]}=\exp \del[2]{\ln p\sbr[1]{1-2^{-1}[n^{\delta /(2+\delta )}(M_{n,2+\delta }^{2}\ln p)^{-1}]}} \to 0,$$ completing the proof. By definition, $J\subseteq J_{I}$ where $J_{I}$ is as defined in the proof of Theorem \[thm:Lasso2SSize\]. Then, the result is a corollary of Step 2 in the proof of Theorem \[thm:Lasso2SSize\]. Fix $j=1,\dots ,p$ arbitrarily. @buhlmann/vandegeer:2011 [Eq. (2.5)] implies that the Lasso estimator in Eq.  satisfies: $$\begin{aligned} \hat{\mu}_{L,j} = {\mathrm{sign}}(\hat{\mu}_{j}) \times \max \{|\hat{\mu}_j|-\hat{\sigma}_j{\lambda_n}/{2}, 0\}~~~\forall j=1,\dots,p. \label{eq:ClosedForm1} \end{aligned}$$ To complete the proof, it suffices to show that: $$\{\hat{\mu}_{L,j}\geq -\hat{\sigma}_{j}\lambda _{n}\}~~=~~ \{\hat{\mu }_{j}\geq -3\hat{\sigma}_{j}\lambda _{n}/2\}. \label{eq:ClosedForm2}$$ We divide the verification into four cases. First, consider that $\hat{\sigma }_{j}=0$. 
If so, $-\hat{\sigma}_{j}\lambda _{n}=-3\hat{\sigma} _{j}\lambda _{n}/2=0$ and $\hat{\mu}_{L,j}=sign( \hat{\mu} _{j}) \times \max \{ \vert \hat{\mu}_{j}\vert ,0\} =\hat{\mu}_{j}$, and so Eq.  holds. Second, consider that $\hat{\sigma}_{j}>0$ and $\hat{\mu}_{j}\geq 0$. If so, $\hat{\mu}_{j}\geq 0\geq -3\hat{\sigma}_{j}\lambda _{n}/2$ and so the RHS condition in Eq.  is satisfied. In addition, Eq.  implies that $\hat{\mu}_{L,j}\geq 0\geq -\hat{\sigma} _{j}\lambda _{n}$ and so the LHS of condition in Eq.  is also satisfied. Thus, Eq.  holds. Third, consider that $\hat{\sigma}_{j}>0$ and $\hat{\mu}_{j}\in [ -\hat{\sigma} _{j}\lambda _{n}/2,0)$. If so, $\hat{\mu}_{j}\geq - \hat{\sigma}_{j}\lambda _{n}/2\geq -3\hat{\sigma}_{j}\lambda _{n}/2$ and so the RHS condition in Eq.  is satisfied. In addition, Eq.  implies that $\hat{\mu}_{L,j}=0\geq -\hat{\sigma}_{j}\lambda _{n}$ and so the LHS of condition in Eq.  is also satisfied. Thus, Eq.  holds. Fourth and finally, consider that $\hat{\sigma}_{j}>0$ and $\hat{\mu}_{j}<-\hat{\sigma}_{j}\lambda _{n}/2$. Then, Eq.  implies that $\hat{\mu}_{L,j}=\hat{\mu}_{j}+\hat{\sigma} _{j}\lambda _{n}/2$ and so Eq.  holds. Results for the self-normalization approximation ------------------------------------------------ \[lem:CSNincreasing\] For any $\pi \in (0,0.5]$, $n\in \mathbb{N}$, and $d\in \{0,1\dots ,2k-p\}$, define the function: $$CV(d)\equiv \left\{ \begin{array}{ll} 0 & \text{if }d=0 ,\\ \frac{\Phi ^{-1}( 1-\pi /d) }{\sqrt{1-( \Phi ^{-1}( 1-\pi /d) ) ^{2}/n}} & \text{if }d>0. \end{array} \right.$$ Then, $CV:\{0,1\dots, 2k-p\}\to \mathbb{R}_{+}$ is weakly increasing for $n$ sufficiently large. First, we show that $CV(d)\leq CV(d+1)$ for $d=0$. To see this, use that $\pi \leq 0.5$ such that $\Phi ^{-1}( 1-\pi) \geq 0$, implying that $CV(1)\geq 0=CV(0)$. Second, we show that $CV(d)\leq CV(d+1)$ for any $d>0$. To see this, notice that $CV(d)$ and $CV(d+1)$ are both the result of the composition $g_{1}(g_{2}(\cdot )):\{ 1\dots ,2k-p\} \to \mathbb{R}$ where: $$\begin{aligned} g_{1}(y) &\equiv&y/\sqrt{1-y^{2}/n}:[0,\sqrt{n})\to \mathbb{R}_{+} \\ g_{2}(d) &\equiv&\Phi ^{-1}( 1-{\pi }/{d}) :\{ 1\dots ,2k-p\} \to \mathbb{R}.\end{aligned}$$ We first show that $g_{1}(g_{2}(\cdot ))$ is properly defined by verifying that the range of $g_{2}$ is included in support of $g_{1}$. Notice that $g_{2}$ is an increasing function and so $ g_{2}(d)\in \lbrack g_{2}(1),g_{2}(2(k-p)+p)]=[\Phi ^{-1}( 1-\pi ) ,\Phi ^{-1}( 1-\pi /(2k-p)) ]$. For the lower bound, $\pi \leq 0.5$ implies that $\Phi ^{-1}( 1-\pi ) \geq 0$. For the upper bound, consider the following argument. On the one hand, $( 1-\Phi ( \sqrt{n} )) \leq \exp ( -n/2)/2$ holds for all $n$ large enough. On the other hand, Assumption \[ass:Rates\] implies that $\exp(-n/2)/2\leq {\pi}/(2k-p)$. By combining these two, we conclude that $\Phi ^{-1}( 1-\pi /(2k-p)) \leq \sqrt{n}$ for all $n$ large enough, as desired. From here, the monotonicity of $CV(d)$ follows from the fact that $g_{1}$ and $ g_{2}$ are both weakly increasing functions and so $CV(d)=g_{1}(g_{2}(d)) \leq g_{1}(g_{2}(d+1))=CV(d+1)$. \[lem:SNSize\] Assume Assumptions \[ass:Basic\]-\[ass:Rates\], $\alpha \in (0,0.5)$, and that $H_{0}$ holds. 
For any non-stochastic set $L\subseteq \{1,\dots ,p\}$, define: $$\begin{aligned} T_{n}(L) &\equiv &\max \left\{ \max_{j\in L}{\sqrt{n}\hat{\mu}_{j}}/{\hat{ \sigma}_{j}},\max_{s=p+1,\dots ,k}{\sqrt{n}\vert \hat{\mu} _{s}\vert }/{\hat{\sigma}_{s}} \right\} \\ c_{n}^{SN}(|L|,\alpha) &\equiv &\tfrac{\Phi ^{-1}( 1-\alpha /(2(k-p)+|L|)) }{\sqrt{1-( \Phi ^{-1}( 1-\alpha /(2(k-p)+|L|)) ) ^{2}/n}}.\end{aligned}$$ Then, $$P\del[1]{T_{n}(L)>c_{n}^{SN}(|L|,\alpha)} \leq \alpha +R_{n},$$ where $R_{n}\equiv \alpha Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta }(1+\Phi ^{-1}( 1-\alpha /(2k-p)) )^{2+\delta }\to 0$ and $K$ is a universal constant. Under $H_{0}$, $\sqrt{n}\hat{\mu}_{j}/\hat{\sigma}_{j}\leq \sqrt{n}( \hat{\mu}_{j}-\mu _{j}) /\hat{\sigma}_{j}$ for all $j\in L$ and $\sqrt{n} \vert \hat{\mu}_{s}\vert /\hat{\sigma}_{s}=\sqrt{n}\vert \hat{\mu}_{s}-\mu _{s}\vert /\hat{\sigma}_{s}$ for $s=p+1,\dots ,k$. From this, we deduce that: $$\begin{aligned} T_{n}(L) &=&\max \cbr[2]{ \max_{j\in L}{\sqrt{n}\hat{\mu}_{j}}/{\hat{ \sigma}_{j}},\max_{s=p+1,\dots ,k}{\sqrt{n}\vert \hat{\mu} _{s}\vert }/{\hat{\sigma}_{s}}} \\ &\leq &\max \cbr[2]{ \max_{j\in L}{\sqrt{n}( \hat{\mu}_{j}-\mu _{j}) }/{\hat{\sigma}_{j}},\max_{s=p+1,\dots ,k}{\sqrt{n} \vert \hat{\mu}_{s}-\mu _{s}\vert }/{\hat{\sigma}_{s}}} =T_{n}^{\ast }(L).\end{aligned}$$ For any $i=1,\dots ,n$ and $j=1,\dots ,k$, let $Z_{ij}\equiv (X_{ij}-\mu _{j})/\sigma _{j}$ and $U_{j}\equiv \sqrt{n} \sum_{i=1}^{n} (Z_{ij}/n)/\sqrt{\sum_{i=1}^{n} (Z_{ij}^2/n)}$. It then follows that $\sqrt{n}[\hat{\mu}_{j}-\mu _{j}]/\hat{\sigma}_{j}=U_{j}/\sqrt{ 1-U_{j}^{2}/n}$ and so, $$\begin{aligned} \sqrt{n}(\hat{\mu}_{j}-\mu _{j})/\hat{\sigma}_{j} &=&U_{j}/\sqrt{ 1-|U_{j}|^{2}/n} \\ \sqrt{n}|\hat{\mu}_{j}-\mu _{j}|/\hat{\sigma}_{j} &=&|U_{j}|/\sqrt{ 1-|U_{j}|^{2}/n}.\end{aligned}$$ Notice that the expressions on the RHS are increasing in $U_{j}$ and $ |U_{j}|$, respectively. Therefore, for any $c\geq 0 $, $$\begin{aligned} \{ T_{n}^{\ast }(L)>c\} &=&\cbr[2]{ \max_{j\in L}{\sqrt{n} ( \hat{\mu}_{j}-\mu _{j}) }/{\hat{\sigma}_{j}}>c} \cup \cbr[2]{ \max_{s=p+1,\dots ,k}{\sqrt{n}\vert \hat{\mu}_{s}-\mu _{s}\vert }/{\hat{\sigma}_{s}}>c} \\ &=&\cbr[2]{ \max_{j\in L}U_{j}/\sqrt{1-|U_{j}|^{2}/n}>c} \cup \cbr[2]{ \max_{s=p+1,\dots ,k}|U_{s}|/\sqrt{1-|U_{s}|^{2}/n}>c} \\ &=&\cbr[2]{ \max_{j\in L}U_{j}>c/\sqrt{1+c^{2}/n}} \cup \cbr[2]{ \max_{s=p+1,\dots ,k}|U_{s}|>c/\sqrt{1+c^{2}/n}} .\end{aligned}$$ From here, we conclude that for all $c\geq 0 $ such that: $c/\sqrt{1+c^{2}/n}\in [0,n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1}]$, $$\begin{aligned} P(T_{n}(L)>c) &\leq& P(T_{n}^{\ast }(L)>c) \notag \\ &\leq& P\del[3]{ \cbr[2]{ \max_{j\in L}U_{j}>c/\sqrt{1+c^{2}/n}} \cup \cbr[2]{ \max_{s=p+1,\dots ,k}|U_{s}|>c/\sqrt{1+c^{2}/n}} }\notag \\ &\leq& \sum_{j\in L}P\del[2]{ U_{j}>c/\sqrt{1+c^{2}/n}} +\sum_{s=p+1}^{k}P\del[2]{ |U_{s}|>c/\sqrt{1+c^{2}/n}} \notag \\ &\leq& \sum_{j\in L}P\del[2]{ U_{j}>c/\sqrt{1+c^{2}/n}} +\sum_{s=p+1}^{k}P\del[2]{U_{s}>c/\sqrt{1+c^{2}/n}}+\sum_{g=p+1}^{k}P\del[2]{-U_{g}>c/ \sqrt{1+c^{2}/n}} \notag \\ &\leq& (2(k-p)+|L|) \del[2]{1-\Phi (c/\sqrt{1+c^{2}/n})} \sbr[2]{1+Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta }(1+c/\sqrt{1+c^{2}/n})^{2+\delta }} \label{eq:KeyBound},\end{aligned}$$ where the first inequality follows from $T_{n}(L)\leq T_{n}^{\ast }(L)$, the third inequality is based on a Bonferroni bound, the last inequality follows from Eqs. - in Lemma \[lem:SampleBound\] upon choosing $\gamma=c/\sqrt{n}$ in that result. We are interested in applying Eq.  
with $c=c_{n}^{SN}(|L|,\alpha)$ which satisfies: $$(2(k-p)+|L|) \del[2]{1-\Phi ( c_{n}^{SN}(|L|,\alpha)/\sqrt{ 1+c_{n}^{SN}(|L|,\alpha)^{2}/n})}= \alpha. \label{eq:CoverageDefn}$$ Before doing this, we need to verify that this is a valid choice, i.e., we need to verify that, for all sufficiently large $n$, $$c_{n}^{SN}(|L|,\alpha)/\sqrt{1+c_{n}^{SN}(|L|,\alpha)^{2}/n} ~~\in~~ [0,n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1}].$$ On the one hand, note that $c_{n}^{SN}(|L|,\alpha)\geq 0$ implies that $ c_{n}^{SN}(|L|,\alpha)/\sqrt{1+c_{n}^{SN}(\alpha ,|L|)^{2}/n}\geq 0$. On the other hand, note that, by definition, $c_{n}^{SN}(|L|,\alpha)/\sqrt{1+c_{n}^{SN}(|L|,\alpha)^{2}/n} = \Phi ^{-1}( 1-\alpha /(2(k-p)+|L|))$ and so it suffices to show that $\Phi ^{-1}( 1-\alpha /(|L|+2(k-p))) M_{n,2+\delta }n^{-\delta /(2(2+\delta ))} \to 0$. To show this, note that $\Phi ^{-1}( 1-\alpha /(2(k-p)+|L|)) \leq \sqrt{2\ln ( (|L|+2(k-p))/\alpha ) }\leq \sqrt{2\ln ( (2k-p)/\alpha ) }$, where the first inequality uses that $1-\Phi (t)\leq \exp ( -t^{2}/2) $ for any $t>0$ and the second inequality follows from $|L|\leq p$. These inequalities and $\ln ( (2k-p)/\alpha ) M_{n,2+\delta }^{2}n^{-\delta /(2+\delta )}\to 0$ (by Assumption \[ass:Rates\]) complete the verification. Therefore, by Eq.  with $c=c_{n}^{SN}(|L|,\alpha)$ we conclude that: $$\begin{aligned} P(T_{n}>c_{n}^{SN}(|L|,\alpha)) ~\leq~ \alpha +\alpha Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta }(1+\Phi ^{-1}( 1-\alpha /(2(k-p)+|L|)) )^{2+\delta } ~\leq~ \alpha +R_{n},\end{aligned}$$ where the first inequality uses Eq.  and the second inequality follows from the definition $R_{n}$ and $f(x)\equiv \Phi ^{-1}( 1-\alpha /(2(k-p)+x)) $ being increasing and $|L|\leq p$. To conclude the proof, it suffices to show that $R_{n}\to 0$. To this end, consider the following argument: $$\begin{aligned} R_{n} &\equiv &\alpha Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta }(1+\Phi ^{-1}( 1-\alpha /(2k-p)) )^{2+\delta } \\ &\leq &\alpha 2^{1+\delta}Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta }(1+\vert \Phi ^{-1}( 1-\alpha /(2k-p)) \vert ^{2+\delta }) \\ &\leq &\alpha 2^{1+\delta} K n^{-\delta /2}M_{n,2+\delta }^{2+\delta } (1 + {2}^{1/2}( \ln ( (2k-p)/\alpha ) ) ^{( 2+\delta ) /2}))=o(1),\end{aligned}$$ where the first inequality uses the convexity of $f(x)=x^{2+\delta}$ and $\delta>0$ and Jensen’s Inequality to show $(1+a)^{2+\delta}\leq 2^{1+\delta}(1+a^{2+\delta})$ for any $a>0$, the second inequality follows from $1-\Phi (t)\leq \exp ( -t^{2}/2) $ for any $t>0$ and so $\Phi ^{-1}(1-\alpha /(2k-p) )\leq \sqrt{2\ln ( (2k-p) /\alpha ) }$, and the convergence to zero is based on $n^{-\delta /2}M_{n,2+\delta }^{2+\delta }(\ln (2k-p) )^{(2+\delta )/2}\to 0$ (by Assumption \[ass:Rates\]) which for $2k-p>1$ implies that $n^{-\delta /2}M_{n,2+\delta }^{2+\delta }\to 0 $. This result follows from Lemma \[lem:SNSize\] with $L=\{1,\dots ,p\}$. This proof follows similar steps than (Proof of Theorem 4.2). Let us define the sequence of sets: $$J_{I} ~\equiv~ \{j=1,\dots ,p:\mu _{j}/\sigma _{j}\geq -3\lambda _{n}/4 \}$$ We divide the proof into three steps. We show that $\hat{\mu}_{j}\leq 0$ for all $j\in J_{I}^{c}$ with high probability, i.e., for any $c\in (0,1)$, $$P\del[1]{ \cup _{j\in J_{I}^{c}}\{\hat{\mu}_{j}>0\}} \leq 2p\exp \del[1]{ -2^{-1}n^{\delta /(2+\delta )}/M_{n,2+\delta }^{2}} \sbr[1]{ 1+K( M_{n,2+\delta }/n^{\delta /(2(2+\delta ))}+1) ^{2+\delta }} +\tilde{K} n^{-c}\to 0,$$ where $K$ and $\tilde{K}$ are universal constants. 
First, we show that for any $r\in (0,1)$, $$\cbr[1]{\cup _{j\in J_{I}^{c}}\{\hat{\mu}_{j}>0\}} \cap \cbr[2]{ \sup_{j=1,\dots ,p}\vert \hat{\sigma}_{j}/\sigma _{j}-1\vert \leq r/(1+r) } \subseteq \cbr[2]{ \sup_{j=1,\dots ,p}\vert \hat{\mu}_{j}-\mu _{j}\vert /\hat{\sigma }_{j}>(1-r)\lambda _{n}3/4 }.$$ To see this, suppose that there is an index $j=1,\dots ,p$ s.t. $\mu _{j}/\sigma _{j}<-\lambda _{n}3/4 $ and $\hat{\mu}_{j}>0$. Then, $\vert \hat{\mu}_{j}-\mu _{j}\vert /\hat{\sigma}_{j}>\lambda_{n}(3/4) ( \sigma _{j}/\hat{\sigma}_{j}) $. In turn, $ \sup_{j=1,\dots ,p}\vert 1- \hat{\sigma}_{j}/\sigma _{j}\vert \leq r/(1+r)$ implies that $\vert 1- \sigma _{j}/\hat{\sigma} _{j}\vert \leq r$ and so $( \sigma _{j}/\hat{\sigma}_{j})\lambda _{n}3/4 \geq ( 1-r)\lambda _{n}3/4 $. By combining these, we conclude that $ \sup_{j=1,\dots ,p}\vert \hat{\mu}_{j}-\mu _{j}\vert /\hat{\sigma }_{j}>( 1-r)\lambda _{n}(3/4) $. Based on this, consider the following derivation for any $r\in (0,1)$, $$\begin{aligned} P( \cup _{j\in J_{I}^{c}}\{\hat{\mu}_{j}>0\})~&=~\left\{ \begin{array}{c} P( \cup _{j\in J_{I}^{c}}\{\hat{\mu}_{j}>0\}\cap \sup_{j=1,\dots ,p}\vert \hat{\sigma}_{j}/\sigma _{j}-1\vert \leq r/(1+r)) + \\ P( \cup _{j\in J_{I}^{c}}\{\hat{\mu}_{j}>0\}\cap \sup_{j=1,\dots ,p}\vert \hat{\sigma}_{j}/\sigma _{j}-1\vert >r/(1+r)) \end{array} \right\} \notag\\ ~&\leq~ P\del[2]{\sup_{j=1,\dots ,p}\vert \hat{\mu}_{j}-\mu _{j}\vert /\hat{\sigma}_{j}> ( 1-r)\lambda _{n}3/4 } + P\del[2]{ \sup_{j=1,\dots ,p}\vert \hat{\sigma }_{j}/\sigma _{j}-1\vert >r/(1+r)}.\label{eq:step1_r_step}\end{aligned}$$ By evaluating Eq.  with $r=r_{n}=(((n^{-( 1-c) /2}\ln p+n^{-3/2}( \ln p) ^{2}) B_{n}^{2}) ^{-1}-1)^{-1}\to 0$ (by Assumption \[ass:Rates2\]), we deduce that: $$\begin{aligned} P( \cup _{j\in J_{I}^{c}}\{\hat{\mu}_{j}>0\}) ~\leq~ 2p\exp ( -2^{-1}n^{\delta /(2+\delta )}/M_{n,2+\delta }^{2}) [ 1+K( M_{n,2+\delta }/n^{\delta /(2(2+\delta ))}+1) ^{2+\delta }] +\tilde{K}n^{-c},\end{aligned}$$ where the first term is a consequence of Lemma \[lem:SampleBound2\], $r_{n}\to 0$, and $ ( 1-r_{n})\lambda _{n}3/4 \geq n^{-1/2}( M_{n,2+\delta }^{2}n^{-\delta /(2+\delta )}-n^{-1}) ^{-1/2}$ for all $n$ sufficiently large, and the second term is a consequence of (Lemma A.5) and $ r_{n}/(1+r_{n})=[ n^{-( 1-c) /2}\ln p+n^{-3/2}( \ln p) ^{2}] B_{n}^{2}\to 0$. We show that $J_{I}\subseteq \hat{J}_{L}$ with high probability, i.e., $$P(J_{I}\subseteq \hat{J}_{L}) ~\geq~ 1- 2p\exp ( -2^{-1}n^{\delta /(2+\delta )}/M_{n,2+\delta }^{2}) [ 1+K( M_{n,2+\delta }/n^{\delta /(2(2+\delta ))}+1) ^{2+\delta }] +\tilde{K} n^{-c},$$ where $K,\tilde{K}$ are uniform constants. First, we show that for any $r\in (0,1)$, $$\cbr[2]{\{ J_{I}\not\subseteq \hat{J}_{L}\} \cap \cbr[2]{ \sup_{j=1,\dots ,p}\vert \hat{\sigma}_{j}/\sigma _{j}-1\vert \leq r/(1+r) } } ~\subseteq~ \cbr[2]{ \sup_{j=1,\dots ,p}\vert \hat{\mu}_{j}-\mu _{j}\vert /\hat{\sigma}_{j}>\lambda _{n}(1-r)3/4}.$$ To see this, consider the following argument. Suppose that $j\in J_{I}$ and $j\not\in \hat{J}_{L}$, i.e., $\mu _{j}/\sigma _{j}\geq -\lambda _{n}3/{4} $ and $\hat{\mu}_{L,j}/\hat{\sigma} _{j}< -\lambda _{n}$ or, equivalently by Eq. , $\hat{\mu}_{j}/\hat{\sigma} _{j}< -\lambda _{n}{3}/{2}$. Then, $\vert \mu _{j} - \hat{\mu}_j \vert / \hat{\sigma}_{j}>\lambda _{n}[\frac{3}{2}-\frac{3}{4} (\sigma _{j}/\hat{ \sigma}_{j})]$. 
In turn, $\sup_{j=1,\dots ,p}\vert 1 - \hat{\sigma}_j/ \sigma_j \vert \leq r/(1+r)$ implies that $\vert \sigma _{j}/\hat{\sigma}_{j}-1\vert \leq r$ and so $\lambda _{n}[\frac{3}{2}-\frac{3}{4}(\sigma _{j}/\hat{\sigma}_{j})]\geq \lambda _{n}(1-r)3/4$. By combining these, we conclude that $ \sup_{j=1,\dots ,p}\vert \hat{\mu}_{j}-\mu _{j}\vert /\hat{ \sigma}_{j}>\lambda _{n}(1-r)3/4$, as desired. Based on this, consider the following derivation for any $r\in (0,1)$, $$\begin{aligned} P(J_{I}\not\subseteq \hat{J}_{L}) &=&\left\{ \begin{array}{c} P \left( \{ J_{I}\not\subseteq \hat{J}_{L}\} \cap \{ \sup_{j=1,\dots ,p}\vert \hat{\sigma}_{j}/\sigma _{j}-1\vert \leq r/(1+r)\} \right)\\ +P \left( \{ J_{I}\not\subseteq \hat{J} _{L}\} \cap \{ \sup_{j=1,\dots ,p}\vert \hat{\sigma} _{j}/\sigma _{j}-1\vert >r/(1+r) \} \right) \end{array} \right\} \\ &\leq& P\del[2]{\sup_{j=1,\dots ,p}\vert \hat{\mu}_{j}-\mu _{j}\vert /\hat{\sigma}_{j}>\lambda _{n}(1-r)3/4} + P\del[2]{\sup_{j=1,\dots ,p}\vert \hat{\sigma }_{j}/\sigma _{j}-1\vert >r/(1+r) } .\end{aligned}$$ Notice that the expression on the RHS is exactly the RHS of Eq. . Consequently, by evaluating this equation in $r=r_{n}$ and repeating arguments used in step 1, the desired result follows. We now complete the argument. Consider the following derivation: $$\begin{aligned} \left\{ \{ T_{n}>c_{n}^{SN,L}(\alpha) \}\cap \{J_{I}\subseteq \hat{J}_{L}\} \cap \{\cap _{j\in J_{I}^{c}} \{\hat{\mu}_{j}\leq 0\}\} \right\} &\subseteq \left\{ \{T_{n}>c_{n}^{SN}(|J_{I}|,\alpha )\} \cap \{\cap _{j\in J_{I}^{c}}\cbr[1]{\hat{\mu}_{j}\leq 0}\} \right\}\\ &\subseteq \cbr[3]{\max \cbr[2]{ \max_{j\in J_{I}}\frac{\sqrt{n}\hat{\mu}_{j}}{\hat{ \sigma}_{j}},\max_{s=p+1,\dots ,k}\frac{\sqrt{n}\vert \hat{\mu} _{s}\vert }{\hat{\sigma}_{s}}}>c_{n}^{SN}(\alpha ,|J_{I}|)} ,\end{aligned}$$ where we have used $c_{n}^{SN,L}(\alpha) = c_{n}^{SN}(\alpha ,|\hat{J}_L|)$, Lemma \[lem:CSNincreasing\] (in that $c_{n}^{SN}(\alpha ,d)$ is a non-negative increasing function of $d\in \{0,1\dots ,2k-p\}$), and we take $\max_{j\in J_{I}}\sqrt{n}\hat{\mu}_{j}/\hat{\sigma}_{j}=-\infty $ if $ J_{I}=\emptyset $. Thus, $$\begin{aligned} P( T_{n}>c_{n}^{SN,L}(\alpha)) &=\left\{ \begin{array}{c} P( \{ T_{n}>c_{n}^{SN,L}(\alpha)\} \cap \{ \{ J_{I}\subseteq \hat{J}_{L}\} \cap \{ \cap _{j\in J_{I}^{c}}\{\hat{\mu}_{j}\leq 0\}\} \} ) + \\ P( \{ T_{n}>c_{n}^{SN,L}(\alpha)\} \cap \{ \{ J_{I}\not\subseteq \hat{J}_{L}\} \cup \{ \cup _{j\in J_{I}^{c}}\{\hat{\mu}_{j}>0\}\} \} ) \end{array} \right\} \nonumber \\ &\leq P\del[3]{ \max \cbr[2]{ \max_{j\in J_{I}}\frac{\sqrt{n}\hat{\mu}_{j}}{\hat{ \sigma}_{j}},\max_{s=p+1,\dots ,k}\frac{\sqrt{n}\vert \hat{\mu} _{s}\vert }{\hat{\sigma}_{s}}} >c_{n}^{SN}(\alpha ,|J_{I}|)} + P( J_{I}\not\subseteq \hat{J}_{L}) +P( \cup _{j\in J_{I}^{c}}\{\hat{\mu}_{j}>0\}) \nonumber \\ &\leq \alpha +\left\{ \begin{array}{c} \alpha Kn^{-\delta /2}M_{n,2+\delta }^{2+\delta }(1+\Phi ^{-1}( 1-\alpha /(2k-p)) )^{2+\delta }+ \\ 4p\exp ( -2^{-1}n^{\delta /(2+\delta )}/M_{n,2+\delta }^{2}) [ 1+K( M_{n,2+\delta }/n^{\delta /(2(2+\delta ))}+1) ^{2+\delta }] +2\tilde{K} n^{-c} \end{array} \right\} \nonumber \\ &\leq \alpha +o(1), \end{aligned}$$ where the third line uses Lemma \[lem:SNSize\] and steps 1 and 2, and the convergence in the last line holds uniformly in the manner required by the result. Results for the bootstrap approximation --------------------------------------- \[lem:BootSize\] Assume Assumptions \[ass:Basic\], \[ass:Rates3\], $\alpha \in (0,0.5)$, and that $H_{0}$ holds. 
For any non-stochastic set $L\subseteq \{1,\dots ,p\}$, define: $$T_{n}(L) ~\equiv~ \max \cbr[2]{ \max_{j\in L}{\sqrt{n}\hat{\mu}_{j}}/{\hat{\sigma} _{j}},\max_{s=p+1,\dots ,k}{\sqrt{n}\vert \hat{\mu} _{s}\vert }/{\hat{\sigma}_{s}}},$$ and let $c_{n}^{B}(L,\alpha)$ with $B\in \{MB,EB\}$ denote the conditional $(1-\alpha )$-quantile based on the bootstrap. Then, $$P(T_{n}(L)>c_{n}^{B}(L,\alpha)) \leq \alpha +\tilde{C}n^{- \tilde{c}},$$ where $\tilde{c},\tilde{C}>0$ are positive constants that only depend on the constants $c,C$ in Assumption \[ass:Rates3\]. Furthermore, if $\mu _{j}=0$ for all $j\in L$ then: $$\vert P( T_{n}(L)>c_{n}^{B}(L,\alpha)) -\alpha \vert \leq \tilde{C}n^{-\tilde{c}}.$$ Finally, since $\tilde{c},\tilde{C}$ depend only on the constants $c,C$ in Assumption \[ass:Rates3\], the proposed bounds are uniform in all parameters $\theta \in \Theta$ and distributions $P$ that satisfy the assumptions in the statement. In the absence of moment equalities, these results follow from replacing $\{1,\dots ,p\}$ with $L$ in (proof of Theorem 4.3). As we show next, our proof can be completed by simply redefining the set of moment inequalities by adding the moment equalities as two sets of inequalities with reversed sign. Define $A = A(L)\equiv L\cup \{p+1,\dots ,k\}\cup \{k+1,\dots ,2k-p\}$ with $|A|=|L|+2(k-p)$ and for any $i=1,\dots ,n$, define the following $|A|$-dimensional auxiliary data vector: $$X_{i}^{E}\equiv \cbr[1]{ \{X_{ij}\}_{j\in L}^{\prime },\{X_{is}\}_{s=p+1,\dots ,k}^{\prime },\{-X_{is}\}_{s=p+1,\dots ,k}^{\prime }} ^{\prime }.$$ Based on these definitions, we modify all expressions analogously, e.g., $$\begin{aligned} \mu ^{E} &=&\{ \{\mu _{j}\}_{j\in L}^{\prime },\{\mu _{s}\}_{s=p+1,\dots ,k}^{\prime },\{-\mu _{s}\}_{s=p+1,\dots ,k}^{\prime }\} ^{\prime }, \\ \sigma ^{E} &=&\{ \{\sigma _{j}\}_{j\in L}^{\prime },\{\sigma _{s}\}_{s=p+1,\dots ,k}^{\prime },\{\sigma _{s}\}_{s=p+1,\dots ,k}^{\prime }\} ^{\prime },\end{aligned}$$ and notice that $H_{0}$ is equivalently re-written as $\mu ^{E}\leq \mathbf{0}_{|A|}$. In the new notation, the test statistic is re-written as $T_{n}(L)=\max_{j\in A} {\sqrt{n}\hat{ \mu}_{j}^{E}}/{\hat{\sigma}_{j}^{E}}$, and the critical values can be re-written analogously. In particular, the MB and EB test statistics are respectively defined as follows: $$\begin{aligned} W_{n}^{MB}(L)&=&\max_{j\in A} {n^{-1/2}\sum_{i=1}^{n} \epsilon _{i}( X_{ij}^{E}-\hat{\mu}_{j}^{E}) }/{\hat{\sigma }_{j}^{E}},\\ W_{n}^{EB}(L)&=&\max_{j\in A} {n^{-1/2}\sum_{i=1}^{n} ( X_{ij}^{*,E}-\hat{\mu}_{j}^{E}) }/{\hat{\sigma }_{j}^{E}}.\end{aligned}$$ Given this setup, the result follows immediately from (Theorem 4.3). This result follows from Lemma \[lem:BootSize\] with $L=\{1,\dots ,p\}$. \[lem:CvBootIncreasing\] For any $\alpha \in (0,0.5)$, $n\in \mathbb{N}$, $B\in \{MB,EB\}$, and $L_{1}\subseteq L_{2}\subseteq \{1,\dots ,p\}$, $$c_{n}^{B}(L_{1},\alpha)~\leq~ c_{n}^{B}(L_{2},\alpha ).$$ Furthermore, under the above assumptions, $P\del[1]{c_{n}^{B}(L_{1},\alpha)\geq 0}\geq 1-Cn^{-c}$, where $c,C$ are universal constants. By definition, $L_{1}\subseteq L_{2}$ implies that $W_{n}^{B}(L_{1})\leq W_{n}^{B}(L_{2})$ which, in turn, implies $c_{n}^{B}(L_{1},\alpha)\leq c_{n}^{B}(L_{2},\alpha)$. We now turn to the second result. If the model has at least one moment equality, then $W_n^B(L_{1})\geq 0$ and so $c_{n}^{B}(\alpha ,L_{1})\geq 0$. 
If the model has no moment equalities, then we consider a different argument depending on the type of bootstrap procedure being implemented. First, consider MB. Conditionally on the sample, $W_{n}^{MB}(L_{1})=\max_{j\in L_{1}}{(1/\sqrt{n})\sum_{i=1}^{n} \epsilon _{i}\left( X_{ij}-\hat{\mu}_{j}\right) }/{\hat{\sigma}_{j}}$ is the maximum of $|L_{1}|$ zero-mean Gaussian random variables. Thus, $\alpha \in (0,0.5)$ implies that $c_{n}^{MB}(\alpha ,L_{1})\geq 0$. Second, consider EB. Let $c_{0}(L_{1},\alpha)$ denote the $(1-\alpha)$-quantile of $ \max_{j \in L_{1}}Y_{j}$ with $\{Y_{j}\}_{j \in L_{1}}\sim N(\mathbf{0},E [ \tilde{Z}\tilde{Z}'] )$, where $\tilde{Z} = \{Z_j\}_{j \in L_{1}}$ and $Z$ is as in Assumption \[ass:Rates2\]. At this point, we apply (Eq. (66)) to our hypothetical model with the moment inequalities indexed by $L_{1}$. Applied to this model, their Eq. (66) yields: $$P\del[1]{c_{n}^{EB}(L_{1},\alpha)\geq c_0(L_{1},\alpha +\gamma_{n})}~\geq~ 1-Cn^{-c},\label{eq:66inCCK}$$ where $\gamma_{n} \equiv \zeta_{n2}+\nu_{n}+8\zeta_{n1}\sqrt{\log p} \in (0, 2Cn^{-c})$, for sequences $\{(\zeta_{n1},\zeta_{n2},\nu_{n})\}_{n\ge 1}$ and universal positive constants $(c,C)$, all specified in . Since $\alpha<0.5$ and $\gamma_{n}<2Cn^{-c}$, it follows that for all $n$ sufficiently large, $\alpha + \gamma_{n}<0.5$ and so $c_0(L_{1},\alpha + \gamma_{n})>0$. The desired result follows from combining this with Eq. . This proof follows similar steps to (Proof of Theorem 4.4). Let us define the sequence of sets: $$J_{I} ~\equiv~ \{j=1,\dots ,p:\mu _{j}/\sigma _{j}\geq -3\lambda _{n}/4 \}$$ We divide the proof into three steps. Steps 1-2 are exactly as in the proof of Theorem \[thm:Lasso2SSize\] so they are omitted. Define $T_{n}(J_{I})$ as in Lemma \[lem:BootSize\] and consider the following derivation: $$\begin{aligned} &\cbr[2]{T_{n}>c_{n}^{B}(\hat{J}_L,\alpha)}\cap \cbr[2]{J_{I}\subseteq \hat{J}_{L}}\cap \cbr[2]{\cap _{j\in J_{I}^{c}}\cbr[1]{\hat{\mu}_{j}\leq 0}}\cap\cbr[2]{c_{n}^{B}(J_I,\alpha)\geq 0}\\ &\subseteq \cbr[2]{T_{n}>c_{n}^{B}(J_{I},\alpha)}\cap \cbr[2]{\cap _{j\in J_{I}^{c}}\cbr[1]{\hat{\mu}_{j}\leq 0}}\cap\cbr[2]{c_{n}^{B}(\alpha ,J_I)\geq 0}\\ &\subseteq \cbr[2]{T_n(J_I)>c_{n}^{B}(J_{I},\alpha)} ,\end{aligned}$$ where the first inclusion follows from Lemma \[lem:CvBootIncreasing\], and the second inclusion follows from noticing that $\cap _{j\in J_{I}^{c}}\{\hat{\mu}_{j}\leq 0\}$ and $\{T_{n}>c_{n}^{B}(\alpha ,J_{I})\geq 0\}$ imply that $\{T_n(J_I)>c_{n}^{B}(\alpha ,J_{I})\}$. 
Thus, $$\begin{aligned} P\del[1]{T_{n}>c_{n}^{B,L}(\alpha)} &= P\del[1]{T_{n}>c_{n}^{B}(\hat{J}_{L},\alpha)} \notag\\ &=\cbr[4]{ \begin{array}{c} P\del[1]{ \{ T_{n}>c_{n}^{B}(\hat{J}_{L},\alpha)\} \cap \{ \{ J_{I}\subseteq \hat{J}_{L}\} \cap \{ \cap _{j\in J_{I}^{c}}\{\hat{\mu}_{j}\leq 0\}\} \cap\cbr[0]{c_{n}^{B}(\alpha ,J_I)\geq 0}\} } + \\ P\del[1]{ \{ T_{n}>c_{n}^{B}(\hat{J}_{L},\alpha)\} \cap \{ \{ J_{I}\not\subseteq \hat{J}_{L}\} \cup \{ \cup _{j\in J_{I}^{c}}\{\hat{\mu}_{j}>0\}\} \cup\cbr[0]{c_{n}^{B}(\alpha ,J_I)< 0}\} } \end{array} } \notag \\ &\leq P( T_{n}(J_{I})>c_{n}^{B}(J_{I},\alpha)) +P( J_{I}\not\subseteq \hat{J}_{L}) +P( \cup _{j\in J_{I}^{c}}\{\hat{ \mu}_{j}>0\})+P(c_{n}^{B}(\alpha ,J_I)< 0) \nonumber \\ &\leq \alpha + Cn^{-c}+ \tilde{C} n^{-\tilde{c}} + 4p\exp ( -2^{-1}n^{\delta /(2+\delta )}M_{n,2+\delta }^{-2}) [ 1+K( n^{-\delta /(2(2+\delta ))}M_{n,2+\delta }+1) ^{2+\delta }] +2\tilde{K} n^{-c} \notag \\ &\leq \alpha +o(1), \label{eq:Derivation2SMBLasso}\end{aligned}$$ where the convergence in the last line is uniform in the manner required by the result. The third line of Eq.  uses Lemmas \[lem:BootSize\] and \[lem:CvBootIncreasing\] as well as steps 1 and 2. We next turn to the second part of the result. By the case under consideration, $\mu={\bf 0}_{p}$ and so $J_{I} = \{ 1, \dots, p \}$. Thus, in this case, $\{J_{I} \subseteq \hat{J}_{L}\} = \{ \hat{J}_{L} = J_{I} = \{1,\dots,p\}\}$. By this and step 2 of Theorem \[thm:Lasso2SSize\], it follows that: $$P\del[1]{ \hat{J}_{L} = J_{I} = \{1,\dots,p\}} ~\geq~ 1 - 2p\exp ( -2^{-1}n^{\delta /(2+\delta )}/M_{n,2+\delta }^{2}) [ 1+K( M_{n,2+\delta }/n^{\delta /(2(2+\delta ))}+1) ^{2+\delta }] + \tilde{K} n^{-c}, \label{eq:mehmet1}$$ where $K,\tilde{K}$ are uniform constants. In turn, notice that $\{\hat{J}_{L} = J_{I} = \{1,\dots,p\} \}$ implies that $c_n^{B,1S} (\alpha) = c_n^{B} ({J}_{I},\alpha) = c_n^{B} (\hat{J}_{L},\alpha) = c_n^{B,L} (\alpha)$. Thus, $$\begin{aligned} P ( T_n > c_n^{B,L} (\alpha) )&=& P ( \{T_n > c_n^{B,L} ( \alpha)\} \cap \{\hat{J}_{L} = J_{I} = \{1,\dots,p\}\}) + P ( \{T_n > c_n^{B,L} ( \alpha)\} \cap \{\hat{J}_{L} = J_{I} = \{1,\dots,p\}\}^{c}) \notag\\ & \ge& P ( \{T_n > c_n^{B,1S} ( \alpha)\} \cap \{\hat{J}_{L} = J_{I} = \{1,\dots,p\}\}) \notag\\ & \ge& P ( T_n > c_n^{B,1S} ( \alpha) ) - P ( \{\hat{J}_{L} = J_{I} = \{1,\dots,p\}\}^{c}) \notag \\ & \ge& \alpha - 2\tilde{C} n^{- \tilde{c}},\label{eq:Derivation2SMBLasso2}\end{aligned}$$ where the last inequality uses the second result in Theorem \[thm:B1Scorollary\], Eq. , and Eq. . If we combine this with Eq. , the result follows. Results for power comparison ---------------------------- The arguments in the main text show that Eq.  implies Eq. . To complete the proof, it suffices to show that the two sufficient conditions imply Eq. . By definition and Lemma \[lem:LassoClosedForm\], $$\begin{aligned} \hat{J}_{SN} &=&\{j=1,\dots,p:\hat{\mu}_{j}/\hat{\sigma}_{j}\geq -2c_{n}^{SN,1S}(\beta _{n})/\sqrt{n}\}, \\ \hat{J}_{L} &=&\{j=1,\dots,p:\hat{\mu}_{j,L}/\hat{\sigma}_{j}\geq -\lambda _{n}\}=\{j=1,\dots ,p:\hat{\mu}_{j}/\hat{\sigma}_{j}\geq -\lambda _{n}3/2\},\end{aligned}$$ We show this by contradiction, i.e., suppose that Eq.  and $\hat{J}_{L}\not\subseteq \hat{J}_{SN}$ hold. By the latter, $ \exists j=1,\dots ,p$ s.t. 
$j\in \hat{J}_{L}\cap \hat{J}_{SN}^{c}$, i.e., $-2c_{n}^{SN,1S}(\beta _{n})/\sqrt{n}>{\hat{\mu}_{j}}/{\hat{\sigma}_{j}}\geq - \lambda _{n}3/2$, which implies that $c_{n}^{SN,1S}(\beta _{n})4/3 < \sqrt{n} \lambda _{n}$, contradicting Eq. . By definition, $c_{n}^{SN,1S}(\beta _{n})4/3 \geq \sqrt{n}\lambda _{n}$ is equivalent to $$\del[1]{\Phi ^{-1}(1-\beta _{n}/p)} ^{2}~\geq~ n\lambda _{n}^{2}\frac{9}{16}. \label{eq:InequalityForPower}$$ The remainder of the proof shows that Eq.  holds under Eq. . First, we establish a lower bound for the LHS of Eq. . For any $x\geq 1$, consider the following inequalities: $$1-\Phi (x)~\geq~ \frac{1}{x+1/x}\frac{1}{\sqrt{2\pi }}e^{-x^{2}/2}~\geq~ \frac{1}{2x}\frac{1}{\sqrt{2\pi }}e^{-x^{2}/2}~\geq~ \frac{1}{2\sqrt{2\pi }}e^{-x^{2}},$$ where the first inequality holds for all $x>0$ by @gordon:1941 [Eq. (10)], the second inequality holds by $x\geq 1$ and so $x>1/x$, and the third inequality holds by $e^{-x^{2}/2}\leq 1/x$ for all $x>0$. Note that for $\beta _{n}\leq 10\% $ and $p\geq 1$, $\Phi ^{-1}( 1-\beta _{n}/p) \geq 1$. Evaluating the previous display at $x=\Phi ^{-1}( 1-\beta _{n}/p) $ yields: $$\del[1]{\Phi ^{-1}(1-\beta _{n}/p)} ^{2}~\geq~ \ln \del[2]{ \frac{p}{2 \sqrt{2\pi }\beta _{n}}} . \label{eq:InequalityForPowerRHS}$$ Second, we establish an upper bound for the RHS of Eq. . By Eq. , $$\begin{aligned} n\lambda _{n}^{2}~=~(4/3+\varepsilon)^{2}\frac{n}{n^{2/(2+\delta )}M_{n,2+\delta }^{2}-1}~\leq~ 2(4/3+\varepsilon)^{2}n^{\delta /(2+\delta )}M_{n,2+\delta }^{-2},\label{eq:nlambda}\end{aligned}$$ where the last inequality used that $1/(x-1)\leq 2/x$ for $x\geq 2$ and that $n^{2/(2+\delta)}M_{n,2+\delta}^2\geq 2$. Thus, $$\frac{9}{16}n\lambda _{n}^{2}~\leq~ \frac{18}{16}(4/3+\varepsilon)^{2}n^{\delta /(2+\delta )}M_{n,2+\delta }^{-2} ~=~ \frac{9}{8}(4/3+\varepsilon)^{2}n^{\delta /(2+\delta )}M_{n,2+\delta }^{-2}. \label{eq:InequalityForPowerLHS}$$ To conclude the proof, notice that Eq.  follows directly from combining Eqs. , , and . This result has several parts. The same arguments used for SN method imply that Eq.  implies Eq. . By definition and Lemma \[lem:LassoClosedForm\], $$\begin{aligned} \hat{J}_{B} &=&\{ j=1,\dots ,p:\hat{\mu}_{j}/\hat{\sigma}_{j}\geq -2c_{n}^{B,1S}( \beta _{n}) /\sqrt{n}\} , \\ \hat{J}_{L} &=&\{ j=1,\dots ,p:\hat{\mu}_{j,L}/\hat{\sigma}_{j}\geq -\lambda _{n}\}= \{j=1,\dots ,p:\hat{\mu}_{j}/\hat{\sigma}_{j}\geq -\lambda _{n}3/2\}, \end{aligned}$$ Suppose that $\hat{J}_{L}\subseteq \hat{J}_{B}$ does not occur, i.e., $ \exists j\in \hat{J}_{L}\cap \hat{J}_{B}^{c}$ s.t. $-2c_{n}^{B}( \beta _{n}) /\sqrt{n}>\hat{\mu}_{j}/\hat{\sigma} _{j}\geq -\lambda _{n}3/2$. From this, we conclude that: $$\{ c_{n}^{B}( \beta _{n}) 4/3\geq \lambda _{n}\sqrt{n} \} ~\subseteq~ \{ \hat{J}_{L}\subseteq \hat{J}_{B}\} .$$ Let $c_{0}(3\beta _{n})$ denote the $(1-3\beta _{n})$-quantile of $ \max_{1\leq j\leq p}Y_{j}$ with $(Y_{1},\ldots ,Y_{p})\sim N(\mathbf{0},E [ ZZ'] )$ with $Z$ as in Assumption \[ass:Rates2\]. In the remainder of this step, we consider two strategies to establish the following result: $$c_{0}(3\beta _{n})4/3~\geq~ \lambda _{n}\sqrt{n}. \label{eq:Strategies}$$ Under Eq. 
, we can conclude that: $$\{ c_{n}^{B}( \beta _{n}) \geq c_{0}(3\beta _{n})\} ~\subseteq~ \{ c_{n}^{B}( \beta _{n}) 4/3\geq \lambda _{n}\sqrt{n}\} ~\subseteq~ \{ \hat{J}_{L}\subseteq \hat{J} _{B}\} .$$ From this and since $c_{0}(\cdot )$ is decreasing, we conclude that for any $\mu _{n}\leq 3\beta _{n}$, $$P\del[1]{ \hat{J}_{L}\subseteq \hat{J}_{B}} ~\geq~ P\del[1]{c_{n}^{B}( \beta _{n}) ~\geq~ c_{0}(3\beta _{n})} ~\geq~ P\del[1]{ c_{n}^{B}( \beta _{n}) \geq c_{0}(\mu _{n})}. \label{eq:PowerGivenStrategies}$$ To complete the proof, it suffices to provide a uniformly high lower bound for the RHS of Eq. . To this end, we consider (Eq. (66)) at the following values: $\alpha =\beta _{n}$, $\nu _{n}=Cn^{-c}$, and $(\zeta _{n2},\zeta _{n1})$ s.t. $\zeta _{n2}+8\zeta _{n1}\sqrt{\ln p}\leq Cn^{-c}$. Under our assumptions, these choices yield $\mu _{n}\equiv \beta _{n}+\zeta _{n2}+v_{n}+8\zeta _{n1}\sqrt{\log p}\leq \beta _{n}+2Cn^{-c}\leq 3\beta _{n}$. By plugging these on (Eq. (66)), the RHS of Eq.  exceeds $1-Cn^{-c}$, as desired. To complete the proof the step, we now describe the two strategies that can be used to show Eq. . The first strategy relies on Eq.  and the second strategy relies on Eq. . By definition, $$c_{0}(3\beta _{n})~\geq~ \Phi ^{-1}(1-3\beta _{n}), \label{eq:P2_B}$$ By combining Eqs. , , and , it follows that: $$c_{0}(3\beta _{n})4/3~\geq~ \Phi ^{-1}(1-3\beta _{n})4/3~\geq~ \sqrt{2} (4/3+\varepsilon )n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1}\geq \sqrt{n} \lambda _{n}.$$ First, the Borell-Cirelson-Sudakov inequality (see, e.g., @boucheron/lugosi/massart:2013 [Theorem 5.8]), implies that for $x\geq 0$, $$P\del[2]{\max_{1\leq j\leq p}Y_{j}~\leq~ E[ \max_{1\leq j\leq p}Y_{j} ] -x} ~\leq~ e^{-x^{2}/2}, \label{eq:Piece0}$$ where we used that the diagonal $E[ZZ']$ is a vector of ones. Equating the RHS of Eq.  to $(1-3\beta _{n})$ yields $x=\sqrt{ 2\log (1/[1-3\beta _{n}])}$ such that: $$c_{0}(3\beta _{n})~\geq~ E[ \max_{1\leq j\leq p}Y_{j}] -\sqrt{2\log (1/[1-3\beta _{n}])}. \label{eq:Piece1}$$ We now provide a lower bound for the first term on the RHS of Eq. . Consider the following derivation: $$\begin{aligned} E[ \max_{1\leq j\leq p}Y_{j}] ~\geq~ \min_{i\neq j}\sqrt{ E(Y_{i}-Y_{j})^{2}\log (p)/2} ~\geq ~\sqrt{2(1-\rho )\log (p)/2},\label{eq:Piece2} \end{aligned}$$ where the first inequality follows from Sudakov’s minorization inequality (see, e.g., @boucheron/lugosi/massart:2013 [Theorem 13.4]) and the second inequality follows from $E[ZZ']$ having a diagonal elements equal to one and the maximal absolute correlation less that $\rho $. Eqs. - imply that: $$c_{0}(3\beta _{n})~\geq~ \sqrt{(1-\rho )\log (p)/2}-\sqrt{2\log (1/[1-3\beta _{n}])}. \label{eq:P2_C}$$ By combining Eqs. , , and , it follows that: $$\begin{aligned} c_{0}(3\beta _{n})4/3 \geq 4/3( \sqrt{(1-\rho )\log (p)/2}-\sqrt{ 2\log (1/[1-3\beta _{n}])}) \geq\sqrt{2}(4/3+\varepsilon )n^{\delta /(2(2+\delta ))}M_{n,2+\delta }^{-1}\geq \sqrt{n}\lambda _{n}. \end{aligned}$$ Consider the following argument. $$\begin{aligned} P(T_{n}\geq c_{n}^{B,2S}( \alpha ) ) &=P( T_{n}\geq c_{n}^{B,2S}( \alpha ) \cap \hat{J}_{L}\subseteq \hat{J}_{B}) +P(T_{n}\geq c_{n}^{B,2S}( \alpha )\cap \hat{J}_{L}\not\subseteq \hat{J}_{B})\\ &\leq P(T_{n}\geq c_{n}^{B,L}( \alpha )) +P(\hat{J}_{L}\not\subseteq \hat{J}_{B}) \\ &\leq P(T_{n}\geq c_{n}^{B,L}( \alpha )) + {C}n^{-{c}}, \end{aligned}$$ where the first inequality uses part 1, and the second inequality uses that the sufficient conditions imply Eq. . 
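To make the bootstrap objects used throughout this appendix concrete, the following sketch (ours, written for the pure-inequality case $k=p$ with our own variable names; it is not the authors' implementation) computes the conditional $(1-\alpha)$-quantile $c_n^{B}(J,\alpha)$ for $B\in\{MB, EB\}$ over a retained set of inequalities $J$.

``` python
import numpy as np

def bootstrap_critical_value(X, selected, alpha=0.05, B=1000, method="MB", seed=0):
    # Conditional (1 - alpha)-quantile of the bootstrap statistic over the
    # retained inequalities `selected` (no moment equalities, k = p).
    # method = "MB": multiplier bootstrap with standard normal weights;
    # method = "EB": empirical (nonparametric) bootstrap.
    rng = np.random.default_rng(seed)
    if len(selected) == 0:
        return 0.0                      # convention consistent with CV(0) = 0
    n = X.shape[0]
    Xs = X[:, selected]
    mu_hat, sigma_hat = Xs.mean(axis=0), Xs.std(axis=0)
    W = np.empty(B)
    for b in range(B):
        if method == "MB":
            eps = rng.standard_normal(n)
            num = (eps[:, None] * (Xs - mu_hat)).sum(axis=0) / np.sqrt(n)
        else:
            Xb = Xs[rng.integers(0, n, n)]      # resample rows with replacement
            num = np.sqrt(n) * (Xb.mean(axis=0) - mu_hat)
        W[b] = np.max(num / sigma_hat)
    return np.quantile(W, 1.0 - alpha)
```

Combining this critical value with the Lasso first step sketched in the Monte Carlo section gives the MB-Lasso and EB-Lasso tests used there.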
[^1]: We thank useful comments and suggestions from the participants in the 2015 World Congress in Montreal, the Second International Workshop in Financial Econometrics, and the seminar at University of Maryland. Bugni acknowledges support by National Institutes of Health under grant no. 40-4153-00-0-85-399. Bredahl Kock acknowledges support from CREATES - Center for Research in Econometric Analysis of Time Series (DNRF78), funded by the Danish National Research Foundation. Lahiri acknowledges support from National Science Foundation under grant no. DMS 130068. [^2]: These include [@CHT07], [@andrews/berry/jiabarwick:2004], [@imbens/manski:2004], @galichon/henry:2006 [@galichon/henry:2013], [@beresteanu/molinari:2008], [@romano/shaikh:2008], [@rosen:2008], [@andrews/guggenberger:2009], [@stoye:2009], [@andrews/soares:2010], @bugni:2010 [@bugni:2015], [@canay:2010], [@romano/shaikh:2010], [@andrews/jiabarwick:2012], [@bontemps/magnac/maurin:2012], [@bugni/canay/guggenberger:2012], [@romano/shaikh/wolf:2014], and [@pakes/porter/ho/ishii:2015], among others. [^3]: These include [@kim:2008], [@ponomareva:2010], @armstrong:2014 [@armstrong:2015], [@chetverikov:2013], [@andrews/shi:2013], and [@chernozhukov/lee/rosen:2013], among others. [^4]: As pointed out by [@chernozhukov/chetverikov/kato:2014c], this is true even for conditional moment (in)equality models (which typically produce an infinite number of unconditional moment (in)equalities). As they explain, the unconditional moment (in)equalities generated by conditional moment (in)equality models inherit the structure from the conditional moment conditions, which limits the underlying econometric model. [^5]: See also the related technical contributions in @chernozhukov/chetverikov/kato:2013a [@chernozhukov/chetverikov/kato:2013b; @chernozhukov/chetverikov/kato:2014a; @chernozhukov/chetverikov/kato:2014b]. [^6]: This feature distinguishes their framework from a standard conditional moment (in)equality model. While conditional moment conditions can generate an uncountable set of unconditional moment (in)equalities, their covariance structure is greatly restricted by the conditioning structure. [^7]: For excellent reviews of this method see, e.g., [@belloni/chernozhukov:2011], [@buhlmann/vandegeer:2011], [@fan/lv/qi:2011], and [@hastie/tibshirani/wainwright:2015]. [^8]: In these models, the limiting distribution of the test statistic is discontinuous in the slackness of the moment inequalities, while its finite sample distribution does not exhibit such discontinuities. In consequence, asymptotic results obtained for any fixed distribution (i.e. pointwise asymptotics) can be grossly misleading, and possibly producing confidence sets that undercover (even asymptotically). See [@imbens/manski:2004], [@andrews/guggenberger:2009], [@andrews/soares:2010], and [@andrews/shi:2013] (Section 5.1). [^9]: We point out that Assumptions \[ass:Basic\]-\[ass:Rates3\] are tailored for the construction of confidence sets in Eq.  in the sense that all the relevant constants are defined uniformly in $\theta \in \Theta$. If we were only interested in the hypothesis testing problem for a particular value of ${\theta}$, then the previous assumptions could be replaced by their “pointwise” versions at the parameter value of interest. [^10]: Nevertheless, it is unclear whether the asymptotic properties of this method carry over to our partially identified moment (in)equality model. We consider that a rigorous handling of these issues is beyond the scope of this paper. 
[^11]: They also consider the so-called “hybrid” procedures in which the first step can be based on one approximation method (e.g. the SN approximation) and the second step on another approximation method (e.g. the bootstrap). While these are not explicitly addressed in this section, they are included in the Monte Carlo section. [^12]: To establish this result, we now use Lemma \[lem:CvBootIncreasing\] instead of Lemma \[lem:CSNincreasing\]. [^13]: This is clearly shown in Designs 5-6 of our Monte Carlos. In these cases, both first-step methods agree on the correct set of binding moment inequalities (i.e. $\hat{J}_L(\theta)=\hat{J}_{SN}(\theta)$). Nevertheless, the slight difference in quantiles produces a small but positive power advantage in favor of methods that use the Lasso in the first stage. [^14]: For reasons of brevity, these additional designs only consider $\Sigma(\theta)$ with a Toeplitz structure. We carried out the same designs with equicorrelated $\Sigma(\theta)$ and obtained qualitatively similar results. These are available from the authors upon request.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We calculate the dispersion relations of plasmonic waves propagating along a chain of semiconducting or metallic nanoparticles in the presence of both a static magnetic field ${\bf B}$ and a liquid crystalline host. The dispersion relations are obtained using the quasistatic approximation and a dipole-dipole approximation to treat the interaction between surface plasmons on different nanoparticles. For plasmons propagating along a particle chain in a nematic liquid crystalline host with both ${\bf B}$ and the director parallel to the chain, we find a small, but finite, Faraday rotation angle. For ${\bf B}$ perpendicular to the chain, but the director still parallel to the chain, the field couples the longitudinal and one of the two transverse plasmonic branches. This coupling is shown to split the two branches at the zero-field crossing by an amount proportional to $|{\bf B}|$. In a cholesteric liquid crystal host with an applied magnetic field parallel to the chain, the dispersion relations for left- and right-moving waves are found to be different. For some frequencies, the plasmonic wave propagates only in one of the two directions.' author: - 'N. A. Pike' - 'D. Stroud' title: 'Faraday Rotation, Band Splitting, and One-Way Propagation of Plasmon Waves on a Nanoparticle Chain' --- Ordered arrays of metal particles in dielectric hosts have many remarkable properties [@meltzer; @maier2; @tang; @park; @pike2013; @pike2013a]. For example, they support propagating modes which are linear superpositions of so-called “surface” or “particle” plasmons. In dilute suspensions of such nanoparticles, these surface plasmons give rise to characteristic absorption peaks, in the near infrared or visible, which play an important role in their optical response, and which have recently been observed in semiconductor nanoparticles as well as metallic ones [@Faucheaux; @Hsu]. For ordered chains, if both the particle dimensions and the interparticle separation are much smaller than the wavelength of light, one can readily calculate the dispersion relations for both transverse ($T$) and longitudinal ($L$) waves propagating along the chain, using the quasistatic approximation, in which the curl of the electric field is neglected. In a previous paper, we calculated these dispersion relations for metallic chains immersed in an anisotropic host, such as a nematic or cholesteric liquid crystal (NLC or CLC) [@pike2013]. Here, we consider the additional effects of a static magnetic field applied either parallel or perpendicular to a chain of nanoparticles. In order to obtain a larger effect from the magnetic field, we will also consider doped semiconducting nanoparticles. Such nanoparticles have a much lower electron density than typical metallic nanoparticles. For example, the electron density in semiconductor nanoparticles, such as the Cu$_{2-x}S$ nanoparticles whose optical properties have recently been studied [@Faucheaux], can be adjusted over a broad range from $10^{17} - 10^{22}$ cm$^{-3}$ or even lower. The largest effects are obtained with electron densities towards the lower end of this range. We find, for a parallel magnetic field orientation, that a linearly polarized $T$ wave undergoes a Faraday rotation as it propagates along the chain. For a field of $2$ Tesla and a suitably low electron density, this Faraday rotation can be at least 1 degree per ten interparticle spacings. 
In this case, for the parallel field orientation, the NLC quantitatively modifies the amount of Faraday rotation, but there would still be rotation without the NLC host. We also consider the propagation of plasmonic waves along a nanoparticle chain, but with a cholesteric liquid crystal (CLC) host. In this case, if the magnetic field is parallel to the chain and the director rotates about the chain axis with a finite pitch angle, we show that the frequencies of left- and right-propagating waves are, in general, not equal. Because of this difference, it is possible, in principle, that for certain frequencies, a linearly polarized wave can propagate along the chain only in one of the two possible directions. Indeed, for sufficiently low electron concentrations, we do find one-way propagation in certain frequency ranges. This realization of one-way propagation is quite different from other proposals for one-way waveguiding [@yu; @mazor; @hadad; @wang; @dixit]. The remainder of this article is organized as follows: First, we use the formalism of Ref.  to determine the dispersion relations for the $L$ and $T$ waves in the presence of an anisotropic host and a static magnetic field. Next, we give simple numerical examples, and finally we provide a brief concluding discussion. Formalism ========= We consider a chain of identical metallic or semiconducting nanoparticles, each a sphere of radius $a$, arranged in a one-dimensional periodic lattice along the $z$ axis, with lattice spacing $d$, so that the $n^{th}$ particle is assumed centered at $(0, 0, nd)$ ($-\infty < n < + \infty$). The propagation of plasmonic waves along such a chain of nanoparticles has already been considered extensively for the case of isotropic metal particles embedded in a homogeneous, isotropic medium [@brong]. In the present work, we calculate, within the quasistatic approximation, how the plasmon dispersion relations are modified when the particle chain is immersed in both an anisotropic dielectric, such as an NLC or CLC, and a static magnetic field. We thus generalize earlier work in which an anisotropic host is considered without the magnetic field [@pike2013a; @pike2013]. In the absence of a magnetic field, the medium inside the particles is assumed to have a scalar dielectric function. If there is a magnetic field ${\bf B}$ parallel to the chain (which we take to lie along the $z$ axis), the dielectric function of the particles becomes a tensor, $\hat{\boldsymbol \epsilon}$. In the Drude approximation, the diagonal components are $\epsilon(\omega)$, while $\epsilon_{xy} = -\epsilon_{yx}= iA(\omega)$ and all other components vanish. In this case, the components of the dielectric tensor take the form $$\epsilon(\omega) = 1 - \frac{\omega_p^2}{\omega(\omega+ i/\tau )} \rightarrow 1 - \frac{\omega_p^2}{\omega^2}, \label{eq:epsw}$$ and $$A(\omega) = -\frac{\omega_p^2\tau}{\omega}\frac{\omega_c\tau}{(1-i\omega\tau)^2} \rightarrow \frac{\omega_p^2\omega_c}{\omega^3}, \label{eq:aw}$$ where $\omega_p$ is the plasma frequency, $\tau$ is a relaxation time, and $\omega_c$ is the cyclotron frequency; the second limit applies when $\omega\tau \rightarrow \infty$. We will use Gaussian units throughout. While this approximation may be somewhat crude, especially for semiconducting nanoparticles, it should be a reasonable starting point. 
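For concreteness, the following short sketch (our own illustration in Python, not the code used to generate the figures below) evaluates $\epsilon(\omega)$ and $A(\omega)$ for parameter values of the kind quoted in the figure captions below ($\omega_p\tau = 100$, $\omega_c/\omega_p = 0.07$).

``` python
def drude_eps(omega, omega_p, tau):
    # Diagonal Drude dielectric function; tends to 1 - (omega_p/omega)^2 as tau -> infinity.
    return 1.0 - omega_p**2 / (omega * (omega + 1j / tau))

def drude_A(omega, omega_p, omega_c, tau):
    # Off-diagonal magneto-optical component, eps_xy = -eps_yx = i*A(omega);
    # tends to omega_p^2 * omega_c / omega^3 as omega*tau -> infinity.
    return -(omega_p**2 * tau / omega) * omega_c * tau / (1.0 - 1j * omega * tau)**2

omega_p = 5.0e12             # s^-1, value quoted for Fig. 1
tau = 100.0 / omega_p        # so that omega_p * tau = 100
omega_c = 0.07 * omega_p     # roughly 2 Tesla at this electron density
omega = 0.6 * omega_p
print(drude_eps(omega, omega_p, tau), drude_A(omega, omega_p, omega_c, tau))
```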
![Blue symbols (x’s and +’s): Dispersion relations for left (x) and right (+) circularly polarized $T$ plasmon waves propagating along a chain of nanoparticles immersed in a NLC with both the director and a magnetic field parallel to a chain. The particles are described by a Drude dielectric function with $\omega_p \tau = 100$ and $\omega_c/\omega_p = 0.07$. Red symbols (open squares and triangles): Same as the blue symbols, but assuming no single-particle damping ($\omega_p\tau = \infty$). In both cases, the splitting between left and right circularly polarized waves is not visible on the scale of the figure (but the rotation is visible in Fig. 2). For $\omega_p = 5.0\times 10^{12}$ sec$^{-1}$, the chosen $\omega_c/\omega_p$ corresponds to $B \sim 2$ Tesla.[]{data-label="figure1"}](NLC_par_disp.eps){width="45.00000%"} The dielectric function of the liquid crystal host, for either the NLC or CLC case, is taken to be that described in Ref. . The dispersion relations for the surface plasmon waves are determined within the formalism of Ref. . Specifically, we write down a set of self-consistent equations for the coupled dipole moments; these are given in Ref.  as Eq. (9), and repeated here for reference: $${\bf p}_n = -\frac{4\pi a^3}{3}\hat{\bf t}\sum_{n^\prime \neq n}\hat{\cal G}({\bf x}_n - {\bf x}_{n^\prime}){\bf p}_{n^\prime}. \label{eq:selfconsist}$$ Here $$\hat{\bf t} = \delta\hat{\boldsymbol{\epsilon}}\left(\hat{\bf 1}-\hat{\bf \Gamma}\delta\hat{\boldsymbol{\epsilon}}\right)^{-1} \label{eq:tmatrix}$$ is a “t-matrix” describing the scattering properties of the nanoparticle spheres in the surrounding material, $\hat{\cal G}$ and $\hat{\bf \Gamma}$ are a $3\times 3$ Green’s function and depolarization matrix given in Ref. , $\hat{\bf 1}$ is the $3\times3$ identity matrix, and $\delta \hat{\boldsymbol{\epsilon}} = \hat{\boldsymbol{\epsilon}}- \hat{\boldsymbol{\epsilon}}_h$, where $\hat{\boldsymbol \epsilon}_h$ is the dielectric tensor of the liquid crystal host. ![Real and imaginary parts of $\theta d$, the rotation angle per interparticle spacing (in radians), as a function of frequency, assuming $\omega_c/\omega_p = 0.07$. Blue $+$’s (real part) and green $x$’s (imaginary part of $\theta d$): Drude model with no damping ($\omega_p\tau \rightarrow \infty$). Black triangles (real part) and red circles (imaginary part of $\theta d$): Drude model with finite damping ($\omega_p\tau = 100$). In both cases, the magnetic field and the director of the NLC are assumed parallel to the chain axis, as in Fig. \[figure1\]. The dotted lines merely connect the points.[]{data-label="figure2"}](NLC_par_angle.eps){width="45.00000%"} Nematic Liquid Crystal ---------------------- We first consider a chain of such particles placed in an NLC host with ${\bf B}\| \hat{z}$ and parallel to the liquid crystal director $\hat{n}$. Using the formalism of Ref. , combined with Eq. , we obtain two coupled sets of linear equations for the transverse ($T$) components of the polarization, $p_{nx}$ and $p_{ny}$. The solutions are found to be left- and right-circularly polarized transverse waves with frequency $\omega$ and wave number $k_{\pm}$, where the frequencies and wave numbers are connected by the dispersion relations in the nearest-neighbor approximation $$\label{eq:nlc_parallel_disp} 1= -\frac{2}{3} \frac{a^3}{d^3} \frac{\epsilon_\|}{\epsilon_\perp^2}\left(\frac{\epsilon(\omega)-1}{\epsilon(\omega)+2} \mp \frac{3 A(\omega)}{(\epsilon(\omega)+2)^2}\right)\cos(k_\pm d),$$ where we use the notation of Ref. . 
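As a minimal numerical illustration (ours, with assumed host parameters $\epsilon_\| = 2.89$, $\epsilon_\perp = 2.25$, and $a/d = 1/3$ chosen only for definiteness), the dispersion relation above can be inverted for the complex wave number by solving for $\cos(k_\pm d)$ and applying a complex arccosine, reusing the functions drude_eps and drude_A from the previous sketch.

``` python
import cmath

def k_pm_times_d(omega, sign, a_over_d, eps_par, eps_perp, omega_p, omega_c, tau):
    # Invert the dispersion relation above for cos(k d); sign = +1 picks the
    # upper sign (k_+ branch) and sign = -1 the lower sign (k_- branch).
    eps = drude_eps(omega, omega_p, tau)          # defined in the previous sketch
    A = drude_A(omega, omega_p, omega_c, tau)
    bracket = (eps - 1.0) / (eps + 2.0) - sign * 3.0 * A / (eps + 2.0) ** 2
    cos_kd = -1.5 * (eps_perp ** 2 / eps_par) / (a_over_d ** 3 * bracket)
    # cmath.acos returns a complex k d; the branch giving decay in the +z
    # direction is not enforced here and may need to be selected by hand.
    return cmath.acos(cos_kd)

omega = 0.58 * omega_p                            # a frequency inside the T band
kd_plus = k_pm_times_d(omega, +1, 1 / 3, 2.89, 2.25, omega_p, omega_c, tau)
kd_minus = k_pm_times_d(omega, -1, 1 / 3, 2.89, 2.25, omega_p, omega_c, tau)
print(kd_plus, kd_minus)
```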
These equations are accurate to first order in $A(\omega)$. The longitudinal ($L$ or $z$) mode is unaffected by the magnetic field. Since the frequency-dependences of both $\epsilon(\omega)$ and $A(\omega)$ are assumed known, these equations represent implicit relations between $\omega$ and $k_\pm$ for these $T$ waves. If ${\bf B} \| \hat{x}$ while both $\hat{n}$ and the chain of particles are parallel to $\hat{z}$, then the $y$ and $z$ polarized waves are coupled. The dispersion relations are obtained as solutions to the coupled equations $$\begin{aligned} p_{0y} & = & \frac{-2a^3}{3d^3}\left[\frac{\epsilon_\|}{\epsilon_\perp^2}t_{yy}p_{0y} - \frac{2}{\epsilon_\perp}t_{yz}p_{0z}\right]\cos (kd), \nonumber \\ p_{0z} & = & \frac{-2a^3}{3d^3}\left[-\frac{\epsilon_\|}{\epsilon_\perp^2}t_{yz}p_{0y} -\frac{2}{\epsilon_\perp}t_{zz}p_{0z}\right]\cos (kd). \label{eq:rotyz2}\end{aligned}$$ These $y$ and $z$ modes are uncoupled from the $x$ modes. If we solve this pair of equations for $p_{0y}$ and $p_{0z}$ for a given $k$, we obtain a nonzero solution only if the determinant of the matrix of coefficients vanishes. For a given real frequency $\omega$, there will, in general, be two solutions for $k(\omega)$ which decay in the $+z$ direction. These correspond to two branches of propagating plasmon (or plasmon polariton) waves, with dispersion relations which we may write as $k_\pm(\omega)$. The frequency dependence appears in $t_{yz}$, $t_{yy}$, and $t_{zz}$, which depend on $\omega$ \[through $\epsilon(\omega)$ and $A(\omega)$\]. However, unlike the case where the magnetic field is parallel to the $z$ axis, the waves are elliptically rather than circularly polarized.

Cholesteric Liquid Crystal
--------------------------

We now consider immersing the chain of semiconducting nanoparticles in a CLC in the presence of a static magnetic field with ${\bf B}$ parallel to both $\hat{z}$ and the chain. A CLC can be thought of as an NLC whose director is perpendicular to a rotation axis (which we take to be $\hat{z}$), and which spirals about that axis with a pitch angle $\alpha$ per interparticle spacing. For a CLC, if we include only interactions between nearest-neighbor dipoles, the coupled dipole equation \[Eq. \] takes the form $${\tilde p}_n = -\frac{4\pi a^3}{3}[\hat{\bf R}^{-1}(z_1)\hat{\bf t}\hat{\cal G}\cdot{\tilde p}_{n+1} +\hat{\bf R}(z_1)\hat{\bf t}\hat{\cal G}\cdot{\tilde p}_{n-1}], \label{eq:selfcon_two}$$ as is shown in Refs.  and . Here $\tilde{p}_n = {\bf R}_n(z)p_n$ and ${\bf R}_n(z)$ is a $ 2\times2 $ rotation matrix for the director $\hat{n}(z)$. If ${\bf B} \| \hat{z}$, the two $T$ branches are coupled. One can write a $2\times 2$ matrix equation for the coupled dipole equations in the rotated ${x}$ and ${y}$ directions. This equation is found to be $$\tilde{\bf p}_0 = -\frac{2 a^3}{3 d^3}\hat{\bf M}(k, \omega)\cdot \tilde{\bf p}_0, \label{eq:disperchiral}$$ where $\tilde{\bf p}_0$ is the rotated two-component column vector whose components are $\tilde{\bf p}_{x0}$ and $\tilde{\bf p}_{y0}$. The components of the matrix $\hat{\bf M}(k,\omega)$ are found to be $$\begin{aligned} M_{xx} &=& \epsilon_1\lbrack t_{xx}\cos(kd)\cos(\alpha d) + it_{xy}\sin(kd)\sin(\alpha d)\rbrack \nonumber \\ M_{yy} &=& \epsilon_2\lbrack t_{yy}\cos(kd)\cos(\alpha d) + it_{xy}\sin(kd)\sin(\alpha d)\rbrack\nonumber \\ M_{xy} &=&\epsilon_2\lbrack t_{xy}\cos(kd)\cos(\alpha d) - it_{yy}\sin(kd)\sin(\alpha d)\rbrack \nonumber \\ M_{yx} &=&\epsilon_1 \lbrack it_{xx}\sin(kd)\sin(\alpha d) - t_{xy}\cos(kd)\cos(\alpha d)\rbrack \nonumber.
\\ \label{eq:magfield}\end{aligned}$$ where $\epsilon_1 = \frac{\epsilon_\perp^{1/2}}{\epsilon_\|^{3/2}}$ and $\epsilon_2= \frac{1}{\sqrt{\epsilon_\perp \epsilon_\|}}$. The dispersion relations for the two $T$ waves are the non-trivial solutions to the secular equation formed from Eqs.  and . The most interesting result emerging from Eqs.  and  is that the dispersion relations are [*non-reciprocal*]{}, i. e., $\omega(k) \neq \omega(-k)$ in general. The magnetic field appears only in the off-diagonal elements $t_{xy}$ and $t_{yx}$, which are linear in the field except for very large fields. The terms involving $t_{xy}$ and $t_{yx}$ in Eq.  are multiplied by $\sin(kd)$ and thus change sign when $k$ changes sign. Thus, the secular equation determining $\omega(k)$ is not even in $k$, implying that the dispersion relations are non-reciprocal. The non-reciprocal nature of the dispersion relations disappears at $B=0$ even though the off-diagonal terms of ${\bf M}(k,\omega)$ are still nonzero, because $\sin(kd)$ appears only to second order. Also, when the host dielectric is an NLC, the non-reciprocity vanishes because the rotation angle $\alpha =0$ and all terms proportional to $\sin(kd)$ vanish, even at finite $B$. For a finite [**B**]{}, the difference in the magnitudes of the wave numbers of the left- and right-moving waves is $$\label{eq:delta_k} \Delta k_i(\omega) = \lvert \mathrm{Re}(k_{i,L})\rvert -\lvert\mathrm{Re}(k_{i,R})\rvert,$$ where $i = 1,2$ for the two elliptical polarizations and $ L$ or $R$ denotes the left-moving or right-moving branch. If, for example, $\Delta k(\omega) \neq 0$, then the left- and right-moving waves have different magnitudes of wave numbers for a given frequency and are non-reciprocal.

Faraday Rotation and Ellipticity
--------------------------------

By solving for $k(\omega)$ using either Eqs.  or  for an NLC, or  for a CLC, one finds that the two modes polarized perpendicular to [**B**]{} and propagating along the nanoparticle chain have, in general, different wave vectors. For the NLC, we denote these solutions $k_+(\omega)$ and $k_-(\omega)$, while for the CLC, we denote them $k_1(\omega)$ and $k_2(\omega)$. We first discuss the case of an NLC host and ${\bf B} \| \hat{z}$. Then the two solutions represent left- and right-circularly polarized waves propagating along the chain. A linearly polarized mode therefore represents an equal-amplitude mixture of the two circularly polarized modes. This mixture undergoes a [*rotation*]{} of the plane of polarization as it propagates down the chain and is analogous to the usual Faraday effect in a [*bulk*]{} dielectric. The angle of rotation per unit chain length may be written $$\label{eq:angle_single} \theta(\omega) = \frac{1}{2} \left[k_+(\omega) - k_-(\omega)\right].$$ ![Red open squares: dispersion relations for plasmon waves elliptically polarized in the $yz$ plane and propagating along a chain of nanoparticles described by a Drude dielectric function and assuming no damping. The chain is assumed immersed in an NLC with director parallel to the chain ($\hat{z}$), ${\bf B} \| \hat{x}$ and $\omega_c/\omega_p = 0.007$. Blue $x$’s: same as red open squares, but assuming single-particle damping corresponding to $\omega_p\tau = 100$. For $\omega_p = 1.0\times 10^{13}$ sec$^{-1}$, the chosen $\omega_c/\omega_p$ corresponds to about $2$ Tesla. []{data-label="figure3"}](NLC_perp_disp.eps){width="45.00000%"} In the absence of damping, $\theta$ is real.
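To make this concrete, the sketch below solves the nearest-neighbor dispersion relation for the NLC host with ${\bf B}\|\hat{z}$ (given at the end of the previous section) for $\cos(k_\pm d)$, and assembles the rotation angle per interparticle spacing, $\theta d$, in the collisionless limit. The host dielectric constants and the remaining parameters are illustrative placeholders (of the order of the E7 values used below), so the output is not intended to reproduce Figs. \[figure1\] and \[figure2\].

```python
import numpy as np

# Collisionless Drude particles; frequencies are in units of omega_p.
# All parameter values below are illustrative placeholders.
wc_wp = 0.07                      # omega_c / omega_p
a_over_d = 1.0 / 3.0              # particle radius over lattice spacing
eps_par, eps_perp = 3.0, 2.3      # host constants, roughly of the order of those of E7

w = np.linspace(0.30, 0.90, 6001)
eps = 1.0 - 1.0 / w**2            # epsilon(omega) in the omega*tau -> infinity limit
A = wc_wp / w**3                  # A(omega) in the same limit

pref = (2.0 / 3.0) * a_over_d**3 * eps_par / eps_perp**2

def kd(sign):
    """k_+ d (sign = -1) or k_- d (sign = +1) where the wave propagates, NaN elsewhere."""
    c = -1.0 / (pref * ((eps - 1.0) / (eps + 2.0) + sign * 3.0 * A / (eps + 2.0)**2))
    return np.where(np.abs(c) <= 1.0, np.arccos(np.clip(c, -1.0, 1.0)), np.nan)

theta_d = 0.5 * (kd(-1.0) - kd(+1.0))   # rotation angle per interparticle spacing [rad]
band = np.isfinite(theta_d)             # frequencies where both polarizations propagate
print("common propagating band: %.3f to %.3f omega_p" % (w[band].min(), w[band].max()))
```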
If $\tau$ is finite, the electrons in each metal or semiconductor particle will experience damping, leading to an exponential decay of the plasmonic waves propagating along the chain. This damping is automatically included in the above formalism, and can be seen most easily if only nearest-neighbor coupling is included. The quantity $$\theta(\omega) = \theta_1(\omega) + i\theta_2(\omega) \label{eq:thetatau}$$ is the [*complex*]{} angle of rotation per unit length of a linearly polarized wave propagating along the chain of particles. $\mathrm{Re}\lbrack\theta(\omega)\rbrack$ represents the angle of rotation of a linearly polarized wave (per unit length of chain), and $\mathrm{Im}\lbrack\theta(\omega)\rbrack$ is the corresponding Faraday ellipticity, i. e., the amount by which the initially linearly polarized wave becomes elliptically polarized as it propagates along the chain. ![image](CLC_disp.eps){width="70.00000%"} In the case of a CLC host, neither of the two $T$ modes is circularly polarized in general. Thus, the propagation of a linearly polarized wave along the chain cannot be simply interpreted in terms of Faraday rotation.

Numerical Illustrations
=======================

We now numerically evaluate the dispersion relations presented in the previous section assuming the host is the liquid crystal known as E7. This liquid crystal was described by Müller [@muller], from whom we take the dielectric constants $\epsilon_\|$ and $\epsilon_\perp$. We first consider the case of an NLC host with both the director and an applied magnetic field parallel to the chain axis $\hat{z}$. To illustrate the predictions of our simple expressions, we take $a/d = 1/3$, and assume a magnetic field such that the ratio $\omega_c/\omega_p = 0.07$ or $0.007$ as indicated in the Figures. For a typical plasma frequency of $\sim 10^{13}$ sec$^{-1}$, the ratio of $0.007$ would correspond to a magnetic induction of $B \sim 2$ T. We consider both the undamped and damped cases; in the latter, we choose $\omega_p\tau = 100$. For propagating waves we choose solutions for which $\mathrm{Im}\lbrack k_\pm\rbrack > 0$ so that these waves decay to zero, as expected, as $z\rightarrow \infty$ when $\mathrm{Re} \ k >0$. The calculated dispersion relations for the two circular polarizations of plasmonic waves are shown in Fig. \[figure1\] with and without single-particle damping. The splitting between the two circularly polarized $T$ waves is too small to be seen on the scale of this plot. The difference can be seen through its effect on the Faraday rotation angle, which is shown in Fig. \[figure2\]. In this, and all subsequent plots, we have calculated far more points than are shown in the Figure, so that effectively the entire range $0 < kd < \pi$ is included. In Fig. \[figure2\], we plot the corresponding quantity $\theta(k)d$, the rotation angle for a distance equal to one interparticle spacing. When there is no damping, we find that the real part of $\theta d$ is very small and that the imaginary part is zero. Both become larger when damping is included, as we do here by setting $\omega_p\tau = 100$. Even in this case, neither $\mathrm{Re}\lbrack\theta(\omega)d\rbrack$ nor $\mathrm{Im}\lbrack\theta(\omega)d\rbrack$ exceeds about $0.005$ radians, showing that a linear incident wave is rotated only slightly over a single particle spacing (by about 1/4 degree per interparticle spacing for the chosen parameters).
If we assume that the wave [*intensity*]{} has an exponential decay length of no more than around 20 interparticle spacings in realistic chains, the likely Faraday rotation of such a wave will only be 3-4 degrees over this distance. The present numerical calculations also suggest that $\theta(k)d$ is very nearly linear in $B$ for a given $k$, so a larger rotation could be attained by increasing $B$; it can also be increased if the electron density is reduced. For a chain of Drude particles in a NLC where $B \perp \hat{z}$, we find, using the same parameters and requirements as the previous case, that the two non-degenerate waves (one an $L$ and the other a $T$ wave) become mixed when $B\neq 0$. The dispersion relations, again with and without damping, are plotted in Fig. \[figure3\]. When compared to previous work in Ref.  without the presence of damping, the dispersion relations in Fig. \[figure3\] are modified because of the finite damping and presence of the magnetic field. Decreasing the electron density of the metal or semiconductor at fixed [**B**]{} increases the interaction of the coupled $L$ and $T$ mode near their crossing point $kd = 0.7$, although this is not visible in the Figure, for the chosen parameters. We find that the effect of the magnetic field is such that the two dispersion relations appear to be “repelled” near their crossing point, although this is again not visible in the Figure for the magnetic field considered. The band gap that opens between the two bands is proportional to the magnetic field. These features are shown analytically in the Appendix. Finally, we discuss the case of a chain parallel to the $z$ axis, subjected to a magnetic field along the $z$ axis, and immersed in a CLC whose twist axis is also parallel to $\hat{z}$. Using the same host dielectric constants given above and a twist angle of $\alpha d = \pi/6$, we show in Fig. \[figure4\](a) the resulting dispersion relations, i. e., $\omega/\omega_p$ plotted against $\lvert kd \rvert$, for the two transverse branches. In particular, we show both transverse branches for a right-moving wave, displayed as black (+) symbols, and a left-moving wave, displayed as red ($x$) symbols, giving a total of $4$ plots shown in Fig. \[figure4\](a). The separation between the two $T$ branches is on the order of $0.05 \ \omega/\omega_p$ for all $k$. In Fig. \[figure4\](b), we plot the corresponding difference in wave number between the left- and right-moving waves as $\Delta k_i(\omega)d$. Since $\Delta k_i(\omega)d$ is nonzero in a wide frequency range, the wave propagation is indeed non-reciprocal in this range. One-way wave propagation clearly does occur in part of this range. Such propagation occurs when, at particular frequencies, waves can propagate only in one of the two directions. From Fig.  \[figure4\](a), we can see that for the upper dispersion relation, only the right-hand-moving wave can propagate near $kd = \pi$, whereas for the lower one, only the left-hand-moving wave propagates near $kd = 0$. Thus, there is a gap in the plot of $\Delta k_i(\omega)d$ near $\omega/\omega_p = 0.41$, within which there is only one-way wave propagation. In Fig. \[figure4\](b), the boundaries of the frequency band for one-way propagation are indicated by the two horizontal lines. Discussion ========== The present numerical calculations omit several potentially important factors which could alter the numerical results. 
The first of these is the effect of particles beyond the nearest neighbors on the dispersion relations [@brong]. We believe that these further neighbors will mainly modify the details of the dispersion relations without changing the qualitative features introduced by the magnetic field and the NLC or CLC host. Another omitted factor is the (possibly large) influence of the particles in disrupting the director orientation of the liquid crystalline host [@lubensky; @poulin; @stark; @kamien; @allender], whether NLC or CLC. This could be quite important in modifying the dielectric properties of the host liquid near the particles, and could also cause the positions of the particles themselves to be disturbed, depending on whether they are somehow held in place. Even though these effects could be quite substantial, we believe that the qualitative effects found in the present calculations, notably the regime of one-way propagation found for certain frequencies in a CLC host, should still be present. We hope to investigate these effects in future work. Finally, it is known that radiative damping [@weber04], not included in the quasistatic approximation, can have a significant effect on the dispersion relations at special frequencies. Once again, however, we believe that the qualitative effects discussed in this paper should still be present even if radiation damping is included. Thus, we believe that our calculations do qualitatively describe the combined effects of a liquid crystalline host and an applied magnetic field on the surface plasmon dispersion relations. It should be noted that the magnetic field effects described in this paper are numerically small, for the parameters investigated. The smallness is caused mainly by the small value of the ratio $\omega_c/\omega_p$, taken here as $0.07$ or $0.007$ depending on the electron density used in the calculation. To increase this ratio, one could either increase $\omega_c$ (by raising the magnetic field strength), or decrease $\omega_p$ (by reducing the free carrier density in the particle). For the case of a particle chain in a CLC host, any change which increases $\omega_c/\omega_p$ will increase $\Delta k$, leading to a broader frequency range for one-way wave propagation. In summary, we have calculated the dispersion relations for plasmonic waves propagating along a chain of semiconducting or metallic nanoparticles immersed in a liquid crystal and subjected to an applied magnetic field. For a magnetic field parallel to the chain and director axis of the NLC, a linearly polarized wave is Faraday-rotated by an amount proportional to the magnetic field strength. For a CLC host and a magnetic field parallel to the chain, the transverse wave solutions become non-reciprocal (left- and right-traveling waves having different dispersion relations) and there are frequency ranges in which waves propagate only in one direction. Thus, plasmonic wave propagation can be tuned, either by a liquid crystalline host or a magnetic field, or both. In the future, it may be possible to detect some of these effects in experiments, and to use some of the predicted properties for applications, e. g., in optical circuit design.

Acknowledgments
===============

This work was supported, in part, by the Center for Emerging Materials at The Ohio State University, an NSF MRSEC (Grant No. DMR-1420451). In addition, this work was supported by the Belgian Fonds National de la Recherche Scientifique FNRS under grant number PDR T.1077.15-1/7.
Appendix
========

In this Appendix, we show that the two bands shown in Fig. \[figure3\], which in zero magnetic field cross near $kd = 0.7$, are “repelled” in a finite magnetic field ${\bf B} \| {\hat x}$, by an amount proportional to $|{\bf B}|$. That is, a gap opens up at the crossing point which is proportional to $|{\bf B}|$. The dispersion relations for the coupled $y$ and $z$ polarized waves are obtained from Eq. . They have non-trivial solutions when the determinant of the matrix of coefficients vanishes, i.e., $$\begin{aligned} &\left[1+\frac{2}{3}\frac{a^3}{d^3}\frac{\epsilon_\|}{\epsilon_\perp^2}t_{yy}(\omega)\cos(kd)\right]\left[1-\frac{4}{3}\frac{a^3}{d^3}\frac{1}{\epsilon_\perp} t_{zz}(\omega)\cos(kd)\right] \nonumber \\ &-\frac{8}{9}\frac{a^6}{d^6}\frac{\epsilon_\|}{\epsilon_\perp^3}t_{yz}^2(\omega)\cos^2(kd)= 0. \label{eq:determ}\end{aligned}$$ We first consider the case of zero magnetic field. In this case, the off-diagonal components of the t-matrix, namely $t_{yz} = -t_{zy}$, both vanish. The dispersion relations are then given by $$F_1(k, \omega) \equiv 1+\frac{2}{3}\frac{a^3}{d^3}\frac{\epsilon_\|}{\epsilon_\perp^2}t_{yy}(\omega)\cos(kd) = 0$$ and $$F_2(k,\omega) \equiv 1-\frac{4}{3}\frac{a^3}{d^3}\frac{1}{\epsilon_\perp}t_{zz}(\omega)\cos(kd) = 0.$$ The two bands will be degenerate when $F_1(k, \omega) = F_2(k, \omega)$, or equivalently $$\frac{\epsilon_\|}{\epsilon_\perp^2}t_{yy}(\omega) + \frac{1}{\epsilon_\perp}t_{zz}(\omega) = 0. \label{eq:degenw}$$ Eq.  gives the frequency of the degeneracy, which we denote $\omega_0$. The corresponding wave vector $k_0$ of the degeneracy is determined by either $$F_1(k_0, \omega_0) = 0$$ or $$F_2(k_0,\omega_0) = 0.$$ Now we consider Eq.  with non-zero magnetic field, i. e. finite $t_{yz}(\omega)$. For $k = k_0$, assuming that the band energies $\omega$ are close to their zero-field value $\omega_0$, we can expand $F_1$ and $F_2$ in Taylor series as $F_i(k_0,\omega) \sim (\omega - \omega_0) F_i^\prime(k_0, \omega_0)$ for $i = 1, 2$, where $F_i^\prime(k_0,\omega_0) = [\partial F_i(k_0,\omega)/\partial \omega]_{\omega=\omega_0}$. Again to lowest order in $B$, we can write $t_{yz}(\omega)\sim t_{yz}(\omega_0)$. Then the solutions to Eq.  are given by $$F_1^\prime(k_0,\omega_0)F_2^\prime(k_0, \omega_0)(\omega-\omega_0)^2 = \frac{8}{9}\frac{a^6}{d^6}\frac{\epsilon_\|}{\epsilon_\perp^3}t_{yz}^2(\omega_0)\cos^2(k_0d)$$ or $$\begin{aligned} & |\omega(B)-\omega_0| = \\ & \pm \{\frac{8}{9}\frac{a^6}{d^6}\frac{\epsilon_\|}{\epsilon_\perp^3}t_{yz}^2(\omega_0)\cos^2(k_0d)/ [F_1^\prime(k_0,\omega_0)F_2^\prime(k_0,\omega_0)]\}^{1/2} \nonumber \label{eq:finiteb}\end{aligned}$$ Here $\omega(B)$ represents one of the two band energies at $k = k_0$. Since $t_{yz}(\omega_0)$ is proportional to $B$ (see below), Eq.  shows that the splitting between these two band energies at $k = k_0$ is proportional to $|{\bf B}|$. To show that $t_{yz}(\omega_0) \propto B$, we can calculate $t_{yz}$ (and the other components of [**t**]{}) from Eqs. , , and . The result, to lowest order in $\delta\epsilon_{yz}(\omega)$ is $$t_{yz}(\omega) = \frac{\delta\epsilon_{yz}(\omega)}{(1-\Gamma_{yy}\delta\epsilon_{yy}(\omega))(1-\Gamma_{zz}\delta\epsilon_{zz}(\omega))}.$$ Since $\delta\epsilon_{yz}(\omega) \propto A(\omega)$, we see that $t_{yz}(\omega) \propto \omega_c \propto B$. Hence, the splitting between the two bands at $k = k_0$ is proportional to $|{\bf B}|$. For the magnitude of $B$ considered in Fig.
\[figure3\], this splitting is not visible on the scale of the Figure, but we have tentatively verified numerically that this splitting is present for finite magnetic field.

S. A. Maier, M. L. Brongersma, P. G. Kik, S. Meltzer, A. A. G. Requicha, and H. A. Atwater, Adv. Mat. **13**, 1501 (2001).
Nature Mater. **2**, 229 (2003).
Adv. Mater. **17**, 951 (2005).
Nature **451**, 553 (2008).
J. Opt. Soc. Am. B **30**, 1127-1134 (2013).
Plasmonics: Metallic Nanostructures and Their Optical Properties XI, Proc. SPIE **8809**, 880910 (2013).
J. Phys. Chem. Lett. **5**, 976 (2014).
J. Am. Chem. Soc. **133**, 19072 (2011).
Phys. Rev. Lett. **100**, 023902 (2008).
Appl. Phys. Lett. **104**, 061604 (2014).
Phys. Rev. B **86**, 045120 (2012).
Phys. Rev. Lett. **105**, 233904 (2010).
Phys. Rev. Lett. **100**, 013905 (2008).
Phys. Rev. B **62**, R16356 (2000).
Appl. Phys. Lett. **81**, 171 (2002).
Phys. Rev. E **57**, 610 (1998).
Phys. Rev. E **57**, 626 (1998).
Phys. Rep. **351**, 387 (2001).
Liq. Cryst. **23**, 213 (1997).
Phys. Rev. Lett. **67**, 1442 (1991).
Phys. Rev. B **70**, 125429 (2004).
{ "pile_set_name": "ArXiv" }
--- abstract: 'We derive the kinematic Hamiltonian for the so-called “new general relativity” class of teleparallel gravity theories, which is the most general class of theories whose Lagrangian is quadratic in the torsion tensor and does not contain parity violating terms. Our approach makes use of an explicit expression for the flat, in general, nonvanishing spin connection, which avoids the use of Lagrange multipliers, while keeping the theory invariant under local Lorentz transformations. We clarify the relation between the dynamics of the spin connection degrees of freedom and the tetrads. The terms constituting the Hamiltonian of the theory can be decomposed into irreducible parts under the rotation group. Using this, we demonstrate that there are nine different classes of theories, which are distinguished by the occurrence or non-occurrence of certain primary constraints. We visualize these different classes and show that the decomposition into irreducible parts allows us to write the Hamiltonian in a common form for all nine classes, which reproduces the specific Hamiltonians of more restricted classes in which particular primary constraints appear.' author: - Daniel Blixt - Manuel Hohmann - Christian Pfeifer bibliography: - 'NGRADMBlixt2.bib' title: Hamiltonian and primary constraints of new general relativity --- Introduction {#sec:intro} ============ General relativity (GR) is usually formulated using the Levi-Civita connection induced by a pseudo-Riemannian metric. Alternatively, one may employ other connections, such as the flat connections used in teleparallel [@Krssak:2015oua; @Golovnev:2017dox] or symmetric teleparallel gravity [@BeltranJimenez:2017tkd], in order to obtain sets of field equations equivalent to those of GR. In this work we consider teleparallel gravity, where the field variables are the 16 components of a tetrad (or vierbein), instead of the 10 components of a metric. Nowadays it is known that 6 components are related to local Lorentz transformations, while at most 10 encode the gravitational interaction. How many of them actually encode dynamical degrees of freedom of a teleparallel theory of gravity is not conclusively answered in general, and to gain insight into this question is one motivation for this work. Large varieties of teleparallel theories of gravity have been constructed [@Aldrovandi:2013wha; @Cai:2015emx; @Krssak:2018ywd]. Since the building block of these theories is the torsion of the teleparallel connection and not the curvature of the Levi-Civita connection, second order derivatives of the fundamental fields do not appear in the Lagrangians, as long as no terms with additional derivatives on the torsion are introduced, and so no Gibbons-Hawking-York boundary term is required. In this way the teleparallel formulation allows for more freedom in the construction of gravity theories with second order derivative field equations than the metric approach. Moreover, teleparallel gravity theories can be understood as gauge theories with a Yang-Mills theory like structure [@BlagojevicHehl; @Itin:2016nxk; @Hohmann:2017duq], which brings gravity closer to the standard model of particle physics, and might hence open a path to its unification with the other fundamental forces in physics. The other prominent reason to construct modified theories of gravity is to shed light on astrophysical observations which lack explanation within GR coupled to standard model matter only; the most famous ones being the dark matter and dark energy phenomena. 
Before studying the phenomenology of modified teleparallel theories of gravity it is essential to identify those which are self-consistent, i.e. to understand the properties of their degrees of freedom and whether they contain ghosts. This is best done by means of a full-fledged Hamiltonian analysis based on the Dirac-Bergmann algorithm for constrained Hamiltonian systems. It is known that the teleparallel equivalent of general relativity (TEGR), which yields the same dynamics and solutions for the metric defined by the tetrads as general relativity and contains no additional degrees of freedom, is self-consistent and ghost-free [@Okolow:2011nq; @Okolow:2013lwa; @Blagojevic:2000qs; @Blagojevic:2000xd; @Maluf:1994ji; @Ferraro:2016wht; @Maluf:2001rg]. The hope is that TEGR is not the only member of the class of healthy teleparallel theories of gravity in this sense. Because of the complexity in the calculation of the constraint algebra, the Hamiltonian analysis for modified theories of gravity has not been carried out for all the models considered in the literature. With this work we aim to contribute to this goal. One widely studied class of modified teleparallel theories of gravity is that of the $f(T)$-models. They are based on the Lagrangian $T$ employed in TEGR, and can be thought of as the teleparallel counterpart of $f(R)$-theories considered in the metric formalism. While it is known that TEGR and GR are equivalent, this is in general not true for $f(T)$ and $f(R)$ theories. The Hamiltonian analysis of $f(T)$ theories has just recently been presented [@Ferraro:2016wht; @Ferraro:2018tpu] with the conclusion that there are three propagating degrees of freedom, which differs from previous results [@Chen:2014qtl; @Li:2011rn]. Other, more general models are based on a Lagrangian that is a free function of the three parity-even scalars that are quadratic in the torsion tensor and do not involve further fields than the tetrads [@Bahamonde:2017wwk]. Their Hamiltonian analysis is still missing, and, due to the generality of the model, could be very involved. However, among these general models, there are the *New General Relativity* (NGR) models [@PhysRevD.19.3524]: the most general class of teleparallel theories of gravity in four spacetime dimensions, whose Lagrangian is quadratic in the torsion tensor and contains only the tetrad and its first derivatives. This class is parametrized by three constant parameters appearing in the Lagrangian and contains TEGR for a special choice of the parameters. Various studies of NGR have been performed. Solar system constraints have been investigated [@PhysRevD.19.3524], as well as the propagation and polarization modes of gravitational waves on a Minkowski spacetime background [@Hohmann:2018wxu]. This analysis found that already on the linearized level, in general, NGR models predict more than two gravitational wave polarizations. However, it was also found that there exist NGR models different from TEGR with two gravitational wave polarizations. What remains open from the analysis of the linearized theory is whether it differs from the full nonlinear theory. On the nonlinear level strongly coupled fields may appear, similar to what was pointed out in early attempts to formulate massive gravity theories [@Boulware:1973my]. A complete Hamiltonian analysis is needed in order to answer this question. In this article we work towards the goal of a full Hamiltonian description of NGR.
In particular, we derive the fully generic kinematic Hamiltonian for NGR, which is valid for any choice of the parameters appearing in the action. Further, we discuss the occurrence of primary constraints depending on the parameters of the theory. This analysis is an important cornerstone for further studies of NGR in its Hamiltonian formulation. Knowing the primary constraints, it is possible to calculate the successive Poisson brackets, and thus to derive the full constraint algebra, which implies the number of degrees of freedom of the theory. In addition it is the starting point to study the presence or absence of ghosts, and hence to test the viability of different theories within the NGR class. Further, the 3+1 Hamiltonian formalism also leads to the initial value formulation of NGR, required for numerical calculations, such as the precise prediction of gravitational wave signatures. Hamiltonian analyses of specific theories within the NGR class besides TEGR have been studied [@Cheng:1988zg; @Okolow:2011np]. Additionally this line of research extends to the Hamiltonian formulation of more general Poincaré gauge theories, where both torsion and curvature are present [@Blagojevic:1983zz; @Blagojevic:1980mm]. The main difference between the previous studies and the approach we present in this article lies in the method which is employed in order to implement the vanishing curvature of the teleparallel connection. Previous studies can mainly be divided into two groups, either assuming a vanishing spin connection (which is known as the Weitzenböck gauge) [@Maluf:2001rg; @Ferraro:2016wht; @Ferraro:2018tpu; @Okolow:2011np; @Okolow:2011nq], or an arbitrary spin connection, whose curvature is then enforced to vanish by using Lagrange multipliers in the action functional [@Blagojevic:2000qs; @Blagojevic:2000xd]. Here we use a different ansatz, by allowing for a non-vanishing spin connection, as mandated by the covariant formulation of teleparallel gravity [@Krssak:2015oua; @Golovnev:2017dox], which is obtained explicitly by applying a local Lorentz transformation to the vanishing Weitzenböck gauge spin connection. This spin connection is flat by construction, and we will show that it enters only as a gauge degree of freedom. The article is organized as follows: In section \[sec:NGRLangrangian\] we present the Lagrangian for new general relativity. Then we write down the Lagrangian in 3+1 decomposition and derive its conjugate momenta, and discuss the gauge fixing, in section \[sec:momenta\]. In section \[sec:Constraints\] we perform a decomposition into irreducible parts and find the possible primary constraints. Finally the kinematic Hamiltonian is written down in section \[sec:Hamiltonian\], where we use the irreducible parts to write it in a block structure showing the most general expression. In Appendix \[Withoutgauge\] we sketch how one can derive the Hamiltonian without fixing the gauge. Index conventions throughout this article are such that capital Latin indices $A, B, C, \ldots$ are Lorentz indices running from $0$ to $3$, Greek indices $\mu, \nu, \rho, \ldots$ are spacetime indices running from $0$ to $3$ and small Latin indices $i, j, k, \ldots$ are spatial spacetime indices running from $1$ to $3$. A dot over a quantity always denotes derivative with respect to $x^0$ $\dot X = \partial_0 X$. The signature convention for the spacetime metric employed is $(-,+,+,+)$. 
The New General Relativity Lagrangian {#sec:NGRLangrangian} ===================================== Teleparallel theories of gravity are formulated in terms of tetrad fields $\theta^A$, their duals $e_A$ and a curvature-free spin connection $\omega^A{}_B$, which can at least locally be constructed out of local Lorentz transformations $\Lambda^A{}_B$. In local coordinates $(x^{\mu}, \mu = 0, \ldots, 3)$ on spacetime they can be expressed as $$\begin{aligned} \label{tetrad+connection} \theta^A = \theta^A{}_\mu {\mathrm{d}}x^\mu,\quad e_A = e_A{}^\mu \partial_\mu,\quad \omega^A{}_B = \omega^A{}_{B\mu} {\mathrm{d}}x^\mu = \Lambda^A{}_C {\mathrm{d}}(\Lambda^{-1})^C{}_B = \Lambda^A{}_C \partial_\mu (\Lambda^{-1})^C{}_B{\mathrm{d}}x^\mu\,,\end{aligned}$$ and satisfy $$\begin{aligned} \label{eq:tetradinv} \theta^A(e_B) = \theta^A{}_\mu e_B{}^\mu = \delta^A_B,\qquad \theta^A{}_\mu e_A{}^\nu = \delta^\nu_\mu\,.\end{aligned}$$ Implementing the flat teleparallel spin connection in this way has the advantage that it avoids the use of Lagrange multipliers as done in [@Blagojevic:2000qs; @Nester:2017wau]. The spacetime metric $g_{\mu\nu}$, which is a fundamental field in other gravity theories such as GR, here becomes a derived quantity defined by $$\begin{aligned} g_{\mu\nu} = \eta_{AB}\theta^A{}_\mu \theta^B{}_\nu,\qquad g^{\mu\nu} = \eta^{AB}e_A{}^\mu e_B{}^\nu\,.\end{aligned}$$ The fundamental tensorial ingredient from which actions for the fields are built are the first covariant derivatives of the tetrad with respect to the covariant derivative defined by the spin connection $$\begin{aligned} \label{torsion} T^A = {\mathcal{D}}\theta^A = (\partial_\mu \theta^A{}_\nu + \omega^A{}_{B\mu}\theta^B{}_\nu) {\mathrm{d}}x^\mu \wedge {\mathrm{d}}x^\nu = \tfrac{1}{2}T^A{}_{\mu\nu}{\mathrm{d}}x^\mu \wedge {\mathrm{d}}x^\nu\,,\end{aligned}$$ which is nothing but the torsion of the connection. Using the covariant derivative ${\mathcal{D}}$ in the definition of the torsion ensures a covariant transformation behavior under local Lorentz transformations of the tetrad [@Krssak:2015oua; @Golovnev:2017dox]. Changes of index types on tensors are performed by multiplication with tetrad components, for example $T^\mu{}_{\rho\sigma} = T^A{}_{\rho\sigma} e_A{}^\mu$. We now consider the most general Lagrange densities, in four spacetime dimensions, quadratic in torsion, which can be built from the components $T^A{}_{\mu\nu}$ of the torsion tensor and the tetrad alone, while not introducing further derivatives or parity violating terms. 
This class of theories can be parameterized in terms of three free parameters $c_1, c_2$ and $c_3$, and its Lagrangian is given by $$\begin{aligned} \label{eq:LNGR} &L_{\text{NGR}}[\theta, \Lambda] = L_{\text{NGR}}(\theta, \partial \theta, \Lambda, \partial \Lambda)\nonumber\\ &= |\theta| \bigg(c_1 T^\rho{}_{\mu\nu}T_{\rho}{}^{\mu\nu} + c_2T^\rho{}_{\mu\nu} T^{\nu\mu}{}_\rho + c_3 T^\rho{}_{\mu\rho}T^{\sigma\mu}{}_{\sigma}\bigg) = |\theta| G_{\alpha\beta}{}^{\mu\nu\rho\sigma}T^{\alpha}{}_{\mu\nu}T^\beta{}_{\rho\sigma} = |\theta| G_{AB}{}^{\mu\nu\rho\sigma}T^{A}{}_{\mu\nu}T^B{}_{\rho\sigma}\,.\end{aligned}$$ In the last equality we introduced the convenient supermetric or constitutive tensor representation of the Lagrangian [@Ferraro:2016wht; @Itin:2016nxk; @Hohmann:2017duq], where below the metric must be read as a function of the tetrads [^1] $$\begin{aligned} \label{eq:G} G_{AB}{}^{\mu\nu\rho\sigma} =c_1 \eta_{AB}g^{\rho[\mu}g^{\nu]\sigma} - c_2 e_B^{[\mu}g^{\nu][\rho}e^{\sigma]}_A - c_3e_A^{[\mu}g^{\nu][\rho}e^{\sigma]}_B\,.\end{aligned}$$ Teleparallel theories of gravity with the action $$\begin{aligned} S[\theta,\Lambda] = \int_M L_{\text{NGR}}[\theta, \Lambda]\ {\mathrm{d}}^4x\end{aligned}$$ are called *new general relativity* (NGR) theories of gravity [@PhysRevD.19.3524]. Choosing the parameters of the theory to be $c_1=\frac{1}{4}$, $c_2=\frac{1}{2}$ and $c_3=-1$ the theory reduces to TEGR [@Aldrovandi:2013wha]. 3+1 decomposition and conjugate momenta {#sec:momenta} ======================================= In order to derive the Hamilton formulation of the previously introduced NGR teleparallel theories we need to split spacetime into spatial hypersurfaces and a time direction before we derive the canonical momenta of the field variables. We introduce the $3+1$ decomposition in local coordinates $(x^0, x^i)$, where the submanifolds $x^0=const$ are the spatial hypersurfaces. As for the Hamiltonian formulation of general relativity, see, for example, the modern review [@Giulini:2015qha] and references therein, the metric can be decomposed into the lapse function $\alpha$, the shift vector $\beta^{i}$, and the metric on the spatial hypersurfaces $h_{ij}$ $$\begin{aligned} g_{\mu\nu}=\begin{bmatrix} -\alpha^{2}+\beta^{i}\beta^{j}h_{ij} & \beta_{i} \\ \beta_{i} & h_{ij} \end{bmatrix}, && g^{\mu\nu}=\begin{bmatrix} -\frac{1}{\alpha^{2}} & \frac{\beta^{i}}{\alpha^{2}} \\ \frac{\beta^{i}}{\alpha^{2}} & h^{ij}-\frac{\beta^{i}\beta^{j}}{\alpha^{2}}, \end{bmatrix}\,.\end{aligned}$$ Spatial indices $i, j, \ldots$ are raised and lowered with the components of the spatial metric $h_{ij}$, i.e., $\beta_{i}=\beta^{j}h_{ij}$. In the teleparallel formulation of theories of gravity we need to apply the $3+1$ decomposition to the tetrad $\theta^A = \theta^A{}_0 {\mathrm{d}}x^0 + \theta^A{}_i {\mathrm{d}}x^i $ and its dual $e_A = e{}_A{}^0 \partial_0 + e_A{}^i \partial_i$ instead of to the metric. 
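Before carrying out this split, it may be helpful to see the covariant Lagrangian evaluated explicitly once. The following sympy sketch computes the torsion components in the Weitzenböck gauge ($\omega^A{}_{B\mu} = 0$) for a simple diagonal tetrad, contracts them into the three quadratic invariants appearing in $L_{\text{NGR}}$, and evaluates the Lagrangian density for the TEGR parameter choice. The tetrad used here is purely an illustrative example and plays no role in the rest of the paper.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
coords = [t, x, y, z]
a = sp.Function('a', positive=True)(t)

# Illustrative diagonal tetrad theta^A_mu (row: Lorentz index A, column: spacetime index mu)
theta = sp.diag(1, a, a, a)
e = theta.inv()                     # e[mu, A] = e_A^mu, the inverse tetrad
eta = sp.diag(-1, 1, 1, 1)

g = sp.simplify(theta.T * eta * theta)    # g_{mu nu} = eta_AB theta^A_mu theta^B_nu
ginv = g.inv()

# T^rho_{mu nu} = e_A^rho (d_mu theta^A_nu - d_nu theta^A_mu) in the Weitzenboeck gauge
T = [[[sum(e[r, A] * (sp.diff(theta[A, n], coords[m]) - sp.diff(theta[A, m], coords[n]))
           for A in range(4))
       for n in range(4)] for m in range(4)] for r in range(4)]

# the three quadratic torsion invariants entering L_NGR
I1 = sum(g[r, s] * ginv[m, al] * ginv[n, be] * T[r][m][n] * T[s][al][be]
         for r in range(4) for s in range(4) for m in range(4)
         for al in range(4) for n in range(4) for be in range(4))
I2 = sum(T[r][m][n] * ginv[m, al] * T[n][al][r]
         for r in range(4) for m in range(4) for n in range(4) for al in range(4))
v = [sum(T[r][m][r] for r in range(4)) for m in range(4)]
I3 = sum(ginv[m, al] * v[m] * v[al] for m in range(4) for al in range(4))

I1, I2, I3 = sp.simplify(I1), sp.simplify(I2), sp.simplify(I3)
c1, c2, c3 = sp.Rational(1, 4), sp.Rational(1, 2), -1      # TEGR values, as one example
L_density = sp.simplify(sp.sqrt(-g.det()) * (c1 * I1 + c2 * I2 + c3 * I3))

print(I1, I2, I3)   # -6, -3 and -9 times (da/dt / a)**2 for this tetrad
print(L_density)    # 6*a*(da/dt)**2 for the TEGR parameter choice
```

With this concrete illustration in place, we return to the 3+1 split of the tetrad and its dual.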
They can be further expanded into lapse and shift by writing $$\begin{aligned} \theta^A{}_0=\alpha \xi^{A}+\beta^{i}\theta^A{}_i\,,\end{aligned}$$ where we introduced the components $\xi^A$ of the normal vector $n$ to the $x^0=const$ hypersurfaces in the dual tetrad basis [@Okolow:2011nq] $$\begin{aligned} n= \xi^A e_A,\quad \xi^{A}=-\frac{1}{6}\epsilon^{A}_{\ BCD}\theta^B{}_i\theta^C{}_j\theta^D{}_k\epsilon^{ijk}\,.\end{aligned}$$ Lowering and raising upper-case Latin indices with the Minkowski metric $\eta_{AB}$, the $\xi^A$ satisfy $$\begin{aligned} \eta_{AB}\xi^A \xi^B = \xi^{A}\xi_{A}=-1,\quad \eta_{AB}\xi^A \theta^B{}_i = \xi_{A}\theta^A{}_i =0\,.\end{aligned}$$ The dual tetrads and the spatial metric can be expanded into lapse, shift and spatial tetrads as $$\begin{aligned} e_A{}^0=-\frac{1}{\alpha} \xi_{A},\quad e_A{}^i=\theta_A{}^i+ \xi_{A}\frac{\beta^{i}}{\alpha},\quad h_{ij} = \eta_{AB}\theta^A{}_i\theta^B{}_j\,.\end{aligned}$$ Observe the following possible source of confusion. The spatial components of the tetrad with non-canonical index positions are defined as $\theta_A{}^i = \eta_{AB}h^{ij}\theta^B{}_j \neq e_A{}^i=\theta_A{}^i + \xi_A \frac{\beta^i}{\alpha}$. This is related to the fact that in contrast to other approaches, such as the standard calculation for the Hamiltonian of GR, we do not expand tensors into components parallel or orthogonal to the spatial hypersurfaces, but parallel to the hypersurfaces or the time direction. Inserting these expansions into the NGR Lagrangian we obtain the $3+1$ split of the theory $$\begin{aligned} \label{eq:NGRsplit} \begin{split} L_{\text{NGR}}[\alpha, \beta^i,\theta^A{}_i,\Lambda^A{}_B]&=|\theta|\left(4 G_{AB}{}^{i0j0}T^A{}_{i0}T^B{}_{j0} + 4 G_{AB}{}^{ijk0}T^A{}_{ij}T^B{}_{k0} + G_{AB}{}^{ijkl}T^A{}_{ij}T^B{}_{kl}\right)\\ &=\frac{\sqrt{h}}{2\alpha}T^{A}{}_{i0}T^{B}{}_{j0}M^i{}_A{}^j{}_B \\&+\frac{\sqrt{h}}{\alpha}T^{A}{}_{i0}T^{B}{}_{kl}\left[M^i{}_A{}^l{}_B\beta^{k}+2\alpha h^{il} (c_{2}\xi_{B}\theta_A{}^k+c_{3}\xi_{A}\theta_B{}^k) \right] \\&+ \frac{\sqrt{h}}{\alpha}T^{A}{}_{ij}T^{B}{}_{kl}\beta^{i}\left[\tfrac{1}{2}M^j{}_A{}^l{}_B \beta^{k}+2 \alpha h^{jl} (c_{2} \xi_{B}\theta_A{}^k+ c_{3}\xi_{A}\theta_B{}^k) \right] \\&+\alpha \sqrt{h}\cdot {}^{3}\mathbb{T}\,. \end{split}\end{aligned}$$ The matrix $M^i{}_A{}^j{}_B$ is a map from $3\times4$ matrices to their duals, i.e. $4\times3$ matrices, and will play an important role when we express the velocities of the tetrads in terms of the canonical momenta and vice versa. It can be written in the form $$\begin{aligned} \label{eq:Mdef} \begin{split} M^i{}_A{}^j{}_B &= 8\alpha^{2}G_{AB}{}^{i0j0}\\ &=-2(2 c_{1}h^{ij}\eta_{AB} - (c_{2}+c_{3})\xi_{A}\xi_{B}h^{ij} + c_{2}\theta_A{}^j\theta_B{}^i + c_{3}\theta_A{}^i\theta_B{}^j)\,. 
\end{split}\end{aligned}$$ The purely intrinsic torsion scalar on the $x^0=const$ hypersurface is given by $$\begin{aligned} \begin{split} {}^{3}\mathbb{T}\equiv c_{1}\eta_{AB}T^A{}_{ij}T^B{}_{kl}h^{ik}h^{jl}+c_{2}\theta_A{}^i \theta_B{}^j T^A{}_{kj}T^B{}_{li}h^{kl}+c_{3}\theta_A{}^i\theta_B{}^j h^{kl}T^A{}_{ki}T^B{}_{lj} = H_{AB}{}^{ijkl}T^A{}_{ij}T^B{}_{kl}\,, \end{split}\end{aligned}$$ where the spatial supermetric is $$\begin{aligned} H_{AB}{}^{ijkl} = c_1 \eta_{AB}h^{k[i}h^{j]l} - c_2 \theta_B{}^{[i}h^{j][k}\theta^{l]}{}_A - c_3\theta_A{}^{[i}h^{j][k}\theta^{l]}{}_B\,.\end{aligned}$$ In the $3+1$ decomposed form it is not difficult to derive the canonical momenta of the tetrads $\theta^A{}_\mu$ and the Lorentz transformations $\Lambda^A{}_B$ which generate the spin connection. Time derivatives on the variables of the theory only appear in torsion terms $T^A{}_{0i}$ and never act on $\theta^A{}_0$, due to the antisymmetry of the torsion tensor in its lower indices, nor on the lapse $\alpha$ and the shift $\beta$. Hence the canonical momenta of lapse and shift are, not surprisingly, all identically zero, $$\begin{aligned} \label{eq:pilapseshift} \pi_\alpha = \frac{\partial L_{\text{NGR}}}{\partial \dot\alpha} = 0,\quad \pi_{\beta^i} = \frac{\partial L_{\text{NGR}}}{\partial \dot\beta^i} = 0\,.\end{aligned}$$ The canonical momenta of the spatial tetrad components are given by $$\begin{aligned} \begin{split} \label{conj} \frac{\alpha}{\sqrt{h}} \pi_A{}^i= \frac{\alpha}{\sqrt{h}} \frac{\partial L_{\text{NGR}}}{\partial \dot \theta^A{}_i} = T^B{}_{0j}M^i{}_A{}^j{}_B + T^B{}_{kl}\left[M^i{}_A{}^k{}_B\beta^{l}+2\alpha h^{ik}( c_{2}\xi_{B}\theta_A{}^l+c_{3}\xi_{A}\theta_B{}^l) \right]\,, \end{split}\end{aligned}$$ while the momenta for the connection generating Lorentz transformations turn out to be completely determined from the momenta of the tetrad. To see this first observe that the Lorentz group is six dimensional and therefore not all components of the $\Lambda^A{}_B$ are independent of each other. 
To reflect this during the derivation of the corresponding momenta we introduce the auxiliary antisymmetric field $a_{AB}$ in the following way: $$\begin{aligned} \label{Defa} a_{AB}:=\eta_{AC}\omega^C{}_{B0} = \eta_{C[A}\Lambda^C{}_{|D|} \dot{(\Lambda^{-1})}^D{}_{B]} \Leftrightarrow \dot \Lambda^A{}_B=a_{MN}\eta^{A[N} \Lambda^{M]}{}_B \,.\end{aligned}$$ The independent components of the momenta of the Lorentz matrices are then given by $$\begin{aligned} \label{conjugatemomentarelation} \hat{\pi}^{AB} = \frac{\partial L_{\text{NGR}}}{\partial a_{AB}}\,,\end{aligned}$$ and satisfy $$\begin{aligned} \label{conjugatemomentaauxillaryrel} \hat{\pi}^{AB}&=- \pi_C{}^i\eta^{C[B}\theta^{A]}{}_i \,,\end{aligned}$$ which can easily be realized from $$\begin{aligned} \frac{\partial L_{\text{NGR}}}{\partial a_{MN}} &=\frac{\partial L_{\text{NGR}}}{\partial \dot \Lambda^A{}_B} \frac{\partial \dot \Lambda^A{}_B}{\partial a_{MN}} = \frac{\partial L_{\text{NGR}}}{\partial T^C{}_{0k}} \frac{\partial T^C{}_{0k}}{\partial \dot \Lambda^A{}_B} \frac{\partial \dot \Lambda^A{}_B}{\partial a_{MN}} = -\frac{\partial L_{\text{NGR}}}{\partial T^C{}_{0k}} \frac{\partial T^C{}_{0k}}{\partial \dot \theta^A{}_i} \left[\theta^D{}_i(\Lambda^{-1})^B{}_D\right] \frac{\partial \dot \Lambda^A{}_B}{\partial a_{MN}}\\ &= -\frac{\partial L_{\text{NGR}}}{\partial \dot \theta^A{}_i} \left[\theta^D{}_i(\Lambda^{-1})^B{}_D\right]\eta^{A[N}\Lambda^{M]}{}_B\,.\end{aligned}$$ The fact that the momenta $\hat \pi$ are not independent of the momenta $\pi$ demonstrates that the $\Lambda^A{}_B$ are not independent, but only gauge degrees of freedom. In the following, we introduce new field variables $(\tilde \alpha, \tilde \beta^i, \tilde \theta^A{}_i, \tilde \Lambda^A{}_B)$, where $\tilde{\theta}{}^A{}_i(\theta,\Lambda) := \theta^B{}_i (\Lambda^{-1})^A{}_B$ is the so-called Weitzenböck tetrad and all other fields are not changed: $\tilde \alpha=\alpha,\ \tilde \beta^i = \beta^i$ and $\tilde \Lambda^A{}_B = \Lambda^A{}_B$. Using the inverse of this definition $\theta^B{}_i = \tilde{\theta}{}^A{}_i \Lambda^B{}_A$ to express the Lagrangian in terms of the Weitzenböck tetrad yields that $\tilde L_{\text{NGR}}[\alpha, \beta^i,\tilde \theta^A{}_i, \Lambda^A{}_B] := L_{\text{NGR}}[\alpha, \beta^i,\theta^A{}_i(\tilde \theta, \Lambda),\Lambda^A{}_B]$ is independent of $\Lambda$, respectively, $\tilde \Lambda$. The $\alpha$ and $\beta^i$ momenta are not affected by this field redefinition at all. For the momenta in the new frame we find the transformation behavior $$\begin{aligned} \label{eq:Ptrafo1} \tilde{\pi}_A{}^i = \frac{\partial \tilde L_{\text{NGR}}}{\partial\dot{\tilde{ \theta}}^A{}_i} = \pi_B{}^i\Lambda^B{}_A\,,\quad \hat {\tilde \pi}^{MN} = \frac{\partial \tilde L_{\text{NGR}}}{\partial a_{MN}} = \pi_A{}^j \eta^{A[N}\theta^{M]}{}_j + \hat \pi^{MN} \,,\end{aligned}$$ with inverse transformation $$\begin{aligned} \label{eq:Ptrafo3} \pi_A{}^i = \tilde \pi_B{}^i (\Lambda^{-1}){}^B{}_A,\quad \hat \pi^{MN} = \hat {\tilde \pi}^{MN} - \tilde \pi_B{}^j (\Lambda^{-1}){}^B{}_A \eta^{A[N}\Lambda^{M]}{}_C \tilde \theta^C{}_j \,.\end{aligned}$$ Applying the constraint to the second part of the transformation shows that in the Weitzenböck gauge the momenta of the Lorentz transformations all vanish, $\hat {\tilde \pi}^{AB} = 0$. This reproduces the well-known fact that in teleparallel gravity the spin connection represents pure gauge degrees of freedom [@Krssak:2015oua; @Golovnev:2017dox]. 
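The fact that the $\Lambda^A{}_B$ enter only as gauge degrees of freedom rests on the spin connection built from them being flat by construction. This can be verified directly with a few lines of computer algebra; the sketch below does so for a sample local Lorentz transformation composed of a boost and a rotation with arbitrary position-dependent parameters. The particular form of $\Lambda$ is our own illustrative choice and is not used elsewhere in this work.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
coords = [t, x, y, z]

def boost(phi):     # Lorentz boost in the t-x plane
    return sp.Matrix([[sp.cosh(phi), sp.sinh(phi), 0, 0],
                      [sp.sinh(phi), sp.cosh(phi), 0, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])

def rot(psi):       # rotation in the y-z plane
    return sp.Matrix([[1, 0, 0, 0],
                      [0, 1, 0, 0],
                      [0, 0, sp.cos(psi), -sp.sin(psi)],
                      [0, 0, sp.sin(psi), sp.cos(psi)]])

f = sp.Function('f')(t, x)      # position-dependent rapidity (arbitrary)
p = sp.Function('p')(y, z)      # position-dependent rotation angle (arbitrary)

Lam = boost(f) * rot(p)         # sample local Lorentz transformation Lambda^A_B
Lam_inv = rot(-p) * boost(-f)   # its inverse, (B R)^{-1} = R^{-1} B^{-1}

# teleparallel spin connection omega_mu = Lambda d_mu(Lambda^{-1}), a matrix in (A, B)
omega = [Lam * Lam_inv.diff(c) for c in coords]

def curvature_vanishes(mu, nu):
    F = (omega[nu].diff(coords[mu]) - omega[mu].diff(coords[nu])
         + omega[mu] * omega[nu] - omega[nu] * omega[mu])
    F = F.applyfunc(lambda entry: sp.simplify(sp.expand(entry.rewrite(sp.exp))))
    return F == sp.zeros(4, 4)

# the curvature of the flat spin connection vanishes for all coordinate pairs
print(all(curvature_vanishes(mu, nu) for mu in range(4) for nu in range(mu + 1, 4)))
```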
Therefore, without loss of generality, we can work in the so-called Weitzenböck gauge, in which the coefficients of the spin connection vanish identically. The Hamiltonian in the Weitzenböck gauge is then given by the Legendre transform of the Lagrangian, to which we have to add the primary constraints we already discovered, with Lagrange multipliers ${}^{\tilde \alpha}\lambda$, ${}^{\tilde \beta}\lambda^i$ and ${}^{\hat{\tilde\pi}}\lambda_{AB}$ $$\begin{aligned} \label{eq:HNGR} \begin{split} \tilde H_{\text{NGR}}[{}^{\tilde \alpha}\lambda, {}^{\tilde \beta}\lambda^i, {}^{\hat{\tilde\pi}}\lambda_{AB},\tilde \alpha,\tilde\pi_\alpha, \tilde \beta^i,\tilde\pi_{\beta^i},\tilde \theta^A{}_i,\hat \pi_A{}^i, \tilde \Lambda^A{}_B, \hat {\tilde \pi}^{AB} ] &= \tilde\pi_\alpha\dot{\tilde\alpha} + \tilde\pi_{\beta^i}\dot{\tilde\beta}^i + \tilde \pi_A{}^i \dot {\tilde \theta}^A{}_i + \hat {\tilde \pi}^{AB} \tilde a_{AB} \\ &+ {}^{\tilde \alpha}\lambda \tilde\pi_\alpha + {}^{\tilde \beta}\lambda^i \tilde\pi_{\beta^i} + {}^{\hat {\tilde{\pi}}}\lambda_{AB} \hat {\tilde \pi}^{AB} - \tilde L_{\text{NGR}}[\tilde \alpha, \tilde \beta^i,\tilde \theta^{}_i, \tilde \Lambda]\,. \end{split}\end{aligned}$$ The term $\hat {\tilde \pi}^{AB} \tilde a_{AB}$ is identical to the term one would use naively in terms of the canonical variables $\frac{\partial \tilde L_{\text{NGR}}}{\partial \dot{\Lambda}^A{}_B}\dot \Lambda^A{}_B$, as can easily be seen from the definition of the auxiliary variable $a_{AB}$ in . As mentioned $\tilde \alpha = \alpha$, $\tilde \beta^i = \beta^i$ and $\tilde \Lambda^A{}_B = \Lambda^A{}_B$ and $\tilde L_{\text{NGR}}[\tilde \alpha, \tilde \beta^i,\tilde \theta^{}_i, \tilde \Lambda]$ is independent of $\Lambda$. Therefore, on shell, where the constraint $\hat {\tilde \pi}^{AB}=0$ is implemented, the gauge-fixed Hamiltonian depends neither on $\Lambda$ nor on $\hat {\tilde \pi}^{AB}$. Moreover, the evolution of the constraints is preserved since their Poisson bracket with the Hamiltonian vanishes $\{\tilde\pi_\alpha,\tilde H\}\approx 0$, $\{\tilde\pi_{\beta^i},\tilde H\}\approx 0$, $\{\hat {\tilde \pi}^{AB},\tilde H\} \approx 0$ on the constraint surface $\tilde\pi_\alpha =\tilde\pi_{\beta^i} =\hat {\tilde \pi}^{AB} =0$. These findings on the level of canonical momenta demonstrate that we do not need to include the variables $\tilde\pi_\alpha,\ \tilde\pi_{\beta^i},\ \Lambda$ and $\hat \pi$ in the Hamiltonian and again justify the approach in [@Maluf:2000ag]. In the following we will work in the Weitzenböck gauge and omit the tilde from $\tilde{\theta},\tilde{\pi},\hat{\tilde{\pi}}$ for readability.

Inverting the momentum-velocity relation {#sec:Constraints}
========================================

One essential step in the reformulation of a physical field theory from its Lagrangian to its Hamiltonian description is to invert the relation between the momenta and the velocities, to express the latter in terms of the former. For NGR this amounts to inverting the equation .
To do so we rewrite the equation as a linear map from the space of $4 \times 3$ matrices to the space of $3 \times 4$ matrices $$\begin{aligned} \label{invertvelocity} S_A{}^i= M^i{}_A{}^j{}_B \dot{\theta}^B{}_j\,,\end{aligned}$$ with a source term $S_A{}^i$, which only depends on the momenta, the fields and their spatial derivatives, $$\begin{aligned} \label{eq:source} \begin{split} S_A{}^i[\alpha,\beta,\theta^A{}_i,\pi_A{}^i]=\frac{\alpha}{\sqrt{h}} \pi_A{}^i + \big[ D_{k} \left(\alpha \xi^{B} + \beta^{m}\theta^B{}_m \right) - T^B{}_{kl}\beta^{l} \big] M^i{}_A{}^k{}_B - 2 \alpha T^B{}_{kl} h^{ik}( c_{2}\xi_{B}\theta_A{}^l+c_{3}\xi_{A}\theta_B{}^l), \end{split}\end{aligned}$$ where $D_{i}$ is the Levi-Civita covariant derivative of the hypersurface metric $h_{ij}$. By inverting this equation we can re-express the field velocities in terms of the canonical variables: the fields themselves and their momenta. To explicitly invert equation we decompose the velocities of the spatial tetrads into irreducible parts with respect to the rotation group. It turns out that in this decomposition the matrix $M$ has a block diagonal structure which can be inverted block by block. Since for certain combinations of the $c_{1},c_{2},c_{3}$ parameters of the theory some blocks become identically zero, we employ the Moore-Penrose pseudoinverse of a matrix [@Ferraro:2016wht] to display the inverse in a closed form for all choices of the parameters. This then carries over when we display the Hamiltonian. The irreducible decomposition with respect to the rotation group amounts in defining a vectorial ($\mathcal{V}$), antisymmetric ($\mathcal{A}$), symmetric trace-free ($\mathcal{S}$), and trace ($\mathcal{T}$) part of the tetrad velocities and their momenta: $$\begin{aligned} \label{veldec} \dot{\theta}^A{}_i &={}^{\mathcal{V}}\dot{\theta}_{i}\ \xi^{A}+{}^{\mathcal{A}}\dot{\theta}_{ji}\ h^{kj}\theta^A{}_k+{}^{\mathcal{S}}\dot{\theta}_{ji}\ h^{kj}\theta^A{}_k+{}^{\mathcal{T}}\ \dot{\theta}\ \theta^A{}_i,\\ \pi_A{}^i &={}^{\mathcal{V}}\pi^{i}\ \xi_{A}+{}^{\mathcal{A}}\pi^{ji}\ h_{kj}\theta_A{} ^k+{}^{\mathcal{S}}\pi^{ji}\ h_{kj}\theta_A{}^k+{}^{\mathcal{T}}\pi\ \theta_A{}^i\,.\end{aligned}$$ Decomposing $S_A{}^i$ into the same irreducible parts and using the explicit form of $M$, see equation , yields $$\begin{aligned} \label{eq:VS} \begin{split} {}^\mathcal{V}S^i &=- \xi^AS_A{}^i \\ &={}^{\mathcal{V}}\pi^{i} \frac{\alpha}{\sqrt{h}} - 2 \alpha c_{3}T^B{}_{kl} h^{ik}\theta_B{}^l + 2 (2c_{1}+c_{2}+c_{3})\big[D_{k}\big(\alpha \xi^{B}+\beta^{m}\theta^B{}_m)- T^B{}_{kl}\beta^{l} \big]\xi_{B}h^{ik} \\ &= -2\ {}^{V}\dot{\theta}_{j}\ h^{ij}(2c_{1}+c_{2}+c_{3})\,, \end{split}\end{aligned}$$ for the vector part, $$\begin{aligned} \label{eq:AS} \begin{split} {}^\mathcal{A}S^{mp} &= \theta^A{}_i h^{i[m}S_A{}^{p]} \\ &={}^\mathcal{A}\pi^{mp} \frac{\alpha}{\sqrt{h}} - 2 \alpha c_{2} h^{lm}h^{pk}T^B{}_{kl} \xi_{B} - 2( 2 c_1 - c_2) \big[D_{k}\left( \alpha \xi^{B}+\beta^{s}\theta^B{}_s\right)- T^B{}_{kl} \beta^l\big] \theta_B{}^{[m}h^{p]k} \\ &=- 2\ {}^\mathcal{A}\dot{\theta}^{mp}\ (2c_{1} - c_2) \end{split}\end{aligned}$$ for the antisymmetric part, $$\begin{aligned} \label{eq:SS} \begin{split} {}^\mathcal{S}S^{mp} &= \theta^A{}_q h^{q(m}S_A{}^{p)} - \tfrac{1}{3} \theta^A{}_i S_A{}^i h^{mp} \\ &={}^\mathcal{S}\pi^{mp} \frac{\alpha}{\sqrt{h}} - 2(2c_{1}+c_{2}) \big[ D_{k}\left( \alpha \xi^{B}+\beta^{s}\theta^B{}_s \right) - T^B{}_{kl}\beta^l \big] \left(\theta_B{}^{(m}h^{p)k}-\tfrac{1}{3}h^{pm}\theta_B{}^k\right)\\ &=- 2\ 
{}^{\mathcal{S}}\dot{\theta}^{mp} (2 c_1 +c_2) \end{split}\end{aligned}$$ for the trace-free symmetric part, and $$\begin{aligned} \label{eq:TS} \begin{split} {}^\mathcal{T}S &= \tfrac{1}{3} \theta^A{}_iS_A{}^i \\ &={}^\mathcal{T}\pi \frac{\alpha}{\sqrt{h}} -\tfrac{2}{3}(2c_{1}+c_{2}+3c_{3}) \big[ D_{k}\left(\alpha \xi^{B}\beta^{m}\theta^B{}_m \right) - T^B{}_{kl}\beta^{l}\Big]\theta_B{}^k\\ &= -2\ {}^{T}\dot{\theta}\ (2c_{1}+c_{2}+3c_{3}) \end{split}\end{aligned}$$ for the trace part. These equations are easily solved for the velocities in terms of their dual momenta in case the coefficients $$\begin{aligned} A_{\mathcal{V}} = 2c_{1}+c_{2}+c_{3},\quad A_{\mathcal{A}} = 2c_{1}-c_{2},\quad A_{\mathcal{S}} = 2c_{1}+c_{2} \textrm{ and } A_\mathcal{T}=2c_{1}+c_{2}+3c_{3}\,,\end{aligned}$$ are all non-vanishing. In case one or more of these coefficients vanish they induce primary constraints $$\begin{aligned} {4} A_{\mathcal{V}} &= 0 &\quad&\Rightarrow & \quad{}^\mathcal{V}C^i &:= \frac{{}^{\mathcal{V}}\pi^{i}}{\sqrt{h}} - 2 c_{3}T^B{}_{kl} h^{ik}\theta_B{}^l &&= 0 \label{eq:C1}\\ A_{\mathcal{A}} &= 0 &\quad&\Rightarrow & \quad{}^\mathcal{A}C^{ij} &:= \frac{{}^\mathcal{A}\pi^{ij}}{\sqrt{h}} - 2 c_{2} h^{li}h^{jk}T^B{}_{kl} \xi_{B} &&= 0 \label{eq:C2}\\ A_{\mathcal{S}} &= 0 &\quad&\Rightarrow & \quad{}^\mathcal{S}C^{ij} &:= \frac{{}^\mathcal{S}\pi^{ij}}{\sqrt{h}} &&= 0 \label{eq:C3}\\ A_{\mathcal{T}} &= 0 &\quad&\Rightarrow & \quad{}^\mathcal{T}C &:= \frac{{}^\mathcal{T}\pi}{\sqrt{h}} &&= 0 \label{eq:C4}\,.\end{aligned}$$ Observe that ${}^VC^i$ correspond to 3 constraints, ${}^AC^{mp}$ to 3 (since it is antisymmetric in its indices), ${}^SC^{mp}$ to 5 (since it is symmetric in its indices, but does not contain the trace part), and ${}^TC$ corresponds to 1 constraint. For any choice of the parameters $c_1,\ c_2,\ c_3$ we either can invert the appearing velocities of the tetrads in terms of the tetrads and their momenta, or we obtain a constraint from the Lagrangian, which must be implemented in the Hamiltonian later by a Lagrange multiplier. The Moore-Penrose pseudoinverse of the matrix $M$ in the irreducible decomposition of the rotation group we employed is given by the inverse of the separate blocks if the coefficient in front of the block $A_{\mathcal{V}},A_{\mathcal{A}},A_{\mathcal{S}}$ or $A_{\mathcal{T}}$ is non-vanishing. In case one of the coefficients is vanishing the block in the inverse matrix is simply a block of zeros. For completeness we display $M$ and its Moore-Penrose pseudoinverse explicitly. Expanding $M$ itself into the irreducible parts basis $$\begin{aligned} M^i{}_A{}^j{}_B = {}^\mathcal{V}M^{ij}\ \xi_A \xi_B + {}^\mathcal{A} M^{[ir][js]}\ \theta^C{}_r \eta_{AC}\theta^D{}_s \eta_{BD} + {}^\mathcal{S} M^{(ir)(js)}\ \theta^C{}_r \eta_{AC}\theta^D{}_s \eta_{BD} + {}^\mathcal{T} M\ \theta_A{}^i \theta_B{}^j\end{aligned}$$ yields $$\begin{aligned} \begin{split} M^i{}_A{}^j{}_B &= 2 A_{\mathcal{V}}\ \xi_A \xi_B h^{ij} - 2 A_{\mathcal{A}}\ h^{i[j}h^{s]r}\theta^C{}_r \eta_{AC}\theta^D{}_s \eta_{BD}\\ &- 2 A_{\mathcal{S}}\ (h^{i(j}h^{s)r}- \tfrac{1}{3}h^{ir}h^{js})\theta^C{}_r \eta_{AC}\theta^D{}_s \eta_{BD} - \frac{2}{3} A_{\mathcal{T}}\ \theta_A{}^i \theta_B{}^j\,. \end{split}\end{aligned}$$ By using the identity $\eta^{AB}+\xi^A\xi^B = \theta^A{}_i \theta^B{}_j h^{ij}$ one may check that this representation of $M$ is indeed identical to its definition . 
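The block structure described here can also be checked numerically. The sketch below assembles $M^i{}_A{}^j{}_B$ from its definition for a sample spatial tetrad, flattens it to a $12\times12$ matrix and verifies that its rank drops by $3$, $3$, $5$ or $1$ whenever $A_{\mathcal{V}}$, $A_{\mathcal{A}}$, $A_{\mathcal{S}}$ or $A_{\mathcal{T}}$ vanishes, in agreement with the counting of primary constraints above. The sample tetrad and the parameter sets are arbitrary illustrative choices.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(7)

# sample spatial tetrad theta^A_i: a perturbation of the trivial embedding (arbitrary)
theta = np.vstack([np.zeros((1, 3)), np.eye(3)]) + 0.1 * rng.normal(size=(4, 3))

h = theta.T @ eta @ theta                 # h_ij = eta_AB theta^A_i theta^B_j
hinv = np.linalg.inv(h)

xi_low = np.linalg.svd(theta.T)[2][-1]    # xi_A, the direction with xi_A theta^A_i = 0
xi_low /= np.sqrt(-xi_low @ eta @ xi_low) # normalize so that eta^{AB} xi_A xi_B = -1
theta_up = eta @ theta @ hinv             # theta_A^i = eta_AB theta^B_j h^{ji}

def M_matrix(c1, c2, c3):
    M = np.zeros((3, 4, 3, 4))
    for i in range(3):
        for A in range(4):
            for j in range(3):
                for B in range(4):
                    M[i, A, j, B] = -2.0 * (2*c1*hinv[i, j]*eta[A, B]
                                            - (c2 + c3)*xi_low[A]*xi_low[B]*hinv[i, j]
                                            + c2*theta_up[A, j]*theta_up[B, i]
                                            + c3*theta_up[A, i]*theta_up[B, j])
    return M.reshape(12, 12)

def expected_rank(c1, c2, c3):
    A_V, A_A, A_S, A_T = 2*c1 + c2 + c3, 2*c1 - c2, 2*c1 + c2, 2*c1 + c2 + 3*c3
    return 12 - 3*(A_V == 0) - 3*(A_A == 0) - 5*(A_S == 0) - 1*(A_T == 0)

for pars in [(0.25, 0.5, -1.0),     # TEGR: A_V = A_A = 0, rank 6
             (1.0, 0.0, 0.0),       # generic case: full rank 12
             (0.25, 0.25, -0.75)]:  # only A_V = 0, rank 9
    print(pars, np.linalg.matrix_rank(M_matrix(*pars)), expected_rank(*pars))
```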
Its pseudoinverse is $$\begin{aligned} \begin{split} \label{eq:MinvRed} \left(M^{-1}\right)^{A \ C}_{\ i \ k}&= \frac{1}{2}B_{\mathcal{V}} \xi^{A}\xi^{C}h_{ik} -\frac{1}{2} B_{\mathcal{A}} h^{r[s}h^{m]n} h_{kr}h_{si}\theta^A{}_m\theta^C{}_n \\& -\frac{1}{2} B_{\mathcal{S}} \left( h^{r(s}h^{m)n} - \tfrac{1}{3} h^{sm} h^{nr} \right) h_{kr}h_{si}\theta^A{}_m\theta^C{}_n-\frac{1}{6} B_{\mathcal{T}} \theta^A{}_i\theta^C{}_k, \end{split}\end{aligned}$$ where the different blocks are implemented by defining ($I={\mathcal{V}},{\mathcal{A}},{\mathcal{S}},{\mathcal{T}}$) $ B_I= \begin{cases} 0 &\textrm{ for } A_I = 0\\ \frac{1}{A_I} &\textrm{ for } A_I \neq 0\,. \end{cases} $ The NGR Hamiltonian {#sec:Hamiltonian} =================== To obtain the Hamiltonian from the Lagrangian we use its definition as Legendre transform omitting the variables $\Lambda$ and $\hat \pi$, as discussed below equation . We display the dependencies on the remaining variables explicitly for clarification, and the square brackets shall indicate that the function may depend on the spatial derivatives of the fields, $$\begin{aligned} H[\alpha, \beta^i, \theta^A{}_j, \pi_A{}^k] = \dot \theta^A{}_i[\alpha, \beta^i, \theta^A{}_j, \pi_A{}^k] \pi_A{}^i - L[\alpha, \beta^i, \theta^A{}_j, \dot \theta^A{}_k[\alpha, \beta^r, \theta^A{}_s, \pi_A{}^m]]$$ We will suppress these dependencies in the brackets from now on for the sake of readability. Moreover, we comment on how to remove the gauge fixing, i.e. how to reintroduce the dependence on $\Lambda$ and $\hat \pi$ at the end of this section. A sketch on how the calculations would be carried out without gauge fixing is made in Appendix \[Withoutgauge\]. To derive the Hamiltonian explicitly we can first use the source expression $S$, defined in equation , to simplify the Lagrangian. This can be done by expanding the $T^A{}_{i0}$ terms in equation into the time derivatives of the tetrad and combining them with the $M$ matrices to the source term whenever possible. By their definition, they can then be expanded in terms of the momenta and spatial derivatives acting on the fields. As an intermediate result the Hamiltonian becomes $$\begin{aligned} \label{eq:protoH} \begin{split} H &= \frac{1}{2}\dot{\theta}^A{}_i\pi_A{}^i-\sqrt{h}T^B{}_{jk}\dot{\theta}^A{}_i h^{ij}\left[c_{2}\xi_{B}\theta_A{}^k + c_{3}\xi_{A}\theta_B{}^k \right] \\ &+\frac{1}{2}\pi_A{}^iD_{i}\left(\alpha \xi^{A}+\beta^{j}\theta^A{}_j \right)+\sqrt{h}T^B{}_{ kl}D_{i}\left( \alpha \xi^{A}+\beta^{j}\theta^A{}_j\right) h^{ik}\left[c_{2}\xi_{B}\theta_A{}^l+c_{3}\xi_{A}\theta_B{}^l \right]\\ &-\frac{1}{2}\pi_B{}^jT^B{}_{jk}\beta^{k} -\sqrt{h}T^A{}_{ij}T^B{}_{kl}\beta^{k}h^{il} \left[ c_{2}\xi_{A}\theta_B{}^j+c_{3}\xi_{B}\theta_A{}^j\right] -\alpha \sqrt{h}\cdot{}^{3}\mathbb{T} \end{split} \end{aligned}$$ To eliminate the remaining velocities we expand them into the ${\mathcal{V}},{\mathcal{A}},{\mathcal{S}},{\mathcal{T}}$ decomposition, we introduced in the previous section, and replace them according to equations to . 
Expanding the first term in the irreducible decomposition yields $$\begin{aligned} \label{eq:dthetapi} \dot \theta^A{}_i \pi_A{}^i &= - {}^{\mathcal{V}}\dot{\theta}_{i}\ {}^{\mathcal{V}}\pi^i + {}^{\mathcal{A}}\dot{\theta}_{ji}\ {}^{\mathcal{A}}\pi^{ji} +{}^{\mathcal{S}}\dot{\theta}_{ji}\ {}^{\mathcal{S}}\pi^{ji} + 3 {}^{\mathcal{T}}\dot{\theta}\ {}^{\mathcal{T}}\pi\\ &= \alpha \Bigg(\frac{{}^\mathcal{V}C_i\ {}^\mathcal{V}\pi^i}{2 A_\mathcal{V}} - \frac{{}^\mathcal{A}C_{ij}\ {}^\mathcal{A}\pi^{ij}}{2 A_\mathcal{A}} - \frac{{}^\mathcal{S}C_{ij}\ {}^\mathcal{S}\pi^{ij}}{2 A_\mathcal{S}} - \frac{3 {}^\mathcal{T}C\ {}^\mathcal{T}\pi}{2 A_\mathcal{T}}\Bigg) + \pi_A{}^i D_i\left(\alpha \xi^{A}+\beta^{m}\theta^A{}_m \right) - \pi_A{}^i T^A{}_{im}\beta^m \,.\end{aligned}$$ while for the second we find $$\begin{aligned} \label{eq:dthetaT} \begin{split} \sqrt{h}T^B{}_{jk}\dot{\theta}^A{}_i h^{ij}\left[c_{2}\xi_{B}\theta_A{}^k + c_{3}\xi_{A}\theta_B{}^k \right] &= c_2 T^B{}_{jk}\ {}^\mathcal{A}\dot{\theta}_{mi}\ h^{km} h^{ij} - c_3 T^B{}_{jk}\ {}^\mathcal{V}\dot\theta_i\ h^{ij} \theta_B{}^k\\ &= \frac{\alpha}{2 A_{\mathcal{A}}}c_2 \xi_B T^B{}_{jk}\ {}^\mathcal{A}C^{jk} + \frac{\alpha}{2 A_{\mathcal{V}}}c_3 \theta_B{}^k T^B{}_{jk}\ {}^\mathcal{V}C^{j} \\ &- [D_i\left(\alpha \xi^{C}+\beta^{m}\theta^C{}_m \right) - T^C{}_{im}\beta^m]T^B{}_{jk}h^{ki}[c_2 \xi_B \theta_C{}^j + c_3 \xi_C \theta_B{}^j]\,. \end{split}\end{aligned}$$ Inserting the expressions and into equation finally yields the kinematic Hamilton density of the NGR teleparallel theories of gravity, $$\begin{aligned} \label{eq:Hkin} \begin{split} H &= \alpha \sqrt{h}\Bigg(\frac{{}^\mathcal{V}C_i\ {}^\mathcal{V}C^i}{4 A_\mathcal{V}} - \frac{{}^\mathcal{A}C_{ij}\ {}^\mathcal{A}C^{ij}}{4 A_\mathcal{A}} - \frac{{}^\mathcal{S}C_{ij}\ {}^\mathcal{S}C^{ij}}{4 A_\mathcal{S}} - \frac{3 {}^\mathcal{T}C\ {}^\mathcal{T}C}{4 A_\mathcal{T}} - {}^{3}\mathbb{T} - \frac{\xi^A D_i\pi_A{}^i}{\sqrt{h}}\Bigg) - \beta^k(T^A{}_{jk}\pi_A{}^j + \theta^A{}_k D_{i}\pi_A{}^i)\\ & + D_i[\pi_A{}^i(\alpha \xi^{A}+\beta^{j}\theta^A{}_j )]\end{split}\end{aligned}$$ which we here display in terms of the constraints to , as this is the most convenient expression. Observe that, even though we use the irreducible $\mathcal{V},{\mathcal{A}},{\mathcal{S}},\mathcal{T}$ decomposition of the fields to display the Hamiltonian, since in this form the dependence on the parameters $c_i$ becomes most clear, the canonical variables on which the Hamiltonian depends are $\{\alpha, \beta^i, \theta^A{}_j, \pi_A{}^k\}$. As in general relativity we immediately see that we deal with a pure constraint Hamiltonian up to boundary terms. Lapse $\alpha$ and shift $\beta$ have vanishing momenta, $\pi_\alpha = 0$ and $\pi_{\beta_i} = 0$, and appear only as Lagrange multipliers. To obtain the dynamically equivalent Hamiltonian to the Lagrangian we need to add possible further nontrivial constraints via Lagrange multipliers. To find all constraints it is necessary to calculate the Poisson brackets between all primary constraints, check if they are first class, and, in case they are not, add possible secondary constraints. This algorithm has to be continued until a closed constraint algebra is obtained [@Dirac]. ![(Color online.) Visualization of the parameter space of new general relativity, colored by the occurrences of primary constraints. 
The radial axis shows the zenith angle $\theta$, while the (circular) polar axis shows the azimuth angle $\phi$, following the definition .[]{data-label="fig:conplot"}](conplot.pdf){width="80.00000%"}

From our analysis in section \[sec:Constraints\] we conclude that the NGR theories of gravity fall into nine subclasses depending on the choice of the parameters $c_1$, $c_2$ and $c_3$, which correspond to the appearance of different primary constraints, in addition to the lapse and shift constraints arising from the diffeomorphism invariance of the action. We have visualized these classes in figure \[fig:conplot\], which we constructed as follows. We started from the assumption that at least one of the parameters $c_1, c_2, c_3$ is non-vanishing, since otherwise the Lagrangian would be trivial, and introduced normalized parameters $$\tilde{c}_i = \frac{c_i}{\sqrt{c_1^2 + c_2^2 + c_3^2}}$$ for $i = 1, 2, 3$. One easily checks that the constraint classes we found only depend on these normalized parameters. We then introduced polar coordinates $(\theta, \phi)$ on the unit sphere to express the parameters as $$\label{eqn:polcoord} \tilde{c}_1 = \sin\theta\cos\phi\,, \quad \tilde{c}_2 = \sin\theta\sin\phi\,, \quad \tilde{c}_3 = \cos\theta\,.$$ Since the same constraints appear for antipodal points on the parameter sphere, we restrict ourselves to the hemisphere $\tilde{c}_3 \geq 0$, and hence $0 \leq \theta \leq \frac{\pi}{2}$; this is equivalent to identifying antipodal points on the sphere and working with the projective sphere instead, provided that we also identify antipodal points on the equator $\tilde{c}_3 = 0$. We then considered $(\theta, \phi)$ as polar coordinates on the plane in order to draw the diagram shown in figure \[fig:conplot\]. Note that antipodal points on the perimeter, such as the two gray points for the most constrained case, are identified with each other, since they describe the same class of theories. To summarize, we find the following constraints:

  Theory                                                                                    Constraints                                                          Location in figure \[fig:conplot\]
  ----------------------------------------------------------------------------------------- -------------------------------------------------------------------- ------------------------------------
  $A_{I}\neq 0 \ \forall I\in \{ \mathcal{V},{\mathcal{A}},{\mathcal{S}},\mathcal{T} \}$     No constraints                                                       white area
  $A_{\mathcal{V}}=0$                                                                       ${}^{\mathcal{V}}C_{i}=0$                                            red line
  $A_{\mathcal{A}}=0$                                                                       ${}^{\mathcal{A}}C_{ji}=0$                                           black line
  $A_{\mathcal{S}}=0$                                                                       ${}^{\mathcal{S}}C_{ji}=0$                                           green line
  $A_{\mathcal{T}}=0$                                                                       ${}^{\mathcal{T}}C=0$                                                blue line
  $A_{\mathcal{V}}=A_{\mathcal{A}}=0$                                                       ${}^{\mathcal{V}}C_{i}={}^{\mathcal{A}}C_{ji}=0$                     turquoise point
  $A_{\mathcal{A}}=A_{\mathcal{S}}=0$                                                       ${}^{\mathcal{A}}C_{ji}={}^{\mathcal{S}}C_{ji}=0$                    purple point (center)
  $A_{\mathcal{A}}=A_{\mathcal{T}}=0$                                                       ${}^{\mathcal{A}}C_{ji}={}^{\mathcal{T}}C=0$                         orange point
  $A_{\mathcal{V}}=A_{\mathcal{S}}=A_{\mathcal{T}}=0$                                       ${}^{\mathcal{V}}C_{i}={}^{\mathcal{S}}C_{ji}={}^{\mathcal{T}}C=0$   gray points (perimeter)

In order to understand the degrees of freedom and derive the full Hamiltonian of the theory, we would need to calculate the Poisson brackets and deduce whether they are first or second class constraints and whether more constraints appear (secondary, tertiary, etc.).
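To make the classification concrete, here is a small Python sketch (again our own illustration) that determines, for given parameters $c_1, c_2, c_3$, which of the combinations $A_{\mathcal{V}}, A_{\mathcal{A}}, A_{\mathcal{S}}, A_{\mathcal{T}}$ vanish, i.e. which row of the table above applies, and computes the angles $(\theta, \phi)$ of the normalized parameters used in figure \[fig:conplot\]; the antipodal identification is handled by flipping the overall sign so that $\tilde{c}_3 \geq 0$. The function name, tolerance, and example values are our own choices.

```python
import math

def classify(c1, c2, c3, tol=1e-12):
    """Return the vanishing coefficient combinations and the angles (theta, phi)."""
    norm = math.sqrt(c1 * c1 + c2 * c2 + c3 * c3)
    if norm == 0:
        raise ValueError("at least one of c1, c2, c3 must be non-vanishing")
    t1, t2, t3 = (c / norm for c in (c1, c2, c3))
    if t3 < 0:                       # antipodal points describe the same theory
        t1, t2, t3 = -t1, -t2, -t3
    A = {
        "V": 2 * c1 + c2 + c3,
        "A": 2 * c1 - c2,
        "S": 2 * c1 + c2,
        "T": 2 * c1 + c2 + 3 * c3,
    }
    vanishing = sorted(I for I, a in A.items() if abs(a) <= tol * norm)
    theta = math.acos(t3)            # zenith angle, 0 <= theta <= pi/2
    phi = math.atan2(t2, t1)         # azimuth angle
    return vanishing, theta, phi

# Example: classify(1, 0, -2) returns (["V"], ...), i.e. only A_V vanishes for this
# (arbitrary) parameter choice, which places the theory on the red line of the table.
```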
For teleparallel equivalence to general relativity this has already been done in [@Maluf:1994ji; @Maluf:2000ag; @Blagojevic:2000qs; @Maluf:2001rg; @Okolow:2011nq; @Okolow:2013lwa; @Ferraro:2016wht; @Nester:2017wau] and it was found that the dynamical equivalent Hamiltonian to TEGR can be expressed with the help of two sets of Lagrange multipliers, ${}^\mathcal{V}\lambda^i$ and ${}^\mathcal{A}\lambda^{ij}$, as $$\begin{aligned} \label{eq:Htegr} \begin{split} H_{TEGR} &= \sqrt{h} \Big({}^\mathcal{V}\lambda^i\ {}^\mathcal{V}C_i + {}^\mathcal{A}\lambda^{ij}\ {}^\mathcal{A}C_{ij}\Big) + D_i[\pi_A{}^i(\alpha \xi^{A}+\beta^{j}\theta^A{}_j )] \\ &- \alpha \sqrt{h} \Big( \frac{1}{4}{}^\mathcal{S}C_{ij}\ {}^\mathcal{S}C^{ij} - \frac{3}{8} {}^\mathcal{T}C\ {}^\mathcal{T}C + {}^{3}\mathbb{T} + \frac{\xi^A D_i\pi_A{}^i}{\sqrt{h}} \Big) - \beta^k\Big(T^A{}_{jk}\pi_A{}^j + \theta^A{}_k D_{i}\pi_A{}^i\Big)\,. \end{split}\end{aligned}$$ In the future we aim to derive the dynamically equivalent Hamiltonians for all nine classes we identified among the NGR theories of gravity. By introducing additional Lagrange multipliers ${}^\mathcal{S}\lambda^{ij}$ and ${}^\mathcal{T}\lambda^{ij}$ in the short-hand notation $$\begin{aligned} {}^\mathcal{V} H = \begin{cases} \alpha \sqrt{h}\ \frac{{}^{\mathcal{V}}C_i {}^{\mathcal{V}}C^i}{4 A_{\mathcal{V}}}& \textrm{ for } {}^{\mathcal{V}}A \neq 0\\ \sqrt{h}\ {}^\mathcal{V}\lambda_i {}^{\mathcal{V}}C^i & \textrm{ for } {}^{\mathcal{V}}A = 0\,, \end{cases}\quad {}^\mathcal{A} H = \begin{cases} -\alpha \sqrt{h}\ \frac{{}^{\mathcal{A}}C_{ij} {}^{\mathcal{A}}C^{ij}}{4 A_{\mathcal{A}}}& \textrm{ for } {}^{\mathcal{A}}A \neq 0\\ \sqrt{h}\ {}^\mathcal{A}\lambda_{ij} {}^{\mathcal{A}}C^{ij} & \textrm{ for } {}^{\mathcal{A}}A = 0\,, \end{cases}\\ {}^\mathcal{S} H = \begin{cases} -\alpha \sqrt{h}\ \frac{{}^{\mathcal{S}}C_{ij} {}^{\mathcal{S}}C^{ij}}{4 A_{\mathcal{S}}}& \textrm{ for } {}^{\mathcal{S}}A \neq 0\\ \sqrt{h}\ {}^\mathcal{S}\lambda_{ij} {}^{\mathcal{S}}C^{ij} & \textrm{ for } {}^{\mathcal{S}}A = 0\,, \end{cases}\quad {}^\mathcal{T} H = \begin{cases} -\alpha \sqrt{h}\ \frac{3{}^{\mathcal{T}}C_{ij} {}^{\mathcal{T}}C^{ij}}{4 A_{\mathcal{T}}}& \textrm{ for } {}^{\mathcal{T}}A \neq 0\\ \sqrt{h}\ {}^\mathcal{T}\lambda_{ij} {}^{\mathcal{T}}C^{ij} & \textrm{ for } {}^{\mathcal{T}}A = 0\,, \end{cases}\end{aligned}$$ we can display a first step towards the dynamical Hamiltonians $$\begin{aligned} \label{eq:Hdyn} \begin{split} H &= \Big({}^\mathcal{V}H + {}^\mathcal{A}H + {}^\mathcal{S}H + {}^\mathcal{T}H \Big) - \alpha \Big(\sqrt{h}\ {}^{3}\mathbb{T} - \xi^A D_i\pi_A{}^i \Big) - \beta^k\Big(T^A{}_{jk}\pi_A{}^j + \theta^A{}_k D_{i}\pi_A{}^i\Big) + D_i[\pi_A{}^i(\alpha \xi^{A}+\beta^{j}\theta^A{}_j )]\\ &+ \textrm{secondary-, tertiary-, \ldots{} constraints}\,. \end{split}\end{aligned}$$ However, the list of secondary-, tertiary-, … constraints, which have to be added in addition, has to be investigated separately for the nine classes we derived. Even within a single class there may appear different constraint algebras. For example, in the class with all $A_I$ being nonzero, the Poisson bracket of the Hamilton constraint with itself in general generates new constraints since the Poisson brackets of the Hamiltonian and momenta constraints do not form a closed algebra. However, for particular values of the parameters the terms which cause this behavior are absent from the action, thus allowing the Poisson brackets to close [@Okolow:2011np]. 
Due to the lengthiness of the calculations even in seemingly simple cases such as TEGR [@Okolow:2013lwa] we present these studies in separate articles. Another potential issue that must receive attention is the possible bifurcation of constraints, i.e., the situation where the closing or non-closing of the Poisson brackets depends on the particular values of the fields, as found in previous studies [@Chen:1998ad], which we plan to investigate in detail in further work. Before we conclude this article we like to add one more remark on the gauge fixing. The Hamiltonian we obtained is derived in the Weitzenböck gauge. To remove the gauge fixing and to reintroduce the variables $\Lambda$ and $\hat \pi$, which we removed in the course of the discussion in section \[sec:momenta\], the following two steps have to be performed. First replace the Levi-Civita covariant derivatives $D_i$ in equation by a total covariant derivative $\mathfrak{D}_i$ which also acts on the Lorentz indices of the objects appearing, $$\begin{aligned} D_i \pi_A{}^j \rightarrow \mathfrak{D}_i \pi_A{}^j = D_i \pi_A{}^j - \omega^B{}_{Ai}\pi_B{}^j\,,\end{aligned}$$ and, second, add the constraint with the help of a Lagrange multiplier. The result is a gauge invariant Hamiltonian depending on the field variables $\alpha,\ \beta^i, \theta^A{}_i, \pi_A{}^i$, and $\Lambda^A{}_B$ as well as $\hat \pi^{AB}$. Conclusion {#sec:conclusion} ========== We have derived a closed form for the kinematic Hamiltonian of *new general relativity* theories of gravity, starting from its Lagrangian formulation including the teleparallel spin connection. The latter we implemented explicitly in terms of local Lorentz transformations, thus avoiding the need for Lagrange multipliers in the action. We found that the canonical momenta for the spin connection are not independent and can fully be expressed in terms of the momenta for the tetrad. Further, only the 12 spatial components of the tetrads have non-vanishing momenta, while the 4 temporal components can be expressed in terms of the ADM variables lapse and shift, whose momenta vanish identically. We have shown that it is not possible to invert the relation between the time derivatives of the spatial tetrad components and their conjugate momenta, which results in the appearance of up to four types of further primary constraints, depending on the choice of parameters defining the theory. We find that the family of NGR theories is divided into nine different classes, which are distinguished by the presence or absence of these primary constraints. We visualized the locations of these nine classes in the parameter space of the theory, and identified a prototype of a dynamically equivalent Hamiltonian for the different classes, which serves as a starting point for the continuation towards a complete systematic Hamiltonian analysis of NGR. Our results invite further investigations in various directions. The most logical next step is the calculation of the Poisson brackets for all possible constraints. This will show under which circumstances the constraint algebra closes, and under which circumstances additional constraints must be included, and finally lead to the full, dynamical Hamiltonian. It should be noted that the calculation of the Poisson brackets is straightforward, although it can be very lengthy, even in the case of TEGR [@Okolow:2013lwa]. Naively, the unconstrained case would be the easiest, since it involves the least number of constraints to calculate Poisson brackets with. 
However, the Poisson brackets do not form a closed algebra, hence are not first class, except for special cases [@Okolow:2011np], and thus generate further secondary constraints. Another class of new general relativity theories of particular interest besides general relativity is the one where only the vector constraint $A_{\mathcal{V}} = 0$ is imposed. It has been argued that this constraint is necessary in order to avoid the appearance of ghosts at the linearized level [@Kuhfuss1986; @VanNieuwenhuizen:1973fi]. The constraint algebra has been worked out for this case, and it turns out that also in this case the constraints are not first class, so that secondary constraints appear [@Cheng:1988zg]. An important result which we expect from the aforementioned further work on the constraint algebra is the number of degrees of freedom for general parameters of new general relativity. A hint towards the existence of further degrees of freedom compared to TEGR comes from comparing the degrees of freedom in new general relativity with the number of polarization modes of gravitational waves in the Newman-Penrose formalism [@Hohmann:2018wxu]. This result gives a lower bound on the number of degrees of freedom, since the polarization modes which appear in the linearized theory must come from the fundamental degrees of freedom in the complete nonlinear theory. Once the full Hamiltonian is derived, it can be compared with the propagators presented in [@Koivisto:2018loq]. Results for a systematic categorization of theoretical pathologies (tachyons and ghosts) in a large class of theories including NGR were recently presented in [@Lin:2018awc]. Future work could consist of confirming their results using the Hamiltonian analysis, obtaining guidance on which theories are best motivated, and then performing the full-fledged Hamiltonian analysis in those cases. The full dynamical Hamiltonian would also be useful for further tests of NGR with observations, in particular considering gravitational waves. The results we presented here show that the vicinity of TEGR in the parameter space, which is known to be compatible with post-Newtonian observations in the solar system [@PhysRevD.19.3524], is composed of different classes of possible constraint algebras. Studying their Hamiltonian dynamics one may expect new results on the generation of gravitational waves in these theories, from which tighter bounds on the NGR parameters would be obtained.

The authors are grateful to Martin Krššák for numerous discussions and pointing out references and to María José Guzmán for commenting on a previous version of this article. They were supported by the Estonian Research Council through the Institutional Research Funding project IUT02-27 and the Personal Research Funding project PUT790 (start-up project), as well as by the European Regional Development Fund through the Center of Excellence TK133 “The Dark Side of the Universe”.

Hamiltonian analysis without gauge fixing {#Withoutgauge}
=========================================

Looking at equation and noting that the conjugate momenta are related to each other via an algebraic equation, it at first seems impossible to solve for the velocities in terms of the momenta. However, there is a way to attack this problem and successfully derive the Hamiltonian.
First, we note that equation before fixing the gauge becomes $$\begin{aligned} \label{invertvelocityungauged} S_A{}^i= M^i{}_A{}^j{}_B \left(\dot{\theta}^B{}_j -\left(\Lambda^{-1}\right)^D{}_{C}\theta^C{}_j\dot{\Lambda}^B{}_D \right) =M^i{}_A{}^j{}_B \Lambda^B{}_D \partial_0\left(\theta^C{}_j (\Lambda^{-1})^D{}_C \right)\,,\end{aligned}$$ with $$\begin{aligned} \label{eq:sourceungauged} \begin{split} S_A{}^i[\alpha,\beta,\theta^A{}_i,\pi_A{}^i]&=\frac{\alpha}{\sqrt{h}} \pi_A{}^i + \big[\Lambda^B{}_D D_{k}\left[ \left(\alpha \xi^{C} + \beta^{m}\theta^C{}_m \right)\left(\Lambda^{-1}\right)^D{}_C \right] - T^B{}_{kl}\beta^{l} \big] M^i{}_A{}^k{}_B \\ &- 2 \alpha T^B{}_{kl} h^{ik}( c_{2}\xi_{B}\theta_A{}^l+c_{3}\xi_{A}\theta_B{}^l). \end{split}\end{aligned}$$ In the Lagrangian, velocities only appear from terms of the structure $$\begin{aligned} T^B{}_{0j}=\Lambda^B{}_D\partial_0\left(\theta^C{}_j\left(\Lambda^{-1}\right)^D{}_C\right)-\Lambda^B{}_DD_j\left[\left(\alpha\xi^C+\beta^m\theta^C{}_m\right)\left(\Lambda^{-1}\right)^D{}_C\right].\end{aligned}$$ Hence, the velocities in the Lagrangian appear exactly as in equation . This means that we can get rid of all velocities and express them in terms of conjugate momenta by applying $\left(M^{-1}\right)^{A \ C}_{\ i \ k}$ on both sides of equation , where we have used the same decomposition of the Weitzenböck tetrad $\dot{\tilde{\theta}}^A{}_i=\partial_0\left(\theta^B{}_i \left(\Lambda^{-1}\right)^A{}_B\right)$ as in equation into irreducible parts.\ Second, we need to write down the Hamiltonian together with its primary constraints. The algebraic relation between the conjugate momenta is a primary constraint and needs to be added. The Hamiltonian is then by definition $$\begin{aligned} \label{ungaugedHamiltonian} H=\pi_A{}^i\dot{\theta}^A{}_i+\hat{\pi}^{AB}a_{AB}-L(\theta^A{}_i,\pi_A{}^i)-{}^\pi\lambda^{A}{}_{B}\left(\hat \pi^B{}_A +\pi_A{}^i\eta^{B[N}\theta^{M]}{}_i\right),\end{aligned}$$ which is the gauge independent correspondence to equation . Using the equation imposed by the Lagrange multiplier to express all conjugate momenta solely in the conjugate momenta with respect to the spatial tetrad field $\pi_A{}^i$ we get that the Hamiltonian is of the form $$\begin{aligned} \label{} H=\pi_A{}^i\Lambda^A{}_B\partial_0\left(\theta^C{}_i\left(\Lambda^{-1}\right)^B{}_C\right)-L\left[\alpha,\beta,\theta^A{}_i,\pi_A{}^i,\Lambda^A{}_B\right]-{}^\pi\lambda^{A}{}_{B}\left(\hat \pi^B{}_A +\pi_A{}^i\eta^{B[N}\theta^{M]}{}_i\right).\end{aligned}$$ From this we can see that the Hamiltonian can be expressed in canonical variables without gauge fixing. By using equation we get $$\begin{aligned} \begin{split} H[\alpha,\beta,\theta^A{}_i,\pi_A{}^i,\Lambda^A{}_B,\hat{\pi}^B{}_A]&=\pi_A{}^i\left(M^{-1}\right)^{A \ C}_{\ i \ k}S_C{}^k[\alpha,\beta,\theta^A{}_i,\pi_A{}^i]-L\left[\alpha,\beta,\theta^A{}_i,\pi_A{}^i,\Lambda^A{}_B\right]\\ &-{}^\pi\lambda^{A}{}_{B}\left(\hat \pi^B{}_A +\pi_A{}^i\eta^{B[N}\theta^{M]}{}_i\right). \end{split}\end{aligned}$$ [^1]: Alternatively, one may introduce the so-called axial, vector, tensor decomposition of the torsion, in which the NGR Lagrangian becomes $L = a_1 T_{\text{ax}}+a_2 T_{\text{tens}}+a_3 T_{\text{vec}}$ [@Bahamonde:2017wwk]. The coefficients translate as $c_1 = -\frac{1}{3}(a_1 + 2a_2)$, $c_2 = \frac{2}{3}(a_1 - a_2)$ and $c_3 = \frac{2}{3}(a_2 - a_3)$.
--- abstract: 'We study variations on combinatorial games in which, instead of alternating moves, the players bid with discrete bidding chips for the right to determine who moves next. We consider both symmetric and partisan games, and explore differences between discrete bidding games and *Richman games*, which allow real-valued bidding. Unlike Richman games, discrete bidding game variations of many familiar games, such as chess, Connect Four, and even Tic-Tac-Toe, are suitable for recreational play. We also present an analysis of Tic-Tac-Toe for both discrete and real-valued bidding.' author: - | Mike Develin\ American Institute of Mathematics\ \[-0.8ex\] 360 Portage Ave., Palo Alto, CA 94306\ `develin@post.harvard.edu`\ - | Sam Payne[^1]\ Stanford University, Dept. of Mathematics\ \[-0.8ex\] 450 Serra Mall, Stanford, CA 94305\ `spayne@stanford.edu` bibliography: - 'math.bib' date: | \ Mathematics Subject Classification: 91A46, 91B26, 91A60 title: Discrete bidding games --- Introduction ============ Imagine playing your favorite two-player game, such as Tic-Tac-Toe, Connect Four, or chess, but instead of alternating moves you bid against your opponent for the right to decide who moves next. For instance, you might play a game of bidding chess in which you and your opponent each start with one hundred bidding chips. If you bid twelve for the first move, and your opponent bids ten, then you give twelve chips to your opponent and make the first move. Now you have eighty-eight chips and your opponent has one hundred and twelve, and you bid for the second move... Similar bidding games were studied by David Richman in the late 1980s. In Richman’s theory, as developed after Richman’s death in [@LLPU; @LLPSU], a player may bid any nonnegative real number up to his current supply of bidding resources. The player making the highest bid gives the amount of that bid to the other player and makes the next move in the game. If the bids are tied, then a coin flip determines which player wins the bid. The goal is always to make a winning move in the game; bidding resources have no value after the game ends. The original Richman theory requires that the games be *symmetric*, with all legal moves available to both players, to avoid the possibility of *zugzwang*, positions where neither player wants to make the next move. The theory of these real-valued bidding games, now known as Richman games, is simple and elegant with surprising connections to random turn games. The recreational games, such as chess, that motivated the work presented here, are partisan rather than symmetric, and it is sometimes desirable to force your opponent to move rather than to make a move yourself. However, the basic results and arguments of Richman game theory go through unchanged for partisan games, in spite of the remarks in [@LLPSU p. 260], provided that one allows the winner of the bid either to move or to force his opponent to move, at his pleasure. For the remainder of the paper, we refer to these possibly partisan real-valued bidding games as *Richman games*. Say Alice and Bob are playing a Richman game, whose underlying combinatorial game is $G$. Then there is a critical threshold $R(G)$, sometimes called the *Richman value* of the game, such that Alice has a winning strategy if her proportion of the total bidding resources is greater than $R(G)$, and she does not have a winning strategy if her proportion of the bidding resources is less than $R(G)$. 
If her proportion of the bidding resources is exactly $R(G)$, then the outcome may depend on coin flips. The critical thresholds $R(G)$ have two key properties, as follows. We say that $G$ is finite if there are only finitely many possible positions in the game, and we write $\Gbar$ be the game that is just like $G$ except that Alice and Bob have exchanged roles. Let $P(G)$ be the probability that Alice can win $G$ at random-turn play, where the player who makes each move is determined by the toss of a fair coin, assuming optimal play. 1. If $G$ is finite, then $R(G)$ is rational and equal to $1-R(\Gbar)$. 2. For any $G$, $R(G)$ is equal to $1-P(G)$. The surprising part of (1) is that, if $G$ is modeled on a finite graph, which may contain many directed cycles, there is never a range of distributions of bidding resources in which both Alice and Bob can prolong the game indefinitely and force a draw. On the other hand, for infinite games $R(G)$ can be any real number between zero and one [@LLPSU p. 256], and $1-R(\Gbar)$ can be any real number between zero and $R(G)$; in particular, the Richman threshold may be irrational and there may be an arbitrarily large range in which both players can force a draw. The connection with random turn games given by (2) is especially intriguing given recent work connecting random turn selection games with conformal geometry and ideas from statistical mechanics [@PSSW07]. The discrete bidding variations on games that we study here arose through recreational play, as a way to add spice and interest to old-fashioned two player games such as chess and Tic-Tac-Toe. The real valued bidding and symmetric play in Richman’s original theory are mathematically convenient, but poorly suited for recreational play, since most recreational games are partisan and no one wants to keep track of bids like $e^{\sqrt \pi} + \log 17$. Bidding with a relatively small number of discrete chips, on the other hand, is easy to implement recreationally and leads to interesting subtleties. For instance, ties happen frequently with discrete bidding with small numbers of chips, so the tie-breaking method is especially important. To avoid the element of chance in flipping coins, we introduce a deterministic tie-breaking method, which we call the *tie-breaking advantage*. If the bids are tied, the player who has the tie-breaking advantage has the choice either to declare himself the winner of the bid and give the tie-breaking advantage to the other player, or declare the other player the winner of the bid and keep the tie-breaking advantage. See Section \[tie-breaking section\] for more details. As mentioned earlier, partisan games still behave well under bidding variations provided that the winner of the bid has the option of forcing the other player to move in zugzwang positions. Other natural versions of bidding in combinatorial game play are possible, and some have been studied fruitfully. The most prominent example is Berlekamp’s “economist’s view of combinatorial games" [@Berlekamp96], which is closely related to Conway’s theory of thermography [@Conway76] and has led to important advances in understanding Go endgames. Since this paper was written, Bidding Chess has achieved some popularity among fans of Chess variations [@Beasley08; @Beasley08b]. Also, bidding versions of Tic-Tac-Toe and Hex have been developed for recreational play online, by Jay Bhat and Deyan Simeonov. 
Readers are warmly invited to play against the computer at [`http://bttt.bidding-games.com/online/`](http://bttt.bidding-games.com/online/) and [`http://hex.bidding-games.com/online/`](http://hex.bidding-games.com/online/), and to challenge friends through Facebook at [`http://apps.facebook.com/biddingttt`](http://apps.facebook.com/biddingttt) and [`http://apps.facebook.com/biddinghex`](http://apps.facebook.com/biddinghex). The artificial intelligence for the computer opponent in Bidding Hex is based on the analysis of Random-Turn Hex in [@PSSW07] and connections between random turn games and Richman games, and is presented in detail in [@biddinghexai]. Although we cannot prove that the algorithm converges to an optimal or near-optimal strategy, it has been overwhelmingly effective against human opponents.

A game of bidding Tic-Tac-Toe
-----------------------------

We conclude the introduction with two sample bidding games. First, here is a game of Tic-Tac-Toe in which each player starts with four bidding chips, and Alice starts with the tie-breaking advantage. In Tic-Tac-Toe there is no zugzwang, so the players are simply bidding for the right to move.

*First move.* Both players bid one for the first move, and Alice chooses to use the tie-breaking advantage, placing a red [ A]{} in the center of the board.

*Second move.* Now Alice has three chips, and Bob has five chips plus the tie-breaking advantage. Once again, both players bid one. Bob uses the tie-breaking advantage and places a blue [ B]{} in the upper-left corner.

*Third move.* Now Alice has four chips plus the tie-breaking advantage, while Bob has four chips. Alice bids two, and Bob also bids two. This time Alice decides to keep the tie-breaking advantage, and lets Bob make the move. Bob places a blue [ B]{} in the upper-right corner, threatening to make three in a row across the top. The position after three moves is shown in the following figure.

![image](TTT-introboard.pdf)

*Fourth move.* Now Alice has six chips plus the tie-breaking advantage, and Bob has two chips. Bob is one move away from winning, so he bets everything, and Alice must give him two chips, plus the tie-breaking advantage, to put a red [ A]{} in the top center and stop him.

*Conclusion.* Now Alice has four chips, and Bob has four chips plus the tie-breaking advantage. Alice is one move away from victory and bets everything, so Bob must also bet everything, plus use the tie-breaking advantage, to move bottom center and stop her. Now Alice has all eight chips, plus the tie-breaking advantage, and she coolly hands over the tie-breaking advantage, followed by a single chip, as she moves center left and then center right to win the game.

Normal Tic-Tac-Toe tends to end in a draw, and Alice and Bob started with equal numbers of chips, so it seems that the game should have ended in a draw if both players played well. But Alice won decisively. *What did Bob do wrong?*

A game of bidding chess.
------------------------

Here we present an actual game of bidding chess, played in the common room of the mathematics department at UC Berkeley, in October 2006. Names have been changed for reasons the reader may imagine. Alice and Bob each start with one hundred bidding chips. Alice offers Bob the tie-breaking advantage, but he declines. Alice shrugs, accepts the tie-breaking advantage, and starts pondering the value of the first move. Alice is playing white, and Bob is playing black.
*First move.* After a few minutes of thought on both sides, Alice bids twelve and Bob bids thirteen for the first move. So Bob wins the bid, and moves his knight to c6. Now Alice has one hundred and thirteen chips and the tie-breaking advantage, and Bob has eighty-seven chips.

*Second move.* Alice figures that the second move must be worth no more than the first, since it would be foolish to bid more than thirteen and end up in a symmetric position with fewer chips than Bob. She decides to bid eleven, which seems safe, and Bob bids eleven as well. Alice chooses to use the tie-breaking advantage and moves her pawn to e3. Bob, who played chess competitively as a teenager, is puzzled by this conservative opening move.

*Third move.* Now Alice has one hundred and two chips, and Bob has ninety-eight and the tie-breaking advantage. Sensing the conservative tone, Bob decides to bid nine. He is somewhat surprised when Alice bids fifteen. Alice moves her bishop to c4. The resulting position is shown below.

*Fourth move.* Now Alice has eighty-seven chips, while Bob has one hundred and thirteen and the tie-breaking advantage. Since Alice won the last move for fifteen and started an attack that he would like to counter, Bob bids fifteen for the next move. Alice bids twenty-two, and takes the pawn at f7. Bob realizes with some dismay that he must win the next move to prevent Alice from taking his king, so he bids sixty-five, to match Alice’s total chip count, and uses the tie-breaking advantage to win the bid and take Alice’s bishop with his king. The resulting position after five moves is shown here.

*Conclusion.* Now Bob has a material advantage, but Alice has one hundred and thirty chips, plus the tie-breaking advantage. Pondering the board, Bob realizes that if Alice wins the bid for less than thirty, then she can move her queen out to f3 to threaten his king, and then bid everything to win the next move and take his king. So Bob bids thirty, winning over Alice’s bid of twenty-five. Bob moves his knight to f6, to block the f-file, but Alice can still threaten his king by moving her queen to h4. Alice now has enough chips to win the next two bids, regardless of what Bob bids, and capture the king. Alice suppresses a smile as Bob realizes he has been defeated. Head in his hands, he mumbles, *“That was a total mindf\*\*k."*

**Acknowledgments.** I am grateful to the organizers and audience at the thirteenth BAD Math Day, in Fall 2006 at MSRI, where discrete bidding games first met the general public, for their patience and warm reception. I also thank Elwyn Berlekamp, David Eisenbud, and Ravi Vakil for their encouragement, which helped bring this project to completion. And finally, I throw down my glove at bidding game masters Andrew Ain, Allen Clement, and Ed Finn. Anytime. Anywhere. *—SP*

Preliminaries
=============

Game model
----------

Let $G$ be a game played by two players, Alice and Bob, and modeled by a colored directed graph. The vertices of the graph represent possible positions in the game, and there is a distinguished vertex representing the starting position. The colored directed edges represent valid moves. Red and blue edges represent valid moves for Alice and for Bob, respectively, and two vertices may be connected by any combination of red and blue edges, in both directions. Each terminal vertex represents a possible ending position of the game, and is colored red or blue if it is a winning position for Alice or Bob, respectively, and is uncolored if it is a tie.
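For readers who want to experiment, here is one possible encoding of such a colored directed graph in Python; the dictionary layout, the helper function, and the position names are our own choices rather than notation from the paper, and the same encoding is reused in a later sketch that computes critical thresholds.

```python
# Each position maps either to ("terminal", winner), with winner one of "Alice",
# "Bob", or None for a tie, or to ("moves", alice_moves, bob_moves), listing the
# positions reachable along red and blue edges, respectively.
TOY_GAME = {
    "start":      ("moves", ["alice_wins"], ["middle"]),
    "middle":     ("moves", ["alice_wins"], ["bob_wins"]),
    "alice_wins": ("terminal", "Alice"),
    "bob_wins":   ("terminal", "Bob"),
}

def legal_moves(game, position, player):
    """Positions reachable by the given player ("Alice" or "Bob") from `position`."""
    entry = game[position]
    if entry[0] == "terminal":
        return []
    _, alice_moves, bob_moves = entry
    return alice_moves if player == "Alice" else bob_moves
```

In this toy graph Alice wins if she gets to make either of the first two moves, while Bob wins if he makes both of them; it is the game called $A^2$ in Example \[simple tables\] below.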
For any possible position $v$ in the game, we write $G_v$ for the game played starting from $v$. Recall that we write $\Gbar$ for the game that is exactly like $G$ except that Alice and Bob exchange roles. So $\overline G$ is modeled by the same graph as $G$, but with all colors and outcomes switched. Bidding ------- Alice and Bob each start with a collection of bidding chips, and all bidding chips have equal value, for simplicity. When the game begins, the players write down nonnegative integer bids for the first move, not greater than the number of chips in their respective piles. The bids are revealed simultaneously, and the player making the higher bid gives that many chips to the other, and decides who makes the first move. The chosen player makes a move in the game, and then the process repeats, until the game reaches an end position or one player is unable to continue. Tie-breaking {#tie-breaking section} ------------ One player starts, by mutual agreement, with the *tie-breaking advantage*. If Alice has the tie-breaking advantage, and the bids are tied, then she can either declare Bob the winner of the bid and keep the tie-breaking advantage, or she can declare herself the winner of the bid and give the tie-breaking advantage to Bob. Similarly, if Bob has the tie-breaking advantage, then he can either declare Alice the winner of the bid, or he can declare himself the winner of the bid and give the tie-breaking advantage to Alice. In each case, the winner of the bid gives the amount of the bid to the other, and decides who makes the next move. One virtue of this tie-breaking method is that it is never a disadvantage to have the tie-breaking advantage (see Lemma \[\* is an advantage\] below). Another virtue is that the tie-breaking advantage is worth less than an ordinary bidding chip (Lemma \[1&gt;\*\]). Other reasonable tie-breaking methods are possible, and many of the results in this paper hold with other methods. We discuss some other tie-breaking methods in the Appendix. We write $G(a^*,b)$ for the bidding game in which Alice starts with $a$ bidding chips and the tie-breaking advantage, and Bob starts with $b$ bidding chips. Similarly, $G(a,b^*)$ is the bidding game in which Alice starts with $a$ bidding chips and Bob starts with $b$ bidding chips and the tie-breaking advantage. General theory {#general theory} ============== As we make the transition from recreational play to mathematical investigation, one of the most basic questions we can ask about a game $G$ is for which values of $a$ and $b$ does Alice have a winning strategy for $G(a^*,b)$ or for $G(a,b^*)$. Often it is convenient to fix the total number of chips, and simply ask how many chips Alice needs to win. And when Alice does have a winning strategy, we ask how to to find it. The general theory that we present here shows some of the structure that the answers to these questions must have. For instance, if Alice has a winning strategy for $G(a^*,b)$, then she also has a winning strategy for $G((a+1)^*,b)$, and if she has a winning strategy for $G(a^*,b+1)$, then she also has a winning strategy for $G(a+1, b^*)$. Similar results hold when Bob starts with the tie-breaking advantage. In each case, she can just play as if her extra chip wasn’t there, or as if Bob’s missing chip wasn’t missing. Some of the structural results that we present here, such as the periodicity result in Section \[periodicity section\] are less obvious, and can be used to greatly simplify computations for specific games. 
We apply this approach to solve several games, including Tic-Tac-Toe, in Sections \[examples section\] and \[TTT section\]. For simplicity, we always assume optimal play, and say that Alice wins if she has a winning strategy, and that she does not win if Bob has a strategy to prevent her from winning. Value of the tie-breaking advantage {#basic lemmas} ----------------------------------- Roughly speaking, we show that the value of the tie-breaking advantage is strictly positive, but less than that of an ordinary bidding chip. \[\* is an advantage\] If Alice wins $G(a,b^*)$, then she also wins $G(a^*,b)$. Alice’s winning strategy for $G(a^*,b)$ is as follows. She plays as if she did not have the tie-breaking chip until the first time the bids are tied. The first time the bids are tied, Alice declares herself the winner of the bid, and gives Bob the tie-breaking advantage. The resulting situation is the same as if Bob had started with the tie-breaking advantage and declared Alice the winner of the bid. Therefore, Alice has a winning strategy for the resulting situation, by assumption. \[1&gt;\*\] If Alice wins $G(a^*,b+1)$, then she also wins $G(a+1,b^*)$. Alice’s winning strategy for $G(a+1, b^*)$ is as follows. She begins by playing as if she started with $a$ chips and the tie-breaking advantage except that whenever her strategy for $G(a^*, b+1)$ called for bidding $k$ and using the tie-breaking advantage, she bids $k+1$ instead. She continues in this way until either she wins such a bid for $k+1$ or Bob uses the tie-breaking advantage. Suppose that Alice’s strategy for $G(a^*, b+1)$ called for bidding $k$ for the first move and using the tie-breaking advantage in case of a tie. Then Alice bids $k+1$ for the first move. If Alice wins the bid, then the resulting situation is the same as if Alice had won the first bid in $G(a^*, b+1)$ using the tie-breaking advantage, so Alice has a winning strategy by hypothesis. Similarly, if Bob wins the bid using the tie-breaking advantage, then the resulting situation is the same as if Bob had won the first bid in $G(a^*, b+1)$ by bidding $k+1$, so Alice has a winning strategy. Finally, if Bob bids $k+2$ or more chips to win the bid, then the resulting situation is a position that could have been reached following Alice’s winning strategy for $G(a+1, b^*)$, except that Alice has traded the tie-breaking advantage for two or more chips, and Alice can continue with her modified strategy outlined above. The analysis of the case where Alice’s strategy for $G(a^*,b+1)$ did not call for using the tie-breaking advantage for the first move is similar. Although Lemma \[1&gt;\*\] shows that trading the tie-breaking chip for a bidding chip is always advantageous, giving away the tie-breaking chip in exchange for an extra bidding chip from a third party is not necessarily a good idea; for any positive integer $n$, there is a game $G$ such that Alice has a winning strategy for $G(a^*,b)$, but not for $G(a+n, b^*)$, as the following example demonstrates. \[bid zero example\] Let $G$ be the game where Bob wins if he gets any of the next $n$ moves, and Alice wins otherwise. Then for $n \ge 1$, $(k^*, 0)$ is an Alice win if and only if $k\ge 2^{n-1}-1$, while $(k, 0^*)$ is an Alice win if and only if $k\ge 2^n-1$. Using the tie-breaking advantage -------------------------------- In order for the tie-breaking advantage to have strictly positive value, as shown in Lemma \[\* is an advantage\], it is essential that the player who has it is not required to use it. 
However, the following proposition shows that it is always a good idea to use the tie-breaking advantage, unless you want to bid zero. \[use the advantage\] Both players have optimal strategies in which they use the tie-breaking advantage whenever the bids are nonzero and tied. Suppose that Alice has an optimal strategy which involves bidding $k$, but letting Bob win the bid if the bids are tied. If $k$ is positive, then Alice can do at least as well by bidding $(k-1)^*$ instead. If Bob bids $k$ or more, the resulting situation is unchanged, while if Bob bids $k-1$ or less, then Alice pays $(k-1)^*$ instead of $k$ to win the bid, which is at least as good by Lemma \[1&gt;\*\]. If the bids are tied at zero, it is not necessarily a good idea to use the tie-breaking advantage, as the following example shows. Consider the game where the player who makes the second move wins. Suppose Alice and Bob are playing this game, and they both start with the same number of chips. Then the player who starts with the tie-breaking advantage has a unique winning strategy—bid zero for the first move, decline to use the tie-breaking advantage, and bid everything to make the second move and win. Proposition \[use the advantage\] shows that when looking for an optimal strategy, we can always assume that the player with the tie-breaking advantage either bids 0 or $ 0^*, 1^*, \ldots$. Furthermore, if the player with the tie-breaking advantage bids 0, then the second player wins automatically and does best to bid 0 as well. Otherwise, if the player with the tie-breaking advantage bids $k^*$, we may assume that the second player either bids $k$ and gains $k^*$ chips while letting the first player move, or else bids $k+1$ and wins the bid. These observations significantly reduce the number of bids one needs to consider when searching for a winning strategy. Classical Richman calculus -------------------------- For the reader’s convenience, here we briefly recall the classical methods for determining the critical threshold $R(G)$ between zero and one such that Alice has a winning strategy if her proportion of the bidding resources is greater than $R(G)$ and does not have a winning strategy if her proportion of the bidding resources is less than $R(G)$. This *Richman calculus* also gives a method for finding the optimal moves and optimal bids for playing $G$ as a bidding game with real-valued bidding. See the original papers [@LLPU; @LLPSU] for further details. In Section \[discrete section\], we present a similar method for determining the number of chips that Alice needs to win a discrete bidding game with a fixed total number of chips, as well as the optimal bids and moves for discrete bidding. First, suppose $G$ is bounded. We compute the critical thresholds $R(G_v)$ for all positions $v$ in $G$ by working backwards from the end positions. If $v$ is an end position then $$R(G_v) = \left\{ \begin{array}{ll} 0 & \mbox{ if } v \mbox{ is a winning position for Alice.} \\ 1 & \mbox{ otherwise.} \end{array} \right.$$ Suppose $v$ is not an end position. If Alice makes the next move, then she will move to a position $w$ such that $R(G_w)$ is minimal. Similarly, if Bob makes the next move, then he will move to a position $w'$ such that $R(G_{w'})$ is maximal. We define $$R_A(G_v) = \min_{A: v \rightarrow w} R(G_w) \mbox{ \ \ and \ \ } R_B(G_v) = \max_{B: v \rightarrow w'} R(G_{w'}),$$ where the minimum and maximum are taken over Alice’s legal moves from $v$ and Bob’s legal moves from $v$, respectively. 
The critical threshold $R(G_v)$ is then $$R(G_v) = \frac{R_A(G_v) + R_B(G_v)}{2}.$$ The difference $R_B(G_v) - R_A(G_v)$ is a measure of how much both players want to move (or to prevent the other player from moving). If this difference is positive, then both players want to move, and if the difference is negative then the position is zugzwang and both players want to force the other to move. In either case, an optimal bid for both players is $\Delta_v = |R_B(G_v) - R_A(G_v)| / 2$. Next, suppose $G$ is locally finite, but not necessarily bounded. Let $G[n]$ be the truncation of $G$ after $n$ moves. So $G[n]$ is just like $G$ except that the game ends in a tie if there is no winner after $n$ moves. In particular, Alice wins $G[n]$ if and only if she has a strategy to win $G$ in at most $n$ moves. We can compute the critical threshold $R(G)$ when $G$ is bounded by using the bounded truncations $G[n]$, as follows. First, $R(G[n])$ can be computed by working backward from end positions, since $G[n]$ is bounded. Now $R(G[n])$ is a nonincreasing function of $n$ that is bounded below by zero, so these critical thresholds approach a limit as $n$ goes to infinity. Furthermore, since $G$ is locally finite, Alice has a winning strategy for $G$ if and only if she has a winning strategy that is guaranteed to succeed in some fixed finite number of moves. It follows that $$R(G) = \lim_{n \rightarrow \infty} R(G[n]).$$ For games that are not locally finite, Alice may have a winning strategy, but no strategy that is guaranteed to win in a fixed finite number of moves. In this case, $R(G)$ is not necessarily the limit of the critical thresholds $R(G[n])$, as the following example shows. Let $\A^m$ be the game that Alice wins after $m$ moves, and let $G$ be the game in which the first player to move can choose between the starting positions of $\A^m$ for all positive integers $m$. Then Alice is guaranteed to win $G$, so $R(G) = 0$, but the critical threshold of each truncation is $R(G[n]) = 1/2$. Indeed, if Bob wins the first move of $G[n]$, then he can move to the starting position of $\A^n$, which Alice cannot win in the remaining $n-1$ moves. Discrete Richman calculus {#discrete section} ------------------------- Here we return to discrete bidding and compute the number of chips that Alice needs to win a locally finite game, assuming that the total number of chips is fixed. Since Alice may or may not have the tie-breaking advantage, the total number of chips that Alice has is an element of $\N \cup \N^*$, which is totally ordered by $$0 < 0^* < 1< 1^* < 2 < \cdots.$$ If we fix the game $G$ and the total number of ordinary chips $k$, then it follows from Lemmas \[\* is an advantage\] and \[1&gt;\*\] that there is a critical threshold $f(G,k) \in \N \cup \N^*$ such that Alice wins if and only if she has at least $f(G,k)$ chips. Note that Alice can have at most $k^*$ chips, so if $G$ is a game in which Alice never wins, then $f(G,k) = k+1,$ by definition. The critical thresholds $f(G,k)$ can be computed recursively from end positions for bounded games, and the critical thresholds of locally finite games can be computed from the critical thresholds of their truncations, just like the critical thresholds $R(G)$ for real-valued bidding. However, one must account for the effects of rounding, since the bidding chips are discrete, as well as the tie-breaking advantage. First, suppose that $v$ is an end position. 
Then $$f(G_v,k) = \left\{ \begin{array}{ll} 0 & \mbox{ if } v \mbox{ is a winning position for Alice.} \\ k+1 & \mbox{ otherwise.} \end{array} \right.$$ Next, suppose $v$ is not an end position. If Alice makes the next move, then she will move to a position $w$ such that $f(G_w,k)$ is minimal. Similarly, if Bob makes the next move, then he will move to a position $w'$ such that $f(G_{w'},k)$ is maximal. We define $$f_A(G_v,k) = \min_{A: v \rightarrow w} f(G_w,k) \mbox{ \ \ and \ \ } f_B(G_v,k) = \max_{B: v \rightarrow w'} f(G_{w'},k),$$ where the minimum and maximum are taken over Alice’s legal moves from $v$ and Bob’s legal moves from $v$, respectively. For an element $x \in \N \cup \N^*$, we write $|x|$ for the underlying integer, so $|a|$ and $|a^*|$ are both equal to $a$, for nonnegative integers $a$. We also define $a + * = a^*$. For a real number $x$, we write $\lfloor x \rfloor$ for the greatest integer less than or equal to $x$. \[discrete recursion\] For any position $v$, the critical threshold $f(G_v,k)$ is given by $$f(G_v,k) = \left \lfloor \frac{ | f_A(G_v, k) | + | f_B(G_{v},k) | }{ 2 } \right \rfloor + \varepsilon,$$ where $$\varepsilon = \left \{ \begin{array}{ll} 0 & \mbox{ if } |f_A(G_v,k)| + |f_B(G_{v},k)| \mbox{ is even, and } f_A(G_v, k) \in \N. \\ 1 & \mbox{ if } |f_A(G_v,k)| + |f_B(G_{v},k)| \mbox{ is odd, and } f_A(G_v, k) \in \N^*. \\ * & \mbox{ otherwise.} \\ \end{array} \right.$$ Since the critical threshold for any locally finite game can be computed from its bounded truncations, it is enough to prove the theorem in the case where $G$ is bounded. If the game starts at an end position, then the theorem is vacuously true. We proceed by induction on the length of the bounded game. Suppose $|f_A(G_v,k)| + |f_B(G_{v},k|$ is even and $f_A(G_v,k) \in \N$. If Alice has at least $(|f_A(G_v,k)| + |f_B(G_{v},k|) / 2$ chips, then she can bid $$\Delta = \big| |f_A(G_v,k)| - |f_B(G_{v},k| \big| /2$$ and guarantee that she will end up in a position $w$ with at least $f(G_w,k)$ chips. Then Alice has a winning strategy, by induction, since $G_w$ is a bounded game of shorter length than $G$. Similarly, if Alice has fewer than $( |f_A(G_v,k)| + |f_B(G_{v},k|) /2$ chips, then Bob can bid $\Delta$ and guarantee that he will end up in a position $w'$ where Alice will have fewer than $f(G_{w'},k)$ chips. Then Bob can prevent Alice from winning, by induction. Therefore, Alice wins $G$ if and only if she has at least $(|f_A(G_v,k)| + |f_B(G_{v},k|)/2$ chips, as was to be shown. The proofs of the remaining cases, when $|f_A(G_v,k)| + |f_B(G_{v},k|$ is odd, and when $f_A(G_v,k)$ is in $\N^*$, are similar. If $|f_A(G_v,k)| + |f_B(G_v,k)|$ is odd and $f_A(G_v,k)$ is in $\N^*$, then the ideal bid for both players is the round down $$\Delta = \Big \lfloor \big| |f_A(G_v,k)| - |f_B(G_{v},k| \big| /2 \Big \rfloor.$$ If $|f_A(G_v,k)| + |f_B(G_v,k)|$ is odd but $f_A(G_v,k)$ is in $\N$, then both players should try to make the smallest possible bid that is strictly greater than $ \Big \lfloor \big| |f_A(G_v,k)| - |f_B(G_{v},k| \big| /2 \Big \rfloor$. And if $|f_A(G_v,k)| + |f_B(G_v,k)|$ is even and $f_A(G_v,k)$ is in $\N^*$ then both players should try to make the smallest possible bid that is strictly greater than $\big| |f_A(G_v,k)| - |f_B(G_{v},k| /2 \big| -1$. Theorem \[discrete recursion\] makes it possible to find both the critical threshold and the optimal strategy for a given bounded game by working backward from end positions. 
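The backward induction of Theorem \[discrete recursion\] is straightforward to mechanize for bounded games. The following Python sketch (our own, using the dictionary encoding of game graphs from the earlier sketch) represents an element $a$ or $a^*$ of $\N \cup \N^*$ as a pair `(chips, star)` and computes $f(G_v,k)$ for every position. It assumes that every non-terminal position has at least one legal move for each player, and its output can be checked against the tables in the examples that follow.

```python
from functools import lru_cache

def thresholds(game, k):
    """Compute f(G_v, k) for every position v of a bounded game.

    Values in N ∪ N* are encoded as pairs (chips, star), with star in {0, 1},
    and compared via the key 2*chips + star, so that 0 < 0* < 1 < 1* < ...
    """
    def key(val):
        chips, star = val
        return 2 * chips + star

    @lru_cache(maxsize=None)
    def f(v):
        entry = game[v]
        if entry[0] == "terminal":
            return (0, 0) if entry[1] == "Alice" else (k + 1, 0)
        _, alice_moves, bob_moves = entry
        fA = min((f(w) for w in alice_moves), key=key)   # Alice minimizes
        fB = max((f(w) for w in bob_moves), key=key)     # Bob maximizes
        total = fA[0] + fB[0]                            # |f_A| + |f_B|
        if total % 2 == 0 and fA[1] == 0:
            return (total // 2, 0)                       # epsilon = 0
        if total % 2 == 1 and fA[1] == 1:
            return (total // 2 + 1, 0)                   # epsilon = 1
        return (total // 2, 1)                           # epsilon = *

    return {v: f(v) for v in game}

# Sanity check against Example [simple tables] below: for the game E in which the
# first player to move wins, f(E, k) is (k+1)/2 for odd k and (k/2)* for even k.
E = {
    "start": ("moves", ["a_win"], ["b_win"]),
    "a_win": ("terminal", "Alice"),
    "b_win": ("terminal", "Bob"),
}
assert thresholds(E, 5)["start"] == (3, 0)   # k = 5: f = 3
assert thresholds(E, 4)["start"] == (2, 1)   # k = 4: f = 2*
```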
\[simple tables\] Suppose $\A$ is a game that Alice is guaranteed to win and $\B$ is a game that Bob wins. Then $$f(\A,k) = 0 \mbox{ \ \ and \ \ } f(\B,k) = k+1.$$ Let $\E$ be the game in which the first player to move wins. Then $$f(\E,k) = \left \{ \begin{array}{ll} (k+1)/2 & \mbox{ if } k \mbox{ is odd}. \\ \lfloor (k+1)/2 \rfloor ^* & \mbox{ if } k \mbox{ is even}. \end{array} \right.$$ As games become more complicated, it is more convenient to encode the possibilities in a table. For instance, the critical thresholds for the game $\E$ could be put in a table as follows. 10 pt [|r||\*[2]{}[l|]{}]{} $k=2n+$&0&1\ $f(\E, k) = n + $&0\*&1\ Let $A^2$ be the game that Alice wins if she makes either of the first two moves and Bob wins otherwise. Similarly, let $B^2$ be the game that Bob wins if he makes either of the first two moves and Alice wins otherwise. Then the critical thresholds for $A^2$ and $B^2$ are given by 10 pt [|r||\*[4]{}[l|]{}]{} $k=4n+$&0&1&2&3\ $f(A^2, k) = n + $&0&0\*&0\*&1\ [|r||\*[4]{}[l|]{}]{} $k=4n+$&0&1&2&3\ $f(B^2, k) = 3n + $&1&1\*&2\*&3\ See Section \[TTT section\] for detailed computations using such tables in a more interesting situation. Discrete bidding with large numbers of chips -------------------------------------------- When $G$ is played as a discrete bidding game, the optimal moves for Richman play are not necessarily still optimal. This may be seen as a consequence of the effects of rounding and tie-breaking in the discrete Richman calculus. However, one still expects that as the number of chips becomes large, discrete bidding games should become more and more like Richman games. Roughly speaking, the effects of rounding should only be significant enough to affect the outcome when the number of chips is small or Alice’s proportion of the total number of chips is close to the critical threshold $R(G)$. We think of these situations as “unstable." A strategy for Alice is *stable* if, whenever Alice makes a move following this strategy, she moves to a position $w$ such that $R(G_w)$ is as small as possible. Similarly, we say that a strategy for Bob is stable if, whenever he makes a move following this strategy, he moves to a position $w'$ such that $R(G_{w'})$ is as large as possible. We say that a discrete bidding game is stable if both Alice and Bob have stable optimal strategies. Note that the proofs of Lemmas \[\* is an advantage\] and \[1&gt;\*\] go through essentially without change when “Alice wins" is replaced by “Alice has a stable winning strategy." For instance, if Alice has a stable winning strategy for $G(a,b^*)$, then she also has a stable winning strategy for $G(a^*,b)$. \[above the threshold\] For any locally finite game $G$, and for any positive $\epsilon$, Alice has a stable winning strategy for $G(a,b^*)$ provided that $a/(a+b)$ is greater than $R(G) + \epsilon$ and $a$ is sufficiently large. First, we claim that it suffices to prove the theorem when $G$ is bounded. Recall that $G[n]$ denotes the truncation of $G$ after $n$ moves. So $G[n]$ is bounded and Alice wins $G[n]$ if and only if she wins $G$ in at most $n$ moves. By [@LLPU Section 2], $R(G)$ is the limit as $n$ goes to infinity of $R(G[n])$. Therefore, replacing $G$ by $G[n]$ for $n$ sufficiently large, we may assume that $G$ is bounded. Suppose $G$ is bounded and guaranteed to end after $n$ moves. If $n = 1$, then the theorem is clear. We proceed by induction on $n$. Assume the theorem holds for games guaranteed to terminate after $n-1$ moves. Alice’s strategy is as follows. 
For the first move, she bids $x$ such that $x/(a+b)$ is between the optimal real-valued bid $\Delta$ and $\Delta + \epsilon /2$, which is possible since $a$ is large. If Alice moves, then she moves to a position $w$ such that $R(G_w)$ is minimal. Otherwise, Bob moves wherever he chooses. Either way, Alice ends up in a game $G_v$ that is guaranteed to terminate after $n-1$ moves, holding $a'$ chips where $a' / (a+b)$ is greater than $R(G_v) + \epsilon / 2$, and hence has a winning strategy for $a$ sufficiently large, by induction. Since $G$ is locally finite, there are only finitely many possibilities for $v$. Therefore, $a$ can be chosen sufficiently large for all such possibilities, and the result follows. The conclusion of Theorem \[above the threshold\] is false if $G$ is not locally finite; we give an example illustrating this in Section \[examples section\]. \[below the threshold\] For any finite game $G$, and for any positive $\epsilon$, Bob has a stable strategy for preventing Alice from winning $G(a^*,b)$ provided that $a/(a+b)$ is less than $R(G) - \epsilon$ and $b$ is sufficiently large. Recall that $\overline G$ is the game that is identical to $G$ except that Alice and Bob exchange roles. If $G$ is finite, then $R(\overline G) = 1- R(G)$. Therefore, the result follows from Theorem \[above the threshold\] applied to $\overline G(b,a^*)$. Under the hypotheses of Theorem \[below the threshold\], Bob actually has a winning strategy. For locally finite games that are not finite, $R(\overline G)$ may be strictly larger than $1-R(G)$, so Bob should not be expected to have a winning strategy for $G(a^*,b)$. With real-valued bidding he may only have a strategy to prolong the game into an infinite draw. Next, we show that finite games always become stable when the number of chips becomes sufficiently large. \[stable for many chips\] Let $G$ be a finite game. Then $G(a^*,b)$ and $G(a,b^*)$ are stable when $a+b$ is sufficiently large. Assume the total number of chips is large. We will show that either 1. Alice has a stable winning strategy, or 2. Any unstable strategy for Alice can be defeated by a stable strategy for Bob. A similar argument shows that either Bob has a stable winning strategy, or any unstable strategy for Bob can be defeated by a stable winning strategy for Alice, and the theorem follows. Suppose Alice follows an unstable strategy that calls for her to move to a position $w$ such that $R(G_w)$ is not as small as possible. Let $\delta$ be the discrepancy $$\delta = R(G_w) - R(G_{w_0}),$$ where $w_0$ is a position that Alice could have moved to such that $R(G_{w_0})$ is as small as possible. If Alice’s proportion of the bidding chips is greater than $R(G) + \delta/4$, then she has a stable winning strategy, by Theorem \[above the threshold\]. Therefore, we may assume that Alice’s proportion of the bidding chips is at most $R(G) + \delta/4$. Suppose that the position is not zugzwang, so Bob is bidding for the right to move. Since the total number of chips is large, Bob can make a bid that is strictly between $\Delta - \delta/2$ and $\Delta - \delta/4$, where $\Delta = R(G) - R(G_{w_0})$ is the optimal real-valued bid. Then, if Alice wins the bid and moves to $w$, she finds herself with a proportion of chips that is less than $R(G_w) - \delta/4$ and hence Bob has a stable winning strategy (since the number of chips is large). Otherwise, Bob wins and moves to a position $w'$ such that $R(G_{w'})$ is as large as possible. 
Then his proportion of the chips is greater than $R(\overline G_{w'}) + \delta/4$, so again he has a stable winning strategy. The proof when the position is zugzwang is similar except that Bob should bid between $\Delta + \delta/4$ and $\Delta + \delta/2$ to force Alice to move.

Periodicity {#periodicity section}
-----------

Here we prove a periodicity result for finite stable games that allows one to determine the outcome of $G$ for all possible chip counts for Alice and Bob by checking only a finite number of cases. We use this result extensively in our analysis of specific bidding games in Sections \[examples section\] and \[TTT section\].

Fix a finite game $G$. Choose a positive integer $M$ such that $M \cdot R(G_v)$ and $M \cdot \Delta_v$ are integers for all positions $v$ in $G$. For instance, one can take $M$ to be the least common denominator of $R(G_v)$ and $\Delta_v$ over all $v$. Let $$m = M \cdot R(G) \mbox{ \ \ and \ \ } m_v = M \cdot R(G_v).$$ Similarly, let $\overline m = M \cdot R(\overline G)$ and $\overline m_v = M \cdot R(\overline G_v)$.

\[periodicity for Alice\] If Alice has a stable winning strategy for $G(a^*,b)$ then she also has a stable winning strategy for $G(a + m^*, b + \overline m)$. Similarly, if Alice has a stable winning strategy for $G(a,b^*)$ then she also has a stable winning strategy for $G(a + m, b + \overline m^*)$.

Since $G$ is finite, Alice’s stable winning strategy for $G(a^*,b)$ is guaranteed to succeed in some fixed number of moves. If the game starts at a winning position for Alice, then the theorem is vacuously true, so we proceed by induction on the number of moves. Suppose Alice has a stable winning strategy for $G(a^*, b)$ in which she bids $k$ for the first move. Then Alice can win $G(a + m^*, b+ \overline m)$ by bidding $k + M \Delta$ for the first move, and moving according to her stable strategy for $G(a^*,b)$. Regardless of whether she wins the bid, Alice ends up in a position $v$ where, compared to her stable strategy for $G(a^*,b)$, she has at least $m_v$ additional chips and Bob has at most $\overline m_v$ additional chips. By induction, Alice has a stable winning strategy starting from $v$ that is guaranteed to win in a smaller number of moves, and the result follows. The proof for the situation where Bob starts with tie-breaking advantage is similar.

\[periodicity for Bob\] If Bob has a stable strategy to prevent Alice from winning $G(a^*,b)$ then he also has a stable strategy to prevent Alice from winning $G(a + m^*, b + \overline m)$. Similarly, if Bob has a stable strategy to prevent Alice from winning $G(a,b^*)$, then he also has a stable strategy to prevent Alice from winning $G(a + m, b + \overline m^*)$.

Similar to the proof of Theorem \[periodicity for Alice\].

Using these periodicity results, we can determine the exact set of chip counts for which Alice can win $G$ by answering the following two questions for finitely many $x$ and $y$ in $\N \cup \N^*$.

- If Alice starts with $x$ chips then how many chips does Bob need to prevent her from winning?

- If Bob starts with $y$ chips then how many chips does Alice need to win?

There is a unique minimal answer in $\N \cup \N^*$ to each such question, and the answer is generally not difficult to find if $G$ is relatively simple and $x$ or $y$ is small. Furthermore, by Theorem \[stable for many chips\], there is an integer $n$ such that $G(a^*,b)$ and $G(a,b^*)$ are stable provided that $a + b$ is at least $n$.
Then, if we know the answers to the above questions for $x < m + n$ and $y < \overline m + n$, we can easily deduce whether Alice wins for any given chip counts using Theorems \[periodicity for Alice\] and \[periodicity for Bob\]. We conclude this section with some open problems that ask to what extent, if any, these results extend from finite games to locally finite games. For fixed $\epsilon > 0$ and a large number of chips, by Theorem \[above the threshold\] we know that Alice has a stable winning strategy if her chip count is at least $R(G) + \epsilon$ and Bob has a winning strategy if his proportion of the chips is at least $R(\overline G) + \epsilon$. However, if $R(G) + \epsilon$ is less than $1 - R(\overline G) - \epsilon$, then there is a gap where the outcome is unclear, even when the number of chips is large. \[in the draw range\] Is there a locally finite game $G$ and a positive number $\epsilon$ such that Alice has a winning strategy for infinitely many chip counts $G(a^*,b)$ such that $a/(a+b)$ is less than $R(G) - \epsilon$? Roughly speaking, Problem \[in the draw range\] asks whether strategies to force a draw in a locally finite game with real-valued bidding can always be approximated sufficiently well by discrete bidding with sufficiently many chips. However, it is not clear whether one should follow stable strategies in locally finite games with large numbers of chips. \[eventually Richman\] If $G$ is locally finite, are $G(a^*,b)$ and $G(a,b^*)$ stable for $a$ and $b$ sufficiently large? If the answer to Problem \[in the draw range\] is negative, then the answer to Problem \[eventually Richman\] is negative as well. To see this, suppose Alice wins a game $G$ with a proportion of chips less than $R(G)-\epsilon$ and an arbitrarily large total number of chips. Let $H$ be a stable game with Richman value between $R(G)-\epsilon$ and $R(G)$ (which is not difficult to construct), and let $G \wedge H$ be the game in which the first player to move gets to choose between the starting position of $G$ and the starting position of $H$. In $G \wedge H$, Alice’s optimal first move for large chip counts would be to move to the starting position for $G$, while her stable strategy (i.e. her optimal strategy for real-valued bidding) would be to move to the starting position for $H$. In the Appendix, we show that the answer to Problem \[eventually Richman\] is negative for different tie-breaking methods. Examples {#examples section} ======== In this section, we analyze discrete bidding play for two simple combinatorial games, Tug o’ War and what we call *Ultimatum*. We use these examples to construct games with strange behavior under discrete bidding play. Tug o’ War {#tug o war} ---------- The Tug o’ War game of length $n$, which we denote by $\Tug^n$, is played on a path of length $2n$, with vertices labeled $-n, ..., -1, 0 , 1, ..., n$, from left to right. The game starts at the center vertex, which is marked $0$. Adjacent vertices are connected by edges of both colors in both directions. Alice’s winning position is the right-most vertex of the path, and Bob’s winning position is the left-most vertex of the path. So Alice tries to move to the right, Bob tries to move the left, and the winner is the first player to reach the end. In particular, Tug o’ War is stable, since the optimal moves do not depend on whether bidding is discrete or real-valued, and since it is also symmetric the critical threshold is $R(\Tug^n) = 1/2$. 
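Before turning to the analysis, we note that the Tug o’ War outcomes discussed in this subsection can also be checked by brute force when the chip counts are small. The following Python sketch is not part of the paper: it encodes the bidding and tie-breaking conventions as I read them (the winner of the bid pays the bid to the loser, a bid of $x^*$ beats $x$ and loses to $x+1$, and on an exact tie the holder of the $*$ chip keeps it and loses the tie), computes Alice’s “sure win” states by iterating the one-step “some bid for Alice works against every bid of Bob” operator to a fixed point, and prints a few sample outcomes to compare with Proposition \[tug o war result\] below.

```python
from itertools import product

def tug_outcomes(n, total):
    """States (pos, a, alice_has_star) of Tug o' War of length n, with `total`
    ordinary chips in play, from which Alice can force a win (reach vertex +n)
    against every strategy of Bob."""

    def bids(chips, has_star):
        # A bid is (number of chips, whether the * chip is attached).
        out = [(x, False) for x in range(chips + 1)]
        if has_star:
            out += [(x, True) for x in range(chips + 1)]
        return out

    def step(a, astar, bid_a, bid_b):
        """Resolve one bidding round; return (alice_moves, a', alice_has_star')."""
        (x, sx), (y, sy) = bid_a, bid_b
        ea, eb = 2 * x + (1 if sx else 0), 2 * y + (1 if sy else 0)
        if ea > eb:                          # Alice wins the bid and pays it
            return True, a - x, astar and not sx
        if eb > ea:                          # Bob wins the bid and pays it
            return False, a + y, astar or sy
        # exact tie (both bids unstarred): the holder of * keeps it, loses the tie
        return (not astar), (a + y) if astar else (a - x), astar

    win = {(n, a, s) for a in range(total + 1) for s in (True, False)}
    changed = True
    while changed:                           # least fixed point
        changed = False
        for pos, a, astar in product(range(-n + 1, n), range(total + 1),
                                     (True, False)):
            if (pos, a, astar) in win:
                continue
            # Alice needs one bid that works against every bid of Bob.
            for bid_a in bids(a, astar):
                if all(
                    (any((m, a2, s2) in win for m in (pos - 1, pos + 1))
                     if alice_moves else
                     all((m, a2, s2) in win for m in (pos - 1, pos + 1)))
                    for bid_b in bids(total - a, not astar)
                    for alice_moves, a2, s2 in [step(a, astar, bid_a, bid_b)]
                ):
                    win.add((pos, a, astar))
                    changed = True
                    break
    return win

# Example: the proposition below predicts that, when Bob holds fewer than n
# chips, Alice wins from the centre exactly when her holding is at least
# (n-1)^*; here n = 3, so the predicted cutoff is 2^*.
n, total = 3, 4
win = tug_outcomes(n, total)
for a in range(2, total + 1):                # Bob then holds at most 2 < n chips
    for astar in (False, True):
        alice = f"{a}{'*' if astar else ''}"
        bob = f"{total - a}{'' if astar else '*'}"
        verdict = "wins" if (0, a, astar) in win else "does not win"
        print(f"Tug^{n} with Alice = {alice}, Bob = {bob}: Alice {verdict}")
```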
\[tug o war result\] Suppose Bob’s total number of chips is less than $n$. Then Alice wins $\Tug^n$ if and only if her total number of chips is at least $(n-1)^*$. We define the weight of a position in the bidding game to be the number of the current vertex plus the number of chips that Alice has, including the tie-breaking chip. Note that, in order to reach a winning position, Alice must first reach a position of weight at least $n$. Suppose Alice’s chip total is at most $n-1$. Then Bob can force a draw by bidding zero every time, and using the tie-breaking advantage to win whenever possible. Indeed, if Bob does this, then the weight of the position in the game never exceeds its starting value, so Alice cannot win. Suppose Alice’s chip total is at least $(n-1)^*$. Then Alice can win with the following strategy. Whenever she has the tie-breaking advantage, she bids zero and uses the advantage if possible. Whenever she does not have the tie-breaking advantage, she bids one. With this strategy, the weight of the position never decreases, and it follows that Bob cannot win. Bob may win the first several moves, but eventually he will run out of chips, and Alice will win a move for zero, using the tie-breaking advantage. Then Alice may win a certain number of moves for one chip each. Since the weight of the position is at least $n$, if Bob lets her win moves for one chip indefinitely, then Alice will win. So eventually Bob must bid either two chips or one plus the tie-breaking advantage, and the weight of Alice’s position increases by one. It follows that Alice can raise the weight of her position indefinitely, until eventually she must win. The Richman game version of Tug o’ War was studied in [@LLPSU p. 252], where the critical threshold of the vertex labeled $k$ was determined to be $(k + n) / 2n$. Therefore, the periodicity results of Section \[periodicity section\] hold with $M = 2n$ and with $m$ and $\overline m$ both equal to $n$. The cases covered by Proposition \[tug o war result\] then completely determine the outcome of Tug o’ War for all possible chip counts. Let $a$, $b$, $k$, and $k'$ be nonnegative integers, with $a$ and $b$ less than $n$. Then Alice wins $\Tug^n(kn + a, (k'n + b)^*)$ if and only if $k$ is greater than $k'$. Furthermore, Alice wins $\Tug^n(kn + a^*, k'n + b)$ if and only if either $k$ is greater than $k'$ or $k$ is equal to $k'$ and $a$ is equal to $n-1$. Tug o’ War game is perhaps the simplest game that is not bounded, and yet we can use it to construct some interesting examples of bidding game phenomena for games that are not locally finite. Let $G$ be the game in which the first player to move can go to the starting position of $\Tug^n$ for any $n$. Then $R(G) = 1/2$, but $G(3a, a^*)$ is a draw for all values of $a$ greater than one. Since all of the possible first moves lead to positions $v$ with $R(G_v) = 1/2$, the critical threshold is $R(G) = 1/2$. Consider $G(3a, a^*)$ for any $a$. Bob’s strategy to force a draw is as follows. Bob bets all of his chips for the first move. Then Alice can either bet $a+1$ and win the bet, getting to play any $\Tug^n(2a-1, 2a+1^*)$ which is at best a draw for Alice. Otherwise, Bob wins the bet, and chooses to play $\Tug^n$ for some $n$ greater than $4a + 1$, which leads to a draw by Proposition \[tug o war result\]. So Bob can force a draw. 
This type of behavior, where Bob can force a draw even though Alice’s proportion of the chips is much greater than $R(G)$ and the total number of chips is large, is impossible for locally finite games by Theorem \[above the threshold\]. The following example shows that games that are not locally finite may also be unstable even for large numbers of chips.

Let $G$ be the game in which the first player to move can either play $\Tug^1$, or $\Tug^n$ starting at the node labeled $-1$ for any $n\ge 10$. Then $G(3a^*, 2a)$ fails to be stable for all $a$.

We claim that Bob has no optimal stable strategy. For the first move, Bob’s only stable strategy is to move to $\Tug^1$, since its Richman value is 1/2 and the Richman values of all $\Tug^n$ starting at $-1$ are less than 1/2. However, if Bob wins the first bid and moves to $\Tug^1$ then he will lose. Nevertheless, we claim that Bob has a strategy to force a draw. This strategy is as follows. Bob bets all of his chips on the first turn. If Alice lets him win the bid, he can move to $\Tug^n$ for $n$ large, which leads to a draw. Otherwise, if Alice bets $2a^*$, she may choose to play either $\Tug^1(a,4a^*)$ or $\Tug^n(a,4a^*)$ starting from the node labeled $-1$. If Alice chooses $\Tug^1$ then Bob wins. Otherwise, Bob can win the next move for $a^*$, leading to $\Tug^n(2a^*, 3a)$, which is at worst a draw for Bob. Therefore Bob has an unstable strategy that is better than any stable strategy.

Ultimatum
---------

We now describe the Ultimatum game of degree $n$, which we denote by $\Ult^n$. It is played on a directed graph with vertices labeled $B, -n, \ldots, -1, 0, 1, \ldots, n-1, n, A$. There are red edges from $0$ to $n$, from $k$ to $A$ for $k > 0$, and from $k$ to $k + 1$ for $k < 0$. Similarly, there are blue edges from $0$ to $-n$, from $k$ to $B$ for $k < 0$, and from $k$ to $k-1$ for $k > 0$. The game starts at $0$. In other words, when the game starts, the first player to move gives the other an ultimatum—the other player must make each of the next $n$ moves (in which case the game reverts to the beginning state), or else lose the game. Since $\Ult^n$ is finite and symmetric, the critical threshold $R(\Ult^n)$ is $1/2$.

Suppose $b$ is less than $2^n$. Then Alice has a winning strategy for $\Ult^n(a^*, b)$ if and only if $a$ is greater than $b$, or $a$ is equal to $b$ and $b\neq 2^{n-1} - 1$.

Suppose $a$ is greater than $b$. We claim that Alice can win by bidding $b$ chips on the first move and using the tie-breaking advantage. Then Alice still has at least one chip left, so Bob must bid at least one chip plus the tie-breaking advantage for the second move, three chips for the third move, and $3 \cdot 2^k$ for move number $k + 3$. It follows that if Bob is able to make $n$ moves in a row, then Alice receives at least $3 \cdot (2^{n-1} -1)$ chips from Bob before returning to $0$. In particular, Alice returns to the starting position with strictly more chips than she started with, and hence must eventually win.

Suppose $a$ is equal to $b$ and less than $2^{n-1} - 1$. Then Alice can win by bidding all of her chips on the first move and using the tie-breaking advantage. Bob must give her the tie-breaking advantage to take the second move, and one chip to take the third move, and $2^k$ chips to take move number $k + 3$. It follows that if Bob is able to make $n$ moves in a row, then Alice will have collected $2^{n-1} - 1$ chips from Bob by the time they return to the starting position.
Now Alice has more chips than Bob, plus the tie-breaking advantage, so she has a winning strategy. Suppose $a$ is equal to $b$ and greater than $2^{n-1} -1$. Then Alice bids $a -1$; if Bob bids all of his chips to win, then Alice can pay him $2^{n-1} - 1$ chips, plus the tie-breaking advantage, to return to the starting position. Now Alice has at least two more chips than Bob, so she can bid Bob’s number of chips plus one to move to vertex $n$. Then, if Bob has enough chips to make the next $n$ moves, Alice will return to the starting vertex with more chips than Bob, plus the tie-breaking advantage, and will therefore win. Finally, suppose $a$ and $b$ are both equal to $2^{n-1} - 1$. Then Bob can prevent Alice from winning by bidding all of his chips for the first move. If Alice bids all of her chips plus the tie-breaking advantage to take the first move, then Bob has exactly enough chips to return to the starting position with $2^{n-1} -1$ chips left. Otherwise, Bob makes the first move, and Alice has just enough chips to return to the starting position with $2^{n-1} -1 $ chips left. Since the tie-breaking advantage is always an advantage, by Lemma \[\* is an advantage\], Bob’s position is no worse than when the game began, so Alice cannot win. Suppose $b < 2^n$. Then Alice has a winning strategy for $\Ult^n(a, b^*)$ if and only if $a$ is greater than $b+1$, or $a$ is equal to $b+1$ and $b\neq 2^{n-1} - 1$. The proof is essentially identical to the previous proposition’s. If Alice has more than $b+1$ chips, she just bets $b+1$ and wins. If she has $b+1$ chips, then unless the congruence condition holds, she can win by betting $b+1$ chips if $b$ is less than $2^{n-1}-1$ and $b$ chips if $b$ is greater than $2^{n-1}-1$. If $b$ is equal to $2^{n-1}-1$ then Bob can bet all his chips to return with either $b^*$ chips or $b+1$ chips, thus forcing a draw. Now $\Ult^n$ is clearly stable, since there is only one move available to each player from each position, so the above cases can be used to determine the outcome of $\Ult^n$ for all possible chip counts using the periodicity results of Section \[periodicity section\]. Alice wins $\Ult^n(a^*, b)$ if and only if either $a$ is greater than $b$ or $a$ and $b$ are equal but not congruent to $2^{n-1}-1 \ (\mathrm{mod} \ 2^n)$. Similarly, Alice wins $\Ult^n(a, b^*)$ if and only if either $a$ is greater than $b+1$, or $a$ is equal to $b+1$ and not congruent to $ 2^{n-1}\ (\mathrm{mod} \ 2^n)$. Using these computations for the Ultimatum games, we can construct another interesting example: a discrete bidding game $G$ whose critical threshold $R(G)$ is rational, but when Alice’s proportion of the total number of chips is exactly equal to $R(G)$ the outcome is not periodic in the total number of chips. For finite games, such behavior is impossible by the periodicity results of Section \[periodicity section\]. Let $G$ be the game where on the first move, either player can choose one of $\Ult^1, \Ult^3, \Ult^5, \ldots$, and then the players play the chosen game with the current chip counts. Then $R(G)=1/2$, but the sequence of results of $G(k^*, k)$ is aperiodic. Since the game is symmetric, it cannot be a win for Bob. If Alice either bets 1 or uses the tiebreaking advantage in a 0-0 tie, then she will be playing an ultimatum game with fewer chips and cannot win. So her only chance at forcing a win is to let Bob win for 0 chips. So Bob picks any ultimatum game. 
If he can pick a game for which $\Ult^n(k^*, k)$ is a draw, $G(k^*, k)$ is a draw; otherwise, $G(k^*, k)$ is an Alice win. However, the values of $k$ for which $\Ult^n(k^*, k)$ is a draw are those for which $k\equiv 2^{n-1}-1$ (mod $2^n$). Only games $\Ult^n$ for odd $n$ are in play, so Bob can draw whenever $k$ is congruent to 0 mod 2, or 3 mod 8, or 15 mod 32, and so on. It is easily seen that the set of $k$ which satisfy these congruence conditions is not a finite union of arithmetic progressions and hence the outcome is not periodic in $k$. A partial order on games {#partial order section} ======================== If we know the optimal moves for Alice and Bob, or if both the game $G$ and the total number of chips $k$ are fixed, then the critical threshold $f(G,k)$ is easy to compute using the formulas in Section \[discrete section\]. On the other hand, for fixed $G$, the tree of optimal moves may vary as $k$ varies. In this section, we discuss situations in which one can find moves that are optimal for all $k$. We define a partial order on games by setting $G \leq G'$ if $f(G,k) \leq f(G',k)$ for all $k$. In other words, $G \leq G'$ means that, regardless of the total number of chips, if Alice has enough chips to win $G'$ then she also has enough chips to win $G$. Then $G$ and $G'$ are equivalent in this partial order if $f(G,k) = f(G',k)$ for all $k$. The point of this partial order is that if $G \leq G'$, then Alice will always prefer to move to a position $v$ such that $G_v$ is equivalent to $G$ rather than a position $v'$ such that $G_{v'}$ is equivalent to $G'$, regardless of the total number of chips. Let $\A$ be the equivalence class of games in which Alice always wins, and let $\B$ be the equivalence class of games in which Bob always wins. Then $$\A \leq G \leq \B$$ for every game $G$. If $G$ and $G'$ are games with many legal moves available from each position, then it may be very difficult to tell whether $G$ and $G'$ are comparable in this partial order. In practice, we can sometimes compare relatively complicated games $G$ and $G'$ by relating them to games with fewer legal moves. For a positive integer $n$, let $A^n$ be the game in which Alice wins if she makes any of the first $n$ moves, and Bob wins otherwise. Roughly speaking, we think of $A^n$ as a sudden-death game in which Alice has an advantage of order $n$. Similarly, let $B^n$ be the game in which Bob wins if he makes any of the first $n$ moves and Alice wins otherwise. We write $\E$ for the equivalence class of games in which the player with more chips always wins. One game in the class of $\E$ is the game in which the first player to move wins. Then $A^1$ and $B^1$ are both equivalent to $\E$, and $$\A < \cdots < A^3 < A^2 < \E < B^2 < B^3 < \cdots < \B.$$ The games $A^n$ and $B^n$ are useful for comparing positions, since $A^n \leq G$ for any game $G$ in which Bob wins if he makes all of the first $n$ moves, and $G' \leq A^n$ for any game $G$ in which Alice wins if she makes any of the first $n$ moves, and similarly for $B^n$. \[symmetric\] Games that are equivalent to $\E$ in this partial order are not necessarily obviously symmetric. For instance, Tic-Tac-Toe played starting from any of the following six positions is equivalent to $\E$. ![image](symmetric-tttcrop.pdf) From each of these positions, Alice wins if she makes two moves before Bob does, and Bob can prevent Alice from winning if he makes two moves before Alice does. 
It follows that each of these positions is equivalent to $A^2 \wedge B^2$, which is equivalent to $\E$.

Recall that the wedge sum $G \wedge H$ of two games $G$ and $H$ is the game where the first player to move can choose between the starting position for $G$ and the starting position for $H$. Although $G \wedge H$ is the same as $H \wedge G$, when one game is clearly better for each player we write the game that Alice prefers on the left. For example, $A^2 \wedge B^3$ is a game in which Alice can move to a position equivalent to $A^2$ and Bob can move to a position equivalent to $B^3$. This notation is convenient for working with equivalence classes of games and our partial order, since $G \wedge H$ is equivalent to $G' \wedge H$ if $G$ is equivalent to $G'$, and $G \wedge H \leq G' \wedge H'$ if $G \leq G'$ and $H \leq H'$. Simple relations between games are also easily expressible in this notation. For instance, we have equivalences of games $$A^n \equiv \A \wedge A^{n-1} \mbox{ \ \ and \ \ } \E \equiv G \wedge \overline G,$$ for any integer $n \geq 2$ and any game $G$.

In Section \[TTT section\], we completely analyze bidding Tic-Tac-Toe by comparing positions in Tic-Tac-Toe with iterated wedge sums of games $\A$, $\B$, $A^m$ and $B^n$. For instance, we show that Tic-Tac-Toe played from the position ![image](wedgesum-examplecrop.pdf) is equivalent to $(\A \wedge B^2) \wedge \B$.

Bidding Tic-Tac-Toe {#TTT section}
===================

In this section, we use comparisons with bivalent games and the partial order defined in Section \[partial order section\] to completely analyze Tic-Tac-Toe for both real-valued bidding and discrete bidding with arbitrary chip counts. Since Tic-Tac-Toe is symmetric, it suffices to determine whether or not Alice wins Tic-Tac-Toe$(a,b)$ for all possible chip counts $$(a,b) \in (\N \times \N^*) \cup (\N^* \times \N).$$ Indeed, Bob wins Tic-Tac-Toe$(a,b)$ if and only if Alice wins Tic-Tac-Toe$(b,a)$, and the outcome is a tie if and only if neither Alice nor Bob wins. Therefore, we will focus our analysis on the game $\T$ that is played just like Tic-Tac-Toe except that Bob is declared the winner of any game that would normally be a tie. In particular, Alice wins Tic-Tac-Toe if and only if she wins $\T$, so these games have the same critical thresholds.

Optimal moves
-------------

We begin by determining the optimal moves in Tic-Tac-Toe, as much as possible, using the partial order defined in Section \[partial order section\]. This simple approach determines all of the optimal moves except Alice’s optimal move from an empty board, which we reduce to two possibilities. In Section \[TTT tables\] we compute the number of chips that Alice needs to win for every possible chip total for each of these two possibilities, using the recursion from Theorem \[discrete recursion\] for small chip totals and the periodicity results in Theorems \[periodicity for Alice\] and \[periodicity for Bob\] for higher chip totals.

\[stable TTT\] The tree of moves for $\T$ shown in Figure 1 is optimal for real-valued bidding. These moves are also optimal for discrete bidding if the total number of chips is not equal to five. Furthermore, the critical thresholds of the positions reached through these optimal moves are as indicated in the figure. In particular, $$R(\T) = 133/256.$$ For discrete bidding with five chips, a tree of optimal moves is shown in Figure 2, below.

The critical threshold $R(\T) = 133/256$ was computed independently by Theodore Hwa [@Hwa06].
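As an aside, the value $R(\T) = 133/256$ is easy to recompute by machine. The short sketch below is not part of the paper; it assumes the usual Richman averaging recursion for bounded games, as I read the real-valued theory cited above: the value of a position is the average of the value after Alice’s best move and the value after Bob’s best move, with value $0$ once Alice has three in a row and value $1$ once Bob has three in a row or the board is full.

```python
from fractions import Fraction
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def has_line(board, mark):
    return any(all(board[i] == mark for i in line) for line in LINES)

@lru_cache(maxsize=None)
def richman(board):
    """board is a string over 'A', 'B', '.'; Alice plays 'A', Bob plays 'B'."""
    if has_line(board, 'A'):
        return Fraction(0)                     # Alice has already won
    if has_line(board, 'B') or '.' not in board:
        return Fraction(1)                     # Bob wins, and ties go to Bob
    empties = [i for i, c in enumerate(board) if c == '.']
    alice_best = min(richman(board[:i] + 'A' + board[i + 1:]) for i in empties)
    bob_best = max(richman(board[:i] + 'B' + board[i + 1:]) for i in empties)
    return (alice_best + bob_best) / 2

print(richman('.' * 9))   # expected: 133/256
```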
![image](stablegametree.pdf){width="6.25in"} Our general approach to studying $\T$ is to work our way backward from end positions. Knowing which sequence of moves is optimal late in the game greatly simplifies the analysis of earlier positions, by reducing to comparisons with wedge sums of games equivalent to $\A$, $\B$, $A^n$ or $B^n$. For discussing $\T$ and other games, it is useful to have some basic language for describing positions, although we attempt to keep jargon to a minimum. We say that Alice has a *threat* if she can win on the next move, and Bob has a *threat* if he can prevent Alice from winning with his next move. If Alice has a threat from a position $v$ then, in the partial order on games discussed in Section \[partial order section\], $$G_v \leq \E,$$ and if Bob has a threat from $v'$ then $\E \leq G_{v'}$, where $\E$ is the game in which the first player to move wins. We say that Alice has a *double threat* if she can win if she gets either of the next two moves, and Bob has a *double threat* or *triple threat* if he can prevent Alice from winning if he gets any of the next two moves or three moves, respectively. Bob has a triple threat from the position ![image](triplethreatcrop.pdf) since he can prevent Alice from winning if he moves anywhere in the right column. This position is equivalent to $B^3$, since Alice wins if she gets three moves in a row. On the other hand, Alice has a double threat from the position ![image](doublethreatcrop.pdf) but this position is strictly better for Alice than $A^2$, since Bob cannot prevent Alice from winning if he gets two moves in a row. We say that a move is a *block* if it goes from a position in which the opponent has a threat to a position in which the opponent does not have a threat. A move is a *counterattack* if it is a block that also creates a threat. \[counterattack\] If a counterattack is available, then the optimal move must be a block. A counterattack moves to a position that is at least as good as $\E$, and any move that is not a block moves to a position that is no better than $\E$. In Tic-Tac-Toe, if a player has a block available then it is always unique. Therefore, counterattacks are always optimal in Tic-Tac-Toe. Note however, that blocks are not always optimal in Tic-Tac-Toe. See the analysis of Alice’s move from position 3e, below, for an example where blocking is not optimal. Every move in Figure 1 other than Alice’s first move is optimal for all chip counts. There are twenty-nine moves in the stable game tree above, so we must analyze each of these twenty-nine moves. The moves from positions 5a, 5b, 5e, 4a, 4d, 4e, 4f, 3d, and 3e are counterattacks, and by Lemma \[counterattack\], counterattacks are always optimal. We now analyze each of the remaining moves other than Alice’s first move, moving from bottom to top and from left to right in Figure 1. *Alice’s move from 4c:* Wherever Alice moves, Bob can create a double threat by moving top right or bottom left. Therefore, the best game Alice can hope to reach is $\A \wedge B^2$, which she reaches by moving bottom left. *Bob’s move from 4c:* Wherever Bob moves, Alice can win if she gets two moves in a row. Therefore, the best game Bob can hope to move to is $B^2$, which he reaches by moving bottom left. *Alice’s move from 3a:* Wherever Alice moves, Bob can win if he gets two moves in a row by filling either the right column or the bottom row. Therefore, the best game Alice can hope to move to is $A^2$, which she reaches by moving top right. 
*Bob’s move from 3a:* Bob’s move in either corner creates $(\A\wedge B^2) \wedge \B$, which is better for him than $A^2 \wedge \B$ and hence better than any position from which Alice can produce a double threat. If Bob moves on a side, then Alice can create a double threat by moving in a corner. Therefore, Bob’s corner move is optimal. *Bob’s move from 3b:* If Bob does not block, then the best game he can reach is $\E$, which is equivalent to $A^2 \wedge B^2$. Blocking gives $(\A\wedge B^2) \wedge B^2$. Since $\A\wedge B^2$ is strictly better than $A^2$, it follows that blocking is optimal for Bob. *Alice’s move from 3e:* This is an interesting move, since Alice’s optimal strategy is not to block, but instead to create a threat of her own to reach a game equivalent to $\E$. If Alice blocks, then the resulting position is $A^2 \wedge B^3$, since Bob can move lower left to create a triple threat, and this is strictly worse for Alice than $\E$. Of course, if Alice moves in the right column, then Bob has a threat but Alice does not, which is again worse than $\E$. Therefore, moving in the left column is optimal for Alice. *Alice’s move from 3f:* Wherever Alice moves, Bob can produce a double threat by moving in a bottom corner. Therefore Alice’s bottom corner move, which gives $\A\wedge B^2$, is optimal. *Bob’s move from 3f:* Wherever Bob moves, Alice can win if she gets the next three moves in a row. Therefore, Bob’s move in a bottom corner, which produces a triple threat, is optimal. *Bob’s move from 2a:* If Bob does not block, then he must get the next move as well. And if blocking is not optimal, then his optimal next move must also be nonblocking (since otherwise he may as well have blocked the first time). If Bob makes two moves without blocking, then the best he can do is create a symmetric threat, reaching $\E$. However, if he does block, then in two moves he can reach a position in which he has a threat and Alice does not, which is strictly better than $\E$. Therefore, blocking is optimal. *Alice’s move from 2b:* Suppose Alice moves upper right or lower left. Then Bob’s next move must be a counterattack, and Alice’s move after that must be a counterattack as well, and the resulting position after five moves is the one labeled 5a in the game tree. Therefore either of these moves leads to $\A\wedge (TTT_{5a} \wedge \B)$. Suppose Alice moves lower right. Then Bob can move upper right forcing Alice to counterattack in the top center, leading to position 5a. Since this move does not create a threat, it is strictly worse than $\A\wedge (TTT_{5a} \wedge \B)$. Finally, suppose Alice moves top or left. Then we have seen in the analysis of position 4c that Bob’s optimal response is to block. It follows that moving top or left leads to $\A\wedge (TTT_{5b} \wedge B^2)$. Since $TTT_{5a}$ and $TTT_{5b}$ are both equivalent to $\A\wedge B^2$, it follows that any move other than lower right is optimal. In particular, moving top, as shown in the game tree, is optimal. *Bob’s move from 2b:* If Bob moves top or left, then the resulting position is equivalent to $(\A\wedge B^2) \wedge \B$. Suppose Bob moves upper right or lower left. Then Alice must counterattack. No matter what Bob does next, Alice will still be able to win if she gets the next two moves. Therefore, this is no better for Bob than $(\A\wedge B^2) \wedge \B$. Finally, suppose Bob moves lower right. Then Alice can move in the top, forcing Bob to counterattack, and again Alice can win if she gets the next two moves. 
So this is still no better than $(\A\wedge B^2) \wedge \B$. Therefore, moving top or left is optimal. *Alice’s move from 2c:* If Alice moves top or left, a series of counterattacks ensues, as shown in the stable game tree. This position gives $\A\wedge \big( (\A\wedge B^2) \wedge \B \big)$. If Alice moves upper right or lower left, then Bob counterattacks and then best Alice can do in response is to reach a symmetric position. Therefore, these moves give $\A \wedge ( \E \wedge \B)$ which is strictly worse for Alice than moving top or left. Finally, suppose Alice moves lower right. Then Bob can move on a side and force a series of counterattacks. After such a move from Bob, this position is then $(\A\wedge B^2) \wedge \B$, which is the same as if Alice moves top or left and then Bob counters. However, since lower right does not create a threat for Alice, this is strictly worse than moving top or left, and it follows that top and left are Alice’s only optimal moves. *Bob’s move from 2c:* Wherever Bob moves, Alice can create a threat on the next move. Therefore, the best game Bob can hope to reach is $\E \wedge \B$, which he gets to by moving top or left. *Alice’s move from 2d:* If Alice does not block, then she must get two moves in a row, and if she gets two moves in a row then the best game she can reach without blocking is $\E$. Since she can reach position 4g in two moves, which is strictly better than $\E$, it follows that blocking is her only optimal move. *Alice’s move from 1a:* If Alice moves in a corner, then the resulting game is $\A\wedge (A^2 \wedge TTT_{3a})$. We must show that Alice cannot do better by moving on the top. Suppose Alice moves on the top. Then Bob can block on the bottom. If Bob gets the next move, then he can force a series of counterattacks, guaranteeing that this is no worse for him than 3a. Therefore, it will suffice to show that if Alice gets the next move after Bob blocks, then the best she can reach is a game equivalent to $A^2$. If Alice moves in the top row, then she has a double threat and Bob can win if he gets two moves in a row, so this is exactly $A^2$. If Alice moves in the middle row, then Bob can block, reaching a position ![image](Alice1a-1crop.pdf) that is symmetric by Example \[symmetric\]. And if Alice moves in the bottom row, then Bob can block, reaching a position ![image](Alice1a-2crop.pdf) that is symmetric by Example \[symmetric\]. It follows that Alice cannot do better than moving in a corner. *Bob’s move from 1a:* If Bob moves in a corner, then the resulting game is $(\A \wedge TTT_{4c}) \wedge (TTT_{4d} \wedge \B)$. We must show that Bob cannot do better by moving on top. Suppose Bob moves on top and Alice wins the next move. Then Alice can move on the left side, and we claim that the resulting position is equivalent to $\A \wedge \E$. If Bob does not block, then the best he can reach is a symmetric position. If Bob does block, then the resulting position is still symmetric, since Alice’s best move is bottom left, which gives $A^2$, and Bob’s best move is top right, which gives $B^2$. Now $\A \wedge \E$ is strictly worse for Bob than $\A \wedge TTT_{4c}$, which results if Bob moves in a corner and Alice wins the next move. Suppose Bob moves on top and gets the next move as well. The only way he can do better than starting in a corner is by moving on a side again. Suppose he moves left on his second move. Then Alice can move in the upper left, and Bob’s counterattack leads to a position $B^2$. 
If Bob moves on the bottom on his second move, then Alice again moves in the upper left, and Bob’s counterattack is worse for him than $B^2$. It follows that after making two consecutive moves on sides, Bob’s position is strictly worse than the position $(\A \wedge B^2) \wedge \B$ that he would reach by moving first in a corner and then on an adjacent side. Therefore, the corner move from position 1a is optimal for Bob. *Alice’s move from 1b:* Suppose Alice moves on the top instead. If Alice gets the following move as well, she could only do better by moving on another side, either left or bottom. If she plays on the left, then Bob can move upper left, and Alice will need to block in order to win. Therefore, Alice blocks and Bob’s next move creates $B^2$. In other words, after Alice moves top and left, the position is strictly worse for Alice than $\A \wedge \big( (\A \wedge B^2) \wedge \B \big)$. On the other hand, if Alice moves top and then bottom, then Bob can move left and the best Alice can do is to reach the position ![image](Alice1b-2crop.pdf) that is symmetric by Example \[symmetric\]. Therefore, if Alice moves top and then bottom, the resulting position is strictly worse for Alice than $\A \wedge (\E \wedge \B)$. Both of these situations are worse than the position 3d, which is equivalent to $\A \wedge \big( (\A \wedge B^2) \wedge \B \big)$, which Alice gets to by playing top left and then top. Next, suppose Alice moves on top and Bob makes the next move. Bob can play top right, and then Alice needs to take the lower left corner to have any chance of winning. So Alice blocks, and reaches a position ![image](Alice1b-1crop.pdf) that is symmetric, by Example \[symmetric\]. In particular, if Alice moves top and then Bob makes the next move, the result is no better for Alice than $\E \wedge \B$. It follows that Alice cannot do better than reaching the position $\T_{4d} \wedge (\E \wedge \B)$ that she gets by playing in the corner. *Bob’s move from 1b:* Suppose Bob moves in the upper left corner instead. Then Alice can block in the lower right. If Bob gets the next move, he can do no better than to reach $B^2$, while Alice can move on the bottom to reach a position $\A \wedge B^2$. Therefore, if Bob moves in a corner, his position is no better than $\big( (\A \wedge B^2) \wedge B^2 \big) \wedge \B$, which is worse for him than $\T_{2d}$, which is equivalent to $\big((\A \wedge B^2) \wedge B^3 \big) \wedge \B$. *Bob’s first move:* Suppose Bob makes the first move, but not in the center. If Alice makes the second move, she can go in the center. By our analysis of Bob’s move from 1a, ![image](Bob1b-1crop.pdf) and by inspection of the tree of optimal moves from positions 2b and 2c in Figure 1, it is straightforward to check that $TTT_{2c} \geq TTT_{2b}$. Therefore, if Alice makes the second move, Bob would have done best to move in the center. Suppose Bob makes the second move as well. If Bob’s optimal first move is not in the center, then his optimal second move must be not in the center as well. Suppose Bob starts with any of the following two moves: ![image](Bob1b-2crop.pdf) Then Alice can respond by moving in the center. After this, Bob can do no better than to create a double threat on his next move. Therefore each of the four positions above is no better than $\big( (\A \wedge B^2) \wedge B^2 \big) \wedge \B$. 
Similarly, if Bob starts with ![image](Bob1b-3crop.pdf) and then Alice blocks, the resulting position is still no better for Bob than $\big( \A \wedge B^2) \wedge B^2 \big) \wedge \B$. Finally, if Bob starts with ![image](Bob1b-4crop.pdf) then Alice can block. On the next move, Bob can do no better than to move in the center and create a $B^3$, while Alice can move in the center to create a position that is worse for Bob than $\A \wedge B^2$, since Alice can win on the next move, but Bob cannot create a double threat. In particular, all of these scenarios are worse for Bob than $\big( \A \wedge B^2) \wedge B^3 \big) \wedge \B$, which is equivalent to $TTT_{2d}$. It follows that Bob’s only optimal first move is in the center. \[Alice’s first two moves\] If center is not an optimal first move for Alice then Alice’s optimal first two moves are in adjacent corners. Suppose Alice makes an optimal first move, but not in the center. If Bob moves second then he can move in the center. However, by our analysis of Alice’s move from position 1b, and by inspection of the trees of optimal moves from positions 2b and 2c, we have ![image](Alicenotcorner-1crop.pdf) Therefore, Alice must reach a position better than 2a by making the first two moves, neither in the center. We consider all possible positions that Alice can reach after making the first two moves. If Alice’s first two moves are any of the following ![image](Alicenotcorner-2crop.pdf) then Bob moves in the center. Then Alice cannot reach a position better than $A^2$ on her next move, while Bob can move in either the top left corner or the top side to reach a position that is worse for Alice than 4b $\equiv (\A \wedge B^2) \wedge \B$. Similarly, if Alice makes her first two moves ![image](Alicenotcorner-3crop.pdf) then Bob can block. After Bob blocks, if Alice moves next she can get no better than $A^2$, while Bob can move in the center to reach position 4e, which is equivalent to 4b. In particular, none of these scenarios is better for Alice than the positions that she would reach by starting with 2a, and the only remaining possibility is that Alice could make her first two moves in adjacent corners. This proves the lemma. Suppose center is not an optimal first move for Alice, for some fixed total number of chips. Then every move in Figure 2 is optimal. ![image](fivechipgametree.pdf){width="4.5in"} Bob’s move from 5a$'$ is a counterattack, Alice’s first move and her move from 1a$'$ are optimal by Lemma \[Alice’s first two moves\], and Bob’s first move is optimal by Theorem \[stable TTT\]. We analyze the remaining six moves as follows. *Bob’s move from 4a$'$:* If Bob does not block one of Alice’s threats then he must win the following two moves as well. However, if he gets three moves in a row then he can also win by blocking with the first one. *Alice’s move from 4b$'$:* Wherever Alice moves, the resulting position is equivalent to $\E$. *Alice’s move from 3a$'$:* Alice’s move in the center creates a position equivalent to $\A \wedge (\A \wedge B^2)$. If Alice plays anywhere else, then Bob moves in the center to create a position that is no better for Alice than $\E$, and hence worse than $\A \wedge B^2$. Indeed, if Alice does not move in the center column, then Bob’s move in the center creates a threat, and if Alice moves in the bottom then Bob’s move in the center creates a position ![image](Alice3axcrop.pdf) that is equivalent to $\E$ by Example \[symmetric\]. 
*Bob’s move from 3a$'$:* Moving center creates a position equivalent to $B^2$, or $\E \wedge \B$. If Bob moves anywhere else, then Alice can move center to reach a position in which she has a threat and Bob does not, which is worse for Bob than $\E$.

*Bob’s move from 2a$'$:* If Bob does not block then he must get the next move as well. If he blocks on the next move, he may as well have blocked the first time, and if he doesn’t block on his second move, then he has to get a third move in a row. With three moves in a row, he can win after blocking with his first move.

*Bob’s move from 1a$'$:* If Bob plays anywhere other than center or bottom right then Alice can play in the center to reach a position that is no better for Bob than 3d. Meanwhile, if Bob gets two moves in a row he cannot prevent Alice from winning if she gets the next two moves. Therefore center is better than any other option except possibly bottom right. Suppose Bob plays bottom right. Then Alice can play top right and the best Bob can do is to block to reach a position ![image](Bob1axcrop.pdf) that is equivalent to $\E$ (Example \[symmetric\]). Again this is worse for Bob than if he had played center.

Chip tables {#TTT tables}
-----------

Having determined all of the optimal moves for bidding Tic-Tac-Toe, except for Alice’s first move which we have reduced to two possibilities, we now compute the total number of chips that Alice needs to win from each position that can be reached by these optimal moves. These computations are straightforward applications of the recursion given by Theorem \[discrete recursion\] and the periodicity results in Theorems \[periodicity for Alice\] and \[periodicity for Bob\]. As a consequence of these computations, we find that Alice’s optimal first move depends on the total number of chips, as follows.

\[final TTT\] Center is an optimal first move for Alice if and only if the total number of chips is not equal to five. Corner is an optimal first move for Alice if and only if the total number of chips is 0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 12, 13, 14, 19, 20, 22, or 26.

We prove the theorem by explicitly computing the number of chips that Alice needs to win assuming a first move in the center or a first move in the corner, for all possible chip totals. Let $f'(TTT,k)$ be the number of chips that Alice needs to win, assuming that she makes her first move in the center if she wins the first bid, and $f''(TTT,k)$ be the number of chips that Alice needs to win, assuming that she makes her first move in a corner if she wins the first bid.
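The tables below were produced by working through the recursion, but entries of this kind can also be generated mechanically. The following fragment is only a sketch of the bookkeeping, not code from the paper: it assumes some solver `alice_wins(board, a, b, alice_has_star)` for discrete bidding Tic-Tac-Toe (for instance, an exists/forall bid search in the style of the Tug o’ War sketch earlier, with Alice’s first move constrained to the center for $f'$ or to a corner for $f''$), and the name `empty_board` is a placeholder for the starting position.

```python
def critical_threshold(alice_wins, empty_board, total):
    """Smallest holding in N ∪ N* with which Alice wins when `total` ordinary
    chips are in play; holdings are ordered 0 < 0* < 1 < 1* < ...
    `alice_wins` is an assumed solver, not something defined in the paper."""
    for a in range(total + 1):
        # a plain chips for Alice; Bob holds the remaining chips and the * chip
        if alice_wins(empty_board, a, total - a, False):
            return (a, False)
        # a chips plus the * chip for Alice
        if alice_wins(empty_board, a, total - a, True):
            return (a, True)
    return None   # Alice cannot win even holding every chip

# A row of Figure 3 or 4 is then essentially
#   [critical_threshold(solver, empty_board, k) for k in range(k0, k0 + 12)]
# and the periodicity results reduce the whole table to finitely many k.
```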
.5cm [|r||\*[12]{}[l|]{}]{} $256n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ 0+&1&1&1\*&2\*&2\*&3\*&4&4&4\*&5\*&5\*&6\*\ 12+&7&7&8&8\*&9&9&10&10\*&11&11\*&12\*&12\*\ 24+&13&13&14&14\*&15&15\*&16&17&17\*&17\*&18&18\*\ 36+&19\*&20&20\*&20\*&21\*&21\*&22&23&23\*&24&24\*&25\ 48+&25\*&26&26&27&27\*&28\*&28\*&29&29\*&30&30\*&31\*\ 60+&31\*&32&33&33&34&34\*&34\*&35\*&36&36\*&37&37\*\ 72+&37\*&38\*&39&40&40&40\*&41&41\*&42&42\*&43&44\ 84+&44\*&44\*&45\*&45\*&46&46\*&47\*&48&48\*&48\*&49&50\ 96+&50\*&51&51\*&52&53&53&53\*&53\*&54\*&55&55\*&56\ 108+&57&57&57\*&58&59&59&59\*&60\*&60\*&61\*&62&62\ 120+&62\*&63\*&63\*&64\*&65&65&66&66\*&67\*&67\*&68&69\ 132+&69&70&70\*&70\*&71&72&72&73&73\*&73\*&74\*&75\ 144+&75\*&75\*&76\*&77&77\*&78&79&79&79\*&79\*&80\*&81\ 156+&81\*&82&82\*&83\*&84&84&84\*&85&86&86\*&87&87\ 168+&88&88&88\*&89\*&90&90\*&91&91\*&92&92\*&92\*&93\*\ 180+&94&95&95&95\*&96&96\*&97&98&98&98\*&99\*&99\*\ 192+&100\*&101&101&102&102\*&103&103\*&104&104&105&105\*&106\*\ 204+&106\*&107&107\*&108&108\*&109&109\*&110\*&111&111&112&112\ 216+&112\*&113&114&114\*&115&115&115\*&116\*&117&117\*&118&118\*\ 228+&119\*&119\*&120&120&121&121\*&122&122\*&123\*&123\*&124&124\*\ 240+&125\*&125\*&126&127&127&128&128\*&128\*&129&130&130&131\ 252+&131\*&131\*&132\*&133&&&&&&&&\ .5cm *Figure 3: Critical thresholds if Alice moves in the center; $f'(TTT,256n + \ ) = 133n + \ \ $.* The entries in the interior of the preceding table are critical thresholds, while the position of the entry determines the total number of chips. For instance, the last entry in the second row is 12\*, which means that the critical threshold $f'(TTT, 256n+23)$ is $133n + 12^*$, for any nonnegative integer $n$. [|r||\*[12]{}[l|]{}]{} $256n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ 0+&1&1&1\*&2\*&2\*&3&4&4&5&5\*&6&6\*\ 12+&7&7&8&9&9\*&9\*&10\*&10\*&11&12&12\*&13\ 24+&13\*&13\*&14&15&15\*&16&16\*&17\*&18&18&18\*&19\*\ 36+&20&20\*&21&21\*&22&22\*&23&23\*&24&24\*&25&26\ 48+&26\*&27&27&27\*&28\*&29&29\*&30&31&31&31\*&32\*\ 60+&32\*&33&34&34&35&35\*&35\*&36\*&37&37&38&38\*\ 72+&39&39\*&40\*&41&41&41\*&42&43&43\*&44&44\*&45\ 84+&45\*&46&46\*&47&47\*&48&48\*&49\*&50&50&50\*&51\*\ 96+&52&52\*&53&54&54\*&54\*&55&55\*&56&57&57\*&57\*\ 108+&58\*&58\*&59&60&61&61&61\*&62&62\*&63&64&64\ 120+&65&65\*&65\*&66\*&67&67&68&68\*&69\*&69\*&70&71\ 132+&71&71\*&72\*&72\*&73\*&74&74\*&75&75\*&75\*&76\*&77\*\ 144+&78&78&79&79&79\*&80\*&81&81\*&82&82&82\*&84\*\ 156+&84&84\*&85&86&86\*&86\*&87&88&88\*&89&89\*&90\ 168+&90\*&91&91\*&92&92\*&93&93\*&94\*&95&95\*&95\*&96\ 180+&97&97\*&98&98\*&99\*&99\*&100&101&101&101\*&102\*&102\*\ 192+&103\*&104&104&105&105\*&105\*&106\*&107&107\*&108&109&109\*\ 204+&109\*&110&110\*&111\*&112&112\*&113&113\*&114&114\*&115&115\*\ 216+&116&116\*&117&118&118\*&118\*&119&120&120\*&121&121\*&122\*\ 228+&123&123&123\*&124&124\*&125\*&126&126&127&127&127\*&128\*\ 240+&129\*&129\*&130&130\*&131&131\*&132\*&132\*&133\*&134&134&135\ 252+&135\*&135\*&136\*&137&&&&&&&&\ .5cm *Figure 4: Critical thresholds if Alice moves in the corner; $f''(TTT,256n + \ ) = 137n + \ \ $.* It is straightforward to compare the tables in Figures 3 and 4 to see that $f'(TTT, k)$ is greater than or equal to $f''(TTT, k)$ unless $k =5$, with equality only for $k = 0$, 1, 2, 3, 4, 6, 7, 9, 11, 12, 13, 14, 19, 20, 22, or 26, which proves Theorem \[final TTT\]. These tables are computed by working backwards from ending positions. For completeness, we include tables of critical thresholds for the other positions in the trees of possible optimal moves in Figures 1 and 2. 
Postions $4f$ and $5b'$ are equivalent to $\E$, position $4a$ is equivalent to $A^2$, positions $3e$, $5c$, $5d$, $5f$, $6a$, $6b$, $6c$, and $4b'$ are equivalent to $B^2$; see Example \[simple tables\] for tables of critical thresholds for these positions. The critical thresholds for the remaining positions are as follows. Positions $5a$, $5b$, $5e$, $4d$, $4g$, and $5a'$ are all equivlent to $\A \wedge B^2$. 10 pt [|r||\*[8]{}[l|]{}]{} $k=8n+$&0&1&2&3&4&5&6&7\ $f(\A \wedge B^2, k) = 3n + $&0\*&0\*&1&1\*&2&2&2\*&3\ Positions $4b$, $4e$, $3c$ and $5a'$ are equivalent to $(\A \wedge B^2) \wedge \B$. From each such position $v$, we have $f( TTT_v, 16n+\ ) = 11n + \ :$ 10 pt [|r||\*[16]{}[l|]{}]{} $16n+$&0&1&2&3&4&5&6&7\ 0+&1&1\*&2&3&3\*&4&5&5\*\ 8+&6\*&7&7\*&8\*&9&9\*&10\*&11\ Position $4c$ is equivalent to $(\A \wedge B^2) \wedge B^2$, and $f(TTT_{4c}, 16n + \ ) = 9n + \ :$ [|r||\*[8]{}[l|]{}]{} $16n+$&0&1&2&3&4&5&6&7\ 0+&1&1&1\*&2\*&3&3&4&4\*\ 8+&5\*&5\*&6&7&7\*&7\*&8\*&9\ Position $4h$ is equivalent to $B^3$. [|r||\*[8]{}[l|]{}]{} $k=8n+$&0&1&2&3&4&5&6&7\ $f(B^3,k) = 7n+$&1&2&3&3\*&4\*&5\*&6\*&7\ From position $3a$, we have $f(TTT_{3a}, 32n+ \ ) = 15n + \ :$ [|r||\*[12]{}[l|]{}]{} $32n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ 0+&0\*&1&1\*&2&2&3&3\*&3\*&4&5&5&5\*\ 12+&6&6\*&7&7\*&8&8\*&9&9\*&9\*&10\*&11&11\ 24+&11\*&12\*&12\*&13&13\*&14&14\*&15&&&&\ From position $3b$, we have $f(TTT_{3b}, 32n + \ ) = 9n + \ $: [|r||\*[12]{}[l|]{}]{} $32n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ $0+$&0\*&0\*&0\*&1&1\*&1\*&2&2&2\*&2\*&3&3\*\ 12+&3\*&3\*&4&4\*&5&5&5&5\*&6&6&6\*&6\*\ 24+&7&7&7\*&8&8&8&8\*&9&&&&\ From position $3d$, we have $f(TTT_{3d}, 32n + \ ) = 11n + \ :$ [|r||\*[12]{}[l|]{}]{} $32n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ 0+&0\*&0\*&1&1\*&1\*&2&2\*&2\*&3&3\*&3\*&4\ 12+&4\*&4\*&5&5\*&6&6&6\*&7&7&7\*&8&8\ 24+&8\*&9&9&9\*&10&10&10\*&11&&&&\ Position $3f$ is equivalent to $(\A \wedge B^2) \wedge B^3$. 
[|r||\*[8]{}[l|]{}]{} $k=8n+$&0&1&2&3&4&5&6&7\ $f((\A \wedge B^2) \wedge B^3, k) = 5n +$ &1&1\*&2&2\*&3&3\*&4\*&5\ From position $2a$, we have $f(TTT_{2a}, 64n + \ ) = 15n + \ $: [|r||\*[12]{}[l|]{}]{} $64n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ 0+&0&0\*&0\*&1&1&1\*&1\*&1\*&2&2\*&2\*&2\*\ 12+&3&3&3\*&3\*&4&4&4\*&4\*&4\*&5&5\*&5\*\ 24+&5\*&6&6&6\*&6\*&7&7&7\*&7\*&8&8&8\*\ 36+&8\*&9&9&9&9\*&10&10&10&10\*&10\*&11&11\ 48+&11\*&11\*&12&12&12&12\*&13&13&13&13\*&13\*&14\ 60+&14&14\*&14\*&15&&&&&&&&\ From position $2b$, we have $f(TTT_{2b}, 64n+ \ ) = 31n + \ $: [|r||\*[12]{}[l|]{}]{} $64n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ 0+&1&1&1\*&2&2\*&3&3\*&3\*&4\*&5&5&6\ 12+&6\*&6\*&7&8&8\*&8\*&9&10&10&10\*&11\*&11\*\ 24+&12&12\*&13&13\*&14&14&15&15\*&16\*&16\*&17&17\*\ 36+&18&18\*&19&19&20&20\*&20\*&21\*&22&22&22\*&23\*\ 48+&24&24&24\*&25\*&25\*&26&27&27&27\*&28&28\*&29\ 60+&29\*&29\*&30\*&31&&&&&&&&\ From position $2c$, we have $f(TTT_{2c}, 64n + \ ) = 35n + \ $: [|r||\*[12]{}[l|]{}]{} $64n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ 0+&1&1&1\*&2\*&3&3&4&4\*&5&5\*&6&6\*\ 12+&7\*&7\*&8&9&9\*&9\*&10\*&11&11\*&12&12\*&13\ 24+&14&14&14\*&15\*&16&16&17&17\*&18\*&18\*&19&20\ 36+&20\*&20\*&21\*&22&22\*&23&23\*&24&25&25&25\*&26\*\ 48+&27&27&28&28\*&29&29\*&30&30\*&31\*&31\*&32&33\ 60+&33\*&33\*&34\*&35&&&&&&&&\ From position $2d$, we have $f(TTT_{2d}, 16n + \ ) = 13n + \ $: [|r||\*[8]{}[r|]{}]{} $16n+$&0&1&2&3&4&5&6&7\ 0+&1&2&2\*&3\*&4&5&6&6\*\ 8+&7\*&8\*&9&10&10\*&11\*&12\*&13\ From position $1a$, we have $f(TTT_{1a}, 64n + \ ) = 23n + \ $: [|r||\*[12]{}[l|]{}]{} $64n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ 0+&0\*&1&1&1\*&1\*&2\*&2\*&2\*&3&4&4&4\*\ 12+&4\*&4\*&5\*&6&6&6&7&7\*&7\*&7\*&8\*&8\*\ 24+&9&9&9\*&10&10\*&10\*&11&11\*&12&12&12\*&13\ 36+&13\*&13\*&14&14&15&15&15&15\*&16\*&16\*&16\*&17\ 48+&18&18&18&18\*&18\*&19\*&20&20&20&21&21&21\*\ 60+&21\*&22&22\*&23&&&&&&&&\ From position $1b$, we have $f(TTT_{1b}, 128n + \ ) = 87n + \ $: [|r||\*[12]{}[l|]{}]{} $128n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ 0+&1&1\*&2&3&3\*&4&5&5\*&6&7&7\*&8\*\ 12+&9&9\*&10&11&12&12\*&13&13\*&14\*&15&16&16\ 24+&17&17\*&18\*&19\*&19\*&20&21&22&23&23\*&23\*&24\*\ 36+&25\*&26&27&27&28&28\*&29\*&30&30\*&31&32&33\ 48+&33\*&34&34\*&35\*&36&37&37\*&38&39&39\*&40&41\ 60+&41\*&42&43&43\*&44\*&45&45\*&46\*&47&47\*&48\*&49\ 72+&49\*&50\*&51&52&52\*&53&53\*&54\*&55\*&56&56\*&57\ 84+&58&58\*&59\*&59\*&60\*&61&62&63&63&63\*&64\*&65\*\ 96+&66\*&67&67&68&69&69\*&70\*&70\*&71\*&72&73&73\*\ 108+&74&74\*&75\*&76\*&77&77\*&78&79&79\*&80\*&81&81\*\ 120+&82\*&83&83\*&84\*&85&85\*&86\*&87&&&&\ From position $4a'$, we have $f(TTT_{4a'}, 16n + \ ) = 3n+ \ $: [|r||\*[8]{}[r|]{}]{} $16n+$&0&1&2&3&4&5&6&7\ 0+&0&0&0\*&0\*&1&1&1&1\*\ 8+&1\*&1\*&2&2&2\*&2\*&2\*&3\ From position $3a'$, we have $f(TT_{3a'}, 32n + \ ) = 15n + \ $: [|r||\*[12]{}[l|]{}]{} $32n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ 0+&0\*&0\*&1\*&2&2\*&2\*&3&4&4\*&4\*&5&5\*\ 12+&6\*&6\*&7&7\*&8&8&9&9\*&10&10&10\*&11\*\ 24+&12&12&12\*&13&14&14&14\*&15&&&&\ From position $2a'$, we have $f(TTT_{2a'}, 64n + \ ) = 15n + \ $: [|r||\*[12]{}[l|]{}]{} $64n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ 0+&0&0&0\*&1&1&1&1\*&2&2&2&2\*&2\*\ 12+&3&3&3\*&3\*&4&4&4\*&4\*&5&5&5&5\*\ 24+&6&6&6&6\*&7&7&7&7\*&7\*&7\*&8&8\*\ 36+&8\*&8\*&9&9\*&9\*&9\*&10&10&10\*&10\*&11&11\ 48+&11\*&11\*&12&12&12\*&12\*&12\*&13&13\*&13\*&13\*&14\ 60+&14\*&14\*&14\*&15&&&&&&&&\ From position $1a'$, we have $f(TTT_{1a'}, 64n + \ ) = 25n + \ $: [|r||\*[12]{}[l|]{}]{} $64n+$&+0&+1&+2&+3&+4&+5&+6&+7&+8&+9&+10&+11\ 
0+&0\*&0\*&1&1\*&2&2&3&3&3\*&3\*&4\*&4\*\ 12+&5&5&6&6\*&6\*&6\*&7\*&8&8&8\*&8\*&9\*\ 24+&10&10&10&11&11\*&11\*&12&12\*&13&13&13\*&14\*\ 36+&14\*&14\*&15&16&16&16\*&16\*&17&18&18&18&18\*\ 48+&19\*&19\*&20&20&21&21&21\*&21\*&22\*&22\*&23&23\*\ 60+&24&24&24\*&25&&&&&&&&\ This concludes our analysis of bidding Tic-Tac-Toe. Appendix: other tie-breaking methods ==================================== Throughout this paper we have used the tie-breaking method introduced in Section \[tie-breaking section\], in which the player who holds the tie-breaking chip can either give the chip to his opponent to win a tie, or keep the tie-breaking chip and lose the tie. We chose this method because it seemed simple and natural, with the tie-breaking advantage passing back and forth between the players just like the ordinary bidding chips, and because it has the following convenient properties. 1. It is always advantageous to have the tie-breaking chip (Lemma \[\* is an advantage\]). 2. The tie-breaking chip is worth less than an ordinary chip (Lemma \[1&gt;\*\]). Other natural tie-breaking methods are possible, and here we briefly consider a few alternatives. **Loser’s Ball (the $\epsilon$-chip)**. We could make a rule that the player who loses one bid wins any tie on the next bid. One way to think about this tie-breaking method is by introducing an *$\epsilon$-chip* whose value is strictly between zero and one. The player who holds the $\epsilon$-chip is required to bid it, so the bids are never tied. Whichever player loses the bid takes all of the chips that were bid, and hence has the $\epsilon$-chip for the next round. This method has the mildly unpleasant feature that holding the $\epsilon$-chip is usually but not always an advantage. For instance, in the game where the second player to move wins, if both players start with an equal number of chips then the player who starts without the $\epsilon$-chip wins. In particular, the possible chip counts in $\N \cup \N + \epsilon$ with respect to this tie-breaking method are not totally ordered. **Make-it Take-it (the $-\epsilon$-chip).** A better idea than Loser’s Ball is to make a rule that the player who wins one bid also wins any ties on the next bid. This method is satisfying, since it penalizes players for losing bids and often leads to taunting. One way to think about this tie-breaking method is by introducing a *$-\epsilon$ chip* whose value is strictly between zero and minus one. The player who holds the $-\epsilon$-chip is required to bid it, so the bids are never tied. Whichever player loses the bid takes the $-\epsilon$-chip for the next round, and hence loses any tie. Arguments similar to those given in Section \[general theory\] show that it is always an advantage *not* to have the $-\epsilon$-chip and that it is always a good idea to accept the $-\epsilon$-chip from your opponent together with an ordinary chip. Therefore, the possible chip counts in $\N \cup \N - \epsilon$ are totally ordered, and results analogous to those in Section \[general theory\] go through without major changes, except that the analogue of the recursion in Theorem \[discrete recursion\] is given by $$f(G_v,k) = \left \lfloor \frac{ | f_A(G_v, k) | + | f_B(G_{v},k) | }{ 2 } \right \rfloor + \delta,$$ where $$\delta = \left \{ \begin{array}{ll} 0 & \mbox{ if } |f_A(G_v,k)| + |f_B(G_{v},k)| \mbox{ is even, and } f_B(G_v, k) \in \N. \\ -\epsilon & \mbox { if } |f_A(G_v,k)| + |f_B(G_{v},k)| \mbox{ is even, and } f_B(G_v, k) \in \N - \epsilon. 
\\ 1-\epsilon & \mbox{ if } |f_A(G_v,k)| + |f_B(G_{v},k)| \mbox{ is odd, and } f_A(G_v, k) \in \N. \\ 0 & \mbox{ if } |f_A(G_v,k)| + |f_B(G_{v},k)| \mbox{ is odd, and } f_A(G_v, k) \in \N - \epsilon. \\ \end{array} \right.$$ The main disadvantage to the Make-it Take-it tie-breaking method seems to be that this description involves a lot of minus signs. **Ladies First.** Suppose Alice wins all ties. The longer Alice and Bob play, the more the effects of Alice’s advantage accumulate, so we should expect that Bob will have trouble if the game goes too long. Let $G$ be the game that Alice wins if she wins the first move, or both of the next two moves, or all of the next three moves, or the $i$-th move for all $n(n-1)/ 2 < i \leq n(n+1) / 2$, for any $n$. Then Bob cannot win, but we can hope to prolong the game into an infinite draw. If $G$ is truncated after $n(n+1)/2$ moves, then the critical threshold is $$R(G[n(n+1)/2]) = \prod_{j = 1}^n \big(1 - \frac{1}{2^j}\big).$$ In particular, the critical threshold for Richman play, with real-valued bidding is given by the infinite product $R(G) = \prod_{j > 0} (1 - 1/2^j)$, which converges to a positive number. However, for discrete bidding with Ladies First tie-breaking, Alice wins no matter what the chip count. Alice can just bid zero indefinitely. Bob will have to give her a chip sometime between the moves numbered $n(n-1)/2$ and $n(n+1)/2$ for every $n$, until he runs out of chips. When Bob runs out of chips, Alice keeps bidding zero and winning all ties, so she eventually wins. In particular, Theorem \[below the threshold\] does not extend to locally finite games with Ladies First tie-breaking. [^1]: Supported by the Clay Mathematics Institute.
{ "pile_set_name": "ArXiv" }
---
abstract: 'In this paper we present a simple but non-simplistic model of gravitational collapse with thermal emission of pre-Hawking radiation. We apply the Einstein equations to a time-dependent spherically symmetric metric and an ultrarelativistic stress-energy tensor. In our model, particles either radially approach the center of the star as collapsing matter, or radially flee from it.'
author:
- Miguel Piñol Ribas
title: 'A model for ultrarelativistic spherically symmetric Pre-Hawking radiating gravitational collapse'
---

The Model
=========

The most general metric for a spherically symmetric collapse presents the following structure: $$d\tau^2 = e^{\nu} dt^2 - e^{\lambda} dr^2 - r^2 d {\Omega} ^2,$$ where $\nu$ and $\lambda$ are functions of the radius $r$ and the time $t$. It will be useful to define also a function $\phi$, so that $$\nu = - \lambda - 2\phi$$ and $$d\tau^2 = e^{- \lambda - 2\phi} dt^2 - e^{\lambda} dr^2 - r^2 d {\Omega} ^2,$$ as it may be demonstrated \[Lan\] that $\phi \geq 0$ throughout space (it is strictly equal to zero in the “outer space”, outside the edge of the collapsing star, where the metric is identical to the Schwarzschild metric). In addition, $\phi' \leq 0$ throughout space (we denote by $x'$ differentiation with respect to $r$, and by $\dot x$ differentiation with respect to $t$, throughout the paper).

The Einstein equations
----------------------

The Einstein equations for a metric of this kind are the following: $${8\pi} T_{1}^{1} = - e^{-\lambda} \left( \frac{\nu'} {r} + \frac {1} {r^2} \right) +\frac{1}{r^2},$$ $${8\pi} T_{2}^{2} = {8\pi} T_{3}^{3} = -\frac{1}{2} e^{-\lambda} \left( \nu'' + \frac{\nu'^2}{2} + \frac{\nu'-\lambda'}{r} - \frac{\nu'\lambda'}{2} \right) + \frac{1}{2} e^{-\nu} \left(\ddot{ \lambda} + \frac{\dot{\lambda}^2}{2} -\frac{ \dot{\lambda} \dot{\nu}}{2} \right)$$ $${8\pi} T_{0}^{0} = - e^{-\lambda} \left( \frac{1} {r^2} - \frac {\lambda'} {r} \right) +\frac{1}{r^2},$$ $${8\pi} T_{0}^{1} = - e^{-\lambda} \frac{\dot\lambda} {r}.$$ The other components vanish identically.

The stress-momentum tensor
--------------------------

The stress-momentum tensor for a perfect fluid takes a simple form, $$T_{\beta}^{\alpha} = g_{\beta \delta} \left( \rho + p \right) u^\alpha u^\delta - \eta_{\beta}^{\alpha} p,$$ where $\rho$ corresponds to the total energy density and $p$ to the pressure. For an ultrarelativistic fluid, $p$ is proportional to $\rho$: $$p = \omega \rho$$ (concretely, it may be demonstrated that $\omega = \frac{1}{3}$, but we will keep writing the constant of proportionality throughout the paper).

The four-velocity in our model
------------------------------

In our model, all particles (or “elements of mass”, if you prefer) describe a purely radial trajectory (inwards or outwards), so that the only non-null components of the four-velocity are $u^0$ (which we are going to call $\gamma$, in order to obtain simpler expressions) and $u^1$. From the metric, we can deduce the relationship between both components: $$1 = e^{- \lambda - 2\phi} \gamma^2 - e^{\lambda} (u_p^1)^2,$$ $$\left| u_p^1 \right| = e^{- \lambda - \phi} \gamma \sqrt{1 - e^{\lambda+2\phi}\gamma^{-2}}$$ (where the subscript $p$ stands for “particles”). For infalling particles, $u_p^1 = -\left| u_p^1 \right| $; for outgoing particles, $u_p^1 = +\left| u_p^1 \right| $.
Consequently, if we call $\chi$ the proportion of infalling particles, so that $1-\chi$ is the proportion of outgoing particles, the mean “r-velocity” $u^1$ will be the following: $$u^1 = \chi \left| u^1 \right| + (1-\chi) (- \left| u^1 \right|) = - (1-2\chi) e^{- \lambda - \phi} \gamma \sqrt{1 - e^{\lambda+2\phi}\gamma^{-2}}.$$

The stress-energy tensor components
-----------------------------------

With all the considerations taken so far, the stress-energy tensor components are the following: $$T_{0}^{0} = e^{- \lambda - 2\phi} \gamma^2 \left( \rho + p \right) - p = e^{- \lambda - 2\phi} \gamma^2 \left( 1 + \omega \right) \rho - \omega \rho,$$ $$T_{1}^{1} = - e^{\lambda } \gamma u^1 \left( \rho + p \right) - p = (1-2\chi) e^{ - \phi} \gamma^2 \sqrt{1 - e^{\lambda+2\phi}\gamma^{-2}} \left( 1 + \omega \right) \rho - \omega \rho,$$ $$T_{2}^{2} = T_{3}^{3} = -p = - \omega \rho,$$ $$T_{0}^{1} = e^{- \lambda - 2\phi} \gamma u^1 \left( \rho + p \right) = - (1-2\chi) e^{-2 \lambda -3 \phi} \gamma^2 \sqrt{1 - e^{\lambda+2\phi}\gamma^{-2}} \left( 1 + \omega \right) \rho.$$

The ultrarelativistic limit
---------------------------

In the ultrarelativistic limit, $\gamma \gg 1$, so that we may neglect the terms with lower powers of $\gamma$ compared with those of higher order: $$T_{0}^{0} \approx e^{- \lambda - 2\phi} \gamma^2 \left( 1 + \omega \right) \rho,$$ $$T_{1}^{1} \approx (1-2\chi) e^{ - \phi} \gamma^2 \left( 1 + \omega \right) \rho,$$ $$T_{0}^{1} \approx - (1-2\chi) e^{-2 \lambda -3 \phi} \gamma^2 \left( 1 + \omega \right) \rho.$$ Therefore: $$\rho \approx e^{ \lambda + 2\phi} \gamma^{-2} \left( 1 + \omega \right)^{-1}T_{0}^{0},$$ $$T_{1}^{1} \approx (1-2\chi) e^{ \lambda + \phi} T_{0}^{0},$$ $$T_{0}^{1} \approx - (1-2\chi) e^{- \lambda - \phi} T_{0}^{0}.$$

Solution of equations
=====================

Some considerations on the phases of collapse, in the light of pre-Hawking radiation
------------------------------------------------------------------------------------

In 2006, Vachaspati, Stojkovic and Krauss demonstrated that collapsing stars emit pre-Hawking radiation \[Vac\]. Although there exist some differences between the two, the pre-Hawking radiation spectrum turns out to be roughly proportional to the Hawking radiation of black holes. Thus, we are going to use a Hawking radiation-like expression for pre-Hawking radiation in our model of gravitational collapse: $$\dot m_{p-H} \approx \frac{-k}{r^2},$$ where $k$ is a constant of proportionality, and $m$ denotes the inner “mass” which is lost as “radiation”. In our model, we are not only going to consider the “global” emission of the collapsing star but also that of the “inner layers” towards the outer ones (and we will assume the same expression for that).
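Before proceeding, a quick numerical sanity check of the ultrarelativistic-limit relation $T_{1}^{1} \approx (1-2\chi) e^{\lambda + \phi} T_{0}^{0}$ obtained above can be done by comparing the exact and limiting expressions for increasingly large $\gamma$; this is a minimal sketch of ours, with arbitrary illustrative parameter values that are not taken from the paper.

```python
import numpy as np

# Exact T^1_1 / T^0_0 versus the limiting value (1 - 2*chi) * exp(lam + phi).
lam, phi, chi, omega, rho = 2.0, 0.5, 0.3, 1.0 / 3.0, 1.0
for gamma in (1e2, 1e4, 1e6):
    T00 = np.exp(-lam - 2 * phi) * gamma**2 * (1 + omega) * rho - omega * rho
    T11 = ((1 - 2 * chi) * np.exp(-phi) * gamma**2
           * np.sqrt(1 - np.exp(lam + 2 * phi) / gamma**2) * (1 + omega) * rho
           - omega * rho)
    print(gamma, T11 / T00, (1 - 2 * chi) * np.exp(lam + phi))
```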
If we take into account the relationship between the function $\lambda$ and the “mass”, $$e^{-\lambda}= 1-\frac{2m}{r},$$ we may straightforwardly deduce that $$e^{-\lambda}\dot \lambda= \frac{2\dot m}{r},$$ $$\dot \lambda= \frac{2\dot m}{r} e^{\lambda}.$$ On the other hand, from equations (7) and (22) we may obtain another expression for $\dot \lambda$: $$\dot\lambda = - r e^{\lambda} {8\pi} T_{0}^{1} = r (1-2\chi) e^{-\phi} {8\pi} T_{0}^{0},$$ which we may split into two terms in order to make explicit the contributions of the infalling and outgoing fluxes: $$\dot\lambda = r (1-\chi) e^{-\phi} {8\pi} T_{0}^{0} - r \chi e^{-\phi} {8\pi} T_{0}^{0}.$$ Consequently, we may logically identify the second term in the previous equation with the pre-Hawking variation of $\lambda$, which we may obtain from equations (23) and (26): $$\dot \lambda= \frac{2e^{\lambda}}{r} \left( \frac{-k}{r^2} \right),$$ $$- r \chi e^{-\phi} {8\pi} T_{0}^{0} = \frac{2e^{\lambda}}{r} \left( \frac{-k}{r^2} \right),$$ $$\chi = \frac{e^{\lambda+\phi}}{4\pi r^2 T_{0}^{0}} \left( \frac{k}{r^2} \right).$$ From equations (27) and (31), $$\dot\lambda = r e^{-\phi} {8\pi} T_{0}^{0} - \frac{2k e^{\lambda}}{r^3}.$$

In the light of equation (32), we may make some considerations on the phases of gravitational collapse:

1\) In a first phase, $\lambda$ increases very rapidly, due to the large flux of infalling matter and the insignificance of pre-Hawking radiation.

2\) When $\lambda$ reaches a certain value, we arrive at a phase of “stability”, where the infalling flux of collapsing matter is exactly compensated by the outgoing flux of pre-Hawking radiation.

3\) Finally, when the “outer layers” of infalling matter have been exhausted, the pre-Hawking term prevails and $\lambda$ will diminish.

In this paper, we are going to focus our study mainly on the second phase.

The stability phase
-------------------

In the stability phase, $e^{-\lambda} \ll 1$, so that $${8\pi} T_{0}^{0} \approx \frac{1}{r^2}$$ (the approximations that we are going to perform in this section are not good for small values of $r$). The condition for the stability phase consists in imposing $\dot \lambda_{st} = 0$: $$0 = r e^{-\phi_{st}} \left( \frac{1}{r^2} \right) - \frac{2k e^{\lambda_{st}}}{r^3},$$ $$\phi_{st} = -\lambda_{st} + ln \left( r^2 \right) - ln \left( 2k \right).$$ In this situation, $$\chi_{st} = \frac{1}{2}.$$ Consequently, $$T_{1}^{1} = 0,$$ $$0 = - e^{-\lambda_{st}} \left( \frac{\nu_{st}'} {r} + \frac {1} {r^2} \right) +\frac{1}{r^2},$$ $$\frac{e^{\lambda_{st}}}{r^2} = \left( \frac{\nu_{st}'} {r} + \frac {1} {r^2} \right),$$ $$\nu_{st}' = \frac{e^{\lambda_{st}}}{r} - \frac {1} {r},$$ $$\lambda_{st}' + 2\phi_{st}' = \frac{e^{\lambda_{st}}-1}{r} \approx \frac{e^{\lambda_{st}}}{r} .$$ From equation (35), $$\phi_{st}' = -\lambda_{st}' + \frac{2}{r} \approx -\lambda_{st}'.$$ From equations (41) and (42), $$-\lambda_{st}' \approx \frac{e^{\lambda_{st}}}{r} ,$$ $$\frac {d\lambda_{st}}{dr} \approx -\frac{e^{\lambda_{st}}}{r} ,$$ $$e^{-\lambda_{st}} d\lambda_{st} \approx -\frac{dr}{r}.$$ By integrating, $$- e^{-\lambda_{st}} \approx K_1 - ln(r),$$ $$\lambda_{st}\approx -ln \left(-K_1 + ln(r) \right).$$

On contour conditions
---------------------

The function $\phi$ has a null value outside the edge of the collapsing star. If we call $M$ the total mass of the star, its radius $R$ is given by an expression of the following type \[Pin\]: $$R = 2M + e^{\frac{f(r)-t}{2M}},$$ where $M = M(t)$.
Thus, $$\phi_{st} (R) = -\lambda_{st} (R) + ln \left( R^2 \right) - ln \left( 2k \right) = 0,$$ $$\lambda_{st} (R) = ln \left( R^2 \right) - ln \left( 2k \right) \approx -ln \left(-K_1 + ln(R) \right).$$ $$K_1 \approx ln R - \frac {2k}{R^2}$$ References ========== ... \[Lan\] Landau, L. D.; Lifshitz, E. M.; “The classical theory of Fields”. Course of Theoretical Physics Volume 2. University of Minnesota (1987) \[Vac\] Vachaspati, T.; Stojkovic, D.; Krauss, L. M.; “Observation of incipient black holes and the information loss problem”. Phys. Rev. D 76:024005 (2007) \[Pin\] Piñol, M.; Lopez, I.; “Transition from Established Stationary Vision of Black Holes to Never-Stationary Gravitational Collapse”. http://arxiv.org/abs/1007.2734 ... contact: miguelpinol@comv.es
{ "pile_set_name": "ArXiv" }
--- abstract: 'This paper deals with the revision of partially ordered beliefs. It proposes a semantic representation of epistemic states by partial pre-orders on interpretations and a syntactic representation by partially ordered belief bases. Two revision operations, the revision stemming from the history of observations and the possibilistic revision, defined when the epistemic state is represented by a total pre-order, are generalized, at a semantic level, to the case of a partial pre-order on interpretations, and at a syntactic level, to the case of a partially ordered belief base. The equivalence between the two representations is shown for the two revision operations.' author: - | [**Salem Benferhat**]{}\ IRIT, Univerit[é]{} Paul Sabatier,\ 118, route de Narbonne,\ 31062 Toulouse Cedex 4, France\ benferhat@irit.fr\ \ PRAXITEC\ 115, rue St Jacques,\ 13006 Marseille, France\ lagrue@irit.fr\ \ SIS, Universit[é]{} de Toulon et du Var,\ avenue de l’universit[é]{} BP 132,\ 83957 La Garde cedex, France\ papini@univ-tln.fr\ bibliography: - 'nmr02.bib' title: Revising Partially Ordered Beliefs --- Introduction ============ Most of the time, an intelligent agent faces incomplete, uncertain or inaccurate information. The arrival of a new item of information, more reliable or more certain leads the agent to refine (specify) his beliefs, to revise them. Belief revision is a well known problem in Artificial intelligence [@AGM85], [@PG88], [@KM91], in this context, an epistemic state encodes a set of beliefs about the real world (based on the available information). An epistemic state is generally interpreted as a plausibility ordering between possible states of the world, or as a preference relation between information sources from which an agent can derive his beliefs. On a semantic level, epistemic states have been represented by a total pre-order on interpretations of the underlying logical language [@DP97]. This total pre-order models the agent’s preferences between several situations. This pre-order has been encoded according to several ways, ordinals [@WS87], [@MAW94], possibilities [@DuP92], polynomials [@OP01]. However, the agent has not always a total pre-order between situations at his disposal, but is only able to define a partial pre-order between situations. The arrival of successive items of information can help him to refine this partial pre-order in order to converge to a total pre-order between situations. In other respects, total pre-orders are not suitable to model the case where decision making is impossible or not arbitrary. Suppose the agent has to make a decision on the cultivation of a new plot according to three rules given by experts. A first rule, $R1$ specifies that if some agronomical conditions hold (warm climate, deep soil, acidity, etc...), called [*condition 1*]{}, then [*the cultivation of tobacco is feasible*]{}. The second rule $R2$ specifies that if the agronomical conditions ([*condition 1*]{}) hold and the zone is mildewed (mildew parasite could ruin the plantation) called [*condition 2*]{}, then [*the cultivation of tobacco is not feasible*]{}. The third rule $R3$ specifies that according to the regulation of production of tobacco, if the area of the plot is not greater than the authorized area, called [*condition 3*]{}, then [*the cultivation of tobacco is feasible*]{}. The question is to define a pre-order on the three rules in order to make a decision on the cultivation of tobacco. 
Since $R2$ is more specific than $R1$, it is natural to prefer $R2$ over $R1$ (namely $R2 < R1$ holds). However, [*condition 3*]{} is related neither to [*condition 1*]{} nor to [*condition 2*]{}, so it seems reasonable to consider $R3$ incomparable with $R1$ and $R2$. In contrast, if we want to impose a total pre-order on the rules, we have to place $R3$ relative to $R2$; therefore there are two intuitive choices, either $R1 \leq R3$ or $R3 \leq R2$. In the first case, the following total pre-order holds: $R2 < R1 \leq R3$. This means that if [*condition 1*]{}, [*condition 2*]{} and [*condition 3*]{} are all satisfied, according to the total pre-order we make the decision that [*the cultivation of tobacco is not feasible*]{}. In the second case, the following total pre-order holds: $R3 \leq R2 < R1$. This means that if [*condition 1*]{}, [*condition 2*]{} and [*condition 3*]{} are satisfied, according to the total pre-order we make the decision that [*the cultivation of tobacco is feasible*]{}. These two total pre-orders lead to contradictory decisions, and there is no reason to choose the first one over the second one; the choice can only be arbitrary. Since arbitrary choices have to be excluded, we think that a better solution is to consider $R3$ and $R2$ as incomparable (and hence all total pre-orders are considered), and thus to define a partial pre-order between rules. In such cases, an epistemic state has to be represented by a partial pre-order on interpretations, and revision operations of partial pre-orders by formulas have to be defined. Partial pre-orders on interpretations have been used to represent update operations [@KM91b]; this paper, however, does not address updates but revisions.

In the present paper, we propose a generalization of two revision operators to the case of partial pre-orders. Section $2$ presents the problem of representing epistemic states by a partial pre-order and focuses on the difficulties of this generalization. Section $3$ presents the generalization of the semantic revision operations to partial pre-orders and the generalization of the syntactic counterparts of these operations to partially ordered belief bases. These generalizations are rather direct. In contrast, the generalization of the mapping from the syntactic level to the semantic level is more problematic, as shown in Section $4$, because the definition of a partial pre-order between formulas leads to two possible partial pre-orders between subsets of formulas. We choose one of them; however, the results presented hold for the other one as well. We finally present the syntactic computation of the belief set corresponding to an epistemic state in Section $5$ before a concluding discussion in Section $6$.

Presentation of the problem
===========================

Basic definitions of partial pre-orders
---------------------------------------

In this paper we use propositional calculus, denoted by ${\cal L_{PC}}$, as the knowledge representation language, with the usual connectives $\lnot$, $\land$, $\lor$, $\to$, $\equiv$ (logical equivalence). The lower case letters $a$, $b$, $c$, $\cdots$ are used to denote propositional variables, lower case Greek letters $\phi$, $\psi$, $\cdots$ are used to denote formulas, upper case letters $A$, $B$, $C$ are used to denote sets of formulas, and upper case Greek letters $\Psi$, $\Phi$, $\cdots$ are used to denote epistemic states.
We denote by ${\cal W}$ the set of interpretations of ${\cal L_{PC}}$ and by $Mod(\psi)$ the set of models of a formula $\psi$, that is, $Mod(\psi) = \{ \omega \in {\cal W}, \omega \models \psi \}$, where $\models$ denotes the inference relation used for drawing conclusions.

A partial pre-order, denoted by $\preceq$, on a set $A$ is a reflexive and transitive binary relation. Let $x$ and $y$ be two members of $A$; the equality is defined by $x = y$ iff $x \preceq y$ and $y \preceq x$. The corresponding strict partial pre-order, denoted by $\prec$, is such that $x \prec y$ iff $x \preceq y$ holds but $y \preceq x$ does not hold. We denote by $\sim$ the incomparability relation: $x \sim y$ iff neither $x \preceq y$ nor $y \preceq x$ holds. $\preceq$ can equivalently be defined from $=$ and $\prec$: $a\preceq b$ iff $a\prec b$ or $a=b$. Given $\preceq$ on a set $A$, the minimal elements of $A$ are defined by: $min(A,\preceq)=\{x:x\in A,\not \exists y\in A,y\prec x\}$.

For the purpose of this paper, we need to define a partial pre-order on subsets of elements of a set $A$. According to Halpern’s works [@JH97], [@JH01] (see also Cayrol et al. [@crs1992], Dubois et al. [@dlp1992] and Lafage et al. [@lls1999]), there are two meaningful ways to compare subsets of $A$. We denote these two partial pre-orders on $2^{A}$ by $\preceq_{A,w}$ and by $\preceq_{A,s}$. Let $X$ and $Y$ be two subsets of $A$. The sets $X$ and $Y$ are considered equal if for each preferred element in $X$ there exists an equally preferred element in $Y$, and conversely; more formally:

Let $X$ and $Y$ be two subsets of $A$ and $\preceq_{A}$ a partial pre-order on $A$, $X = Y$ iff :\

-------------------------------------------------------------
$\forall x \in min(X, \preceq_{A}), \; \exists y \in min(Y, \preceq_{A})$ such that $x = y$ [and]{} $\forall y \in min(Y, \preceq_{A}), \; \exists x \in min(X, \preceq_{A})$ such that $x=y$.
-------------------------------------------------------------

The first way to define a partial pre-order on subsets $X$ and $Y$ of $A$ is to consider that $X$ is preferred to $Y$ if for each element of $Y$ there exists at least one element in $X$ which is preferred to it; more formally (we assume that $X$ and $Y$ are not both empty):

\[def1\] $X$ is weakly preferred to $Y$, denoted by $X\prec_{A,w}Y$, iff $\forall y\in Min(Y,\preceq_A)$, $\exists x\in Min(X,\preceq_A)$ such that $x\prec_A y$.

The second way to define a partial pre-order on subsets $X$ and $Y$ of $A$ is to consider that $X$ is preferred to $Y$ if there exists at least one element in $X$ which is preferred to all elements in $Y$; more formally:

\[def2\] Let $\preceq_A$ be a partial pre-order on $A$ and $X,Y\subseteq A$. $X$ is strongly preferred to $Y$, denoted by $X\prec_{A,s}Y$, iff $\exists x\in Min(X,\preceq_A)$ such that $\forall y\in Min(Y,\preceq_A),$ $x\prec_A y$.

It can be shown that the definition of $\prec_{A,s}$ implies the definition of $\prec_{A,w}$, namely, if $X \prec_{A,s} Y$ then $X \prec_{A,w} Y$. The converse does not hold. Let $A = \{x_1, x_2, y_1, y_2 \}$ and $\preceq_{A}$ be a partial pre-order on $A$ such that $x_1 \preceq_{A} y_1$ and $x_2 \preceq_{A} y_2$. Let $X$ and $Y$ be two subsets of $A$, $X =\{x_1, x_2 \}$ and $Y = \{y_1, y_2 \}$; we have $X \prec_{A,w} Y$, since $x_1$ is preferred to $y_1$ and $x_2$ is preferred to $y_2$. However, $X\prec_{A,s} Y$ does not hold, since there is no element in $X$ which is preferred to all elements of $Y$.
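The two comparisons are straightforward to prototype. The following sketch is ours, not part of the paper; for brevity it handles only strict preference pairs and omits equalities, and it reproduces the example above.

```python
def minimal(S, prec):
    """Minimal elements of S w.r.t. a strict preference given as (x, y) pairs."""
    return {x for x in S if not any((y, x) in prec for y in S)}

def weakly_preferred(X, Y, prec):
    """Definition [def1]: every minimal y of Y is beaten by some minimal x of X."""
    return all(any((x, y) in prec for x in minimal(X, prec))
               for y in minimal(Y, prec))

def strongly_preferred(X, Y, prec):
    """Definition [def2]: some minimal x of X beats every minimal y of Y."""
    return any(all((x, y) in prec for y in minimal(Y, prec))
               for x in minimal(X, prec))

# Example of the text: x1 preferred to y1, x2 preferred to y2.
prec = {("x1", "y1"), ("x2", "y2")}
X, Y = {"x1", "x2"}, {"y1", "y2"}
print(weakly_preferred(X, Y, prec))    # True
print(strongly_preferred(X, Y, prec))  # False
```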
In the case where $\preceq_{A}$ is a total pre-order, the definition of $\prec_{A,s}$ is equivalent to the definition of $\prec_{A,w}$; more formally, $X \prec_{A,s} Y$ iff $X \prec_{A,w} Y$. For lack of space, we will only focus on the weak preference definition. However, all the results provided are also valid for the strong preference.

Semantic representation of epistemic states
-------------------------------------------

Let $\Psi$ be an epistemic state; $\Psi$ is first represented by a partial pre-order on interpretations, denoted by $\preceq_{\Psi}$. The interpretation $\omega$ is preferred to (or more plausible than) $\omega'$, denoted $\omega \preceq_{\Psi} \omega'$. $\omega \sim_{\Psi} \omega'$ denotes that the agent has no preference between $\omega$ and $\omega'$. The belief set corresponding to $\Psi$, denoted by $Bel^{se}(\Psi)$ and modeling the agent’s current beliefs, is such that $Mod(Bel^{se}(\Psi)) = min({\cal W}, \preceq_{\Psi})$. We illustrate this representation with the example informally described in the introduction.

\[ex1\] The agent has to make a decision on the cultivation of a new plot according to three rules given by experts. A first rule, $R1$, specifies that if some agronomical conditions hold (warm climate, deep soil, acidity, etc...), called [*condition 1*]{}, then [*the cultivation of tobacco is feasible*]{}. The second rule $R2$ specifies that if the agronomical conditions ([*condition 1*]{}) hold and the zone is mildewed (mildew parasite could ruin the plantation), called [*condition 2*]{}, then [*the cultivation of tobacco is not feasible*]{}. The third rule $R3$ specifies that according to the regulation of the production of tobacco, if the area of the plot is not greater than the authorized area, called [*condition 3*]{}, then [*the cultivation of tobacco is feasible*]{}. More formally, the three rules can be represented as follows: $R1$: $b \to a$, $R2$: $b \land c \to \lnot a$, $R3$: $d \to a$, where $a$ encodes [*the cultivation of tobacco is feasible*]{}, $b$ encodes [*condition 1*]{}, $c$ encodes [*condition 2*]{} and $d$ encodes [*condition 3*]{}.

There are four propositional variables $a$, $b$, $c$ and $d$. The sixteen interpretations are $\omega_0 = \{\lnot a, \lnot b, \lnot c, \lnot d \}$, $\omega_1 = \{\lnot a, \lnot b, \lnot c, d \}$, $\omega_2 = \{\lnot a, \lnot b, c, \lnot d \}$, $\cdots$, $\omega_{14} =\{ a, b, c, \lnot d\}$, $\omega_{15} = \{a, b, c, d \}$. Let $\Psi$ be an epistemic state whose corresponding belief set is $Bel^{se}(\Psi) = (b \to a) \land (b \land c \to \lnot a) \land (d \to a)$; we represent the epistemic state by a partial pre-order, denoted $\preceq_{\Psi}$, as follows. Since $R2$ is more specific than $R1$, and since $R2$ and $R3$ are incomparable, then:

- the interpretations satisfying all constraints are preferred to all other interpretations,

- the interpretations which falsify $R2$ are preferred to the interpretations falsifying $R1$,

- the interpretations which falsify $R2$ and the ones which falsify $R3$ are incomparable.

The partial pre-order $\preceq_\Psi$ is represented by Figure \[initial\] (an arrow $x\longleftarrow y$ means $x\prec y$; the transitivity and the reflexivity are not represented for the sake of clarity).
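As a quick check of which interpretations satisfy all three rules (i.e., the models of $Bel^{se}(\Psi)$) in this example, one can enumerate the sixteen assignments directly. The following is a small sketch of ours, using the indexing of the text.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

models = []
for a, b, c, d in product([False, True], repeat=4):
    idx = 8 * a + 4 * b + 2 * c + d   # omega_idx, following the text's ordering
    if implies(b, a) and implies(b and c, not a) and implies(d, a):
        models.append(idx)

print(sorted(models))  # [0, 2, 8, 9, 10, 11, 12, 13]
```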
Syntactic representation of epistemic states -------------------------------------------- An epistemic state $\Psi$ is syntactically represented by a partially ordered belief base, denoted by $\preceq_{\Sigma}$, where $\Sigma$ is a set of propositional formulas, and $\preceq_{\Sigma}$ is a partial pre-order on $\Sigma$. Let $\phi$ and $\phi'$ $\in \Sigma$, $\phi \preceq_{\Sigma} \phi'$ means that $\phi$ is preferred (more important than) to $\phi'$ and $\phi \sim_{\Sigma} \phi'$ denotes that the agent has no preference between $\phi$ and $\phi'$. We illustrate this representation with the example informally described in the introduction. \[ex2\] Let $\Psi$ be the epistemic state, where $\Sigma = \{ b \to a, b \land c \to \lnot a, d \to a \}$, we represent the epistemic state by a partial pre-order on $\Sigma$, denoted by $\preceq_{\Sigma}, $ as follows: Since $ b \land c \to \lnot a$ is more specific than $b \to a$, and $d \to a$ and $ b \land c \to \lnot a$ are incomparable, the following partial pre-order on formulas holds: $ b \land c \to \lnot a \prec_{\Sigma} b \to a$ and $b \land c \to \lnot a \sim_{\Sigma} d \to a$ and $ b \to a \sim_{\Sigma} d \to a$. The generalization of the representation and revision of an epistemic state to a partial pre-order leads to the following diagram: $$\begin{array}{ccc} \preceq_{\Sigma} & \to & \preceq_{\Psi} \\ \downarrow & & \downarrow \\ \preceq_{\Sigma \circ^{sy} \mu} & \to & \preceq_{\Psi \circ^{se}\mu} \\ \downarrow & & \downarrow \\ Bel^{sy}(\Psi \circ^{sy} \mu) & \equiv & Bel^{se}(\Psi \circ^{se} \mu) \end{array}$$ In the special case of a total pre-order, the diagram has been shown valid and the equivalence between the syntactic approach and the semantic approach has been proved [@BDP99]. The question is what does remain true when this diagram is extended to the representation by a partial pre-order. We show in Section $3$ that we get the mappings $\preceq_{\Sigma} \to \preceq_{\Sigma \circ^{sy} \mu}$ , $\preceq_{\Psi} \to \preceq_{\Psi \circ^{se} \mu}$ and $\preceq_{\Psi \circ^{se} \mu} \to Bel^{se}(\Psi \circ^{se} \mu)$ rather directly. On contrast, the mappings $\preceq_{\Sigma} \to \preceq_{\Psi}$ and $\preceq_{\Sigma \circ^{sy} \mu} \to Bel^{sy}(\Psi \circ^{sy} \mu)$ are less straightforward. We now present the revision extended to a partial pre-order. Semantic and syntactic revision of partial pre-orders ===================================================== Extension of the revision stemming from the history of observations ------------------------------------------------------------------- We extend the revision operation, defined in [@BDP99; @OP01], to the case of partial pre-orders. The underlying intuition stems from the fact that the agent remembers all his previous observations. However these observations are not at the same level, according to whether there are more plausible or not in the next epistemic state. The general philosophy is that an old assertion is less reliable than a new one. In prediction problems, it seems reasonable to decrease the confidence that one has in an item of information, as time goes by. However, this revision operation attempts to satisfy as many previous observations as possible. That is, an old observation persists until it becomes contradictory with a more recent one. The revision operation uses the history of the sequence of previous observations to perform revision. 
### Semantic extension

When an epistemic state, $\Psi$, is represented by a partial pre-order on interpretations, the revision of $\Psi$ by a propositional formula $\mu$ leads to a revised epistemic state $\Psi \circ^{se}_{\triangleright} \mu$, represented by a partial pre-order on interpretations. This new epistemic state is such that the relative ordering between models of $\mu$ is preserved, the relative ordering between counter-models of $\mu$ is preserved, and the models of $\mu$ are preferred to its counter-models. More formally:

\[defsemo\] Let $\Psi$ be an epistemic state and $\mu$ be a propositional formula, the revised epistemic state $\Psi \circ^{se}_{\triangleright} \mu$ corresponds to the following partial pre-order:

- if $\omega, \, \omega' \, \in Mod(\mu)$ then $\omega \preceq_{\Psi\circ^{se}_{\triangleright} \mu} \omega'$ iff $\omega \preceq_{\Psi} \omega'$,

- if $\omega, \, \omega' \, \not\in Mod(\mu)$ then $\omega \preceq_{\Psi \circ^{se}_{\triangleright} \mu} \omega'$ iff $\omega \preceq_{\Psi} \omega'$,

- if $\omega \in Mod(\mu)$ and $ \omega' \not\in Mod(\mu)$ then $\omega \prec_{\Psi \circ^{se}_{\triangleright} \mu} \omega'$.

According to this definition it is easy to check that $Mod(Bel^{se}(\Psi \circ^{se}_{\triangleright} \mu)) = min(Mod(\mu), \preceq_{\Psi})$.

\[ex5\] We come back to example \[ex1\], where the initial epistemic state $\Psi$ is represented by Figure \[initial\]. The corresponding belief set $Bel^{se}(\Psi)$ is such that $Mod(Bel^{se}(\Psi)) = \{ \omega_0, \omega_2, \omega_8 , \omega_9 , \omega_{10} , \omega_{11}, \omega_{12} , \omega_{13} \}$. Suppose we learn that [*condition 3*]{} holds, namely we revise $\Psi$ by the propositional formula $\mu = d$. According to the definition \[defsemo\], the revised epistemic state $\Psi \circ^{se}_{\triangleright} \mu$ is represented by the partial pre-order on interpretations $\preceq_{\Psi \circ^{se}_{\triangleright} \mu}$ graphically represented in Figure \[exsemi\]. The belief set corresponding to $\Psi \circ^{se}_{\triangleright} \mu$ is such that $Mod(Bel^{se}(\Psi \circ^{se}_{\triangleright} \mu)) = \{ \omega_9, \omega_{11} , \omega_{13} \}$ and since $Mod(\mu) =\{\omega_1, \omega_3, \omega_5, \omega_7, \omega_9, \omega_{11} , \omega_{13}, \omega_{15} \}$, it can be checked that\
$Mod(Bel^{se}(\Psi \circ^{se}_{\triangleright} \mu)) = min(Mod(\mu), \preceq_{\Psi} )$.

### Syntactic extension

We now present the syntactic extension of this revision operation to partial pre-orders. Let $\Psi$ be an epistemic state; $\Psi$ is syntactically represented by a partially ordered belief base, denoted by $\preceq_{\Sigma}$, where $\Sigma$ is a set of propositional formulas and $\preceq_{\Sigma}$ is a partial pre-order on $\Sigma$. The revision of $\preceq_{\Sigma}$ by a propositional formula $\mu$ leads to a partially ordered belief base denoted by $\preceq_{\Sigma \circ^{sy}_{\triangleright} \mu}$ as follows: Let us denote by $U$ the set of disjunctions between $\mu$ and the formulas of $\Sigma$; more formally, $U = \{ \phi \lor \mu,$ such that $\phi \in \Sigma$ and $\phi\lor\mu\not\equiv\top\}$.
Let $\Psi$ be an epistemic state, represented by a partially ordered belief base $\preceq_{\Sigma}$, the revision of $\Psi$ by $\mu$ leads to a revised epistemic state $\Psi \circ^{sy}_{\triangleright} \mu$ represented by a partially ordered belief base where $\Sigma \circ^{sy}_{\triangleright} \mu = \Sigma \cup U \cup \{ \mu \}$ and $\preceq_{\Sigma \circ^{sy}_{\triangleright} \mu}$ is such that: - $\forall\phi\lor\mu\in U:\;\phi\lor\mu\prec_{\Sigma\circ^{sy}_{\triangleright} \mu}\mu$, - $\forall \phi \in \Sigma: \; \mu\prec_{\Sigma\circ^{sy}_{\triangleright} \mu} \phi$, - $\forall \phi, \phi' \in \Sigma: \phi \preceq_{\Sigma} \phi'$ iff $\phi \preceq_{\Sigma \circ^{sy}_{\triangleright} \mu}\phi'$, - $\forall \phi, \phi' \in \Sigma: \phi \preceq_{\Sigma} \phi'$ iff $ \phi\lor\mu \; \preceq_{\Sigma \circ^{sy}_{\triangleright}\mu} \; \phi'\lor\mu$. \[ex6\] We come back to example \[ex2\]. Let $\Psi$ be the epistemic state, where $\Sigma = \{ b \to a, b \land c \to \lnot a, d \to a \}$, $\preceq_{\Sigma}$ is such that it only contains one constraint: $ b \land c \to \lnot a \prec_{\Sigma} b \to a$. Let us revise $\Psi$ by the propositional formula $\mu = d$. According to the definition of the revision operation $\circ^{sy}_{\triangleright}$, the revised epistemic state $\Psi \circ^{sy}_{\triangleright} \mu$ is represented by the following partial pre-order on Figure \[exsyno\]. Remark that since $(d \to a ) \lor d$ is a tautology, we do not take it into account. Extension of possibilistic revision ----------------------------------- We now present the extension of the possibilistic revision to partial pre-orders. In a possibility theory framework [@dlp1992] an epistemic state $\Psi$ is represented by a possibility distribution $\pi$. Each interpretation is assigned with a real number belonging to the interval $[ 0, 1 ]$. The value $1$ means that the interpretation is totally possible, whereas the value $0$ means that the interpretation is totally impossible. A possibility distribution induces a total pre-order $\leq_\pi$ on interpretations in the following way: $\omega\leq_\pi\omega'$ iff $\pi(\omega)\geq\pi(\omega')$. For more details on possibility theory and the revision of possibility distributions, see [@DuP92; @DuP97]. ### Semantic extension Let $\Psi$ be an epistemic state represented by a partial pre-order $\leq_\Psi$. The possibilistic revision of $\Psi$ by a propositional formula $\mu$ leads to a revised epistemic state $\Psi \circ^{se}_{\pi} \mu$, represented by a partial pre-order on interpretations, denoted by $\preceq_{\Psi\circ^{se}_{\pi} \mu}$ which considers that all the counter-models of the new item of information $\mu$ as impossible and preserves the relative ordering between the models of $\mu$. More formally: \[defposssem\] Let $\Psi$ be an epistemic state and $\mu$ be a propositional formula, the revised epistemic state $\Psi \circ^{se}_{\pi} \mu$ corresponds to the following partial pre-order: - if $\omega, \, \omega' \, \in Mod(\mu)$ then $\omega \preceq_{\Psi\circ^{se}_{\pi} \mu} \omega'$ iff $\omega \preceq_{\Psi} \omega'$, - if $\omega, \, \omega' \, \not\in Mod(\mu)$ then $\omega =_{\Psi\circ^{se}_{\pi} \mu} \omega'$, - if $\omega \in Mod(\mu)$ and $ \omega' \not\in Mod(\mu)$ then $\omega\prec_{\Psi \circ^{se}_{\pi} \mu} \omega'$. According to this definition it is easy to check that $Mod(Bel^{se}(\Psi \circ^{se}_{\pi} \mu)) = min(Mod(\mu), \preceq_{\Psi} )$. Let us consider again example \[ex1\]. We revise $\Psi$ by the propositional formula $\mu = d$. 
According to the definition \[defposssem\], the revised epistemic state $\Psi \circ^{se}_{\pi}\mu$ is represented by the following partial pre-order on interpretations $\preceq_{\Psi \circ^{se}_{\pi}\mu}$, graphically represented in Figure \[exsemp\]. The belief set corresponding to $\Psi \circ^{se}_{\pi} \mu$ is such that $Mod(Bel^{se}(\Psi \circ^{se}_{\pi} \mu)) = \{ \omega_9, \omega_{11} ,\omega_{13} \}$.

### Syntactic extension

We now present the syntactic extension of the possibilistic revision operation to partial pre-orders. Let $\Psi$ be an epistemic state, syntactically represented by a partially ordered belief base. The revision of $\preceq_{\Sigma}$ by a propositional formula $\mu$ leads to a partially ordered belief base denoted by $\preceq_{\Sigma \circ^{sy}_{\pi} \mu}$ as follows:

\[defposssyn\] The revision of $\Psi$ by $\mu$ leads to a revised epistemic state $\Psi\circ^{sy}_{\pi} \mu$ represented by a partially ordered belief base $\Sigma \circ^{sy}_{\pi} \mu = \Sigma \cup \{ \mu \}$ where $\preceq_{\Sigma \circ^{sy}_{\pi} \mu}$ is such that:

- $\forall \phi \in \Sigma$: $\quad \mu \; \prec_{\Sigma\circ^{sy}_{\pi} \mu} \phi\; $,

- $\forall \phi, \phi' \in \Sigma$: $ \phi \preceq_{\Sigma} \phi'$ iff $\phi \; \preceq_{\Sigma \circ^{sy}_{\pi} \mu} \; \phi'$.

We come back to example \[ex2\], where $\Sigma = \{ b \to a, b \land c \to \lnot a, d \to a \}$, and $\preceq_{\Sigma}$ only contains one constraint: $ b \land c \to \lnot a \prec_{\Sigma} b \to a$. Let us revise $\Psi$ by the propositional formula $\mu = d$. According to the definition \[defposssyn\], the revised epistemic state $\Psi \circ^{sy}_{\pi} \mu$ is represented by the following partial pre-order on $\Sigma \circ^{sy}_{\pi} \mu$, denoted by $\preceq_{\Sigma \circ^{sy}_{\pi} \mu}$ and graphically represented by Figure \[exsynp\].

Note that revising a partial pre-order with $\circ_\pi^{se}$ or $\circ_\triangleright^{se}$ removes some incomparabilities. Hence, after a certain number of successive revisions the resulting partial pre-order on interpretations converges to a total pre-order on interpretations; more formally:

Let $\preceq_{\Psi}$ be a partial pre-order on interpretations; there exists a sequence of formulas $(\mu_1, \mu_2, \cdots , \mu_n)$ such that the resulting partial pre-order after successive revisions

- $(((\preceq_{\Psi}\circ^{se}_{\triangleright} \mu_1) \circ^{se}_{\triangleright} \mu_2) \circ^{se}_{\triangleright} \cdots \circ^{se}_{\triangleright} \mu_n)$ is a total pre-order, and

- $(((\preceq_{\Psi}\circ^{se}_{\pi} \mu_1) \circ^{se}_{\pi} \mu_2) \circ^{se}_{\pi} \cdots \circ^{se}_{\pi} \mu_n)$ is a total pre-order.

The interest of such a result stems from the fact that, starting from total ignorance about a topic, successive revisions lead to a partial pre-order on interpretations, and we now know how to perform these revisions. Moreover, after a certain number of revisions the partial pre-order converges to a total pre-order that can be revised according to the results previously obtained in [@BDP99], [@DuP92].

From syntax to semantics
========================

We now present the mapping from a partially ordered belief base $\preceq_{\Sigma}$ to a partial pre-order on interpretations $\preceq_{\Psi}$. Let $\Sigma$ be a partially ordered belief base and $\omega$ be an interpretation. We denote by $\lceil \omega, \Sigma \rceil$ the set of preferred formulas of $\Sigma$ falsified by $\omega$.
We define a partial pre-order on interpretations as follow: $$\omega \preceq_{\Psi,w} \omega' \textrm{ iff } \lceil \omega', \Sigma \rceil \preceq_{\Sigma,w} \lceil \omega, \Sigma \rceil.$$ Where $\preceq_{\Sigma,w}$ is given by definition \[def1\] \[ex9\] We come back to example \[ex2\], where $\Sigma = \{ b \to a, b\land c \to \lnot a, d\to a \}$, and $\preceq_{\Sigma}$ is defined as $ b\land c \to\lnot a\prec_{\Sigma} b \to a$. The sets of preferred formulas of $\Sigma$ falsified by the interpretations are the following: -------------------------------------------------------------------------------------------------------------------------------------------------------- $\lceil\omega_0, \Sigma\rceil = \lceil\omega_2, \Sigma\rceil= \lceil\omega_8, \Sigma\rceil = \lceil\omega_{9}, \Sigma\rceil =\emptyset$, $\lceil\omega_{10}, \Sigma\rceil = \lceil\omega_{11}, \Sigma\rceil = \lceil\omega_{12}, \Sigma\rceil = \lceil\omega_{13}, \Sigma\rceil = \emptyset$, $\lceil\omega_1, \Sigma\rceil = \lceil\omega_3, \Sigma\rceil = \{d\to a \}$, $\lceil\omega_4, \Sigma\rceil = \lceil\omega_6, \Sigma\rceil = \{ b \to a\}$, $\lceil\omega_5, \Sigma\rceil = \lceil\omega_7, \Sigma\rceil = \{ b \to a, d \to a \}$, $\lceil\omega_{14}, \Sigma\rceil = \lceil\omega_{15}, \Sigma\rceil = \{ b \land c \to \lnot a \}$. -------------------------------------------------------------------------------------------------------------------------------------------------------- According to definition of $\preceq_{\Sigma,w}$, it can be easily checked that the computation of $\preceq_{\Sigma,w}$ leads to the same partial pre-order than the one used in the semantic representation of $\Psi$ in example \[ex1\], namely $\preceq_{\Sigma,w} = \preceq_{\Psi}$. We are now able to establish the equivalence between the syntactic representation of epistemic states by means of partially ordered belief bases and the semantic representation of epistemic states by means partial pre-orders on interpretations. Let $\Psi$ be an epistemic state represented, on one hand by a partially ordered belief base $\preceq_{\Sigma}$ and on the other hand by a partial pre-orders on interpretations $\preceq_{\Psi}$. Let $\circ^{sy}_{\triangleright}$ and $\circ^{se}_{\triangleright}$ be the syntactic and semantic revision operators stemming from the history of the observations, and $\circ^{sy}_{\pi} $ and $ \circ^{se}_{\pi}$ be the syntactic and semantic possibilistic revision operators. Let $we$ be the mapping from a partially ordered belief base $\preceq_{\Sigma}$ to a partial pre-order on interpretation $\preceq_{\Psi}$. The following result holds: - $we(\preceq_{\Sigma} \circ^{sy}_{\triangleright} \mu) = we(\preceq_{\Sigma} ) \circ^{se}_{\triangleright} \mu$, - $we(\preceq_{\Sigma} \circ^{sy}_{\pi} \mu) = we(\preceq_{\Sigma} )\circ^{se}_{\pi} \mu$. We illustrate this theorem by the following example. Let $\Psi$ be the epistemic state of example \[ex2\], where $\Sigma = \{ b \to a, b \land c \to \lnot a, d \to a \}$, and $\preceq_{\Sigma}$ is such that: $ b \land c \to \lnot a \prec_{\Sigma} b \to a$. According to example \[ex6\] the revised epistemic state $\Psi\circ^{sy}_{\triangleright} \mu$ is represented by the partial pre-order on $\Sigma \circ^{sy}_{\triangleright} \mu$ given by figure \[exsyno\]. 
The sets of preferred formulas of $\Sigma\circ^{sy}_{\triangleright} \mu=\{ b \to a, b \land c \to \lnot a, d \to a, d, (b \to a)\lor d, (b \land c \to \lnot a)\lor d, (d \to a)\lor d \}$ falsified by the interpretations are the following:\ -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- $\lceil \omega_9, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \lceil \omega_{11}, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \lceil \omega_{13}, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \emptyset$, $ \lceil \omega_1, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \lceil \omega_3, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \{ d \to a \}$, $ \lceil \omega_5, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \lceil \omega_7, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \{ b\to a, d \to a \}$, $\lceil \omega_{15}, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \{ b \land c \to \lnot a \}$, $\lceil \omega_0, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \lceil \omega_2, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \lceil \omega_8, \Sigma \circ^{sy}_{\triangleright} \mu\rceil=$ $= \lceil \omega_{10}, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \lceil \omega_{12}, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \{ d \}$, $ \lceil \omega_4, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \lceil \omega_6, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \{ (b\to a)\lor d \}$, $\lceil \omega_{14}, \Sigma \circ^{sy}_{\triangleright} \mu\rceil = \{ (b \land c \to \lnot a)\lor d \}$. -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- According to definition \[defsemo\] it can be checked that $\preceq_{\Psi\circ^{se}_{\triangleright} \mu,w} $ is identical to the one given by figure \[exsyno\]. Since from example \[ex9\] $we(\preceq_{\Sigma}) = \preceq_{\Psi}$ we have $we(\preceq_{\Sigma}) \circ^{se}_{\triangleright} \mu = \preceq_{\Psi\circ^{se}_{\triangleright} \mu}$ and $we(\preceq_{\Sigma} \circ^{sy}_{\triangleright} \mu) =\preceq_{\Psi\circ^{se}_{\triangleright} \mu,s}$ it can be checked that $\preceq_{\Psi \circ^{se}_{\triangleright} \mu,s} =\preceq_{\Psi \circ^{se}_{\triangleright} \mu}$, therefore $we(\preceq_{\Sigma} \circ^{sy}_{\triangleright} \mu) =we(\preceq_{\Sigma} ) \circ^{se}_{\triangleright} \mu$ and the equivalence between syntactic representation and semantic representation holds. Syntactic computation of $Bel^{sy}(\Psi)$ ========================================= Let $\Psi$ be an epistemic state represented by a partially ordered belief base $\preceq_{\Sigma}$, we now present the mapping from $\Sigma$ to $Bel^{sy}(\Psi)$. The computation of the corresponding belief set $Bel^{sy}(\Psi)$ is more complex and it is closely linked to the ability of making syntactic inferences from $\preceq_{\Sigma}$ in order to deduce the agent’s current beliefs. This involves the definition of a partial pre-order between consistent subsets of $\Sigma$, denoted by ${\cal C}$ and we generate a partial pre-order between consistent subsets of $\Sigma$ using $\preceq_{\Sigma}$ described in definition \[def1\]. 
Let $C, C' \in {\cal C}$: $C \preceq_{{\cal C},w} C'$ iff $\{\phi': \phi' \not \in C' \} \preceq_{\Sigma,w} \{\phi: \phi \not \in C \}$. Intuitively, $C$ is preferred to $C'$ if the best formulas outside $C$ are less preferred than the best formulas outside $C'$. We denote by $CONS_{w}(\Sigma)$ the set of preferred consistent subsets of $\Sigma$ with respect to $\preceq_{{\cal C},w}$, namely: $CONS_{w}(\Sigma)= min({\cal C}, \preceq_{{\cal C},w}).$

We come back to example \[ex2\]; the following array illustrates the set of consistent subsets of $\Sigma$: $$\begin{array}{|l|l|l|} \hline C_i & \{\phi \in C_i \} & Min (\{\phi \not \in C_i \},\preceq_\Sigma)\\ \hline \hline C_0 & \emptyset & b \land c \to \lnot a, d\to a \\ C_1 & b \land c \to \lnot a & b\to a, d \to a\\ C_2 & b \to a & b \land c \to \lnot a, d \to a\\ C_3 & d \to a & b \land c \to \lnot a \\ C_4 & b \to a, b \land c \to \lnot a & d \to a\\ C_5 & b \to a, d \to a & b \land c \to \lnot a\\ C_6 & b \land c \to \lnot a, d \to a & b \to a \\ C_7 & b \to a, b \land c \to \lnot a, d \to a & \emptyset\\ \hline \end{array}$$ According to the definition of $\preceq_{{\cal C},w}$, we obtain the partial pre-order graphically represented by Figure \[exci\].

The syntactic definition of $Bel$ is: $$Bel^{sy}(\Psi)=\bigvee_{C\in Cons_w(\Sigma)}C.$$ This means that the syntactic inference can be defined as: $\phi$ is inferred from $\preceq_\Sigma$ iff $\forall C\in CONS_{w}(\Sigma)$, $C \cup \{\lnot \phi\}$ is inconsistent.

Let $\Psi$ be an epistemic state, let $\circ^{sy}_{\triangleright}$ and $\circ^{se}_{\triangleright}$ be the syntactic and semantic revision operators stemming from the history of observations, and $\circ^{sy}_{\pi}$ and $\circ^{se}_{\pi}$ be the syntactic and semantic possibilistic revision operators. The following result holds:

- $Bel^{sy}(\Psi \circ^{sy}_{\triangleright} \mu) \equiv Bel^{se}( \Psi \circ^{se}_{\triangleright} \mu)$,

- $Bel^{sy}(\Psi \circ^{sy}_{\pi} \mu) \equiv Bel^{se}( \Psi \circ^{se}_{\pi} \mu)$.

We illustrate this theorem with an example: Let $\Psi$ be the epistemic state defined in example \[ex5\], where the revision by $\mu = d$ leads to the belief set $Bel^{se}(\Psi \circ^{se}_{\triangleright} \mu)$ such that $Mod(Bel^{se}(\Psi \circ^{se}_{\triangleright} \mu)) = \{ \omega_9, \omega_{11} , \omega_{13} \}$. On the syntactic level, we can check that, since $\Sigma\cup\{\mu\}\cup U$ is consistent, the preferred element for $\preceq_{\mathcal{C},w}$, namely $Cons_w(\Sigma\circ_\triangleright^{sy}\mu)$, is simply the subset composed of all elements of $\Sigma\circ_\triangleright^{sy}\mu$. Namely we have $Cons_w(\Sigma\circ_\triangleright^{sy}\mu)= \{ \{ b \to a,b \land c \to \lnot a,d \to a,d,$ $(b \to a)\lor d,$ $(b \land c \to \lnot a)\lor d,$ $(d \to a)\lor d \}\}$ and hence $Mod(Cons_w)=\{\omega_9,\omega_{11},\omega_{13}\}$, as expected.

Concluding discussion
=====================

Since in certain situations an agent faces incomplete information and has to deal with partially ordered information, this paper proposed a semantic representation of an epistemic state by a partial pre-order on interpretations as well as a syntactic representation by a partially ordered belief base. The extension to partial pre-orders of two revision strategies already defined for total pre-orders is presented, and the equivalence between the two representations is shown. We showed that after a certain number of successive revisions the partial pre-order converges to a total pre-order.
In a future work, we have to investigate the properties of these revision operators and the presented approach could be generalized to the revision of a partial pre-order by a partial pre-order, generalizing the approach proposed in [@BKPP00]. Moreover, in order to provide reversibility the encoding by polynomials of partial pre-orders on interpretations and partially ordered belief base could be investigated. Another future work is to develop algorithms for computing the belief set at the syntactical case, and to apply them in geographical information systems where available information is often partially ordered. Acknowledgments =============== This work was supported by European Community with the REV!GIS project $\sharp$ IST-$1999$-$14189$. http://www.cmi.uni-mrs.fr/REVIGIS
{ "pile_set_name": "ArXiv" }
---
author:
- |
    Carlos A. Argüelles\
    Massachusetts Institute of Technology, Cambridge, MA 02139, USA\
    E-mail:
- |
    \
    Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin, Madison WI 53706, USA\
    E-mail:
- |
    Tianlu Yuan\
    Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin, Madison WI 53706, USA\
    E-mail:
bibliography:
- 'references.bib'
title: Tackling limited simulation and small signals
---

For binned data, the Poisson likelihood is taken to be the probability of observing events in a bin and is commonly used in high-energy physics and particle astrophysics experiments. Given an exact expectation rate, $\lambda({\vec{\theta}})$, the probability of observing an integer number of events $k$ is $$\label{eq:poisson} \mathrm{ {\mathcal{L}}({\vec{\theta}}|k) = \mathrm{Poisson}(k;\lambda({\vec{\theta}})) = \frac{\lambda({\vec{\theta}})^{k}e^{-\lambda({\vec{\theta}})}}{k!},}$$ where ${\vec{\theta}}$ is some set of physics parameters that determine $\lambda$. Given the stochastic nature of processes in particle physics, exactly determining $\lambda$ is often not possible, and one must rely on Monte Carlo (MC) simulations. Simulation is often expensive, and so reweighting is employed as it enables a single simulation to be used to describe many physical hypotheses [@Gainer:2014bta]. In such scenarios, an [*ad hoc*]{} likelihood is commonly used, where $\lambda$ is simply taken to be the sum of the weights in each bin, namely $$\label{eq:mcpoisson} {\mathcal{L}_{\textmd{AdHoc}}}({\vec{\theta}}|k) = \frac{\left(\sum_{i}{w_i({\vec{\theta}})}\right)^{k}e^{-\left(\sum_{i}{w_i\left({\vec{\theta}}\right)}\right)}}{k!}.$$ A notable downside of this [*ad hoc*]{} likelihood is that it neglects the statistical uncertainty inherent in estimating $\lambda({\vec{\theta}})$ from a simulation of limited size. For expensive simulations, or physical hypotheses far from the original simulation, this uncertainty can be non-negligible [@Barlow:1993dm; @Bohm:2013gla; @Chirkin:2013lya; @Glusenkamp:2017rlp]. One can account for this uncertainty by incorporating a simulation-derived prior on $\lambda$, denoted as ${\mathcal{P}}\left(\lambda|{\vec{w}}({\vec{\theta}})\right)$, that has non-zero variance. Thus, we can write the likelihood as the marginalization of the Poisson likelihood with the prior, $$\label{eq:generalpoisson} {\mathcal{L}}_{\rm General}({\vec{\theta}}|k) = \int_{0}^{\infty}~\frac{\lambda^{k}e^{-\lambda}}{k!}{\mathcal{P}}\left(\lambda|{\vec{w}}({\vec{\theta}})\right)~d\lambda.$$ We construct this prior based on the likelihood function of the simulation outcome and a prior on $\lambda$, ${\mathcal{P}}(\lambda)$, $$\label{eq:posterior} {\mathcal{P}}\left(\lambda|{\vec{w}}({\vec{\theta}})\right) = \frac{{\mathcal{L}}(\lambda|{\vec{w}}({\vec{\theta}})){\mathcal{P}}(\lambda)}{\int_0^\infty {\mathcal{L}}(\lambda'|{\vec{w}}({\vec{\theta}})){\mathcal{P}}(\lambda')~d\lambda'},$$ where in our implementation we have chosen ${\mathcal{P}}(\lambda)$ to be uniform. Let us first consider the case where all simulation events in the bin have equal weight.
In this scenario we can relate the number of events, $m$, and the weight of the events, $w$, to the quantities $\mu$ and $\sigma$, which are defined $$\label{eq:musigma} \mu \equiv \sum_{i=1}^m w_i~\textmd{and}~\sigma^2 \equiv \sum_{i=1}^m w_i^2,$$ and satisfy the relationships $$\label{eq:ids} \mu=wm \textmd{,}~ \sigma^2=w^2 m \textmd{,}~ w=\sigma^2/\mu \textmd{, and }~ m = \mu^2/\sigma^2.$$ The probability of obtaining $m$ events in the simulation bin can be modelled with the Poisson distribution assuming the true but unknown mean $\bar m$: $$\label{eq:mcprob} \mathrm{Poisson}(M=m;\bar m) = \frac{e^{-\bar m} {\bar m}^m}{m!}.$$ This allows us to rewrite the likelihood of $\lambda$ in terms of $\mu$ and $\sigma$ as $${\mathcal{L}}(\lambda|{\vec{w}}({\vec{\theta}}))={\mathcal{L}}(\lambda|\mu, \sigma)=\frac{e^{-\lambda\mu/\sigma^2}\left(\lambda\mu/\sigma^2\right)^{\mu^2/\sigma^2}}{(\mu^2/\sigma^2)!}. \label{eq:poisson_conditional}$$ If the simulation event weights are not all equal, as is usually the case, then we can replace $w$ and $m$ with their “effective” counterparts ${w_\mathrm{Eff}}$ and ${m_\mathrm{Eff}}$. These then relate to $\mu$ and $\sigma$ as $$\label{eq:effparameters} \mu= {w_\mathrm{Eff}}{m_\mathrm{Eff}}~\textmd{and}~\sigma^2 = {w_\mathrm{Eff}}^2 {m_\mathrm{Eff}}.$$ The replacement redefines the likelihood of our simulation outcome $$\begin{aligned} \label{eq:probmeff} {\mathcal{L}}(\bar m|{m_\mathrm{Eff}})&= \frac{e^{-\bar m}{\bar m}^{{m_\mathrm{Eff}}}}{\Gamma({m_\mathrm{Eff}}+1)},\end{aligned}$$ which, assuming $\lambda = {w_\mathrm{Eff}}{\bar m}$, can be rewritten as $${\mathcal{L}}(\lambda|{\vec{w}}({\vec{\theta}}))={\mathcal{L}}(\lambda|\mu, \sigma)=\frac{e^{-\lambda\mu/\sigma^2}\left(\lambda\mu/\sigma^2\right)^{\mu^2/\sigma^2}}{\Gamma(\mu^2/\sigma^2+1)}. \label{eq:poisson_conditional_arb}$$ To simplify the notation, define $$\label{eq:alphabetamc} {\alpha}\equiv \frac{\mu^2}{\sigma^2}+1~\textmd{and}~{\beta}\equiv \frac{\mu}{\sigma^2}.$$ Substituting Eq.  into Eq.  and assuming a uniform ${\mathcal{P}}(\lambda)$, we obtain $$\begin{aligned} \label{eq:theposterior} {\mathcal{P}}(\lambda|{\vec{w}}({\vec{\theta}})) &= {\beta}\frac{ e^{-\lambda {\beta}}(\lambda {\beta})^{{\alpha}-1}}{\Gamma({\alpha})}\nonumber \\ &= \frac{e^{-\lambda {\beta}} \lambda^{{\alpha}-1} {\beta}^{{\alpha}}}{\Gamma({\alpha})} \nonumber \\ &= {\mathcal{G}}(\lambda;{\alpha}, {\beta}),\end{aligned}$$ where ${\mathcal{G}}(\lambda;{\alpha}, {\beta})$ is the gamma distribution, with shape and inverse rate parameters $\alpha$ and $\beta$. Finally, this can be substituted for ${\mathcal{P}}\left(\lambda|\vec{w}({\vec{\theta}})\right)$ in Eq. (\[eq:generalpoisson\]) so that $$\begin{aligned} {{\mathcal{L}}_\textmd{Eff}}({\vec{\theta}}|k) &=\int_{0}^{\infty}~\frac{\lambda^k e^{-\lambda}}{k!}{\mathcal{G}}(\lambda;{\alpha}, {\beta})~d\lambda \\ &= \frac{{\beta}^{\alpha}\Gamma\left(k+{\alpha}\right)}{k!\left(1+{\beta}\right)^{k+{\alpha}}\Gamma\left({\alpha}\right)} \\ &= \left(\frac{\mu}{\sigma^2}\right)^{\frac{\mu^2}{\sigma^2}+1}\Gamma\left(k+\frac{\mu^2}{\sigma^2}+1\right)\left[k!\left(1+\frac{\mu}{\sigma^2}\right)^{k+\frac{\mu^2}{\sigma^2}+1}\Gamma\left(\frac{\mu^2}{\sigma^2}+1\right)\right]^{-1}. \label{eq:parametrizedpoisson}\end{aligned}$$ Equation is an effective likelihood, motivated by Poisson statistics, and derived with a Bayesian approach. It incorporates statistical uncertainties inherent in the MC approximation of the rate by encoding the distribution of weights in terms of $\mu$ and $\sigma^2$. 
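To make the final expression above concrete, the following sketch (ours; the function names are illustrative and it is not taken from any published code) evaluates the effective likelihood in log space from the per-bin weights, next to the *ad hoc* Poisson likelihood it is meant to replace:

```python
import numpy as np
from scipy.special import gammaln

def log_L_eff(k, weights):
    """Log of the effective likelihood, computed from the per-bin MC weights."""
    w = np.asarray(weights, dtype=float)
    mu = w.sum()
    sigma2 = (w ** 2).sum()
    alpha = mu ** 2 / sigma2 + 1.0
    beta = mu / sigma2
    return (alpha * np.log(beta) + gammaln(k + alpha)
            - gammaln(k + 1.0) - (k + alpha) * np.log1p(beta) - gammaln(alpha))

def log_L_adhoc(k, weights):
    """Log of the ad hoc Poisson likelihood with lambda = sum of weights."""
    lam = float(np.sum(weights))
    return k * np.log(lam) - lam - gammaln(k + 1.0)

# A bin whose rate is estimated from only three weighted MC events: the
# effective likelihood is visibly broader than the ad hoc Poisson one.
weights = [0.8, 1.1, 2.3]
for k in (2, 4, 8):
    print(k, log_L_eff(k, weights), log_L_adhoc(k, weights))
```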
The effective likelihood ${{\mathcal{L}}_\textmd{Eff}}$ can be easily substituted for ${\mathcal{L}_{\textmd{AdHoc}}}$. A more thorough exposition, along with a generalization for different priors, ${\mathcal{P}}(\lambda)$, is given in [@Arguelles:2019izp].
{ "pile_set_name": "ArXiv" }
---
abstract: 'Sparse residual tree (SRT) is an adaptive exploration method for multivariate scattered data approximation. It leads to sparse and stable approximations in areas where the data is sufficient or redundant, and points out the possible local regions where data refinement is needed. Sparse residual forest (SRF) is a combination of SRT predictors to further improve the approximation accuracy and stability according to the error characteristics of SRTs. The hierarchical parallel SRT algorithm is based on both tree decomposition and adaptive radial basis function (RBF) explorations, whereby for each child a sparse and proper RBF refinement is added to the approximation by minimizing the norm of the residual inherited from its parent. The convergence results are established for both SRTs and SRFs. The worst case time complexity of SRTs is $\mathcal{O}(N\log_2N)$ for the initial work and $\mathcal{O}(\log_2N)$ for each prediction; meanwhile, the worst case storage requirement is $\mathcal{O}(N\log_2N)$, where the $N$ data points can be arbitrarily distributed. Numerical experiments are performed for several illustrative examples.'
address: 'Department of Chemistry, Princeton University, Princeton, NJ 08544, USA'
author:
- Xin Xu
- Xiaopeng Luo
bibliography:
- 'MReferences.bib'
title: Sparse residual tree and forest
---

scattered data, sparse approximation, binary tree, forest, radial basis function, least squares, parallel computing.

Introduction {#SRTF:s1}
============

Multivariate scattered data approximation problems arise in many areas of engineering and scientific computing. In the last five decades, radial basis function (RBF) methods have gradually become an extremely powerful tool for scattered data. This is not only because they possess dimensional independence and remarkable convergence properties (see, e.g., [@RiegerC2010_RBFSamplingInequalities; @WendlandH2005B_ScatteredData; @WendlandH2005_RBFSamplingInequalities; @WuZ1993A_ErrorEstimatesRBF; @LuoXP2014M_ReproducingKernelHDMR]), but also because a number of techniques, such as multipole (far-field) expansions [@BeatsonRK1999M_FarFieldExpansions; @WendlandH2005B_ScatteredData], multilevel methods of compactly supported kernels [@FloaterM1996M_MultilevelRBF; @GeorgoulisE2012M_MultilevelRBF; @WendlandH2005B_ScatteredData; @XuX2015A_MeshlessPDE] and partition of unity methods, have been proposed to reduce both the condition number of the resulting interpolation matrix and the complexity of calculating the interpolant. These techniques are, of course, very important in practice; however, in contrast to stability and efficiency, perhaps the following question is the most crucial one for a general representation of functions: how to accurately capture and represent the intrinsic structures of a target function, especially in high dimensional space. More specifically, when the data and the expected accuracy are given, we usually do not know at all whether the data is redundant or insufficient for the target function. So it is necessary to consider the following three questions:

- Whether the current data is just right to reach the expected accuracy?

- How to establish a sparse approximation by ignoring the possible redundancy?

- How to update the approximation by replenishing the possible insufficiency?
It is often difficult to distinguish between data insufficiency and redundancy, and they could in fact exist simultaneously in different local regions. Sparse residual tree (SRT) is developed for the purpose of representing the intrinsic structure of arbitrary dimensional scattered data. SRT is based on both tree decomposition and adaptive radial basis function (RBF) explorations. For each child a concise and proper RBF refinement, whose shape parameter is related to the current regional scale, is added to the approximation by minimizing the $2$-norm of the residual inherited from its parent; then the tree node will be further split into two according to the updated residual; and this process finally stops when the data is insufficient or the expected accuracy is reached. The word “sparse" here has two meanings: (i) the RBF exploration applies only to a sparse but sufficient subset of the current data, which is to ensure the efficiency of the training process; (ii) the centers of the RBF refinement are also sparse relative to the sparse subset, which is to ensure the efficiency of the prediction process. Thus, on the one hand, SRT provides sparse approximations in areas where the data is sufficient or redundant, and on the other hand, SRT points out the possible local regions where data refinement is needed. In order to ensure stability, the condition number is strictly controlled for every refinement. Furthermore, SRT also yields the excellent performance in terms of efficiency. Similar to most typical tree-based algorithms [@BentleyJ1975M_kdTree; @FriedmanJ1977M_BallTree], the worst case time complexity of SRTs is $\mathcal{O}(N\log_2N)$ for the initial training work and $\mathcal{O}(\log_2N)$ for each prediction; and the worst case storage requirement is $\mathcal{O}(N\log_2N)$, where the $N$ data points can be arbitrary distributed. The training process can be accelerated using multi-core architectures. This hierarchical parallel algorithm allows one to easily handle ten millions of data points on a personal computer, or much more on a computer cluster. Although there are some different attempts to combine tree structures and RBF methods in the field of machine learning (see, e.g., ), they have not paid any attention to their convergence. In fact, similar to multilevel methods [@WendlandH2005B_ScatteredData], these combinations do not always guarantee convergence. Most of the previously used error estimates for RBF interpolation depend on the so-called power function . But recently, sampling inequalities have become a more powerful tool in this respect, and not limited to the case of interpolation [@NarcowichF2005A_RBFSamplingInequalities; @MadychW2006A_SamplingInequalities; @RiegerC2010_RBFSamplingInequalities; @WendlandH2005_RBFSamplingInequalities]. Sampling inequalities describe the fact that a differentiable function whose derivatives are bounded cannot attain large values if it is small on a sufficiently dense discrete set. Together with the stability of the least squares framework for residual trees, we prove that a SRT based on arbitrary basis functions leads to algebraic convergence orders for finitely smooth functions. Further combining the appropriate embeddings of certain native spaces, we also prove that the Gaussian or inverse multiquadric based SRT leads to exponential convergence orders for infinitely smooth functions. Since the SRT approximation is actually piecewise smooth, the error of each piece is significantly larger near the boundary. 
And the sparse residual forest (SRF), which is a combination of SRT predictors with different tree decompositions, is specifically designed to improve this situation. For all SRTs in the SRF, the splitting method of each SRT depends on the values of a random vector sampled independently and with the same distribution. This provides an opportunity to avoid those predictions with large squared deviations and to use the average value of the remaining predictions to enhance both stability and convergence. In practice, SRFs composed of a small number of SRTs perform quite well than individual SRTs; and in theory, similar to random forests [@BreimanL2001A_RandomForest], the error for SRFs converges with probability $1$ to a limit as the number of SRTs in the SRF becomes large. It is more efficient and accurate than the traditional partition of unity method for overcoming the boundary effect of the error. The remainder of the paper is organized as follows. After appropriate notation and preliminaries are introduced in section \[SRTF:s2\], section \[SRTF:s3\] and section \[SRTF:s4\] give the frameworks of the SRTs and SRFs, respectively, and the stability, convergence and complexity of the SRT algorithm are discussed in section \[SRTF:s5\]. A series of numerical experiments is given in section \[SRTF:s6\]. In section \[SRTF:s7\], we draw some conclusions on the new method presented in this work and discuss possible extensions. Notation and Preliminaries {#SRTF:s2} ========================== Throughout the paper, $e$ denotes Euler’s constant, the space dimension $d\in\mathbb{N}$, the domain $\Omega\subset\mathbb{R}^d$ is convex, $f:\Omega\to\mathbb{R}~\textrm{or} ~\mathbb{C}$ is a given target function, $X=\{x_i\}_{i=1}^N \in\Omega$ is a set of pairwise distinct interpolation points with the fill distance $$\label{SRTF:eq:h} h:=h_{X,\Omega}:=\sup_{x\in\Omega}\min_{x_i\in X}\|x-x_i\|_2,$$ and $f_X=(f(x_1),\cdots,f(x_N))^\mathrm{T}$ are known function values. It is worth noting that $\Omega$ can also be extended to a finite union of convex domains, thereby $\Omega$ is bounded with Lipschitz boundary and satisfies an interior cone condition. In this case, we can first deal with these convex domains separately and then combine them into a meaningful whole by a suitable partition of unity, see section \[SRTF:s6\] for examples. We will focus mainly on the Gaussian kernel $G(x)=G_\delta(x):=e^{-\delta^2\|x\|_2^2}$, where $\delta>0$ is often called the *shape parameter*. Suppose that $\Omega'\subseteq\Omega$ is also convex, then for the subset $X'=\{x'_i\}_{i=1}^{N'}=X\cap\Omega'$ and selected centers $X''=\{x''_i\}_{i=1}^{N''}\subseteq X'$, where $N''\leqslant N'\leqslant N$, an Gaussian RBF approximation $s$ is required to be of the form $$s_f(x,\alpha)=\sum_{j=1}^{N''}\alpha_jG(x-x''_j),~~~x\in\Omega'$$ with unknown coefficients $\alpha=(\alpha_1,\cdots,\alpha_{N''})^\mathrm{T}$. Consider the following least squares (LS) problem $$\label{SRTF:eq:LS} \min_{\alpha\in\mathbb{R}^{N''}}\sum_{i=1}^{N'} \Big(s_f(x'_i,\alpha)-f(x'_i)\Big)^2.$$ It is worth noting here that we consider the case of $N''\ll N'$ as a sparse approximation, and can be rewritten in matrix form as $$\label{SRTF:eq:LSM} \min_{\alpha\in\mathbb{R}^{N''}}\|\Phi_{X',X''}\alpha-f_{X'}\|_2^2,$$ where the matrix $\Phi_{X',X''}\in\mathbb{R}^{N'\times N''}$ is generated by the Gaussian kernel $G(x)$. 
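To make the notation concrete, the following is a minimal illustrative Python/NumPy sketch (not the authors' implementation) of the sparse least squares problem (\[SRTF:eq:LSM\]): it assembles the kernel matrix $\Phi_{X',X''}$ for a randomly chosen subset of centers $X''\subset X'$ and solves for the coefficients $\alpha$. The shape parameter `delta` and the choice of centers are placeholder values; the QR-based solution and the adaptive center selection used in the paper are described next.

```python
import numpy as np

def gaussian_kernel_matrix(X, centers, delta):
    """Phi[i, j] = exp(-delta^2 * ||X[i] - centers[j]||_2^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-(delta ** 2) * d2)

def fit_sparse_rbf(X, f_X, centers, delta):
    """Coefficients alpha solving min_alpha ||Phi_{X,centers} alpha - f_X||_2^2."""
    Phi = gaussian_kernel_matrix(X, centers, delta)
    # A QR factorization of Phi (as discussed next in the text) yields the same
    # solution; here we simply call a standard least-squares routine.
    alpha, *_ = np.linalg.lstsq(Phi, f_X, rcond=None)
    return alpha

def predict_sparse_rbf(Xnew, centers, delta, alpha):
    return gaussian_kernel_matrix(Xnew, centers, delta) @ alpha

# Toy usage with N'' << N': a random subset of the data serves as centers here,
# purely for illustration (the paper's adaptive center selection comes later).
rng = np.random.default_rng(0)
Xp = rng.random((500, 2))                     # X', scattered data in [0,1]^2
fX = np.sin(3 * Xp[:, 0]) * np.cos(2 * Xp[:, 1])
centers = Xp[rng.choice(len(Xp), size=25, replace=False)]   # X'' subset of X'
alpha = fit_sparse_rbf(Xp, fX, centers, delta=2.0)
```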
Suppose $\Phi_{X',X''}$ have a $QR$ decomposition $\Phi_{X',X''}=QR$, where $Q\in\mathbb{R}^{N'\times N''}$ has orthonormal columns and $R\in\mathbb{R}^{N''\times N''}$ is upper triangular, then the problem has a unique solution $\alpha^*=\Phi_{X',X''}^{-1}f_{X'}=R^{-1}Q^\mathrm{T}f_{X'}$, where $R$ and $Q^\mathrm{T}f_{X'}\in\mathbb{R}^{N''}$ can be recursively obtained without computing $Q$ by Householder transformations [@GolubG2013B_MatrixComputations]. We shall consider functions from certain Sobolev spaces $W_p^k(\Omega)$ with $1\leqslant p<\infty$ and native spaces of Gaussians, $\mathcal{N}_G(\Omega)$, respectively. The Sobolev space $W_p^k(\Omega)$ consists of all functions $f$ with distributional derivatives $D^\gamma f\in L_p(\Omega)$ for all $|\gamma|\leqslant k$, $\gamma\in\mathbb{N}_0^d$. Associated with these spaces are the (semi-)norms $$|f|_{W_p^k(\Omega)}=\left(\sum_{|\gamma|=k} \|D^\gamma f\|_{L_p(\Omega)}^p\right)^{1/p} ~~\textrm{and}~~ \|f\|_{W_p^k(\Omega)}=\left(\sum_{|\gamma|\leqslant k} \|D^\gamma f\|_{L_p(\Omega)}^p\right)^{1/p}.$$ For the Gaussian kernel $G(x)=e^{-\delta^2\|x\|_2^2}$ the native space on $\mathbb{R}^d$ is given by $$\mathcal{N}_G(\mathbb{R}^d)=\left\{f\in C(\mathbb{R}^d)\cap L_2(\mathbb{R}^d):\|f\|_{\mathcal{N}_G}:=\left(\int_{\mathbb{R}^d} |\hat{f}(\omega)|^2e^{\frac{\|\omega\|_2^2}{4\delta^2}} {\mathrm{d}}\omega\right)^\frac{1}{2}<\infty\right\},$$ further, the native space $\mathcal{N}_G(\Omega)$ on a bounded domain $\Omega$ is defined as $$\mathcal{N}_G(\Omega)=\left\{f|_\Omega: f\in\mathcal{N}_G(\mathbb{R}^d)~\textrm{and}~ (f,g)_{\mathcal{N}_G(\mathbb{R}^d)}=0, \forall g\in\mathcal{N}_G(\mathbb{R}^d) ~\textrm{s.t.}~g|_\Omega=0\right\},$$ where $(f,g)_{\mathcal{N}_G(\mathbb{R}^d)}=\int_{\mathbb{R}^d} \hat{f}(\omega)\overline{\hat{g}(\omega)} e^{\frac{\|\omega\|_2^2}{4\delta^2}}{\mathrm{d}}\omega$. For any $\Omega\subseteq\mathbb{R}^d$ and all $k\geqslant0$, $$\label{SRTF:GNSembedding} \mathcal{N}_G(\Omega)\subset W_2^k(\Omega)~~\textrm{with}~~ \|f\|_{W_2^k(\Omega)}\leqslant C_G^kk^{k/2}\|f\|_{\mathcal{N}_G}, ~~\forall f\in\mathcal{N}_G(\Omega),$$ where $C_G=\sqrt{\max(\delta^{-d},1)(\frac{8\delta^2}{e}+2)}$ depends only on the shape parameter $\delta$ and the space dimension $d$, see Theorem $7.5$ of [@RiegerC2010_RBFSamplingInequalities] for details. We can also consider inverse multiquadrics $M(x)=M_\delta(x)= (1/\delta^2+\|x\|_2)^{-\beta}$ for $\beta>\frac{d}{2}$, and the inner product of native spaces $\mathcal{N}_M(\mathbb{R}^d)$ can be defined as $$(f,g)_{\mathcal{N}_M(\mathbb{R}^d)}=\int_{\mathbb{R}^d} \hat{f}(\omega)\overline{\hat{g}(\omega)}\widehat{M}^{-1}(\omega){\mathrm{d}}\omega, ~~\forall f,g\in\mathcal{N}_M(\mathbb{R}^d),$$ where $\widehat{M}(\omega)=\frac{2^{1-\beta}}{\Gamma(\beta)} (\delta\|\omega\|_2)^{\beta-d/2}K_{d/2-\beta}(\|\omega\|_2/\delta)$ and $K_v$ is the modified Bessel functions; and similarly to Gaussian kernels, for any $\Omega\subseteq\mathbb{R}^d$ and all $k\geqslant0$, $$\label{SRTF:MNSembedding} \mathcal{N}_M(\Omega)\subset W_2^k(\Omega)~~\textrm{with}~~ \|f\|_{W_2^k(\Omega)}\leqslant C_M^kk^k\|f\|_{\mathcal{N}_M}, ~~\forall f\in\mathcal{N}_M(\Omega),$$ where $C_M>0$ depends only on $\beta$ and $d$, see Theorem $7.6$ of [@RiegerC2010_RBFSamplingInequalities] for details. Sparse residual tree {#SRTF:s3} ==================== Sparse residual tree is based on both tree decomposition and adaptive RBF explorations. 
Suppose $\epsilon_{\mathrm{E}}>0$ is the expected relative absolute error (RAE) for an approximation $s$ of the target function $f$ on the interpolation dataset $X$, where $$\label{SRTF:eq:RAE} \mathrm{RAE}=\frac{\max_i|s(x_i)-f(x_i)|}{\max_i|f(x_i)|}.$$ For each child, for example, $X'\subset\Omega'$, which is $X\subset\Omega$ itself in the beginning, we need to (i) explore a sparse and proper RBF approximation $s_{r'}$ to minimize the $2$-norm of the current residual $r'(X')$; and then, (ii) split the dataset $X'$ into two proper subsets $X'_1$ and $X'_2$ as well as the domain $\Omega'$ into two proper subdomains $\Omega'_1$ and $\Omega'_2$. We call this an exploration-splitting process; schematically, the parent $X'\subset\Omega'$ is refined in step (i) and then split in step (ii) into the two children $X'_1\subset\Omega'_1$ and $X'_2\subset\Omega'_2$. As mentioned above, the RBF exploration applies only to a sparse but sufficient subset of $X'$ and the centers of $s_{r'}$ are also sparse relative to the sparse subset. Hence, let us start with a sparsification of the dataset $X'$ when its size is large.

Sparsification of datasets {#SRTF:s3:1}
--------------------------

Except when updating the residual, we can improve efficiency by replacing $X'$ with a subset that has the same distribution as $X'$ whenever $X'$ is large. Since $s_{r'}$ is only used to refine the relative global component of the current residual $r'$, it is not necessary to use all the data. Let $I$ be an index vector containing $N'_I(\leqslant N')$ unique integers selected randomly from $1$ to $N'$ inclusive, then $X'(I)$ is exactly what we need. Actually, from the independence of $X'$ and $I$, it follows that $$\mathcal{P}_{X'(I)}(x)=\frac{\mathcal{P}_{X',I}(x,i)}{\mathcal{P}_{I}(i)} =\frac{\mathcal{P}_{X'}(x)\mathcal{P}_{I}(i)}{\mathcal{P}_{I}(i)} =\mathcal{P}_{X'}(x),~~x\in\Omega',~i\in I;$$ i.e., $X'(I)$ has the same probability distribution as $X'$. The choice of the number $N'_I$ will be discussed later.

Quasi-uniform subsequence {#SRTF:s3:2}
-------------------------

Now we consider a method for generating a quasi-uniform subsequence of $X'(I)$, which is the basis for adaptive RBF explorations. To find a quasi-uniform subsequence $U'$ from $X'(I)=\{x'_{I(1)},\cdots,x'_{I(N'_I)}\}$, we start with the approximate mean point, that is, $$\label{SRTF:eq:Q1} u'_1=\arg\min_{x'\in X'(I)}\|x'-\overline{X'(I)}\|_2,~~\textrm{where}~~ \overline{X'(I)}=\frac{1}{N'_I}\sum_{i=1}^{N'_I}x'_{I(i)}.$$ And for known $U'_j=\{u'_1,\cdots,u'_j\}$, the subsequent point $u'_{j+1}$ is determined as $$\label{SRTF:eq:Qjp1} u'_{j+1}=\arg\max_{x'\in X'(I)}\left(\min_{1\leqslant l\leqslant j} \|x'-u'_l\|_2\right),$$ i.e., $u'_{j+1}\in X'(I)$ is the point that maximizes the minimum of the set of distances from it to a point in $U'_j$. By storing an $N'_I$-dimensional distance vector and an $N'_I$-dimensional index vector, it only takes $\mathcal{O}(jN'_I)$ operations to generate $j$ quasi-uniform points $U'_j$ and determine the relationship between every point of $X'(I)$ and the Voronoi diagram of $U'_j$, see Fig. \[SRTF:fig:1\] for examples.

![**[]{data-label="SRTF:fig:1"}](fig1-u.eps "fig:"){width="49.00000%"} ![**[]{data-label="SRTF:fig:1"}](fig1-n.eps "fig:"){width="49.00000%"}

Adaptive RBF exploration
------------------------

The purpose of this adaptive exploration is to determine the centers of the RBF refinement $s_{r'}$, which is only used to refine the relative global component of $r'$.
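Before introducing the exploration itself, here is a minimal sketch (our own illustration, not the authors' code) of the quasi-uniform subsequence of Eqs. (\[SRTF:eq:Q1\]) and (\[SRTF:eq:Qjp1\]): keeping a running vector of squared distances to the nearest selected point gives the stated $\mathcal{O}(jN'_I)$ cost, and the accompanying index vector records the Voronoi cell of every point.

```python
import numpy as np

def quasi_uniform_subsequence(XI, j_max):
    """Select j_max quasi-uniform points from XI (shape (N, d)).

    Returns the selected indices and, for every point of XI, the index
    (into the selected list) of its nearest selected point, i.e. its
    Voronoi cell. Cost is O(j_max * N) thanks to the running distance vector."""
    N = len(XI)
    mean = XI.mean(axis=0)
    # u'_1: the data point closest to the mean of XI
    first = int(np.argmin(((XI - mean) ** 2).sum(axis=1)))
    selected = [first]
    dist = ((XI - XI[first]) ** 2).sum(axis=1)   # squared distance to nearest selected point
    cell = np.zeros(N, dtype=int)                # Voronoi membership
    for _ in range(1, j_max):
        nxt = int(np.argmax(dist))               # u'_{j+1}: farthest from the current selection
        selected.append(nxt)
        d_new = ((XI - XI[nxt]) ** 2).sum(axis=1)
        closer = d_new < dist                    # update the running minima in O(N)
        dist[closer] = d_new[closer]
        cell[closer] = len(selected) - 1
    return np.array(selected), cell
```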
We first introduce the working parameters of SRT $$\label{SRTF:eq:WP} \omega=(\omega_1,\omega_2,\omega_3,\omega_4)^\mathrm{T},$$ where $\omega_1=\kappa>0$ is the upper bound of condition numbers, $\omega_2>0$ is the termination error of explorations, $\omega_3\in(0,1)$ is the factor of shape parameters, and $\omega_4>0$ is the termination factor of tree nodes. For a fixed factor $\omega_3$, the current shape parameter can be determined as $$\delta'=\sqrt{-\frac{\ln(\omega_3)} {\max_{x'\in X'(I)}\|x'-\overline{X'(I)}\|_2^2}},~~\textrm{where}~~ \overline{X'(I)}=\frac{1}{N'_I}\sum_{i=1}^{N'_I}x'_{I(i)};$$ and the meaning of the remaining parameters will be clarified more clearly later. Suppose $\{\chi'_1,\cdots,\chi'_{j'}\}$ are the centers inherited from its father, a reasonable idea is to choose the $(j'+1)$th center $\chi'_{j'+1}$ from the quasi-uniform subsequence $U'_{j'+d+1}$ which is generated by with the initial $U'_{j'}=\{\chi'_1,\cdots,\chi'_{j'}\}$; for the root node, we choose $\chi'_1=u'_1\in U'_1$ given by . Without loss of generality, for known $\{\chi'_1,\cdots,\chi'_j\}\subset U'_{j+d}$ with $j\geqslant j'$, we determine the $(j+1)$th center $\chi'_{j+1}$ from $U'_{j+d+1}-\{\chi'_1,\cdots,\chi'_j\}$ by the following procedure: 1. From the recursive QR decomposition (as mentioned in section \[SRTF:s2\]), $R_j$ and $Q_j^\mathrm{T}r'(X'(I))$ can be recursively obtained by $R_{j-1}$ and $Q_{j-1}^\mathrm{T}r'(X'(I))$ without computing $Q_{j-1}$, where $$Q_vR_v=\Phi_{X'(I),\{\chi'_1,\cdots,\chi'_v\}}\in\mathbb{R}^{N'_I,v}, ~~1\leqslant v\leqslant j,$$ and $\Phi_{X'(I),\{\chi'_1,\cdots,\chi'_v\}}$ is generated by the Gaussian kernel $G_{\delta'}$. 2. The temporary residual can be obtained by $$r'_j(X'(I))=r'(X'(I))-\sum_{1\leqslant i\leqslant j} \alpha_j^{(i)}e^{-\delta'^2\|X'(I)-\chi'_i\|_2^2}.$$ where the coefficients $\alpha_j=\left(\alpha^{(1)}_j,\cdots, \alpha^{(j)}_j\right)^\mathrm{T}=R_j^{-1}Q_j^\mathrm{T}r'(X'(I))$. 3. Suppose $\{\Lambda_l\}_{l=1}^{j+d+1}$ be the Voronoi diagram of the set $U'_{j+d+1}$ and $\{\Lambda_l\}_{l\in\Gamma}$ are Voronoi regions with respect to those elements from the complementary set $U'_{j+d+1}-\{\chi'_1,\cdots,\chi'_j\}$, then $$\label{SRTF:eq:centerjp1} \chi'_{j+1}=u'_{l^*}\in U'_{j+d+1}-\{\chi'_1,\cdots,\chi'_j\},$$ where $$l^*=\arg\max_{l\in\Gamma}\sum_{x'\in\Lambda_l\cap X'(I)} \frac{|r'_j(x')|^2}{n_l},$$ and $n_l$ is the point number of $\Lambda_l\cap X'(I)$. 4. And the termination criteria is $$\label{SRTF:eq:centerT} \kappa(R)>\omega_1~~\textrm{or}~~ \epsilon_j-\epsilon_{j+1}<\omega_2~~\textrm{or}~~j+1=N'_I,$$ where $\kappa(R)=\frac{\max_l|R_{ll}|}{\min_l|R_{ll}|}$ is an estimation of the condition number $\|R^{-1}\|\|R\|$ and $$\epsilon_j=\sqrt{\frac{1}{N'_I}\sum_{x'\in X'(I)}(r'_j(x'))^2}.$$ To ensure that the centers is not too sparse, its number should usually be greater than $d+2$ (imagine a case that the domain $\Omega'$ is a $d$-dimensional simplex). Obviously, each newly selected center is in the Voronoi region with the largest mean squared error of the temporary residual. This allows the exploration to effectively capture the global component of the residual, see Fig. \[SRTF:fig:2\] for examples. 
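A simplified sketch of this exploration loop (not the authors' implementation): for clarity it recomputes the QR factorization at every step instead of updating it recursively with Householder transformations, it uses a fixed candidate pool rather than growing $U'_{j+d+1}$ incrementally, and `kappa_max`, `tol` and `n_cand` are illustrative stand-ins for the working parameters $\omega_1$, $\omega_2$ and $N'_I$. It reuses `gaussian_kernel_matrix` and `quasi_uniform_subsequence` from the earlier sketches.

```python
import numpy as np

def adaptive_rbf_exploration(XI, rI, delta, kappa_max=1e8, tol=1e-8, n_cand=200):
    """Greedy selection of refinement centers for the residual rI on XI (sketch)."""
    n_cand = min(len(XI), n_cand)
    cand_idx, cell = quasi_uniform_subsequence(XI, n_cand)   # candidate pool U'
    chosen = [int(cand_idx[0])]                              # start from u'_1
    eps_prev = np.inf
    while len(chosen) < n_cand:
        Phi = gaussian_kernel_matrix(XI, XI[chosen], delta)
        R = np.linalg.qr(Phi, mode='r')
        diag = np.abs(np.diag(R))
        if diag.min() == 0 or diag.max() / diag.min() > kappa_max:
            chosen.pop()                  # last center made R too ill-conditioned
            break
        alpha, *_ = np.linalg.lstsq(Phi, rI, rcond=None)
        resid = rI - Phi @ alpha
        eps = np.sqrt(np.mean(resid ** 2))
        if eps_prev - eps < tol:          # exploration no longer pays off
            break
        eps_prev = eps
        # next center: candidate whose Voronoi cell has the largest mean squared residual
        scores = np.full(n_cand, -np.inf)
        for l in range(n_cand):
            if int(cand_idx[l]) in chosen or not np.any(cell == l):
                continue
            scores[l] = np.mean(resid[cell == l] ** 2)
        if not np.isfinite(scores).any():
            break
        chosen.append(int(cand_idx[int(np.argmax(scores))]))
    return chosen
```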
![**[]{data-label="SRTF:fig:2"}](fig2-f.eps "fig:"){width="49.00000%"} ![**[]{data-label="SRTF:fig:2"}](fig2-D.eps "fig:"){width="49.00000%"} ![**[]{data-label="SRTF:fig:2"}](fig2-s.eps "fig:"){width="49.00000%"} ![**[]{data-label="SRTF:fig:2"}](fig2-ys.eps "fig:"){width="49.00000%"} The sparse RBF refinement $s_{r'}$ is obtained when the exploration is terminated, then we update the residual $r''$ on the full set $X'$. Let the final number of the centers is $N''$ and relevant coefficients $\alpha'=\left(\alpha'_1,\cdots,\alpha'_{N''}\right)^\mathrm{T}$, then $$\label{SRTF:eq:updError} r''(X')=r'(X')-\sum_{1\leqslant i\leqslant N''} \alpha'_ie^{-\delta'^2\|X'-\chi'_i\|_2^2}.$$ In addition, assume that the number of all currently existing nodes is $M$ and $\{n_c^{(i)}\}_{i=1}^M$ is the set of the center number of each node, now define the average $$\label{SRTF:eq:Anc} \bar{n}_c=\frac{1}{M}\sum_{i=1}^Mn_c^{(i)},$$ and we can use a certain multiple of the average $\bar{n}_c$, say $100$ times, as the value of $N'_I$ for the sparsification of the next node. For the initial node we usually take a fixed value related to the dimension $d$. Equal binary splitting and termination {#SRTF:s3:4} -------------------------------------- First we consider the selection of two splitting points, then use a hyperplane, whose normal is defined by these two points, to split all the points $X'$ into two parts as well as the domain $\Omega'$ into two subdomains. Clealy, since the half space and $\Omega'$ are both convex, each subdomain is also convex. In order to block the spread of error, we expect to separate the points with large errors from those with small errors. First, we generate $d+1$ quasi-uniform points $U'_{d+1}$ of $X'(I)$ by the method of subsection \[SRTF:s3:2\] with a different starting point: $$u'_1=\arg\max_{x'\in X'(I)}\|x'-\overline{X'(I)}\|_2,~~\textrm{where}~~ \overline{X'(I)}=\frac{1}{N'_I}\sum_{i=1}^{N'_I}x'_{I(i)}.$$ Assume that the domain $\Omega'$ is a $d$-dimensional simplex and $X'$ is dense enough, then $U'_{d+1}$ can almost be viewed as its vertices. Let $\{\Lambda_l\}_{l=1}^{d+1}$ be the Voronoi diagram of $U'_{d+1}$, then the first splitting point is determined as $$x'_a=u'_{l^*},$$ where $l^*=\arg\max_l\sum_{x'\in\Lambda_l\cap X'(I)}\frac{|r''(x')|^2}{n_l}$ and $n_l$ is the point number of $\Lambda_l\cap X'(I)$. Then the second splitting point is determined as $$x'_b=\arg\max_{x'\in X'(I)}\|x'-x'_a\|_2.$$ Then, according to the projections of $X'$ in the direction $x'_b-x'_a$ and its median, $X'$ can be splitted into $X'_1$ and $X'_2$ with the sizes $\lceil\frac{N'}{2}\rceil$ and $N'-\lceil\frac{N'}{2}\rceil$, respectively; where $\lceil t\rceil$ denotes the least integer greater than or equal to $t$. Specifically, let $\vec{n}'=(x'_b-x'_a)^\mathrm{T}$, then the projections $$P_{\vec{n}'}(X')=X'\vec{n}',~~\textrm{where}~~ X'\in\mathbb{R}^{N'\times d}~\textrm{and}~\vec{n}'\in\mathbb{R}^{d\times 1};$$ let $c'=\textrm{median}(P_{\vec{n}'}(X'))$, then $X'_1$ and $X'_2$ can be given as $$\label{SRTF:eq:split1} X'_1=\{x'\in X':P_{\vec{n}'}(x')\leqslant c'\}~~\textrm{and}~~X'_2=X'-X'_1;$$ and similarly, $\Omega'_1$ and $\Omega'_2$ can be given as $$\label{SRTF:eq:split2} \Omega'_1=\{x'\in \Omega':x'\vec{n}'\leqslant c'\} ~~\textrm{and}~~\Omega'_2=\Omega'-\Omega'_1.$$ Since the local high-frequency error tends to propagate over the entire domain, blocking its propagation is very important for a sparse approximation, and this is the motivation for designing the above splitting, see Fig. 
\[SRTF:fig:3\]. ![**[]{data-label="SRTF:fig:3"}](fig3-f.eps "fig:"){width="49.00000%"} ![**[]{data-label="SRTF:fig:3"}](fig3-r.eps "fig:"){width="49.00000%"} This exploration-splitting process finally stops if the expected RAE $\epsilon_{\mathrm{E}}$ is reached or the data is insufficient at the current tree node. Another important use of the average $\bar{n}_c$ defined in is to determine whether the data is sufficient. Obviously, a sparse approximation must be based on relatively sufficient data, so if the size of $X'_1$ or $X'_2$ is less than $\omega_4$ times the average $\bar{n}_c$ and the RAE of residual still does not reach the expected $\epsilon_{\mathrm{E}}$, then we consider that the relevant node is lack of data, terminate further operations and record the node. A proper $\omega_4$ can guarantee that the prediction does not over-fit the data. SRT prediction and its error characteristics -------------------------------------------- Suppose $s'$ is the current approximation on the domain $\Omega'$ and $s'_{r''(X'_i)}$ is the refinement on $\Omega'_i~(i=1,2)$. Then the next approximation $s'_i$ on $\Omega'_i~(i=1,2)$ can be given as $$\label{SRTF:eq:J} s'_i(x)=s'(x)+s'_{r''(X'_i)}(x),~~\forall x\in\Omega'_i.$$ It is clear that the SRT prediction is actually piecewise smooth on the original domain $\Omega$, hence the error of each piece will be significantly larger near the boundary. The following example illustrates the error characteristics of SRTs. Although the SRT prediction, as shown on the left-hand side of Fig. \[SRTF:fig:4\], can adaptively build a piecewise and sparse approximation according to local features of the target function, the approximation error, as shown on the right-hand side of Fig. \[SRTF:fig:4\], may be significantly larger near the boundary of each piece. Hence, we will introduce the sparse residual forest for overcoming this boundary effect of the error in the next section. ![**[]{data-label="SRTF:fig:4"}](fig4-ys.eps "fig:"){width="49.00000%"} ![**[]{data-label="SRTF:fig:4"}](fig4-err.eps "fig:"){width="49.00000%"} The partition of unity is also one of the methods to address this issue. By introducing appropriate overlapping domains and rapidly decaying weight functions, the boundary effect of the error can be alleviated to some extent. However, since the overlapping domains usually cannot be too small and the depth of the tree is often not small, its time and space costs are significantly higher than $\mathcal{O}(N\log_2N)$. Instead, sparse residual forests still have the same cost as SRTs. And it provides even better performance than the partition of unity based method in terms of accuracy. Sparse residual forest {#SRTF:s4} ====================== Sparse residual forest (SRF) is a combination of SRT predictors with different tree decompositions. It provides an opportunity to avoid those predictions near the boundary and then use the average value of the remaining predictions to enhance both stability and convergence. First, we introduce a random splitting for SRTs. It can help generate random tree decompositions. Random binary splitting {#SRTF:s4:1} ----------------------- To get a random splitting, we only need to replace the median with a random percentile in . Let $p_r$ be a randomly selected integer from $37$ to $62$ inclusive, then $c'$ can be redefined as $$c'=\textrm{percentile}(P_{\vec{n}'}(X'),p_r),$$ where $\textrm{percentile}(Z,p_r)$ denotes the percentile of the values in a data vector $Z$ for the percentage $p_r$. 
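A compact sketch covering both the equal binary splitting described earlier and the random splitting just introduced (our own simplification: $x'_a$ is taken directly as the point with the largest squared residual, whereas the paper selects it from the residual-weighted Voronoi cells of $d+1$ quasi-uniform points):

```python
import numpy as np

def binary_split(Xp, resid, randomize=False, rng=None):
    """Split Xp (shape (N, d)) into two index sets along the direction x'_b - x'_a.

    Simplification: x'_a is the point with the largest squared residual and
    x'_b is the point farthest from x'_a; the split value is the median of the
    projections (equal splitting) or a random percentile in [37, 62]."""
    xa = Xp[int(np.argmax(resid ** 2))]
    xb = Xp[int(np.argmax(((Xp - xa) ** 2).sum(axis=1)))]
    direction = xb - xa
    proj = Xp @ direction
    if randomize:                                   # random binary splitting (for SRFs)
        rng = rng or np.random.default_rng()
        c = np.percentile(proj, int(rng.integers(37, 63)))
    else:                                           # equal binary splitting (median)
        c = np.median(proj)
    left = proj <= c
    return np.flatnonzero(left), np.flatnonzero(~left)
```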
Note that $0.618$ is the golden ratio and this method depends on the values of a random vector sampled independently and with the same distribution. SRF prediction -------------- Suppose $n_t$ is the number of SRTs in the SRF, we usually apply the equal splitting to generate the first SRT and the random splitting to create the remaining $n_t-1$ SRTs. SRF helps us to avoid those predictions with large squared deviations and to use the average value of the remaining predictions to enhance both stability and convergence. For any $x\in\Omega$, let $s_{\textrm{SRT}}^{(i)}(x)$ be the $i$th SRT prediction ($1\leqslant i\leqslant n_t$), then the squared deviation $$\sigma_i^2(x)=\left(s_{\textrm{SRT}}^{(i)}(x)- \frac{1}{n_t}\sum_{j=1}^{n_t}s_{\textrm{SRT}}^{(j)}(x)\right)^2,$$ further, let the indicator set $$I_F=\left\{1\leqslant i\leqslant n_t:\sigma_i^2(x) <\frac{1}{n_t}\sum_{j=1}^{n_t}\sigma_j^2(x)\right\},$$ then the SRF prediction $$\label{SRTF:eq:SRF} s_{\textrm{SRF}}(x)=\frac{1}{n_{I_F}}\sum_{i\in I_F}s_{\textrm{SRT}}^{(i)}(x), ~~\textrm{where}~n_{I_F}~\textrm{is the size of}~I_F.$$ The indicator set $I_F$ here is used to avoid those predictions near the boundaries. In practice, as shown in Fig. \[SRTF:fig:5\], SRFs composed of a small number of SRTs perform quite well than individual SRTs; and in theory, similar to random forests [@BreimanL2001A_RandomForest], the error for SRFs converges with probability $1$ to a limit as $n_t$ becomes large, see Fig. \[SRTF:fig:6\] for examples and subsection \[SRTF:s5:3\] for details. Although SRF predictions usually have smaller errors when the SRT number $n_t$ is larger, we usually do not recommend choosing a large $n_t$, which means $n_t$ times the storage and computational cost. ![**[]{data-label="SRTF:fig:5"}](fig5-te.eps "fig:"){width="49.00000%"} ![**[]{data-label="SRTF:fig:5"}](fig5-fe.eps "fig:"){width="49.00000%"} ![**[]{data-label="SRTF:fig:6"}](fig6-fe10.eps "fig:"){width="49.00000%"} ![**[]{data-label="SRTF:fig:6"}](fig6-fe100.eps "fig:"){width="49.00000%"} Theory {#SRTF:s5} ====== Stability properties -------------------- Suppose $\Omega_{L-1}$ is a leaf node, that is at the lowest level in a SRT, and $L$ levels of approximation, then there exists a domain sequences $\Omega_0\supset\Omega_1\supset\cdots \supset\Omega_{L-1}$ and a relevant dataset sequences $X_0\supset X_1\supset\cdots\supset X_{L-1}$ with relevant sizes $N_0>N_1>\cdots>N_{L-1}$ and shape parameters $\delta_0<\delta_1<\cdots<\delta_{L-1}$, where $\Omega_0=\Omega$ is convex, $X_0=X$ and $N_0=N$; and then, the SRT prediction of the target function $f$ is $$\label{SRTF:eq:SRT} s_{\mathrm{SFT}}(x)=\sum_{i=0}^{L-1}s_i(x),~~\forall x\in\Omega_{L-1},$$ and the final residual $$\label{SRTF:eq:RSRT} r_L(x)=f(x)-s_{\mathrm{SFT}}(x),~~\forall x\in\Omega_{L-1},$$ where $s_i(x)=\sum_{j=1}^{N'_i}\alpha_i^{(j)}G_{\delta_i}(x-\chi_i^{(j)})\in \mathcal{N}_{G_{\delta_i}}(\Omega_i)$ is the LS approximation of the residual $r_i(X_i)$ with respect to the centers $\chi_i=\{\chi_i^{(j)}\}_{j=1}^{N'_i}\in X_i$, and $r_{i+1}=r_i-s_i$ with $r_0=f$. Then, for any $1\leqslant i\leqslant L-1$, it follows that $$\big(s_i(X_i),r_{i+1}(X_i)\big)_{\ell_2}=0~~\textrm{and}~~ \alpha_i=R_i^{-1}Q_i^\mathrm{T}r_i(X_i)=R_i^{-1}Q_i^\mathrm{T}s_i(X_i),$$ where $Q_iR_i$ is the QR decomposition of the current matrix $\Phi_i=\Phi_{X_i,\chi_i}$ generated by the kernel $G_{\delta_i}$. 
If $\tau_i$ is the smallest singular value of $R_i$, then $$\label{SRTF:eq:Ti} \|\alpha_i\|_2\leqslant\tau_i^{-1}\|s_i(X_i)\|_2.$$ According to the orthogonality of $s_i(X_i)$ and $r_{i+1}(X_i)$, we can obtain the following recurrence relations $$\|r_i(X_i)\|_2^2=\|r_{i+1}(X_i)\|_2^2+\|s_i(X_i)\|_2^2, ~~0\leqslant i\leqslant L-1,$$ and $$\|r_i(X_{i-1})\|_2^2>\|r_i(X_i)\|_2^2,~~1\leqslant i\leqslant L,$$ thus, it follows that $$\|f_X\|_2^2=\|s_0(X_0)\|_2^2+\|r_1(X_0)\|_2^2 >\sum_{i=0}^{L-1}\|s_i(X_i)\|_2^2+\|r_L(X_{L-1})\|_2^2.$$ Together with , we proved the following theorem. \[SRTF:thm:Cbound\] Suppose $s_\mathrm{SFT}$ is a SRT prediction of a function $f$ on a leaf node $\Omega_{L-1}\subset\Omega$ with respect to the data $(X,f_X)$, as defined in . Let $\alpha_i$ be the coefficients of the $i$th level LS approximation $s_i$, then $$\sum_{i=0}^{L-1}\|\alpha_i\|_2\leqslant\tau^{-1}\cdot\|f_X\|_2,$$ where $\tau=\min_{1\leqslant i\leqslant L-1}\tau_i$ and the constants $\tau_i$ comes from . Note that this theorem obviously holds for our SRTs with sparsification processes introduced in subsection \[SRTF:s3:1\]. And now we can prove the following theorem. \[SRTF:thm:WNbound\] Under the supposition of Theorem \[SRTF:thm:Cbound\]. For all $1\leqslant p<\infty$, $k\in\mathbb{N}_0$, $\delta\geqslant\delta_{L-1}$, and any leaf node $\Omega_{L-1}$ of the prediction $s_{\mathrm{SRT}}$, it holds that $$|s_\mathrm{SFT}|_{W_p^k(\Omega_{L-1})}\leqslant C_W\cdot\tau^{-1}\cdot\|f_X\|_2 ~~\textrm{and}~~\|s_\mathrm{SFT}\|_{\mathcal{N}_{G_\delta}(\Omega_{L-1})} \leqslant C_{\mathcal{N}}\cdot\tau^{-1}\cdot\|f_X\|_2,$$ where the constant $\tau$ comes from Theorem \[SRTF:thm:Cbound\], the constant $C_W$ depends only on $\delta_0,\delta_{L-1},d,p$ and $k$, and the constant $C_{\mathcal{N}}$ depends only on $\delta_{L-1}$ and $d$. To prove the first inequality, observe that $$\begin{aligned} |s_i|_{W_p^k(\Omega_{L-1})}\leqslant\!\left(\sum_{|r|=k}\sum_j \big|\alpha_i^{(j)}\big|^p\big\|D^rG_{\delta_i}\big\|^p_{L_p(\Omega_{L-1})} \right)^{1/p}\!\!\leqslant M_{\delta_i}^{p,k}\|\alpha_i\|_p\leqslant C_1M_{\delta_i}^{p,k}\|\alpha_i\|_2,\end{aligned}$$ where $M_{\delta_i}^{p,k}=\left(\sum_{|r|=k} \|D^rG_{\delta_i}\|^p_{L_p(\mathbb{R}^d)}\right)^{1/p}$, and for any $0\leqslant i\leqslant L-1$, $M_{\delta_i}^{p,k}<M_{\delta_{L-1}}^{p,k}$ when $k>1$; or $M_{\delta_i}^{p,k}<M_{\delta_0}^{p,k}$ when $k<1$; or $M_{\delta_i}^{p,k}=M^{p,k}$ is independent of $\delta_i$ when $k=1$. Together with Theorem \[SRTF:thm:Cbound\], we have $$\begin{aligned} |s_\mathrm{SFT}|_{W_p^k(\Omega_{L-1})}\leqslant&\sum_{i=0}^{L-1} |s_i|_{W_p^k(\Omega_{L-1})}\leqslant C_W\cdot\tau^{-1}\cdot\|f_X\|_2,\end{aligned}$$ where $C_W=C_1M_{\delta_{L-1}}^{p,k}$ when $k>1$, or $C_W=C_1M_{\delta_0}^{p,k}$ when $k<1$, or $C_W=C_1M^{p,k}$ when $k=1$. To prove the second inequality, observe that for any $s_i\in \mathcal{N}_{G_{\delta_i}}(\Omega_i)$, there is a natural extension $\mathcal{E}s_i\in\mathcal{N}_{G_{\delta_i}}(\mathbb{R}^d)$ with $\|\mathcal{E}s_i\|_{\mathcal{N}_{G_{\delta_i}}(\mathbb{R}^d)}= \|s_i\|_{\mathcal{N}_{G_{\delta_i}}(\Omega_i)}$. 
From the definition of native spaces of Gaussians, we see that $\mathcal{E}s_i\in\mathcal{N}_{G_\delta} (\mathbb{R}^d)$ with $$\label{SRTF:eq:keyNG} \|\mathcal{E}s_i\|_{\mathcal{N}_{G_\delta}(\mathbb{R}^d)}\leqslant \|\mathcal{E}s_i\|_{\mathcal{N}_{G_{\delta_i}}(\mathbb{R}^d)},$$ where $\delta\geqslant\delta_{L-1}>\cdots>\delta_0$; and further, the restriction $\mathcal{E}s_i|\Omega_{L-1}=s_i|\Omega_{L-1}$ of $\mathcal{E}s_i$ to $\Omega_{L-1}\subseteq\Omega_i$ is contained in $\mathcal{N}_{G_\delta}(\Omega_{L-1})$ with $$\|s_i|\Omega_{L-1}\|_{\mathcal{N}_{G_\delta}(\Omega_{L-1})}\leqslant \|\mathcal{E}s_i\|_{\mathcal{N}_{G_\delta}(\mathbb{R}^d)},$$ hence, we have $\|s_i|\Omega_{L-1}\|_{\mathcal{N}_{G_\delta}(\Omega_{L-1})} \leqslant\|\mathcal{E}s_i\|_{\mathcal{N}_{G_{\delta_i}}(\mathbb{R}^d)}$, and then $$\|s_\mathrm{SFT}\|_{\mathcal{N}_{G_\delta}(\Omega_{L-1})} \leqslant\sum_{i=1}^{L-1} \|s_i|\Omega_{L-1}\|_{\mathcal{N}_{G_\delta}(\Omega_{L-1})} \leqslant\sum_{i=1}^{L-1} \|\mathcal{E}s_i\|_{\mathcal{N}_{G_{\delta_i}}(\mathbb{R}^d)}.$$ Together with Theorem \[SRTF:thm:Cbound\] and $$\begin{aligned} \|\mathcal{E}s_i\|^2_{\mathcal{N}_{G_{\delta_i}}(\mathbb{R}^d)} \!=\!\int_{\mathbb{R}^d}|\hat{s}_i(\omega)|^2 e^{\frac{\|\omega\|_2^2}{4\delta_i^2}}{\mathrm{d}}\omega \leqslant\|\alpha_i\|_1^2\! \int_{\mathbb{R}^d}e^{-\frac{\|\omega\|_2^2}{4\delta_i^2}}{\mathrm{d}}\omega \leqslant C_2^2(2\delta_{L-1})^d\pi^{d/2}\|\alpha_i\|_2^2\end{aligned}$$ we finally have $\|s_\mathrm{SFT}\|_{\mathcal{N}_{G_\delta}(\Omega_{L-1})} <C_{\mathcal{N}}\cdot\tau^{-1}\cdot\|f_X\|_2$, where $C_{\mathcal{N}}=C_2(2\delta_{L-1})^{d/2}\pi^{d/4}$. See Theorems $10.46$ and $10.47$ in [@WendlandH2005B_ScatteredData] for details about the restriction and extension of functions from certain native spaces. \[SRTF:rem:M\] The second inequality depends on the embeddings of native spaces of Gaussians. As mentioned in section \[SRTF:s2\], the Fourier transform of the inverse multiquadrics is $\widehat{M}_\delta(\omega)=\frac{2^{1-\beta}}{\Gamma(\beta)} (\delta\|\omega\|_2)^{\beta-d/2}K_{d/2-\beta}(\|\omega\|_2/\delta)$, then for any $\beta>\frac{d}{2}$ and $\delta\geqslant\delta_i$, $\widehat{M}_\delta^{-1}(\omega) \leqslant\widehat{M}_{\delta_i}^{-1}(\omega)$, and then, for an inverse multiquadric based $s_i$, $$\label{SRTF:eq:keyNM} \|\mathcal{E}s_i\|_{\mathcal{N}_{M_\delta}(\mathbb{R}^d)}\leqslant \|\mathcal{E}s_i\|_{\mathcal{N}_{M_{\delta_i}}(\mathbb{R}^d)},$$ hence, the second inequality also holds for native spaces of inverse multiquadrics. Error estimates for SRTs ------------------------ \[SRTF:thm:EEW\] Under the supposition of Theorem \[SRTF:thm:Cbound\]. If $f\in W_p^k(\Omega)$ and $r_L$ is the residual $f-s_{\mathrm{SRT}}$ on an arbitrary leaf node $\Omega_{L-1}$, then for any $1\!\leqslant\!q\!\leqslant\!\infty$, $\gamma\in\mathbb{N}_0^d$, and $1\!\leqslant\!p\!<\!\infty$ with $k>|\gamma|+d/p$ if $p>1$, or with $k\geqslant|\gamma|+d$ if $p=1$, it holds that $$\|D^\gamma r_L\|_{L_q(\Omega_{L-1})}\!\leqslant\! C\left[h^{k-|\gamma|-\left(\frac{d}{p}-\frac{d}{q}\right)_+}\!\! \left(|f|_{W_p^k(\Omega)}\!+C_W\tau^{-1}\|f_X\|_2\right)\!+ h^{-|\gamma|}\|r_L|X_L\|_\infty\right],$$ where $(t)_+=\max(t,0)$, the fill distance $h$ is assumed to be sufficiently small, the constant $C$ do not depend on $f,r_L$ or $h$, and the constant $C_W$ comes from Theorem \[SRTF:thm:WNbound\]. 
According to the sampling inequality for functions from certain Sobolev spaces on a bounded domain (see Theorem $2.6$ in [@WendlandH2005_RBFSamplingInequalities]), we have $$\|D^\gamma r_L\|_{L_q(\Omega_{L-1})}\!\leqslant\! C\left(h^{k-|\gamma|-\left(\frac{d}{p}-\frac{d}{q}\right)_+} |r_L|_{W_p^k(\Omega_{L-1})}\!+h^{-|\gamma|}\|r_L|X_L\|_\infty\right),$$ and further, $$|r_L|_{W_p^k(\Omega_{L-1})}=|f-s_{\mathrm{SRT}}|_{W_p^k(\Omega_{L-1})} \leqslant|f|_{W_p^k(\Omega)}+|s_{\mathrm{SRT}}|_{W_p^k(\Omega_{L-1})}.$$ Applying the first inequality of Theorem \[SRTF:thm:WNbound\] finishes the proof. This result also explains how the matrix $\Phi_i$ at each level affects the convergence. It is worth noting that this proof does not depend on the radial basis functions, so the next observation is an immediate consequence. The result of Theorem \[SRTF:thm:EEW\] holds for arbitrary basis functions based SRTs provided those basis functions belongs to $W_p^k(\Omega)$. It shows that a SRT, whose basis functions are differentiable and have bounded derivatives on $\Omega$ (regardless of polynomials, trigonometric polynomials, radial basis functions), leads to algebraic convergence orders for finitely smooth target functions. For infinitely smooth target functions, the following theorem shows that the Gaussian based SRT leads to exponential convergence orders. \[SRTF:thm:EENG\] Under the supposition of Theorem \[SRTF:thm:Cbound\]. If $f\in\mathcal{N}_{G_\delta}(\Omega)$ and $r_L$ is the residual $f-s_{\mathrm{SRT}}$ on an arbitrary leaf node $\Omega_{L-1}$, then for any $1\!\leqslant\! q\!\leqslant\!\infty$, $\gamma\in\mathbb{N}_0^d$, and $\delta>\delta_{L-1}$, there are constants $C$ and $h_0$ such that for all $h\leqslant h_0$, it holds that $$\|D^\gamma r_L\|_{L_q(\Omega_{L-1})}\!\leqslant\! e^{C\log(h)/\sqrt{h}}\left(\|f\|_{\mathcal{N}_{G_\delta}(\Omega)}+ C_{\mathcal{N}}\tau^{-1}\|f_X\|_2\right) +C'h^{-|\gamma|}\|r_L|X_L\|_\infty,$$ where the constant $C$ depends only on the geometry of $\Omega_{L-1}$, $h_0$ may depend on $d,p,q,\gamma$ and the geometry of $\Omega_{L-1}$ but not on $h$ or $f$, $C'$ do not depend on $h$ or $r_L$, and the constant $C_{\mathcal{N}}$ comes from Theorem \[SRTF:thm:WNbound\]. According to the sampling inequality for functions from certain native spaces of Gaussians on a bounded domain (see Theorems $3.5$ and $7.5$ in [@RiegerC2010_RBFSamplingInequalities]), we have $$\|D^\gamma r_L\|_{L_q(\Omega_{L-1})}\!\leqslant\! e^{C\log(h)/\sqrt{h}}\|r_L\|_{\mathcal{N}_{G_\delta}(\Omega_{L-1})} +C'h^{-|\gamma|}\|r_L|X_L\|_\infty,$$ and further, $$\|r_L\|_{\mathcal{N}_{G_\delta}(\Omega_{L-1})} =\|f-s_{\mathrm{SRT}}\|_{\mathcal{N}_{G_\delta}(\Omega_{L-1})} \leqslant\|f|\Omega_{L-1}\|_{\mathcal{N}_{G_\delta}(\Omega_{L-1})} +\|s_{\mathrm{SRT}}\|_{\mathcal{N}_{G_\delta}(\Omega_{L-1})},$$ where $f|\Omega_{L-1}$ is the restriction of $f$ to $\Omega_{L-1}$ with $\|f|\Omega_{L-1}\|_{\mathcal{N}_{G_\delta}(\Omega_{L-1})}\leqslant \|f\|_{\mathcal{N}_{G_\delta}(\Omega)}$ (see Theorem $10.47$ in [@WendlandH2005B_ScatteredData]); and applying the second inequality of Theorem \[SRTF:thm:WNbound\] finishes the proof. Similarly, according to Remark \[SRTF:rem:M\] and the sampling inequality for functions from certain native spaces of Gaussians on a bounded domain (see Theorems $3.5$ and $7.6$ in [@RiegerC2010_RBFSamplingInequalities]), we can also prove the convergence for the inverse multiquadric based SRTs. \[SRTF:thm:EENM\] Under the supposition of Theorem \[SRTF:thm:Cbound\]. 
If $f\in\mathcal{N}_{M_\delta}(\Omega)$, $s_{\mathrm{SRT}}$ is based on inverse multiquadrics, and $r_L$ is the residual $f-s_{\mathrm{SRT}}$ on an arbitrary leaf node $\Omega_{L-1}$, then for any $1\!\leqslant\! q\!\leqslant\!\infty$, $\gamma\in\mathbb{N}_0^d$, and $\delta>\delta_{L-1}$, there are constants $C$ and $h_0$ such that for all $h\leqslant h_0$, it holds that $$\|D^\gamma r_L\|_{L_q(\Omega_{L-1})}\!\leqslant\! e^{-\frac{C}{\sqrt{h}}}\left(\|f\|_{\mathcal{N}_{M_\delta}(\Omega)}+ C_{\mathcal{N}}\tau^{-1}\|f_X\|_2\right) +C'h^{-|\gamma|}\|r_L|X_L\|_\infty,$$ where the constants $C$ and $h_0>0$ depends only on $d,p,q,\gamma$ and the geometry of $\Omega_{L-1}$, $C'$ do not depend on $h$ or $r_L$, and the constant $C_{\mathcal{N}}$ comes from Theorem \[SRTF:thm:WNbound\]. Error estimates for SRFs {#SRTF:s5:3} ------------------------ For any $x\in\Omega_{L-1}\subset\Omega$, each SRT prediction $s_{\mathrm{SRT}}^{(i)}(x)$ ($1\leqslant i\leqslant n_t$) in a SRF converges to the target function $f(x)$ and satisfies relevant error estimates, thus, together with the Strong Law of Large Numbers and the Lindeberg-Levy central limit theorem, it follows that: For any $x\in\Omega_{L-1}$, there exists an expectation $m_{\mathrm{SRF}}(x)$ such that $$\lim_{n_t\to\infty}s_{\mathrm{SRF}}(x;n_t) =\lim_{n_t\to\infty}\left(\frac{1}{n_t} \sum_{i=1}^{n_t}s_{\mathrm{SRT}}^{(i)}(x)\right)$$ converges almost surely to $m_{\mathrm{SRF}}(x)$. Further, for any $1\leqslant q\leqslant\infty$ and $\gamma\in\mathbb{N}_0^d$, if $\|D^\gamma(s_{\mathrm{SRT}}^{(i)}(x)-f(x))\|_{L_q(\Omega_{L-1})}\leqslant\epsilon$, then there exists $\sigma\leqslant2\epsilon$ such that the random variables $\|D^\gamma(s_{\mathrm{SRF}}(x;n_t)- m_{\mathrm{SRF}}(x))\|_{L_q(\Omega_{L-1})}$ converge in distribution to a normal $N(0,\sigma/\sqrt{n_t})$, i.e., for any $\lambda_a>0$, the inequality $$\left\|D^\gamma\Big(s_{\mathrm{SRF}}(x;n_t)-m_{\mathrm{SRF}}(x) \Big)\right\|_{L_q(\Omega_{L-1})}\leqslant\frac{\lambda_a\sigma}{\sqrt{n_t}} \leqslant\frac{2\lambda_a\epsilon}{\sqrt{n_t}}$$ holds with probability $1-a$, where $a=\frac{1}{\sqrt{2\pi}}\int_{-\lambda_a}^{\lambda_a}e^{-\frac{t^2}{2}}{\mathrm{d}}t$. Obviously, the above result also holds for the SRF prediction defined in that is more stable and is specially designed for overcoming the boundary effect of the error, as shown in Fig. \[SRTF:fig:3\]. Combining the results of the previous subsection, one can obtain the error estimates for SRF predictions in the corresponding spaces. Complexity analysis ------------------- Since the maximum depth of a binary tree is $\log_2N$ and the full data is only used for updating the residual, it is easy to see that: Algorithm in section \[SRTF:s3\] needs $\mathcal{O}(N\log_2N)$ time and $\mathcal{O}(N\log_2N)$ space in the worst case to train a SRT for $N$ arbitrary distributed points; and needs $\mathcal{O}(\log_2N)$ time in the worst case to make a prediction for a new point $x$. And the costs of algorithm in section \[SRTF:s4\] are $n_t$ times that of the SRT for a SRF with $n_t$ SRTs. This result shows that the SRT or SRF also yields the excellent performance in terms of efficiency in addition to accuracy and adaptability. It is worth pointing out that the algorithm in section \[SRTF:s3\] is designed to achieve hierarchical parallel processing so that the training process can be accelerated using multi-core architectures. 
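Before turning to the numerical examples, here is a small sketch of the SRF combination rule (\[SRTF:eq:SRF\]) of section \[SRTF:s4\] (our own illustration, assuming each SRT prediction is already available as an array of values on the evaluation points):

```python
import numpy as np

def srf_combine(tree_predictions):
    """Combine SRT predictions according to Eq. (SRF) in section 4.

    tree_predictions: array of shape (n_t, n_points), one row per SRT.
    At each point, predictions whose squared deviation from the mean exceeds
    the average squared deviation are dropped, and the rest are averaged."""
    P = np.atleast_2d(np.asarray(tree_predictions, dtype=float))
    dev2 = (P - P.mean(axis=0)) ** 2
    keep = dev2 < dev2.mean(axis=0)          # indicator set I_F, per point
    keep |= ~keep.any(axis=0)                # if nothing survives (all equal), keep all
    return (P * keep).sum(axis=0) / keep.sum(axis=0)

# Usage: s_SRF = srf_combine([pred_tree_1, pred_tree_2, ..., pred_tree_nt])
```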
Numerical examples {#SRTF:s6}
==================

In this section we compare the performance of both SRT and SRF with Gaussian process regression (GPR). For an approximation $s$ of the target function $f$ on a test dataset $Z=\{z_i\}_{i=1}^{N_t}$ of size $N_t$, we use the relative mean absolute error (RMAE) as a measure of accuracy, i.e., $$\label{SRTF:eq:RMAE} \mathrm{RMAE}=\frac{\sum_{i=1}^{N_t}|s(z_i)-f(z_i)|}{\sum_{i=1}^{N_t}|f(z_i)|}.$$ We use two test functions: one is Franke’s function, which is defined as: $$\begin{aligned} \label{SRTF:eq:Franke} f(x)=&\frac{3}{4}\exp\left(-\frac{(9x_1\!-\!2)^2}{4}-\frac{(9x_2\!-\!2)^2}{4}\right) +\frac{3}{4}\exp\left(-\frac{(9x_1+1)^2}{49}-\frac{9x_2+1}{10}\right) \\ &+\frac{1}{2}\exp\left(-\frac{(9x_1\!-\!7)^2}{4}-\frac{(9x_2\!-\!3)^2}{4}\right) \!-\!\frac{1}{5}\exp\left(-(9x_1\!-\!4)^2\!-(9x_2\!-\!7)^2\right)\nonumber,\end{aligned}$$ where $x\in[0,1]^d$ for $d\geqslant2$; and the other is locally oscillating and defined as: $$\begin{aligned} \label{SRTF:eq:LO} g(x)=&-2x_1x_2+2x_2^2-330\exp\left(-\frac{\|x\|_2^2}{2}\right)\sin(2\|x\|_2^2),\end{aligned}$$ where $x\in[-7,7]^d$ for $d\geqslant2$. All our numerical tests are based on scattered data which are either randomly generated or taken from the Halton sequence [@HaltonJH1960M_HaltonSequence]. In addition, the procedure for the above two methods at each sample size is repeated $5000$ times to investigate the stability of the results. We use [Matlab]{}’s function [fitrgp]{} to generate a GPR model trained on the same sample data as the proposed methods; the GPR model is fitted using the subset of regressors method for parameter estimation and the fully independent conditional method for prediction, and the predictors are standardized. Besides, since the computational complexity of GPR is $\mathcal{O}(N^3)$ for the training work and $\mathcal{O}(N^2)$ for each prediction, where $N$ is the sample size, it is very difficult to apply GPR to large data sets, so the sample size is varied only from $10^1$ to $10^4$ in all numerical tests involving GPR.

Accuracy, sparsity, storage and computational time
--------------------------------------------------

The number of Halton points is varied from $10^1$ to $10^4$ for Franke’s function. The results are shown in Fig. \[SRTF:fig:7\]. From the upper left of Fig. \[SRTF:fig:7\], as expected, the RMAEs of both SRT and SRF are much lower than that of GPR when the number of data points $N$ is large. Moreover, from the upper right of Fig. \[SRTF:fig:7\], the average number of centers used for an SRT prediction at one point varies from $10$ to $182$. Besides, over this range of sample sizes, both the storage requirement and the computational time of the proposed methods are much lower than those of GPR.

![**[]{data-label="SRTF:fig:7"}](fig7vsgprmae.eps "fig:"){width="40.00000%"} ![**[]{data-label="SRTF:fig:7"}](fig7numberc.eps "fig:"){width="40.00000%"} ![**[]{data-label="SRTF:fig:7"}](fig7vsgpmemory.eps "fig:"){width="40.00000%"} ![**[]{data-label="SRTF:fig:7"}](fig7vsgptime.eps "fig:"){width="40.00000%"}

Insufficient data report
------------------------

We choose the second test function $g(x)$, $x\in[-7,7]^2$, to illustrate the insufficient data situation. From Fig. \[SRTF:fig:8\] we see that $g(x)$ is complicated near the center of the domain, and the RAE of the residual there still does not reach the expected error when the number of sample points is $N=3000$; that is, the relevant node lacks data in this area.
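For reference, a sketch of the two test functions (\[SRTF:eq:Franke\]) and (\[SRTF:eq:LO\]) and of the RMAE measure (\[SRTF:eq:RMAE\]) used in this section, written in Python/NumPy rather than the Matlab used for the experiments:

```python
import numpy as np

def franke(x):
    """Franke's function on [0,1]^d, d >= 2 (only x_1 and x_2 enter)."""
    x1, x2 = x[..., 0], x[..., 1]
    return (0.75 * np.exp(-((9 * x1 - 2) ** 2) / 4 - ((9 * x2 - 2) ** 2) / 4)
            + 0.75 * np.exp(-((9 * x1 + 1) ** 2) / 49 - (9 * x2 + 1) / 10)
            + 0.5 * np.exp(-((9 * x1 - 7) ** 2) / 4 - ((9 * x2 - 3) ** 2) / 4)
            - 0.2 * np.exp(-(9 * x1 - 4) ** 2 - (9 * x2 - 7) ** 2))

def oscillating(x):
    """Locally oscillating test function g on [-7,7]^d."""
    r2 = (x ** 2).sum(axis=-1)
    return (-2 * x[..., 0] * x[..., 1] + 2 * x[..., 1] ** 2
            - 330 * np.exp(-r2 / 2) * np.sin(2 * r2))

def rmae(s_pred, f_true):
    """Relative mean absolute error of Eq. (RMAE)."""
    return np.abs(s_pred - f_true).sum() / np.abs(f_true).sum()
```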
Further, after increasing the number of sample points $N$ to $6000$, the lower left of Fig. \[SRTF:fig:8\] shows, as expected, that the RAE clearly decreases (the maximum RAE drops from $0.1926$ to $0.0784$). Besides, from the lower right of Fig. \[SRTF:fig:8\], the median number of centers used for an SRT prediction at one point is close to $110$ for both $N=3000$ and $N=6000$.

![**[]{data-label="SRTF:fig:8"}](fig8raeN3000.eps "fig:"){width="40.00000%"} ![**[]{data-label="SRTF:fig:8"}](fig8lack.eps "fig:"){width="40.00000%"} ![**[]{data-label="SRTF:fig:8"}](fig8raeN6000.eps "fig:"){width="40.00000%"} ![**[]{data-label="SRTF:fig:8"}](fig8centersnew.eps "fig:"){width="40.00000%"}

$3$-dimensional problem
-----------------------

The results for the $3$-dimensional Franke’s function are shown in Fig. \[SRTF:fig:9\]. The number of Halton points is varied from $10^1$ to $10^6$. We find that the RMAEs of the proposed methods are not as good as that of GPR when the sample size is less than $10^4$. As the sample size increases from $10^4$ to $10^6$, the error decreases from $5.7224\times10^{-4}$ to $2.3126\times10^{-7}$ for SRT, and from $1.3037\times10^{-4}$ to $4.7757\times10^{-8}$ for SRF. Besides, from the lower panels of Fig. \[SRTF:fig:9\], both the storage requirement and the computing time of the proposed methods are lower than those of GPR. The average number of centers used for an SRT prediction at one point varies from $10$ to $853$.

![**[]{data-label="SRTF:fig:9"}](fig9vsgprmae3D10tmix.eps "fig:"){width="40.00000%"} ![**[]{data-label="SRTF:fig:9"}](fig9numberc10treeN6.eps "fig:"){width="40.00000%"} ![**[]{data-label="SRTF:fig:9"}](fig9vsgpmemory3D10tmix.eps "fig:"){width="40.00000%"} ![**[]{data-label="SRTF:fig:9"}](fig9vsgptime3D10tmix.eps "fig:"){width="40.00000%"}

Conclusions {#SRTF:s7}
===========

In this work, we proposed two new methods for multivariate scattered data approximation, named sparse residual tree (SRT) and sparse residual forest (SRF), respectively. We proved that the worst case time complexity of SRTs is $\mathcal{O}(N\log_2 N)$ for the initial work and $\mathcal{O}(\log_2N)$ for each prediction, and that the worst case storage requirement is $\mathcal{O}(N\log_2N)$, where $N$ is the number of data points. From the numerical experiments, we find that the proposed methods are good at dealing with cases where the data is sufficient or even redundant. For higher dimensional problems, the proposed methods do not work as well as we expected. The possible reason is that the sample size is usually difficult to make sufficient or redundant in higher dimensions, and in that regime the proposed methods tend to point out the possible local regions where data refinement is needed, rather than obtain approximations. This also suggests that the proposed methods are well suited to problems with large data sets. In future work, we will try to improve the proposed methods for solving higher dimensional problems.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We show that the solution of a multistate system composed of $N$ degenerate lower (ground) states and one upper (excited) state can be reduced by using the Morris-Shore transformation to the solution of a two-state system involving only the excited state and a (bright) superposition of ground states. In addition, there are $N-1$ dark states composed of ground states. We use this decomposition to derive analytical solutions for degenerate extensions of the most popular exactly soluble models: the resonance solution, the Rabi, Landau-Zener, Rosen-Zener, Allen-Eberly and Demkov-Kunike models. We suggest various applications of the multistate solutions, for example, as tools for creating multistate coherent superpositions by generalized resonant $\pi $-pulses. We show that such generalized $\pi $-pulses can occur even when the upper state is far off resonance, at specific detunings, which makes it possible to operate in the degenerate ground-state manifold without populating the (possibly lossy) upper state, even transiently.' author: - 'E.S. Kyoseva' - 'N.V. Vitanov' title: 'Coherent pulsed excitation of degenerate multistate systems: Exact analytic solutions' --- Introduction ============ The problem of a two-state quantum system driven by a time-dependent pulsed external field plays a central role in quantum physics [@Shore]. First of all, this problem is interesting by itself both physically and mathematically: physically, because the two-state system is the simplest nontrivial system with discrete energy states in quantum mechanics; mathematically, because the Schrödinger equation for two states poses interesting mathematical challenges some of which are exactly soluble. Furthermore, already in the two-state case, important nonclassical phenomena occur, for instance, the famous Rabi oscillations, which often serve as a test for quantum behavior, and also provide a powerful tool for coherent control of quantum dynamics, e.g. by $\pi $ pulses. Finally, in almost all cases (except for a few exactly soluble), the behavior of a multistate quantum system can only be understood by reduction to one or more effective two-state systems, e.g., by adiabatic elimination of weakly coupled states or by using some intrinsic symmetries. Besides the well-known solution for exact resonance, there exist several exactly soluble two-state models, the most widely used being the Rabi [Rabi]{}, Landau-Zener [@LZ], Rosen-Zener [@RZ], Allen-Eberly [@AE], Bambini-Berman [@BB], Demkov-Kunike [@DK], Carroll-Hioe [@CH], Demkov [@Demkov] and Nikitin [@Nikitin] models. All these models provide the transition probability between two *nondegenerate* states. ![Top: The system studied in this paper. $N$ degenerate (in RWAsense) states $\left\vert \protect\psi _{1}\right\rangle ,\left\vert \protect% \psi _{2}\right\rangle ,\ldots ,\left\vert \protect\psi _{N}\right\rangle $ are coupled simultaneously to an upper state $\left\vert \protect\psi % _{N+1}\right\rangle $, possibly off single-photon resonance by a detuning $% \Delta (t)$, with Rabi frequencies $\Omega _{n}(t)$ ($n=1,2,\ldots ,N$). Bottom: The same system in the Morris-Shore basis. 
There are $N-1$ uncoupled dark states $\left\vert \protect\varphi _{1}\right\rangle ,\left\vert \protect\varphi _{2}\right\rangle ,\ldots ,\left\vert \protect\varphi % _{N-1}\right\rangle $, and a pair of coupled states, a bright state $% \left\vert \protect\varphi _{N}\right\rangle $ and the upper state $% \left\vert \protect\psi _{N+1}\right\rangle $, with the same detuning $% \Delta (t)$ as in the original basis and a coupling given by the rms Rabi frequency $\Omega (t)$, Eq. (\[OmegaRMS\]). []{data-label="Fig-system"}](fig1.eps){width="60mm"} In the present paper, we present the extensions of these exactly soluble models to the case when one of the states is replaced by $N$ degenerate states, as displayed in Fig. \[Fig-system\]. By using the Morris-Shore (MS) transformation [@Morris-Shore] we show that the ($N+1$)-state problem can be reduced to an effective two-state problem involving a bright state and the upper, nondegenerate state. If known, the propagator for this subsystem can be used to find the solution for the full ($N+1$)-state system. Such analytic solutions can be very useful in designing general unitary transformations within the $N$-state degenerate manifold, which can be viewed as a *qunit* for quantum information processing [@QI]. We point out that the same system for $N=3$ has been considered by Unanyan et al [@tripod] and by Kis and Stenholm [@Kis] for general $N$, who have derived the adiabatic solution for pulses generally delayed in time; these schemes extend the well-known technique of stimulated Raman adiabatic passage (STIRAP) (see [@STIRAP] for reviews). Here we derive several *exact* analytic solutions for pulses *coincident* in time. This work can therefore be considered as an extension to arbitrary $N$ of an earlier paper [@Vitanov98], which treated the case $N=2$. This paper is organised as follows. In Sec. \[Sec-definition\] we describe the system and define the problem. In Sec. \[Sec-solution\] we introduce the MS basis and derive the $\left( N+1\right) $-state propagator in terms of the (presumably known) two-state propagator. In Sec. \[Sec-examples\] we use this solution to identify various interesting types of population evolutions. In Sec. \[Sec-applications\] we use the analytic solutions for exact resonance and the Rosen-Zener model to propose several applications, for example, creation of maximally coherent superpositions and qunit rotation. In Sec. \[Sec-LZ\] we discuss some aspects of the multistate Landau-Zener and Demkov-Kunike models. Finally, Sec. \[Sec-conclusions\] provides a summary of the results. Definition of the problem\[Sec-definition\] =========================================== System Hamiltonian ------------------ We consider an $\left( N+1\right) $-state system with $N$ degenerate lower (ground) states $\left\vert \psi _{n}\right\rangle $ $\left( n=1,2,...,N\right) $ and one upper (excited) state $\left\vert \psi _{N+1}\right\rangle $, as depicted in Fig. \[Fig-system\]. The $N$ lower states are coupled via the upper state with pulsed interactions, each pair of which are on two-photon resonance (Fig. \[Fig-system\]). The upper state $\left\vert \psi _{N+1}\right\rangle $ may be off single-photon resonance by some detuning $\Delta (t)$ that, however, must be the same for all fields. 
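As a numerical aside (not part of the original analysis), the linkage pattern just described can be set up directly in matrix form; the sketch below anticipates the explicit RWA Hamiltonian written out in the next paragraph, with $\hbar=1$ and purely illustrative values of the Rabi frequencies and detuning.

```python
import numpy as np

def rwa_hamiltonian(rabi, detuning):
    """(N+1)x(N+1) RWA Hamiltonian with hbar = 1: N degenerate ground states
    coupled to a single upper state with Rabi frequencies rabi[n] and a
    common detuning on the upper state."""
    rabi = np.asarray(rabi, dtype=float)
    N = len(rabi)
    H = np.zeros((N + 1, N + 1))
    H[:N, N] = H[N, :N] = 0.5 * rabi
    H[N, N] = detuning
    return H

# Illustrative numbers only (not from the paper): three ground states, resonant case.
H = rwa_hamiltonian(rabi=[1.0, 0.5, 0.8], detuning=0.0)
eigenvalues = np.linalg.eigvalsh(H)   # N - 1 of these vanish (the dark states discussed below)
```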
In the usual rotating-wave approximation (RWA) the Schrödinger equation of the system reads [@Shore]$$i\hbar \frac{d}{dt}\mathbf{C}(t)=\mathsf{H}(t)\mathbf{C}(t), \label{SEq}$$where the elements of the $\left( N+1\right) $-dimensional vector $\mathbf{C}% (t)$ are the probability amplitudes of the states and the Hamiltonian is given by$$\mathsf{H}(t)=\ \frac{\hbar }{2}% \begin{bmatrix} 0 & 0 & \cdots & 0 & \Omega _{1}\left( t\right) \\ 0 & 0 & \cdots & 0 & \Omega _{2}\left( t\right) \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & \Omega _{N}\left( t\right) \\ \Omega _{1}\left( t\right) & \Omega _{2}\left( t\right) & \cdots & \Omega _{N}\left( t\right) & 2\Delta \left( t\right) \end{bmatrix}% . \label{H}$$For the sake of simplicity the Rabi frequences of the couplings between the ground states and the excited state $\Omega _{1}\left( t\right) ,...,\Omega _{N}\left( t\right) $ are assumed real and positive as the populations do not depend on their signs. The phases of the couplings can easily be incorporated in the description and they can be used to control the inner phases of the created superposition states. Furthermore, the Rabi frequencies are assumed to be pulse-shaped functions with the same time dependence $f(t)$, but possibly with different magnitudes,$$\Omega _{n}\left( t\right) =\chi _{n}f(t)\quad \left( n=1,2,...,N\right) , \label{equal (t)}$$and hence different pulse areas,$$A_{n}=\int_{-\infty }^{\infty }\Omega _{n}(t)dt=\chi _{n}\int_{-\infty }^{\infty }f(t)dt\quad \left( n=1,2,...,N\right) . \label{area_n}$$ Physical implementations ------------------------ The linkage pattern described by the Hamiltonian (\[H\]) can be implemented experimentally in laser excitation of atoms or molecules. For example, the $N=3$ case is readily implemented in the $J=1\leftrightarrow J=0 $ system coupled by three laser fields with right circular, left circular and linear polarizations, as shown in Fig. \[Fig-implementations\] (left). These coupling fields can be produced from the same laser by standard optical tools (beam splitters, polarizers, etc.), which greatly facilitates implementation. Moreover, the use of pulses derived from the same laser ensures automatically the two-photon resonance conditions and the condition (\[equal (t)\]) for the same temporal profile of all pulses. The cases of $N=4-6$ can be realized by adding an additional $J=1$ level to the coupling scheme and appropriately polarized laser pulses, as shown in the right frame of Fig. \[Fig-implementations\]. ![Examples of physical implementations of the linkage pattern of $N$ degenerate ground states coupled via one upper state, considered in the present paper. Left: $N=3$ degenerate states. Right: $N=4$ degenerate (in the RWA sense) states (dashed arrows indicate two additional possible linkages).[]{data-label="Fig-implementations"}](fig2.eps){width="80mm"} General solution\[Sec-solution\] ================================ Morris-Shore (dark-bright) basis -------------------------------- The Hamiltonian (\[H\]) has $N-1$ zero eigenvalues and two nonzero ones, $$\begin{aligned} \lambda _{n} &=&0\quad (n=1,\ldots ,N-1), \\ ~\lambda _{\pm }(t) &=&\frac{1}{2}\left[ \Delta \pm \sqrt{\Delta ^{2}+\Omega ^{2}(t)}\right] ,\end{aligned}$$ where $$\Omega (t)=\sqrt{\overset{N}{\underset{n=1}{\sum }}\Omega _{n}^{2}(t)}\equiv \chi f(t) \label{OmegaRMS}$$ is the root-mean-square (rms) Rabi frequency, where$$\chi =\sqrt{\overset{N}{\underset{n=1}{\sum }}\chi _{n}^{2}}. 
\label{chi}$$The set of orthonormalized eigenstates $\left\vert \varphi _{n}\right\rangle $ $(n=1,2,\ldots ,N-1)$ corresponding to the zero eigenvalues can be chosen as \[dark-states\] $$\begin{aligned} \left\vert \varphi _{1}\right\rangle &=&\frac{1}{X_{2}}\left[ \chi _{2},-\chi _{1},0,0,\cdots ,0\right] ^{T}, \label{xi1} \\ \left\vert \varphi _{2}\right\rangle &=&\frac{1}{X_{2}X_{3}}\left[ \chi _{1}\chi _{3},\chi _{2}\chi _{3},-X_{2}^{2},0,\cdots ,0\right] ^{T}, \label{xi2} \\ \left\vert \varphi _{3}\right\rangle &=&\frac{1}{X_{3}X_{4}}\left[ \chi _{1}\chi _{4},\chi _{2}\chi _{4},\chi _{3}\chi _{4},-X_{3}^{2},0,\cdots ,0% \right] ^{T}, \label{xi3} \\ &&\cdots \notag \\ \left\vert \varphi _{N-1}\right\rangle &=&\frac{1}{X_{N-1}X_{N}}\left[ \chi _{1}\chi _{N},\chi _{2}\chi _{N},\cdots ,-X_{N-1}^{2},0\right] ^{T}, \label{xiN-1}\end{aligned}$$where $$X_{n}=\sqrt{\overset{n}{\underset{k=1}{\sum }}\chi _{k}^{2}}\quad \left( n=2,3,...,N\right) . \label{Xn}$$ These eigenstates are dark states, i.e. they do not involve the excited state $\left\vert \psi _{N+1}\right\rangle $ and, as we shall see, are uncoupled from $\left\vert \psi _{N+1}\right\rangle $. All dark states are time-independent. We emphasize that the choice (\[dark-states\]) of dark states is not unique because any superposition of dark states is a dark state too; hence their choice is a matter of convenience. The Hilbert space is decomposed into two subspaces: an $(N-1)$-dimensional dark subspace comprising the dark states (\[dark-states\]) and a two-dimensional subspace orthogonal to the dark subspace. It is convenient to use the Morris-Shore (MS) basis [@Morris-Shore], which, in addition to the dark states, includes the excited state $\left\vert \psi _{N+1}\right\rangle \equiv \left\vert \varphi _{N+1}\right\rangle $ and a bright ground state $\left\vert \varphi _{N}\right\rangle $. The latter does not have a component of the excited state and is orthogonal to the dark states; these conditions determine it completely (up to an unimportant global phase),$$\left\vert \varphi _{N}\right\rangle =\frac{1}{X_{N}}\left[ \chi _{1},\chi _{2},\cdots ,\chi _{N},0\right] ^{T}. \label{bright state}$$We point out that the Morris-Shore basis is *not* the adiabatic basis because only the dark states are eigenstates of the Hamiltonian, but $% \left\vert \varphi _{N}\right\rangle $ and $\left\vert \varphi _{N+1}\right\rangle $ are not. In the new, still stationary basis $\left\{ \left\vert \varphi _{n}\right\rangle \right\} _{n=1,2,...,N+1}$, the Schrödinger equation reads $$i\hbar \frac{d}{dt}\mathbf{B}(t)=\widetilde{\mathsf{H}}(t)\mathbf{B}(t), \label{SEq-MS}$$where the original amplitudes $\mathbf{C}(t)$ are connected to the MS amplitudes $\mathbf{B}(t)$ by the time-independent unitary matrix $\mathsf{W} $ composed by the basis vectors $\left\vert \varphi _{n}\right\rangle $, $$\mathsf{W}=\left[ \left\vert \varphi _{1}\right\rangle ,\left\vert \varphi _{2}\right\rangle ,~\ldots ,~\left\vert \varphi _{N+1}\right\rangle \right] , \label{W}$$according to $$\mathbf{C}(t)=\mathsf{W}\mathbf{B}(t). \label{C=WA}$$The transformed Hamiltonian reads $\widetilde{\mathsf{H}}(t)=\mathsf{W}% ^{\dag }\mathsf{H}(t)\mathsf{W}$, or explicitly,$$\widetilde{\mathsf{H}}(t)=\frac{\hbar }{2}% \begin{bmatrix} 0 & 0 & \cdots & 0 & 0 & 0 \\ 0 & 0 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 & 0 \\ 0 & 0 & \cdots & 0 & 0 & \Omega (t) \\ 0 & 0 & \cdots & 0 & \Omega (t) & 2\Delta (t)% \end{bmatrix}% . 
\label{H-MS}$$ We point out that the Hamiltonian of Eq. (\[H\]) is a special case of the most general Hamiltonian for which the MS transformation [@Morris-Shore] applies and which includes $N$ degenerate lower states and $M$ degenerate upper states. Hamiltonians of the same type as (\[H\]) and related transformations leading to Eq. (\[H-MS\]), have appeared in the literature also before the paper by Morris and Shore [@Morris-Shore], mostly in simplified versions of constant and equal interactions (see e.g. [Stenholm]{} and references therein). Solution in the Morris-Shore basis ---------------------------------- As evident from the first $N-1$ zero rows of $\widetilde{\mathsf{H}}$ the dark states are decoupled from states $\left\vert \varphi _{N}\right\rangle $ and $\left\vert \varphi _{N+1}\right\rangle $ and the dark-state amplitudes remain unchanged, $B_{n}(t)=const$ ($n=1,2,\ldots ,N-1$). Thus the $\left( N+1\right) $-state problem reduces to a two-state one involving $\left\vert \varphi _{N}\right\rangle $ and $\left\vert \varphi _{N+1}\right\rangle $,$$i\frac{d}{dt}\left[ \begin{array}{c} B_{N} \\ B_{N+1}% \end{array}% \right] =\frac{1}{2}% \begin{bmatrix} 0 & \Omega \\ \Omega & 2\Delta% \end{bmatrix}% \left[ \begin{array}{c} B_{N} \\ B_{N+1}% \end{array}% \right] . \label{2 state problem}$$The propagator for this two-state system, defined by$$\left[ \begin{array}{c} B_{N}\left( +\infty \right) \\ B_{N+1}\left( +\infty \right)% \end{array}% \right] =\mathsf{U}_{MS}^{\left( 2\right) }\left[ \begin{array}{c} B_{N}\left( -\infty \right) \\ B_{N+1}\left( -\infty \right)% \end{array}% \right] , \label{A=UA}$$is unitary and can be expressed in terms of the Cayley-Klein parameters as$$\mathsf{U}_{MS}^{\left( 2\right) }=% \begin{bmatrix} a & b \\ -b^{\ast } & a^{\ast }% \end{bmatrix}% , \label{U2}$$with $\left\vert b\right\vert ^{2}=1-\left\vert a\right\vert ^{2}$. Then the transition matrix for the $\left( N+1\right) $-state system in the MS basis reads $$\mathsf{U}_{MS}^{\left( N+1\right) }=% \begin{bmatrix} 1 & 0 & \cdots & 0 & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 & 0 \\ 0 & 0 & \cdots & 0 & a & b \\ 0 & 0 & \cdots & 0 & -b^{\ast } & a^{\ast }% \end{bmatrix}% . 
\label{U-MS}$$ The solution in the original basis ---------------------------------- We can find the transition matrix in the original, diabatic basis by using the transformation$$\mathsf{U}^{\left( N+1\right) }(\infty ,-\infty )=\mathsf{WU}_{MS}^{\left( N+1\right) }(\infty ,-\infty )\mathsf{W}^{\dag }, \label{U=WUW}$$or explicitly, $$\mathsf{U}^{\left( N+1\right) }=% \begin{bmatrix} 1+\left( a-1\right) \frac{\chi _{1}^{2}}{\chi ^{2}} & \left( a-1\right) \frac{\chi _{1}\chi _{2}}{\chi ^{2}} & \left( a-1\right) \frac{\chi _{1}\chi _{3}}{\chi ^{2}} & \cdots & \left( a-1\right) \frac{\chi _{1}\chi _{N}}{\chi ^{2}} & b\frac{\chi _{1}}{\chi } \\ \left( a-1\right) \frac{\chi _{1}\chi _{2}}{\chi ^{2}} & 1+\left( a-1\right) \frac{\chi _{2}^{2}}{\chi ^{2}} & \left( a-1\right) \frac{% \chi _{2}\chi _{3}}{\chi ^{2}} & \cdots & \left( a-1\right) \frac{% \chi _{2}\chi _{N}}{\chi ^{2}} & b\frac{\chi _{2}}{\chi } \\ \left( a-1\right) \frac{\chi _{1}\chi _{3}}{\chi ^{2}} & \left( a-1\right) \frac{\chi _{2}\chi _{3}}{\chi ^{2}} & 1+\left( a-1\right) \frac{\chi _{3}^{2}}{\chi ^{2}} & \cdots & \left( a-1\right) \frac{% \chi _{3}\chi _{N}}{\chi ^{2}} & b\frac{\chi _{3}}{\chi } \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ \left( a-1\right) \frac{\chi _{1}\chi _{N}}{\chi ^{2}} & \left( a-1\right) \frac{\chi _{2}\chi _{N}}{\chi ^{2}} & \left( a-1\right) \frac{\chi _{3}\chi _{N}}{\chi ^{2}} & \cdots & 1+\left( a-1\right) \frac{\chi _{N}^{2}}{\chi ^{2}} & b\frac{\chi _{N}}{\chi } \\ -b^*\frac{\chi _{1}}{\chi } & -b^*\frac{\chi _{2}}{\chi } & -b^*\frac{\chi _{3}}{\chi } & \cdots & -b^*\frac{\chi _{N}}{\chi } & a^*% \end{bmatrix}% . \label{U}$$ The $i$th column of this matrix provides the probability amplitudes for initial conditions \[initial condition k\] $$\begin{aligned} C_{i}(-\infty ) &=&1, \\ C_{n}(-\infty ) &=&0\quad (n\neq i).\end{aligned}$$The initial state $\left\vert \psi _{i}\right\rangle $ can be one of the degenerate states or the upper state. This general unitary matrix and combinations of such matrices can be used to design techniques for general or special qunit rotations. As evident from Eq. (\[U\]) for finding the populations for the initial condition (\[initial condition k\]) it is sufficient to know only the parameter $a=\left[ U_{MS}^{(2)}(\infty ,-\infty )\right] _{11}$ because $\left\vert b\right\vert ^{2}=1-\left\vert a\right\vert ^{2}$ [@Vitanov98]. For the sake of simplicity, in the present paper we are interested only in cases when the system starts in a single state and below we shall concentrate on the values of the parameter $a$. In the more general case when the system starts in a coherent superposition of states, Eq. (\[U\]) can be used again to derive the solution; then the other Cayley-Klein parameter $b$ is also needed. Types of population distribution\[Sec-examples\] ================================================ We identify two types of initial conditions: when the system starts in one of the degenerate states $\left\vert \psi _{i}\right\rangle $ or in the excited state $\left\vert \psi _{N+1}\right\rangle $, which we shall consider separately. System initially in a ground state ---------------------------------- ### General case When the system is initially in the ground state $\left\vert \psi _{i}\right\rangle $, Eq. 
(\[initial condition k\]), we find from the $i$th column of the propagator (\[U\]) that the populations in the end of the evolution are \[populations psi\_i\] $$\begin{aligned} P_{i} &=&\left\vert 1+\left( a-1\right) \frac{\chi _{i}^{2}}{\chi ^{2}}% \right\vert ^{2}, \\ P_{n} &=&\left\vert a-1\right\vert ^{2}\frac{\chi _{i}^{2}\chi _{n}^{2}}{% \chi ^{4}}\quad (n\neq i,N+1), \\ P_{N+1} &=&\left( 1-\left\vert a\right\vert ^{2}\right) \frac{\chi _{i}^{2}}{% \chi ^{2}}.\end{aligned}$$Therefore the ratio of the populations of any two degenerate states, different from the initial state $\left\vert \psi _{i}\right\rangle $, reads $$\frac{P_{m}}{P_{n}}=\frac{\chi _{m}^{2}}{\chi _{n}^{2}}\quad (m,n\neq i,N+1). \label{Pm/Pn}$$ Hence these population ratios do not depend on the interaction details but only on the ratios of the corresponding peak Rabi frequencies. For equal Rabi frequencies, $$\chi _{1}=\chi _{2}=\cdots =\chi _{N}, \label{Omega equal}$$Eqs. (\[populations psi\_i\]) reduce to \[P for Omega equal\] $$\begin{aligned} P_{i} &=&\left\vert 1+\frac{a-1}{N}\right\vert ^{2}, \label{Pn for Omega equal} \\ P_{n} &=&\frac{\left\vert a-1\right\vert ^{2}}{N^{2}}\quad (n\neq i,N+1), \\ P_{N+1} &=&\frac{1-\left\vert a\right\vert ^{2}}{N}. \label{P0 for Omega equal}\end{aligned}$$Thus the populations of all ground states except the initial state $% \left\vert \psi _{i}\right\rangle $ are equal. ### Special values of $a$ Several values of the propagator parameter $a$ are especially interesting. For $a=0$, which indicates complete population transfer (CPT) in the MS two-state system, Eq. (\[populations psi\_i\]) gives \[P for a=0\] $$\begin{aligned} P_{i} &=&\left\vert 1-\frac{\chi _{i}^{2}}{\chi ^{2}}\right\vert ^{2}, \\ P_{n} &=&\frac{\chi _{n}^{2}\chi _{i}^{2}}{\chi ^{4}}\quad (n\neq i,N+1), \label{P0 for a=0} \\ P_{N+1} &=&\frac{\chi _{i}^{2}}{\chi ^{2}}.\end{aligned}$$ For $a=1$, which corresponds to complete population return (CPR) in the MS two-state system, we obtain \[P for a=1\] $$\begin{aligned} P_{i} &=&1, \label{Pn for a=1} \\ P_{n} &=&0\quad (n\neq i,N+1), \\ P_{N+1} &=&0. \label{P0 for a=1}\end{aligned}$$ For $a=-1$, which again corresponds to CPR in the MS two-state system, but with a sign flip in the amplitude, we find \[P for a=-1\] $$\begin{aligned} P_{i} &=&\left( 1-2\frac{\chi _{i}^{2}}{\chi ^{2}}\right) ^{2}, \label{Pk for a=-1} \\ P_{n} &=&\frac{4\chi _{n}^{2}\chi _{i}^{2}}{\chi ^{4}}\quad (n\neq i,N+1), \\ P_{N+1} &=&0. \label{P0 for a=-1}\end{aligned}$$It is important to note that although both cases $a=1$ and $a=-1$ lead to CPR in the MS two-state system, they produce very different population distributions in the full $\left( N+1\right) $-state system. The case $a=1$ leads to a trivial result (CPR in the full system), whereas the case $a=-1$ is very interesting because it leads to a population redistribution amongst the ground states with zero population in the upper state; hence this case deserves a special attention. ### The case $a=-1$ The case of $a=-1$ is particularly important because it allows to create a coherent superposition of all ground states, with no population in the upper state. 
All ground-state populations in this superposition will be equal, \[P equal\] $$\begin{aligned} &&P_{1}=P_{2}=\cdots =P_{N}=\frac{1}{N}, \\ &&P_{N+1}=0.\end{aligned}$$if \[Case I\] $$\begin{aligned} \chi _{i}=\left( \sqrt{N}\pm 1\right) \chi _{0}, && \\ \chi _{n}=\chi _{0}\quad (n\neq i), &&\end{aligned}$$where$$\chi _{0}=\frac{\chi }{\sqrt{2\left( N\pm \sqrt{N}\right) }}.$$This result does not depend on other interaction details (pulse shape, pulse area, detuning) as long as $a=-1$. For example, for $N=4$ degenerate states, equal populations are obtained when $\chi _{i}=\chi _{n}$ or $\chi _{i}=3\chi _{n}$. We shall discuss later how the condition $a=-1$ can be obtained for several analytically soluble models. Another important particular case is when the initial-state population $% P_{i} $ vanishes in the end. This occurs for $$\chi _{i}^{2}=\sum_{n\neq i}\chi _{n}^{2}. \label{condition equal Pn, Pk=0}$$ For example, an equal superposition of all lower sublevels except $% \left\vert \psi _{i}\right\rangle $, \[P equal w/o Pi\] $$\begin{aligned} P_{i} &=&P_{N+1}=0, \\ P_{n} &=&\frac{1}{N-1}\quad (n\neq i,N+1),\end{aligned}$$is created for \[Case II\] $$\begin{aligned} \chi _{i} &=&\chi _{0}\sqrt{N-1}, \\ \chi _{n} &=&\chi _{0}\quad (n\neq i),\end{aligned}$$where$$\chi _{0}=\frac{\chi }{\sqrt{2\left( N-1\right) }}.$$ System initially in the upper state\[Sec-upper\] ------------------------------------------------ If the system is initially in the excited state $\left\vert \psi _{N+1}\right\rangle $, at the end of the evolution the populations will be \[populations psi0\] $$\begin{aligned} &&P_{n}=\left( 1-\left\vert a\right\vert ^{2}\right) \frac{\chi _{n}^{2}}{% \chi ^{2}}\quad \left( n=1,2,\ldots ,N\right) , \\ &&P_{N+1}=\left\vert a\right\vert ^{2}.\end{aligned}$$For $a=\pm 1$ at the end of the evolution the system undergoes CPR, as in the MS two-state system. For $a=0$ (CPT in the MS two-state system) the whole population will be in the ground states leaving the excited state empty, $P_{N+1}=0$. If all the couplings are equal, Eq. (\[Omega equal\]), the ground states will have equal populations, $$P_{n}=\frac{1}{N}\quad \left( n=1,2,\ldots ,N\right) . \label{Pequal from Pe}$$ Discussion\[Sec-discussion\] ---------------------------- In this section we discussed some general features of the population redistribution in the $(N+1)$-state system. There are three particularly interesing results. *First*, the ratios of the populations of the degenerate states (except the one populated initially) depend only on the ratios of the corresponding Rabi frequencies; hence they can be controlled by changing the corresponding laser intensities alone. The populations values, though, depend on the other interaction details. Moreover, it can easily be seen that the relative phases of the degenerate states can be controlled by the relative laser phases. *Second*, it is possible to create an equal superposition of all ground states, with zero population in the upper state. This is possible when the system starts in a ground state: then condition (\[Case I\]) is required, along with the CPR condition $a=-1$. Alternatively, an equal superposition can be created when the system starts in the upper state: then condition (\[Omega equal\]) is required, along with the CPT condition $a=0$. Equal superpositions are important in some applications because they are states with maximal coherence (since the population inversions vanish). 
*Third*, it is possible, starting from a ground state, to create a superposition of all other ground states, whereas the initial ground state and the excited state are left unpopulated. This requires $a=-1$ and condition (\[condition equal Pn, Pk=0\]). This case has interesting physical implications, which will be discussed in the next section. Applications to exactly soluble models\[Sec-applications\] ========================================================== Multistate analytical solutions ------------------------------- [|l|]{} Model\ Resonance\ $\Omega (t)=\chi f(t),\quad \Delta (t)=0$\ $a=\cos \frac{1}{2}A$\ Rabi\ $\Omega (t)=\chi $ $(\left\vert t\right\vert \leqq T),\quad \Delta (t)=\Delta _{0}$\ $a=\cos \left( T\sqrt{\chi ^{2}+\Delta ^{2}}\right) -i\dfrac{\Delta }{\sqrt{% \Omega ^{2}+\Delta ^{2}}}\sin \left( T\sqrt{\chi ^{2}+\Delta ^{2}}\right) $\ Landau-Zener\ $\Omega (t)=\chi ,\quad \Delta (t)=Ct$\ $a=\exp \left[ -\pi \chi ^{2}/4C\right] $\ Rosen-Zener\ $\Omega (t)=\chi $sech$(t/T),\quad \Delta (t)=\Delta _{0}$\ $a=\dfrac{\Gamma ^{2}\left( \frac{1}{2}+i\delta \right) }{\Gamma \left( \frac{1}{2}+\alpha +i\delta \right) \Gamma \left( \frac{1}{2}-\alpha +i\delta \right) }$\ Allen-Eberly\ $\Omega (t)=\chi $sech$(t/T),\quad \Delta (t)=B\tanh (t/T)$\ $a=\dfrac{\cos \left( \pi \sqrt{\alpha ^{2}-\beta^{2}}\right) }{\cosh \left( \pi \beta\right) }$\ Demkov-Kunike\ $\Omega (t)=\chi $sech$(t/T),\quad \Delta (t)=\Delta _{0}+B\tanh (t/T)$\ $a=\dfrac{\Gamma \left( \frac{1}{2}+i(\delta +\beta )\right) \Gamma \left( \frac{1}{2}+i(\delta -\beta )\right) }{\Gamma \left( \frac{1}{2}+\sqrt{% \alpha ^{2}-\beta ^{2}}+i\delta \right) \Gamma \left( \frac{1}{2}-\sqrt{% \alpha ^{2}-\beta ^{2}}+i\delta \right) }$\ The values of the propagator parameter $a=\left[ U_{MS}^{(2)}(\infty ,-\infty )\right] _{11}$ for the most popular analytically exactly soluble models are listed in Table \[Table-a-exact\]. Equation (\[U\]), supplied with these values, provides several exact multistate analytical solutions, which generalize the respective two-state solutions. Among these solutions, the resonance case is the simplest and most important one, which will receive a special attention below. It will be followed by a detailed discussion of the Rosen-Zener (RZ) model, which can be seen as an extension of the resonance solution to nonzero detuning for a special pulse shape (hyperbolic secant). Both the resonance and the RZ model allow for the parameter $a$ to obtain the important values $0,\pm 1$. The Rabi model can also be used to illustrate the interesting cases of population distribution associated with these values of $a$ but its rectangular pulse shape is less attractive (and also less realistic) than the beautiful sech-shape of the pulse in the RZ model. The Landau-Zener (LZ) and Allen-Eberly (AE) models are of level-crossing type, i.e. the detuning crosses resonance, $\Delta (0)=0$. For these models in the adiabatic limit the transition probability approaches unity, that is $a\rightarrow 0$. The parameter $a$ is always nonnegative, i.e. the most interesting value in the present context, $a=-1$, is unreachable. Nevertheless, because of the popularity and the importance of the LZ model, and because the present multistate LZ solution supplements other multistate LZ solutions, we discuss this solution in detail in Sec. \[Sec-LZ\]. The Demkov-Kunike (DK) model is a very versatile model, which combines and generalizes the RZ and AE models. 
Indeed, as seen in Table [Table-a-exact]{}, the DK model reduces to the RZ model for $B=0$ and to the AE model for $\Delta _{0}=0$. For the DK model, the parameter $a$ can be equal to the most interesting value of $-1$ only when $B=0$, i.e. only in the RZ limit. Therefore, we shall only consider the RZ model below, and leave the AE and DK models to readers interested in other aspects of the analytic multistate solutions presented here. Exact resonance\[Sec-resonance\] -------------------------------- In the case of exact resonance, $$\Delta =0,$$the elements of the evolution matrix for the MS two-state system for any pulse shape of $\Omega (t)$ are \[resonance\] $$\begin{aligned} a &=&\cos \frac{A}{2},~~ \\ b &=&-i\sin \frac{A}{2},\end{aligned}$$where $A$ is the rms pulse area defined as $$A=\int_{-\infty }^{\infty }\Omega (t^{\prime })dt^{\prime }. \label{pulse area}$$ In the important case of $N=3$ we have $$\mathsf{U}_{d}^{\left( 4\right) }=% \begin{bmatrix} 1-2\frac{\chi _{1}^{2}}{\chi ^{2}}\sin ^{2}\frac{1}{4}A & -2\frac{\chi _{1}\chi _{2}}{\chi ^{2}}\sin ^{2}\frac{1}{4}A & -2\frac{\chi _{1}\chi _{3}}{\chi ^{2}}\sin ^{2}\frac{1}{4}A & -i\frac{\chi _{1}}{% \chi }\sin \frac{1}{2}A \\ -2\frac{\chi _{1}\chi _{2}}{\chi ^{2}}\sin ^{2}\frac{1}{4}A & 1-2\frac{% \chi _{2}^{2}}{\chi ^{2}}\sin ^{2}\frac{1}{4}A & -2\frac{\chi _{2}\chi _{3}}{\chi ^{2}}\sin ^{2}\frac{1}{4}A & -i\frac{\chi _{2}}{% \chi }\sin \frac{1}{2}A \\ -2\frac{\chi _{1}\chi _{3}}{\chi ^{2}}\sin ^{2}\frac{1}{4}A & -2\frac{% \chi _{2}\chi _{3}}{\chi ^{2}}\sin ^{2}\frac{1}{4}A & 1-2\frac{\chi _{3}^{2}}{\chi ^{2}}\sin ^{2}\frac{1}{4}A & -i\frac{\chi _{3}}{\chi }% \sin \frac{1}{2}A \\ -i\frac{\chi _{1}}{\chi }\sin \frac{1}{2}A & -i\frac{\chi _{2}}{\chi }\sin \frac{1}{2}A & -i\frac{\chi _{3}}{\chi }\sin \frac{1}{2}A & \cos \frac{1}{2}A% \end{bmatrix}% . \label{U4-resonance}$$ We have $a=0,\pm 1$ for the following pulse areas, \[a resonance\] $$\begin{aligned} a=0:\quad &&A=\left( 2l+1\right) \pi , \\ a=1:\quad &&A=4l\pi , \\ a=-1:\quad &&A=2\left( 2l+1\right) \pi .\end{aligned}$$where $l=0,1,2,...$. The pulse areas for the three important cases discussed in Sec. [Sec-discussion]{} are easily calculated. An equal superposition of all $N$ ground states is created when starting from the excited state and all individual pulse areas are equal to (see Sec. \[Sec-upper\]) $$A_{n}=\frac{\left( 2l+1\right) \pi }{\sqrt{N}}\quad \left( n=1,2,\ldots ,N\right) ,$$ where $l=0,1,2,...$. An equal superposition of all $N$ ground states is created also when starting from one ground state $\left\vert \psi _{i}\right\rangle $ and the pulse areas are \[see Eq. (\[Case I\])\] $$\begin{aligned} A_{i} &=&\sqrt{2\frac{\sqrt{N}\pm 1}{\sqrt{N}}}\left( 2l+1\right) \pi , \\ A_{n} &=&\sqrt{\frac{2}{N\pm \sqrt{N}}}\left( 2l+1\right) \pi \quad \left( n\neq i\right) ,\end{aligned}$$ where $l=0,1,2,...$ The other interesting case when the system starts in one ground state $% \left\vert \psi _{i}\right\rangle $ and ends up in an equal superposition of all other ground states is realised for pulse areas \[see Eq. (\[Case II\])\] $$\begin{aligned} A_{i} &=&\sqrt{2}\left( 2l+1\right) \pi , \\ A_{n} &=&\sqrt{\frac{2}{N-1}}\left( 2l+1\right) \pi \quad \left( n\neq i\right) ,\end{aligned}$$ where $l=0,1,2,...$. Multistate Rosen-Zener model ---------------------------- Equation (\[U\]) and the value of the parameter $a$ in Table [Table-a-exact]{} represent the multistate RZ solution in the degenerate two-level system. 
It is easy to show that $$\left\vert a\right\vert ^{2}=1-\frac{\sin ^{2}\left( \frac{1}{2}\pi \chi T\right) }{\cosh ^{2}\left( \frac{1}{2}\pi \Delta _{0}T\right) }, \label{|a|-RZ}$$ where we have used the reflection formula $\Gamma (\frac{1}{2}+z)\Gamma (% \frac{1}{2}-z)=\pi /\cos \pi z$ [@AS]. Hence in this model $\left\vert a\right\vert =1$ for $\alpha =\frac{1}{2}\chi T=l$ ($l=0,1,2,...$). The phase of $a$, however, depends on the detuning $\Delta _{0}$ [@Vitanov98]; we use this to an advantage to select values of $\Delta _{0}$ for which $% a=-1$. For $\alpha =l$ we find [@Vitanov98] $$a=(-1)^{n}\prod_{k=0}^{n-1}\frac{2l+1-i\Delta _{0}T}{2l+1+i\Delta _{0}T}% \quad (\alpha =l), \label{CPT}$$where the recurrence relation $\Gamma (z+1)=z\Gamma (z)$ [@AS] has been used. Thus, the equation $a=-1$ reduces to an algebraic equation for $\Delta _{0}$, which has $l$ real solutions [@Vitanov98]. The first few values of $\chi $ and $\Delta _{0}$ for which $a=-1$ are shown in Table [Table-a]{}. As the table shows, $\Delta _{0}=0$ is a solution for odd $\alpha =\frac{1}{2}\chi T$ but not for even $\alpha $, in agreement with the conclusions in Sec. \[Sec-resonance\]. Moreover, the $a=-1$ solutions do not depend on the number of degenerate states $N$. [|r|rrrrrr|]{} $\chi T$ & $\Delta _{0}T$ & & & & &\ 2 & 0 & & & & &\ 4 & $\pm 1.732$ & & & & &\ 6 & 0 & $\pm 4.796$ & & & &\ 8 & $\pm 1.113$ & $\pm 9.207$ & & & &\ 10 & 0 & $\pm 2.756$ & $\pm 14.913$ & & &\ 12 & $\pm 0.943$ & $\pm 4.936$ & $\pm 21.903$ & & &\ 14 & 0 & $\pm 2.243$ & $\pm 7.595$ & $\pm 30.171$ & &\ 16 & $\pm 0.855$ & $\pm 3.916$ & $\pm 10.708$ & $\pm 39.715$ & &\ 18 & 0 & $\pm 1.988$ & $\pm 5.907$ & $\pm 14.265$ & $\pm 50.534$ &\ 20 & $\pm 0.799$ & $\pm 3.418$ & $\pm 8.195$ & $\pm 18.260$ & $\pm 62.627$ &\ 22 & 0 & $\pm 1.830$ & $\pm 5.098$ & $\pm 10.766$ & $\pm 22.687$ & $\pm 75.993$\ 24 & $\pm 0.759$ & $\pm 3.113$ & $\pm 7.006$ & $\pm 13.613$ & $\pm 27.545$ & $\pm 90.634$\ 26 & 0 & $\pm 1.719$ & $\pm 4.606$ & $\pm 9.130$ & $\pm 16.729$ & $\pm 32.833 $\ & & $\pm 106.549$ & & & &\ 28 & $\pm 0.728$ & $\pm 2.901$ & $\pm 6.289$ & $\pm 11.461$ & $\pm 20.113$ & $\pm 38.548$\ & & $\pm 123.736$ & & & &\ 30 & 0 & $\pm 1.636$ & $\pm 4.268$ & $\pm 8.150$ & $\pm 13.994$ & $\pm 23.760 $\ & & $\pm 44.690$ & $\pm 142.198$ & & &\ In the present context the RZ model is interesting for it shows that one can create superpositions within the ground-state manifold even when the excited state is off resonance by a considerable detuning ($\Delta _{0}\gg 1/T$), for which the transition probability in the MS two-state system is virtually zero, i.e. $\left\vert a\right\vert \approx 1$. This fact allows us, for specific detunings, to essentially contain the *transient* dynamics within the ground states; in contrast, in the resonance case the excited state can get significant transient population, $P_{N+1}(t)=\sin ^{2}% \frac{1}{2}A(t)$, although it vanishes in the end. Figure \[Fig-detuning\] displays the populations against the detuning $% \Delta _{0}$ for a hyperbolic-secant pulse with $\chi T=18$ for couplings chosen to satisfy Eqs. (\[Case I\]) (upper frame) and (\[Case II\]) (lower frame). In both cases we have $\left\vert a\right\vert =1$ \[see Eq. (\[|a|-RZ\])\], which leaves the excited state unpopulated in the end. For several special values of the detuning $\Delta _{0}$, as predicted in Table \[Table-a\], we have $a=-1$. 
For these values, an equal superposition of all degenerate states including the initially populated state $\left\vert \psi _{1}\right\rangle $ is created in the upper frame, and an equal superposition of all degenerate states except $\left\vert \psi _{1}\right\rangle $ is created in the lower frame. ![(Color online) Populations vs the detuning $\Delta _{0}$ for $N=3$ lower states and $\protect\chi T=18$. The coupling strengths $\protect\chi % _{n}$ are given by Eqs. (\[Case I\]) in the upper frame and Eqs. (\[Case II\]) in the lower frame. The system is initially in state $% \left\vert \protect\psi _{1}\right\rangle $.[]{data-label="Fig-detuning"}](fig3.eps){width="75mm"} Figure \[Fig-area\] shows the final populations versus the rms pulse area $% A=\pi \chi T$ for $N=3$ degenerate lower states for couplings chosen to satisfy Eqs. (\[Case I\]) (upper frame) and (\[Case II\]) (lower frame). As follows from Table \[Table-a\], an equal superposition of all degenerate states is created for rms pulse area $A=18\pi $; this is indeed seen in the figure in the upper frame. For the same value of $A$ in the lower frame an equal superposition is created of all degenerate states except the initially populated state $\left\vert \psi _{1}\right\rangle $. In both frames, there are other values of $A$ for which the same superpositions are apparently created; a closer examination (not shown) reveals that for these other values of the rms pulse area the created superposition has almost, but not exactly, equal components. ![(Color online) Final populations versus the rms pulse area $\protect% \chi $ for $N=3$ degenerate lower states and detuning $\Delta T=50.534$. The coupling strengths $\protect\chi _{n}$ are given by Eqs. (\[Case I\]) in the upper frame and Eqs. (\[Case II\]) in the lower frame. The system is initially in state $\left\vert \protect\psi _{1}\right\rangle $.[]{data-label="Fig-area"}](fig4.eps){width="75mm"} Figure \[Fig-time\] displays the time evolution of the populations for $% N=3 $ degenerate lower states and rms pulse area of $18\pi $, and two detunings: $\Delta =0$ in the upper frame and $\Delta T=50.534$ in the lower frame. For these pairs of areas and detunings, Figs. \[Fig-detuning\] and \[Fig-area\] have already demonstrated that an equal superposition of all degenerate states is created. Figure \[Fig-time\] shows that the evolution towards such a superposition can be dramatically different on and off resonance. Indeed, for $\Delta =0$ (upper frame) the nondegenerate upper state receives considerable transient population, which would lead to significant losses if this state can decay on the time scale of the pulsed interaction. In strong contrast, off resonance this undesired population is greatly reduced (lower frame), and still the desired equal superposition of the degenerate states emerges in the end. We have verified numerically that for larger detunings this transient population continues to decrease, e.g. for $\Delta T=142.198$ and $\Omega T=30$ it is less than 1%. ![(Color online) Populations versus time for $N=3$ lower states and rms Rabi frequency $\protect\chi T=18$. The coupling strengths $\protect\chi % _{n}$ are given by Eqs. (\[Case I\]). The detuning is $\Delta =0$ in the upper frame and $\Delta T=50.534$ in the lower frame. 
The system is initially in state $\left\vert \protect\psi _{1}\right\rangle $.[]{data-label="Fig-time"}](fig5.eps){width="75mm"} To conclude this section we point out that one can create any desired superposition, with arbitrary unequal populations, in very much the same manner, on or off resonance, by appropriately chosing the individual couplings, while still maintaning particular values of the overall rms pulse area. Tuning on resonance gives the advantage of smaller pulse area required, whereas tuning off resonance (with larger pulse area) provides the advantage of greatly reducing the transient population of the possibly lossy common upper state. Multistate Landau-Zener model\[Sec-LZ\] ======================================= As seen in Table \[Table-a-exact\] the propagator parameter $a$ for the LZ model, $a=\exp \left( -\pi \chi ^{2}/4C\right) $, cannot be equal to 0 or 1 or $-1$, but may approach 0 or 1 arbitrarily closely. However, it is always positive and cannot approach the value of $-1$; hence the LZ model is unsuitable for unitary operations within the degenerate manifold, in contrast to the resonance and RZ models discussed above. Still, the present multistate LZ solution represents an interesting and important addition to the available LZsolutions (see [@CI; @transitions] and references therein). ### The Demkov-Osherov model The present multistate LZ model complements the Demkov-Osherov (DO) model [@DO], wherein a slanted energy crosses $N$ parallel *nondegenerate* energies. In the DO model, the exact probabilities $P_{n\rightarrow m}$ have the same form — products of LZ probabilities for transition or no-transition applied at the relevant crossings — as what would be obtained by naive multiplication of LZ probabilities while moving across the grid of crossings from $\left\vert \psi _{n}\right\rangle $ to $\left\vert \psi _{m}\right\rangle $, without accounting for phases and interferences. For example, if the states $\left\vert \psi _{n}\right\rangle $ ($% n=1,2,\ldots ,N$) are labeled such that their energies increase with the index $n$, and if the slope of the slanted energy of state $\left\vert \psi _{N+1}\right\rangle $ is positive, the transition probabilities in the DO model are \[DO solution\] $$\begin{aligned} P_{n\rightarrow m} &=&p_{n}q_{n+1}q_{n+2}\cdots q_{m-1}p_{m}\quad (n<m), \\ P_{n\rightarrow m} &=&0\quad (n>m), \\ P_{n\rightarrow n} &=&q_{n}, \\ P_{n\rightarrow N+1} &=&p_{n}q_{n+1}q_{n+2}\cdots q_{N}, \\ P_{N+1\rightarrow n} &=&q_{1}q_{2}\cdots q_{n-1}p_{n}, \\ P_{N+1\rightarrow N+1} &=&q_{1}q_{2}\cdots q_{N},\end{aligned}$$ where $q_{n}=\exp \left( -\pi \chi _{n}^{2}/2C\right) $ is the no-transition probability and $p_{n}=1-q_{n}$ is the transition probability between states $\left\vert \psi _{N+1}\right\rangle $ and $\left\vert \psi _{n}\right\rangle $ at the crossing of their energies. ### The degenerate case The present multistate LZ solution provides the transition probabilities for the special case when all parallel energies are degenerate, which cannot be obtained from the DO model. #### The propagator The elements of the transition matrix for our $\left( N+1\right) $-state degenerate LZ problem are readily found from Eq. 
(\[U\]) to be \[U-LZ\] $$\begin{gathered} U_{m,n}=-\frac{\chi _{n}\chi _{m}}{\chi ^{2}}\left( 1-e^{-\Lambda }\right) \quad \left( m,n=1,\ldots ,N;\text{ }m\neq n\right) , \\ U_{n,n}=1-\frac{\chi _{n}^{2}}{\chi ^{2}}\left( 1-e^{-\Lambda }\right) \quad \left( n=1,\ldots ,N\right) , \\ U_{n,N+1}=\frac{\chi _{n}}{\chi }b\quad \left( n=1,\ldots ,N\right) , \\ U_{N+1,n}=-\frac{\chi _{n}}{\chi }b^{\ast }\quad \left( n=1,\ldots ,N\right) , \\ U_{N+1,N+1}=e^{-\Lambda },\end{gathered}$$with $\Lambda =\pi \chi ^{2}/4C$ and $\left\vert b\right\vert ^{2}=1-e^{-2\Lambda }$. #### System initially in the nondegenerate state When the system begins initially in the nondegenerate state $\left\vert \psi _{N+1}\right\rangle $, with the tilted energy, the system ends in a coherent superposition of all states with populations \[P-LZ-upper\] $$\begin{aligned} P_{n} &=&\frac{\chi _{n}^{2}}{\chi ^{2}}\left( 1-e^{-2\Lambda }\right) \quad \left( n=1,\ldots ,N\right) , \\ P_{N+1} &=&e^{-2\Lambda }.\end{aligned}$$In the adiabatic limit $\Lambda \gg 1$ the population is distributed among the degenerate states according to their couplings, whereas the initially populated state $\left\vert \psi _{N+1}\right\rangle $ is almost depleted, $% P_{N+1}\approx 0$. For equal couplings, all degenerate-state populations will be equal, $P_{n}\approx 1/N$. In the opposite, diabatic limit $\Lambda \ll 1$ the population remains in state $\left\vert \psi _{N+1}\right\rangle $ with almost no population in the degenerate states. #### System initially in a degenerate state When the system is initially in an arbitrary degenerate state $% \left\vert \psi _{i}\right\rangle $, at the end of the evolution the populations are \[P-LZ-lower\] $$\begin{aligned} P_{i} &=&\left[ 1-\frac{\chi _{i}^{2}}{\chi ^{2}}\left( 1-e^{-\Lambda }\right) \right] ^{2}, \\ P_{n} &=&\frac{\chi _{n}^{2}\chi _{i}^{2}}{\chi ^{4}}\left( 1-e^{-\Lambda }\right) ^{2}\quad \left( n=1,\ldots ,N;n\neq i\right) , \\ P_{N+1} &=&\frac{\chi _{i}^{2}}{\chi ^{2}}\left( 1-e^{-2\Lambda }\right) .\end{aligned}$$In the adiabatic limit $\Lambda \gg 1$ and for equal couplings, the populations will be \[LZ adiabatic\] $$\begin{aligned} P_{i} &\approx &\left( 1-\frac{1}{N}\right) ^{2}, \\ P_{n} &\approx &\frac{1}{N^{2}}\quad \left( n=1,\ldots ,N;n\neq i\right) , \\ P_{N+1} &\approx &\frac{1}{N}.\end{aligned}$$ Obviously, Eqs. (\[P-LZ-upper\]) and (\[P-LZ-lower\]) cannot be reduced to the DO solution (\[DO solution\]), which implies that the non-degeneracy assumption in the DO model is essential. ![(Color online) Populations for the degenerate LZ model vs the LZ parameter $\Lambda =\protect\pi \protect\chi ^{2}/4C$ for $N=3$ degenerate states and equal couplings. The system is supposed to start in one of the degenerate states $\left\vert \protect\psi _{i}\right\rangle $. The arrows on the right point the adiabatic values (\[LZ adiabatic\]).[]{data-label="Fig-LZ"}](fig6.eps){width="75mm"} Figure \[Fig-LZ\] shows the transition probability for the multistate LZ model plotted against the LZ parameter $\Lambda =\pi \chi ^{2}/4C$. As $% \Lambda $ increases the populations approach their steady adiabatic values (\[LZ adiabatic\]). Different coherent superpositions can be created by choosing appropriate values for the couplings $\chi _{n}$. 
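These closed-form populations can be checked directly against a numerical propagation of the Schrödinger equation (\[SEq\]) with constant couplings and a linearly swept detuning. The sketch below is only meant as such a check, assuming Python with NumPy and SciPy; the number of degenerate states, the couplings $\chi_n$, the sweep rate and the integration window are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

# Degenerate Landau-Zener model: N ground states with constant couplings chi_n
# to a single upper state whose detuning is swept linearly, Delta(t) = c*t.
N = 3
chi = np.array([1.0, 0.7, 0.4])      # couplings chi_n (arbitrary choice)
c = 0.5                              # sweep rate of the detuning
chi2 = np.sum(chi**2)                # chi^2, the squared rms coupling
Lam = np.pi * chi2 / (4 * c)         # LZ parameter Lambda = pi*chi^2/(4c)

def H(t):
    """RWA Hamiltonian (hbar = 1) of the (N+1)-state system for f(t) = 1."""
    h = np.zeros((N + 1, N + 1))
    h[:N, N] = chi / 2.0
    h[N, :N] = chi / 2.0
    h[N, N] = c * t                  # (1/2) * 2*Delta(t)
    return h

# propagate i dC/dt = H(t) C with short constant-H steps (exponential midpoint)
T, dt = 80.0, 0.01
psi = np.zeros(N + 1, dtype=complex)
psi[0] = 1.0                         # system starts in the degenerate state psi_1
for t in np.arange(-T, T, dt):
    psi = expm(-1j * H(t + dt / 2) * dt) @ psi
P_num = np.abs(psi)**2

# closed-form final populations, Eqs. (P-LZ-lower), for the initial state psi_1
P_ana = np.empty(N + 1)
P_ana[0] = (1 - chi[0]**2 / chi2 * (1 - np.exp(-Lam)))**2
P_ana[1:N] = chi[1:N]**2 * chi[0]**2 / chi2**2 * (1 - np.exp(-Lam))**2
P_ana[N] = chi[0]**2 / chi2 * (1 - np.exp(-2 * Lam))

print(np.round(P_num, 3))            # numerical populations
print(np.round(P_ana, 3))            # analytic populations
```

For these settings the two printed vectors should agree to roughly three decimal places, the residual difference coming from the finite integration window.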
Conclusions\[Sec-conclusions\] ============================== In this paper we have described a procedure for deriving analytical solutions for a multistate system composed of $N$ degenerate lower states coupled via a nondegenerate upper state with pulsed interactions of the same temporal dependence but possibly with different peak amplitudes. The multistate resonance and Rosen-Zener solutions have been discussed in some detail because they allow one to find special values of parameters, termed generalised $\pi $ pulses, for which various types of population transfer can occur, for example, creation of maximally coherent superpositions. The RZ solution is particularly useful because it allows to prescribe appropriately detuned pulsed fields for which the dynamics can be essentially contained within the degenerate-state space, without populating the upper state even transiently, thus avoiding possible losses from this state via spontaneous emission, ionization, etc. We have analyzed in some detail also the multistate Landau-Zener model, which complements the Demkov-Osherov model in the case of degenerate energies. The presented analytical solutions and general properties have a significant potential for manipulation of multistate quantum bits in quantum information processing, for example, in designing arbitrary unitary gates. This work has been supported by the European Union’s Transfer of Knowledge project CAMEL (Grant No. MTKD-CT-2004-014427) and the Alexander von Humboldt Foundation. ESK acknowledges support from the EU Marie Curie Training Site project No. HPMT-CT-2001-00294. [99]{} B.W. Shore, *The Theory of Coherent Atomic Excitation* (Wiley, New York, 1990). I.I. Rabi, Phys. Rev. **51**, 652 (1937). L.D. Landau, Physik Z. Sowjetunion **2**, 46 (1932); C. Zener, Proc. R. Soc. Lond. Ser. A **137**, 696 (1932). N. Rosen and C. Zener, Phys. Rev. **40**, 502 (1932). L. Allen and J. H. Eberly, *Optical Resonance and Two-Level Atoms* (Dover, New York, 1987); F.T. Hioe, Phys. Rev. A **30**, 2100 (1984). A. Bambini and P.R. Berman, Phys. Rev. A **23**, 2496 (1981). Yu.N. Demkov and M. Kunike, Vestn. Leningr. Univ. Fiz. Khim. **16**, 39 (1969); see also F.T. Hioe and C.E. Carroll, Phys. Rev. A **32**, 1541 (1985); J. Zakrzewski, Phys. Rev. A **32**, 3748 (1985); K.-A. Suominen and B.M. Garraway, Phys. Rev. A **45**, 374 (1992). C.E. Carroll and F.T. Hioe, J. Phys. A: Math. Gen. **19**, 3579 (1986). Yu.N. Demkov, Sov.Phys.-JETP **18**, 138 (1964); N.V. Vitanov, J. Phys. B **26**, L53 (1993), erratum *ibid.* **26**, 2085 (1993). E.E. Nikitin, Opt. Spectrosc. **13**, 431 (1962); Discuss. Faraday Soc. **33**, 14 (1962); Adv. Quantum Chem. **5**, 135 (1970); N.V. Vitanov, J. Phys. B **27**, 1791 (1994). J.R. Morris and B.W. Shore, Phys. Rev. A **27**, 906 (1983). C.P. Williams and S.H. Clearwater, *Explorations in Quantum Computing*, (Springer-Verlag, Berlin, 1997); A. Steane, Rep. Prog. Phys. **61**, 117 (1998); M.A. Nielsen and I.L. Chuang, *Quantum Computation and Quantum Information* (Cambridge University Press, Cambridge, 2000). R.G. Unanyan, M. Fleischhauer, B.W. Shore, and K. Bergmann, Opt. Commun. **155**, 144 (1998); H. Theuer, R.G. Unanyan, C. Habscheid, K. Klein and K. Bergmann, Optics Express **4**, 77 (1999). Z. Kis and S. Stenholm, Phys. Rev. A **64**, 63406 (2001). K. Bergmann, H. Theuer, and B.W. Shore, Rev. Mod. Phys. **70**, 1003 (1998); N.V. Vitanov, T. Halfmann, B.W. Shore, and K. Bergmann, Ann. Rev. Phys. Chem. **52**, 763 (2001); N.V. Vitanov, M. Fleischhauer, B.W. 
Shore and K. Bergmann, Adv. At. Mol. Opt. Phys. **46**, 55 (2001). S. Stenholm, in *Frontiers of Laser Spectroscopy*, Les Houches Summer School Session XXVII, edited by R. Bailian, S. Haroche and S. Liberman (Amsterdam, North Holland, 1975), p. 399; R. Lefebvre and J. Savolainen, J. Chem. Phys. **60**, 2509 (1974); M. Bixon and J. Jortner, J. Chem. Phys. **48**, 715 (1968). N.V. Vitanov, J. Phys. B **33**, 2333 (2000). M. Abramowitz and I.A. Stegun, *Handbook of Mathematical Functions* (Dover, New York, 1964). A.A. Rangelov, J. Piilo, and N.V. Vitanov, Phys. Rev. A **72**, 053404 (2005). Y.N. Demkov and V.I. Osherov, Zh. Eksp. Teor. Fiz. **53**, 1589 (1967) \[Sov. Phys. JETP **26**, 916 (1968)\]; Y.N. Demkov and V.N. Ostrovsky, J. Phys. B **28**, 403 (1995).
{ "pile_set_name": "ArXiv" }
--- abstract: 'In this paper, we prove that the first eigenvalues of $-\Delta + cR$ ($c\geq \frac14$) is nondecreasing under the Ricci flow. We also prove the monotonicity under the normalized Ricci flow for the case $c= 1/4$ and $r\le 0$.' address: 'Department of Mathematics, Cornell University, Ithaca, NY 14853-4201' author: - Xiaodong Cao bibliography: - 'bio.bib' date: 'Oct. 5th, 2007' title: First Eigenvalues of Geometric Operators under the Ricci Flow --- [^1] [**First Eigenvalue of $-\Delta + cR$**]{.nodecor} =================================================== Let $M$ be a closed Riemannian manifold, and $(M,\bg(t))$ be a smooth solution to the Ricci flow equation $$\ppt \bg_{ij}=-2R_{ij}$$ on $0\le t<T$. In [@C], we prove that all eigenvalues $\lambda (t)$ of the operator $-\Delta + \frac{R}{2}$ are nondecreasing under the Ricci flow on manifolds with nonnegative curvature operator. Assume $f=f(x,t)$ is the corresponding eigenfunction of $\lambda (t)$, that is $$(-\Delta + \frac{R}{2})f(x,t)=\lambda (t) f(x,t)$$ and $\Dint_M f^2d\mu = 1$. More generally, we define $$\label{defn1} \lambda(f,t)=\Dint_M (-\Delta f + \frac{R}{2}f)f d\mu,$$ where $f$ is a smooth function satisfying $$\frac{d}{dt}(\Dint_M f^2 d\mu)=0,~ \Dint_M f^2 d\mu=1.$$ We can then derive the monotonicity formula under the Ricci flow. \[thm1\][@C] On a closed Riemannian manifold with nonnegative curvature operator, the eigenvalues of the operator $-\Delta + \frac{R}{2}$ are nondecreasing under the Ricci flow. In particular, $$\label{equ1} \textstyle{\frac{d}{dt}\lambda(f,t) = 2\Dint_M R_{ij}f_if_j d\mu + \Dint_M |Rc|^2f^2d\mu\ge 0.}$$ In (\[equ1\]), when $\frac{d}{dt}\lambda(f,t)$ is evaluated at time $t$, $f$ is the corresponding eigenfunction of $\lambda(t)$. \[rema1\] Clearly, at time $t$, if $f$ is the eigenfunction of the eigenvalue $\lambda(t)$, then $\lambda(f,t) = \lambda(t)$. By the eigenvalue perturbation theory, we may assume that there is a $C^1$-family of smooth eigenvalues and eigenfunctions (for example, see [@kleinerlott], [@rs] and [@chowetc1]). When $\lambda$ is the lowest eigenvalue, we can further assume that the corresponding eigenfunction $f$ be positive. Since the above formula does not depend on the particular evolution of $f$, so $\ddt \lambda(t)=\ddt \lambda(f,t)$. \[rema2\] In [@ljf1], J. Li used the same technique to prove that the monotonicity of the first eigenvalue of $-\Delta + \frac12 R$ under the Ricci flow without assuming nonnegative curvature operator. A similar result appeared in the physics literature [@osw05]. \[rema3\] When $c=\frac14$, the monotonicity of first eigenvalue has been established by G. Perelman in [@perelman1]. The evolution of first eigenvalue of Laplace operator under the Ricci flow has been studied by L. Ma in [@ma1]. The evolution of Yamabe constant under the Ricci flow has been studied by S.C. Chang and P. Lu in [@lu1]. In this paper, we shall study the first eigenvalues of operators $-\Delta + cR$ ($c\geq \frac14$) without [*curvature assumption*]{} on the manifold. Our first result is the following theorem. \[thm2\] Let ($M\sp n,\bg(t)$), $t\in[0,T)$, be a solution of the Ricci flow on a closed Riemannian manifold $M\sp n$. Assume that $\lambda (t)$ is the lowest eigenvalue of $-\Delta + c R$ ($c\geq \frac14$), $f=f(x,t)>0$ satisfies $$\label{equ2} -\Delta f(x,t) + c R f(x,t)=\lambda(t)f(x,t)$$ and $\textstyle{\Dint_M f^2(x,t) d\mu = 1}$. 
Then under the Ricci flow, we have $$\nonumber \label{equ11} \begin{array}{rll} \textstyle{\frac{d}{dt}\lambda(t) = \frac12 \Dint_M |R_{ij}+\nabla_i\nabla_j \varphi|^2\ e^{-\varphi}d\mu + \frac{4c-1}{2} \Dint_M |Rc|^2\ e^{-\varphi}d\mu \ge 0,} \end{array}$$ where $e^{-\varphi} = f^2$. ([**Theorem \[thm2\]**]{}) Let $\varphi$ be a function satisfying $ e^{-\varphi(x)}=f^2(x)$. We proceed as in [@C], we have $$\label{equ12} \begin{array}{rll} \frac{d}{dt}\lambda(t)=&(2c-\frac12) \Dint R_{ij}\nabla_i \varphi\nabla_j\varphi e^{-\varphi}d\mu\\ & -(2c-1)\Dint R_{ij} \nabla_i \nabla_j \varphi e^{-\varphi} d\mu + 2c\Dint |Rc|^2e^{-\varphi}d\mu. \end{array}$$ Integrating by parts and applying the Ricci formula, it follows that $$\label{equ14} \Dint R_{ij}\nabla_i \nabla_j\varphi\ e^{-\varphi}d\mu = \Dint R_{ij}\nabla_i \varphi\nabla_j\varphi\ e^{-\varphi}d\mu -\frac{1}{2}\Dint R\Delta e^{-\varphi}d\mu\\$$ and $$\label{equ8} \begin{array}{rll} \Dint R_{ij}\nabla_i\nabla_j \varphi e^{-\varphi}d\mu &+ \Dint |\nabla\nabla \varphi|^2 e^{-\varphi}d\mu \\ = &-\Dint\Delta e^{-\varphi}\big( \Delta\varphi+\frac{1}{2}R -\frac{1}{2}|\nabla\varphi|^2 \big)d\mu\\ = &(2c-\frac12)\Dint R \Delta e^{-\varphi}d\mu.\\ \end{array}$$ In the last step, we use $$\nonumber 2\lambda(t) = \Delta\varphi+2c R -\frac{1}{2}|\nabla\varphi|^2.$$ Combining (\[equ14\]) and (\[equ8\]), we arrive at $$\label{eqn10} \Dint |\nabla\nabla \varphi|^2 e^{-\varphi}d\mu= 2c \Dint R \Delta e^{-\varphi}d\mu - \Dint R_{ij}\nabla_i \varphi\nabla_j\varphi\ e^{-\varphi}d\mu.$$ Plugging (\[eqn10\]) into (\[equ12\]), we have $$\nonumber \begin{array}{rll} \frac{d}{dt}\lambda(t)=& \Dint R_{ij} \nabla_i \nabla_j \varphi e^{-\varphi} d\mu + 2c\Dint |Rc|^2e^{-\varphi} d\mu \\ &+c\Dint R\triangle (e^{-\varphi}) d\mu -\frac12 \Dint R_{ij}\nabla_i \varphi\nabla_j\varphi\ e^{-\varphi}d\mu\\ =& \Dint R_{ij}\nabla_i \nabla_j\varphi e^{-\varphi}d\mu + 2c \Dint |Rc|^2e^{-\varphi}d\mu + \frac12\Dint |\nabla \nabla \varphi|^2 e^{-\varphi}d\mu\\ =& \frac12 \Dint |R_{ij}+\nabla_i\nabla_j \varphi|^2\ e^{-\varphi}d\mu + (2c-\frac12) \Dint |Rc|^2\ e^{-\varphi}d\mu \ge 0. \end{array}$$ This proves the theorem as desired. [**First Eigenvalue under the Normalized Ricci Flow**]{.nodecor} ================================================================ In this section, we derive the evolution of $\lambda (t)$ under the normalized Ricci flow equation $$\ppt \bg_{ij}=-2R_{ij}+\frac{2}{n} r\bg_{ij}.$$ Here $\textstyle{r=\frac{\int_M R d\mu}{\int_M d\mu}}$ is the average scalar curvature. It follows from Eq. (\[equ2\]) that $\lambda \le c r$. We now compute the derivative of the lowest eigenvalue of $-\Delta + cR$. \[thm3\] Let ($M\sp n,\bg(t)$), $t\in[0,T)$, be a solution of the normalized Ricci flow on a closed Riemannian manifold $M\sp n$. Assume that $\lambda (t)$ is the lowest eigenvalue of $-\Delta + c R$ ($c\geq \frac14$), $f>0$ is the corresponding eigenfunction. Then under the normalized Ricci flow, we have $$\label{equ21} \begin{array}{rll} \frac{d}{dt}\lambda(t)=-\frac{2r\lambda}{n}+\frac12 \Dint_M |R_{ij}+\nabla_i\nabla_j \varphi|^2\ e^{-\varphi}d\mu + \textstyle{\frac{4c-1}{2}} \Dint_M |Rc|^2\ e^{-\varphi}d\mu , \end{array}$$ where $e^{-\varphi} = f^2$. Furthermore, if $\textstyle{c=\frac14}$ and $r\le 0$, then $$\label{equ22} \begin{array}{rll} \frac{d}{dt}\lambda(t) =\frac{2}{n}r(\textstyle{\lambda -\frac{r}{4}})+\frac12 \Dint_M |R_{ij}+\nabla_i\nabla_j \varphi-\frac{r}{n}g_{ij}|^2\ e^{-\varphi}d\mu \ge 0. \end{array}$$ After we submitted our paper, J. 
Li suggested to us that (\[equ22\]) is true for all $c\geq 1/4$, with an additional nonnegative term $$\textstyle{\frac{4c-1}{2} \Dint_M |Rc-\frac{r}{n}g_{ij}|^2\ e^{-\varphi}d\mu },$$ see [@ljf2] for a similar result. \[rema2.2\] As a consequence of the above monotonicity formula of $\lambda(t)$, we can prove that both compact steady and expanding Ricci breathers (cf. [@Isoliton], [@perelman1]) must be trivial, such results have been discussed by many authors (for example, see [@Isoliton], [@Hsurvey], [@Hsurface], [@perelman1], [@C] and [@ljf1], etc.). When $M$ is a two-dimensional surface, $r$ is a constant. We have the following corollary. Let ($M\sp 2,\bg(t)$), $t\in[0,T)$, be a solution of the normalized Ricci flow on a closed Riemannian surface $M^2$. Assume that $\lambda (t)$ is the lowest eigenvalue of $-\Delta + c R$ ($c\geq \frac14$), we have $e^{rt}\lambda$ is nondecreasing under the normalized Ricci flow. Moreover, if $r\le 0$, then $\lambda$ is nondecreasing. [**Acknowledgement:**]{} The author would like to thank Dr. Junfang Li for bringing [@ljf1] and [@ljf2] to his attention and for helpful discussion on this subject. He would like to thank Professor Eric Woolgar for his interest and pointing out the reference [@osw05]. He would like to thank Professor Duong H. Phong for his interest and encouragement. He also wants to thank MSRI for the generous support. [^1]: Research partially supported by an MSRI postdoctoral fellowship
{ "pile_set_name": "ArXiv" }
--- author: - '[Sudarsun Kannan (Rutgers University)]{}, [Yujie Ren (Rutgers University)]{}, [Abhishek Bhattacharjee (Yale University)]{}' bibliography: - 'bib/p.bib' title: Efficient Kernel Object Management for Tiered Memory Systems with KLOC ---
{ "pile_set_name": "ArXiv" }
--- abstract: 'Biochemical oscillations are prevalent in living organisms. Systems with a small number of constituents cannot sustain coherent oscillations for an indefinite time because of fluctuations in the period of oscillation. We show that the number of coherent oscillations that quantifies the precision of the oscillator is universally bounded by the thermodynamic force that drives the system out of equilibrium and by the topology of the underlying biochemical network of states. Our results are valid for arbitrary Markov processes, which are commonly used to model biochemical reactions. We apply our results to a model for a single KaiC protein and to an activator-inhibitor model that consists of several molecules. From a mathematical perspective, based on strong numerical evidence, we conjecture a universal constraint relating the imaginary and real parts of the first non-trivial eigenvalue of a stochastic matrix.' author: - 'Andre C. Barato$^{1}$ and Udo Seifert$^{2}$' title: | Coherence of Biochemical Oscillations is Bounded by\ Driving Force and Network Topology --- Introduction ============ Circadian rhythms [@gold97], the cell cycle [@ferr11] and gene expression in somitogenesis [@lewi03] constitute examples of biochemical oscillations that are of central importance for the functioning of living systems. While older observations of biochemical oscillations were made with glycosis [@pye66], more recent advances include the observation of 24-h oscillations of the phosphorylation level of the Kai proteins that form the circadian clock of a cyanobacterium [@naka05; @dong08]. Synthetically engineered genetic circuits can also display oscillatory behavior [@potv16]. On the theoretical side, the basic conditions for biochemical oscillations to set in are well understood for deterministic rate equations that ignore fluctuations [@nova08]. Such rate equations correspond to an effective description of an underlying set of chemical reactions that is fully described by a stochastic chemical master equation. In principle, biochemical oscillations can happen in a small system with large fluctuations in the chemical species that oscillates, leading to variability in the period of oscillations. Hence, stochastic biochemical oscillations cannot be coherent for an indefinite time. The number of coherent oscillations, which quantifies the precision of the biochemical oscillator, is given by the time for which oscillations remain coherent divided by the period of oscillation [@more07; @cao15]. In such a context, a relevant question is as follows: Given a biochemical system with significant fluctuations, what is the number of coherent oscillations that can be sustained? In a recent work related to this question, Cao et al. [@cao15] have investigated several stochastic models that display biochemical oscillations. They demonstrated that this number of coherent of oscillations increases with a larger rate of entropy production, which quantifies the free energy consumption of the biochemical system. Their work can be seen as part of the recent effort to understand the relation between a certain kind of precision and free energy consumption in biological systems, which include studies on kinetic proofreading [@qian07], adaptation [@lan12], cellular sensing [@meht12; @bara13b; @palo13; @lang14; @gove14; @gove14a; @hart15], information processing [@sart14; @bara14a; @bo15; @ito15; @mcgr17], and cost of precision in Brownian clocks [@bara16]. 
In particular, we have recently shown a general relation that establishes the minimal energetic cost for a certain precision associated with a random variable like the output of a chemical reaction. This thermodynamic uncertainty relation [@bara15a; @piet16; @ging16] can be used to infer an unknown enzymatic scheme in single molecule experiments [@bara15] and yields a bound on the efficiency of a molecular motor [@piet16b]. In this paper, we obtain a universal bound on the number of coherent oscillations that can be sustained in any biochemical system that can be modelled as a Markov process with discrete states. This universal bound depends on the thermodynamic forces that drive the system out of equilibrium and on the topology of the network of states. Our results are derived from a conjecture about the first non-trivial eigenvalue of a stochastic matrix that we support with thorough numerical evidence. Specifically, we obtain a bound on the ratio of the imaginary and real parts of this eigenvalue that quantifies the number of coherent oscillations. We illustrate our results with a model for a single KaiC protein [@vanz07] and with an activator-inhibitor model with several molecules [@cao15]. The paper is organized as follows. We consider the simple case of a unicyclic network in Sec. \[sec2\]. Our general bound for arbitrary multicyclic networks is formulated in Sec. \[sec3\]. In Sec. \[sec4\] we apply our results to the two models. We conclude in Sec. \[sec5\]. In Appendix \[app1\] we provide evidence for the bound for the case of unicyclic networks. The relation between the number of coherent oscillations and the Fano factor is discussed in Appendix \[app2\]. Numerical evidence for our conjecture is presented in Appendix \[app3\]. Appendix \[app4\] is dedicated to the model for a single KaiC. The relation between number of coherent oscillation and the entropy production in analyzed in Appendix \[app5\]. Finally, Appendix \[app6\] is dedicated to the activator-inhibitor model. Unicycic network {#sec2} ================ As a simple model for a biochemical oscillation we start with a single enzyme $E$ with the unicyclic reaction scheme $$E_1\xrightleftharpoons[k_2^-]{k_1^+} E_2 \xrightleftharpoons[k_3^-]{k_2^+} E_3\ldots E_{N-1}\xrightleftharpoons[k_{N}^-]{k_{N-1}^+}E_N \xrightleftharpoons[k_{1}^-]{k_{N}^+}E_1, \label{fullreaction}$$ where $k_i^{\pm}$ are transition rates. A generic transition from state $E_i$ to $E_{i+1}$ can represent, for example, a conformational change, binding of substrate to the enzyme or the release of a product from the enzyme. The thermodynamic force driving this system out of equilibrium is given by the affinity [@seif12] $$\A\equiv \ln\prod_{i=1}^N k_i^+/k_i^-,$$ where Boltzmann’s constant $k_B$ multiplied by the temperature $T$ is set to $k_BT=1$ throughout in this paper. For example, if one $ATP$ is consumed and $ADP+P_i$ generated in the cycle in Eq. , then the affinity is the chemical potential difference $\A=\mu_{ATP}-\mu_{ADP}-\mu_{P_i}$. ![(Color online) Correlation function $C_{1,1}(t)$. The number of states is $N=100$, $\A=200$, and the transition rates are uniform with $k^-=1$. For this case $X_I\simeq 0.401$, which gives a period of $15.66$, and $X_R\simeq0.01655$, as indicated by the red solid line. []{data-label="fig1"}](./fig1.eps){width="65mm"} The model from Eq. 
follows the master equation $d\mathbf{P}(t)/dt= \mathbf{L}\mathbf{P}(t)$, where $\mathbf{P}(t)=\{P_1(t),P_2(t),\ldots,P_N(t)\}^T$ is the vector of probabilities to be in a certain state. The stochastic matrix $\mathbf{L}$ is defined by $$\mathbf{L}_{j,i}\equiv k_i^{+}\delta_{i,j-1}+k_i^{-}\delta_{i,j+1}-(k_i^{-}+k_i^{+})\delta_{i,j},$$ where $\delta_{i,j}$ is the Kronecker delta, $j-1=N$ for $j=1$, and $j+1=1$ for $j=N$. Let us assume that the enzyme is phosphorylated only in state $E_1$. The precision of oscillations in the phosphorylation level of an enzyme that is phosphorylated at time $t=0$ is characterized by the number of coherent oscillations in the correlation function $C_{1,1}(t)$ plotted in Fig. \[fig1\], which is the probability that the enzyme is in state $E_1$ at time $t$ given that the enzyme was in state $E_1$ at time $0$, i.e., $$C_{1,1}(t)\equiv \left[\exp(\mathbf{L}t)\mathbf{P}(0)\right]_1, \label{eqcorr}$$ where $\mathbf{P}(0)=\{1,0,0,\ldots,0\}$ and the subscript $1$ indicates the first component of the vector $\exp(\mathbf{L}t)\mathbf{P}(0)$. For large $t$, this correlation function tends to $P^{st}_1$, which is the stationary distribution for state $1$. This stationary distribution is the right eigenvector of the stochastic matrix $\mathbf{L}$ that is associated with the eigenvalue $0$. The first nontrivial eigenvalue of the stochastic matrix $\lambda= -X_R\pm X_Ii$, gives the decay time $X_R^{-1}$ and the period of oscillations $2\pi/X_I$ in Fig. \[fig1\]. We characterize the coherence of oscillations by the ratio [@qian00] $$\R\equiv X_I/X_R, \label{eqratio}$$ where the number of coherent oscillations [@more07; @cao15] is $X_I/(2\pi X_R)=\R/(2\pi)$ . For general Markov processes that fulfill detailed balance, which corresponds to $\A=0$ for the unicyclic model, $X_I=0$ and there are no oscillations in correlation functions. Hence, a non-zero driving affinity $\A$ is a necessary condition for biochemical oscillations. In particular, for the case of uniform rates in Eq. given by $k_i^{-}=k^-$ and $k_i^{+}=k^-\textrm{e}^{\A/N}$, we obtain $X_R= [1-\cos(2\pi/N)](k_++k_-)$ and $X_I=\sin(2\pi/N)(k_+-k_-)$. For the general unicyclic scheme in Eq. with fixed affinity $\A$ and number of states $N$, the ratio $\R$ is maximized for uniform transition rates, which leads to our first main result $$\R\le \cot(\pi/N)\tanh[\A/(2N)]\equiv f(\A,N). \label{firstmain}$$ Thus, the maximal number of coherent oscillations in a unicyclic network is bounded by the thermodynamic force $\A$ and by the network topology through the number of states $N$. The evidence for this bound is as follows. For $N=3$ we can show analytically that uniform rates correspond to a maximum of $\R$, whereas for larger $N$ we rely on extensive numerical evidence as shown in Appendix \[app1\]. Specifically, we have confirmed this conjecture up to $N=8$ with both numerical maximization of $\R$ and evaluation of $\R$ at randomly chosen rates. Similar to the ratio $\R$, the Fano factor associated with the probability current is extremized for uniform rates [@bara15a; @bara15]. However, as discussed in Appendix \[app2\], the bound in Eq. and this earlier bound on the Fano factor are different results, i.e., one does not imply the other. Multicyclic networks {#sec3} ==================== Biochemical networks are typically more complicated than a single cycle. We now extend the bound from Eq. to general multicyclic networks. 
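The numerical evidence behind the bound (\[firstmain\]), and behind the multicyclic tests described below, amounts to diagonalizing the generator, extracting the first non-trivial eigenvalue and comparing $\R$ with $f(\A,N)$. A minimal sketch of this procedure for the unicyclic case follows; it assumes Python with NumPy, and the choices $N=6$, $\A=12$, the rate ranges and the random seed are arbitrary.

```python
import numpy as np

def generator(kp, km):
    """Unicyclic generator: kp[i] is the rate i -> i+1, km[i] the rate i -> i-1."""
    N = len(kp)
    L = np.zeros((N, N))
    for i in range(N):
        L[(i + 1) % N, i] += kp[i]
        L[(i - 1) % N, i] += km[i]
        L[i, i] -= kp[i] + km[i]
    return L

def coherence_ratio(L):
    """Ratio R = X_I/X_R obtained from the first non-trivial eigenvalue of L."""
    ev = np.linalg.eigvals(L)
    ev = ev[np.abs(ev) > 1e-10]        # discard the zero (stationary) eigenvalue
    lam = ev[np.argmax(ev.real)]       # slowest decaying mode
    return np.abs(lam.imag) / np.abs(lam.real)

def f_bound(A, N):
    """f(A, N) = cot(pi/N) * tanh(A/(2N))."""
    return np.tanh(A / (2 * N)) / np.tan(np.pi / N)

N, A = 6, 12.0

# uniform rates saturate the bound: the two printed numbers should coincide
R_uni = coherence_ratio(generator(np.full(N, np.exp(A / N)), np.ones(N)))
print(R_uni, f_bound(A, N))

# random rates (drawn with positive affinity) stay below the bound for their own A
rng = np.random.default_rng(0)
for _ in range(5):
    kp, km = rng.uniform(1.0, 10.0, N), rng.uniform(0.1, 1.0, N)
    A_rnd = np.sum(np.log(kp / km))
    print(coherence_ratio(generator(kp, km)) <= f_bound(A_rnd, N))
```

Uniform rates should reproduce $f(\A,N)$ to numerical precision, while every random draw should respect the bound for its own affinity.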
As an example, we consider an enzyme $E$ that consumes a substrate $S$ and generates a product $P$. The enzyme has two binding sites, leading to the network of states shown in Fig. \[fig2a\], which is a common model in enzyme kinetics [@bara15]. The affinity that drives the system out of equilibrium is the chemical potential difference between substrate and product $\varDelta\mu=\mu_S-\mu_P$. This network of states has four types of cycles, as shown in Fig. \[fig2a\]. There are cycles with three states and affinity $\varDelta\mu$, like the cycle $E+S\to ES\to EP\to E+P$; cycles with four states and affinity $0$, like the cycle $E+S+P\to ES+P\to ESP\to EP+S\to E+S+P$; cycles with five states and affinity $\varDelta\mu$, like the cycle $ES+S\to ESS\to ESP\to EPP\to EP+P\to ES+P$; and one cycle with six states and affinity $2\varDelta\mu$, which is the outer cycle in Fig. \[fig2a\] that goes through all states. Among all these cycles, the last one with $\A=2\varDelta\mu$ and $N=6$ leads to the largest value of the function $f(\A,N)$. We have verified numerically that indeed $f(2\varDelta\mu,6)$ bounds the ratio $\R$ with numerical maximization and numerical evaluation at randomly chosen rates, as shown in Fig. \[fig2b\]. The bound is saturated if the transition rates for the cycle with six states are uniform and much faster than the rates associated with the three links in the middle that are not part of the six-state cycle. In this way, the multicyclic network corresponds effectively to a unicyclic network with six states. In Appendix \[app3\], we perform similar numerical tests for several multicyclic networks that do not share any symmetry, and in all cases the ratio $\R$ follows a similar bound.

Based on this numerical evidence we conjecture the following universal bound on the ratio $\R$. Consider an arbitrary Markov process with a finite number of states $N$ on an arbitrary multicyclic network. The cycles are labeled by $\alpha$, with a number of states $N_\alpha\le N$ and affinity $\A_\alpha$, where $\textrm{e}^{\A_\alpha}$ is the product of forward transition rates divided by backward transition rates over all links in the cycle (see Appendix \[app3\]). The affinity and number of states of the cycle with the maximal value of $f(\A_\alpha,N_\alpha)$, defined in Eq. , are denoted by $\A^*$ and $N^*$, respectively, i.e., $f(\A^*,N^*)=\textrm{max}_\alpha f(\A_\alpha,N_\alpha)$. The ratio $\R$ is then bounded by $$\R\le f(\A^*,N^*)\le \A^*/(2\pi). \label{secondmain}$$ The basic idea behind this bound is that the simple unicyclic network in Eq. is a building block for a generic multicyclic network: any two-point correlation function cannot have a larger number of coherent oscillations than the bound determined by its “best” cycle. Hence, the number of coherent oscillations is bounded by the thermodynamic force $\A^*$ and the topology of the network of states, as characterized by $N^*$. Our bound in Eq. leads to two general necessary conditions for a large number of coherent oscillations: a large number of states and a large maximal affinity. For biochemical models with irreversible transitions, e.g., the models in [@more07; @vanz07], the affinity $\A^*$ formally diverges, and the bound in Eq. becomes $\R\le \cot(\pi/N^*)\le \cot(\pi/N)$. The weaker second inequality involving the total number of states $N\ge N^*$ follows from a known result about the eigenvalues of a discrete-time stochastic matrix [@dmit45; @dmit46; @swif72].
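To make the evaluation of this bound concrete, the following sketch (an illustration, assuming the cycles of the network are listed explicitly and the rates are stored as a dictionary of directed transition rates) computes the cycle affinities $\A_\alpha$, the function $f(\A_\alpha,N_\alpha)$, and the resulting bound $\max_\alpha f(\A_\alpha,N_\alpha)$.

```python
import math

def cycle_affinity(rates, cycle):
    """Affinity A_alpha = ln(product of forward rates / product of backward
    rates) for a cycle given as a closed state sequence, e.g. (1, 2, 4, 1);
    rates is a dict {(i, j): k_ij} of directed transition rates."""
    return sum(math.log(rates[(a, b)] / rates[(b, a)])
               for a, b in zip(cycle[:-1], cycle[1:]))

def f(A, N):
    """Single-cycle bound f(A, N) = cot(pi/N) * tanh(A/(2N)); the absolute
    value makes the result independent of the cycle orientation."""
    return math.tanh(abs(A) / (2 * N)) / math.tan(math.pi / N)

def conjectured_bound(rates, cycles):
    """Conjectured bound max_alpha f(A_alpha, N_alpha) over the listed cycles."""
    return max(f(cycle_affinity(rates, c), len(c) - 1) for c in cycles)
```

For the first network of Appendix \[app3\], for instance, the cycles would be listed as `(1, 2, 4, 1)`, `(1, 3, 4, 1)`, and the four-state cycle, and the cycle maximizing `f` then determines $\A^*$ and $N^*$.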
For the case of a complex network of states where identifying the number of states in the relevant cycle is not feasible, like in the activator-inhibitor model below, we can use the second inequality in Eq. , based on $\lim_{N\to\infty}f(\A,N)=\A/(2\pi)$. We now proceed to illustrate this second main result in two models.

Case studies {#sec4}
============

Model for a single KaiC {#sec4a}
-----------------------

First, we consider a model for a single KaiC hexamer along the lines of the model proposed in [@vanz07]. The assumptions entering the model, which is depicted in Fig. \[fig3a\], are the following. A phosphate can bind to each one of the six monomers, hence the phosphorylation level of the hexamer varies from $i=0$, with no phosphate, to $i=6$, with all monomers phosphorylated. Each of the six monomers can be either active or inactive. However, either all monomers are active or all monomers are inactive, since the energetic cost of having two monomers with different conformations is high enough to avoid such configurations. There are a total of $14$ states, denoted by $C_i$ for $i$ phosphorylated active monomers and $\tilde{C}_i$ for $i$ phosphorylated inactive monomers. If the hexamer is active, phosphorylation reactions occur; if the hexamer is inactive, only dephosphorylation reactions occur.

The transition rates of this model for a single KaiC protein are given in the caption of Fig. \[fig3a\]. The parameter $\varDelta \mu$ is the chemical potential difference of ATP hydrolysis. The parameter $E$ sets the energy of a state that depends on the hexamer activity and on the phosphorylation level. If the hexamer is active, the energy of a state $C_i$ is $Ei/6$, and if it is inactive, the energy of a state $\tilde{C}_i$ is $E(6-i)/6$. This parametrization implies that the transition rate from $C_6$ to $\tilde{C}_6$ and the transition rate from $\tilde{C}_0$ to $C_0$ are both larger than the rates for the respective reversed transitions. The parameters $k$ and $\gamma$ are related to the time-scales of changes in the phosphorylation level and conformational changes between active and inactive, respectively.

The phosphorylation level of the KaiC protein oscillates with the number of coherent oscillations given by $\R/(2\pi)$, as shown in Appendix \[app4\]. The cycle with the largest value of the function $f(\A,N)$ is the cycle that goes through all $N=14$ states with $\A=6\varDelta \mu$, which is marked with the red arrows in Fig. \[fig3a\]. Hence, for this model we have $\R\le f(6\varDelta\mu,14)$, as shown in Fig. \[fig3b\]. For fixed $E$ and $\gamma$ we obtain the red dashed curve in Fig. \[fig3b\] for $\R$ as a function of $\varDelta\mu$. Interestingly, while the number of coherent oscillations has a maximum, after which it decreases to zero with increasing $\varDelta \mu$, the entropy production from stochastic thermodynamics [@seif12] is an increasing function of $\varDelta\mu$. Hence, the number of coherent oscillations can also decrease with an increase of the rate of free energy consumption, which provides a counterexample to the relation between the number of coherent oscillations and energy dissipation inferred in [@cao15]. We discuss the relation between $\R$ and the entropy production further in Appendix \[app5\]. The maximal number of coherent oscillations $\R/(2\pi)$ that can be achieved in this model is strictly speaking less than $1$.
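A short numerical illustration (a sketch, not taken from the original analysis) evaluates the bound $f(6\varDelta\mu,14)$ for this model over a range of driving forces; its large-$\varDelta\mu$ limit is $\cot(\pi/14)$, which caps the number of coherent oscillations $\R/(2\pi)$ at $\cot(\pi/14)/(2\pi)\approx 0.70$.

```python
import math

def f(A, N):
    """Single-cycle bound f(A, N) = cot(pi/N) * tanh(A/(2N))."""
    return math.tanh(A / (2 * N)) / math.tan(math.pi / N)

# Dominant cycle of the single-KaiC model: N = 14 states, affinity A = 6*dmu.
for dmu in (5, 10, 20, 40):
    print(dmu, f(6 * dmu, 14))

# Large-driving limit: R/(2*pi) can never exceed cot(pi/14)/(2*pi) ~ 0.70,
# so a single hexamer cannot sustain a full coherent oscillation.
print(1 / (2 * math.pi * math.tan(math.pi / 14)))
```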
If a single molecule does not have a large number of states, several coherent oscillations can only be sustained in a system with many molecules, as we discuss next in our second example.

Activator-inhibitor model {#sec4b}
-------------------------

We consider the activator-inhibitor model from [@cao15], see Fig. \[fig4a\]. The main components of this model are inhibitors $X$, activators $R$ and enzymes $M$ that can be in four different states. The enzyme goes through a phosphorylation cycle over these four states, hydrolyzing one ATP and thus liberating a free energy $\varDelta\mu$. The enzyme $M$ in its phosphorylated form ($M_p$) activates $R$ and $X$, whereas $X$ inhibits $R$. Furthermore, the enzyme $M$ must bind an $R$ in order to phosphorylate. Hence, $R$ activates the production of $R$ and $X$, while $X$ inhibits $R$. This feedback loop leads to oscillations in, for example, the number of species $X$. Finally, there is a phosphatase $K$ that must bind to the enzyme $M$ for the dephosphorylation reaction. Further details of the model are given in Appendix \[app6\].

Two important aspects of the behavior of this model are the following. First, the number of oscillations increases with $\varDelta \mu$ and saturates for large enough $\varDelta\mu$. Second, in order for different enzymes $M$ to synchronize their cycles, they must compete for the smaller number of phosphatases $N_K<N_{M_T}$, where $N_{M_T}$ is the total number of enzymes, as explained in Appendix \[app6\]. However, if $N_K$ is too small, the number of enzymes that synchronize is also too small to generate oscillations. Hence, there is an optimal value for $N_K$. These features are shown in Fig. \[fig4b\].

Due to the complex network of states of this model we bound the ratio $\R$ with the second inequality in Eq. . The largest affinity $\A^*$ is given by $\A^*=\varDelta\mu N_{M_T}$, which corresponds to a cycle where all enzymes $M$ go through their cycles in a synchronized way. As shown in Fig. \[fig4b\], the values of $\R$ obtained with numerical simulations are approximately one order of magnitude below the fundamental limit set by our bound of $N_{M_T}\varDelta\mu/(2\pi)$, which gives $\R\simeq796$ for $\varDelta\mu=10$. This result is reasonable, as saturating the bound in a multicyclic network requires transition rates such that an optimal cycle dominates, which is not the case for the present model. The realization of this optimal cycle in a stochastic trajectory would require an unlikely sequence of events in which all enzymes $M$ go through their own cycles in a synchronized way.

For a close-to-optimal value of $N_K$ that maximizes $\R$, the number of enzymes $M$ that synchronize is roughly $N_K$. Hence, cycles with an affinity $\varDelta\mu N_K$ should be typical. Guided by our bound, it is then interesting to compare the value of $\R$ with the estimate $\varDelta\mu N_K/(2\pi)$. As shown in Fig. \[fig4b\], indeed this number can give an approximate value of $\R$, typically overestimating $\R$. The estimate is best for $N_K=30$ and for $\varDelta\mu=10$. For large $\varDelta\mu$ the bound grows linearly, while $\R$ saturates. Our results then indicate that this estimate works well in the regime where $\varDelta\mu$ is close to its saturation value and $N_K$ is close to its optimal value.
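The numbers quoted above can be reproduced directly from the parameter values listed in Appendix \[app6\], where $V=50$ and $N_{M_T}=10V$ (a quick check, not code from the original work).

```python
import math

V = 50
N_MT = 10 * V      # total number of enzymes M
dmu = 10.0

# Fundamental limit from the second inequality, with A* = N_MT * dmu:
print(N_MT * dmu / (2 * math.pi))          # ~ 795.8, i.e. R <= ~796

# Heuristic estimate with typical cycles of affinity N_K * dmu:
for N_K in (30, 40, 50):
    print(N_K, N_K * dmu / (2 * math.pi))  # ~ 47.7, 63.7, 79.6
```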
Even though this heuristic argument is restricted to this model, if competition for some scarce molecule that leads to synchronization is present in the biochemical system, e.g., in the model for the Kai system from [@vanz07], a similar reasoning that leads to an estimate of the number of coherent oscillations should be valid.

Conclusion {#sec5}
==========

In summary, we have conjectured a new bound on the number of coherent biochemical oscillations for systems with large fluctuations. Our universal result depends only on the thermodynamic forces that drive the system out of equilibrium and on the network topology through the cycle with the largest function $f$ from Eq. . Knowledge of the chemical potential differences and the network of states of a biochemical system thus leads to a bound on the number of coherent oscillations. As illustrative examples, we obtained the largest number of oscillations that can be sustained by a single KaiC hexamer and analyzed the activator-inhibitor model, showing that our bound is also valid in models with a number of molecules that is large enough to make the network of states complicated but small enough to keep fluctuations relevant and, therefore, make a description in terms of deterministic rate equations inappropriate.

It remains to be seen whether and how our bound can be used as a guiding principle to understand how systems like circadian clocks have evolved and to engineer systems with precise oscillations in synthetic biology. Our results apply to autonomous biochemical oscillators. Analyzing the relation between precision and thermodynamics for biochemical oscillators that are coupled to an external periodic signal is an interesting direction for future work. Finally, a rigorous mathematical proof of our conjecture about the first excited eigenvalue of a stochastic matrix is an open problem for the theory of Markov processes.

We thank F. Jülicher for helpful discussions.

Evidence for unicyclic network {#app1}
==============================

We discuss the evidence for the bound in Eq. for the unicyclic scheme. The mathematical problem is to calculate the first non-trivial eigenvalue of the stochastic matrix $$\mathbf{L}_{j,i}\equiv k_i^{+}\delta_{i,j-1}+k_i^{-}\delta_{i,j+1}-(k_i^{-}+k_i^{+})\delta_{i,j},$$ where $\delta_{i,j}$ is the Kronecker delta, $j-1=N$ for $j=1$, and $j+1=1$ for $j=N$. The absolute value of the imaginary (real) part of this eigenvalue is denoted $X_I$ ($X_R$). For the case $N=3$ this eigenvalue can be exactly calculated with some algebra, leading to $$\R\equiv X_I/X_R= \sqrt{4C_1/C_2^2-1},$$ where $$\begin{aligned} C_1\equiv & k_1^-k_2^-+ k_1^-k_3^-+ k_2^-k_3^-+k_1^+k_2^++ k_1^+k_3^++ k_2^+k_3^+\nonumber\\ & +k_1^-k_2^++k_2^-k_3^++k_3^-k_1^+\end{aligned}$$ and $$C_2\equiv k_1^++k_2^++k_3^++k_1^-+k_2^-+k_3^-.$$ If $4C_1<C_2^2$, there are no oscillations in correlation functions. We want to find the transition rates that maximize $\R$ for a fixed affinity $$\A\equiv \ln\left(\prod_{i=1}^N k_i^+/k_i^-\right).$$ This maximum can be found with the Lagrange function $$\Lambda(\{k_i^+\},\{k_i^-\},\alpha)= 4C_1/C_2^2-\alpha(k_1^+k_2^+k_3^+-k_1^-k_2^-k_3^-\textrm{e}^\A),$$ where $\alpha$ is a Lagrange multiplier.
The derivatives of $\Lambda$ with respect to the transition rates are given by $$\frac{d\Lambda}{dk_1^+}=\frac{4(k_2^++k_3^++k_3^-)}{C_2^2}-8\frac{C_1}{C_2^3}-k_2^+k_3^+\alpha$$ and $$\frac{d\Lambda}{dk_1^-}=\frac{4(k_2^-+k_2^++k_3^-)}{C_2^2}-8\frac{C_1}{C_2^3}+\textrm{e}^\A k_2^-k_3^-\alpha.$$ Due to symmetry, it is easy to deduce the derivatives with respect to $k_2^\pm$ and $k_3^\pm$ from the above expressions. If we substitute uniform rates $k_i^-=k^-$ and $k_i^+= \textrm{e}^{\A/3}k^-$ in the above expressions, we obtain that both derivatives become zero with a Lagrange multiplier $$\alpha=\frac{4\textrm{e}^{-2\A/3}(\textrm{e}^{\A/3}-1)}{9(\textrm{e}^{\A/3}+1)^3(k^-)^3}.$$ Hence, we have proved that $\R$ is extremized for uniform rates. We can easily evaluate $\R$ for specific rates and check that uniform rates indeed correspond to a maximum.

For larger $N$, up to $N=8$, we have calculated this eigenvalue numerically. We have maximized the ratio $\R$ numerically and observed that it is maximized for uniform rates in all cases, providing convincing evidence for the bound in Eq. . As an independent check we have also evaluated $\R$ numerically for randomly chosen rates. As an example, we show a scatter plot obtained with this method in Fig. \[figsup1\].

![(Color online) Scatter plot for the unicyclic model with $N=4$. The bound in Eq. is represented by the solid red line. The rate $k_1^+$ was set to $k_1^+= \textrm{e}^{\A}k_1^-k_2^-k_3^-k_4^-/(k_2^+k_3^+k_4^+)$, while the other seven rates were randomly chosen as $10^{x}$, with $x$ uniformly distributed between $-3$ and $3$. For this figure, we have evaluated $\R$ for $10^{7}$ sets of rates.[]{data-label="figsup1"}](./fig5.eps){width="75mm"}

Relation between $\R$ and the Fano factor {#app2}
=========================================

We now explain the difference between the bound in Eq. and a bound on the Fano factor $F$ obtained in [@bara15a; @bara15]. This Fano factor is given by $F=2D/J$, where $J$ is the average probability current and $D$ the diffusion constant associated with the current [@bara15a]. This bound on the Fano factor can be written as $F^{-1}\le N\tanh[\A/(2N)]$, where the quantity $F^{-1}$ is also maximized for uniform rates. Furthermore, for uniform rates $2D=(k^++k^-)/N^2$ and $J=(k^+-k^-)/N$, implying $X_I= \sin(2\pi/N)NJ$ and $X_R= [1-\cos(2\pi/N)]N^2\,2D$. Nevertheless, for arbitrary transition rates there are no such simple relations, with a prefactor that only depends on $N$, between $X_I$ ($X_R$) and $J$ ($D$). For uniform rates $\R= F^{-1}N^{-1}\cot(\pi/N)$. Since $\R$ can be zero even out of equilibrium and $F^{-1}$ becomes zero only in equilibrium, we know that $F^{-1}N^{-1}\cot(\pi/N)$ can be larger than $\R$. Evaluating $\R$ and $F$ at different rates we find that $\R$ can also be larger than $F^{-1}N^{-1}\cot(\pi/N)$. Hence, the bound on the Fano factor from [@bara15a; @bara15] does not imply our result in Eq. . Their main similarity is that both the Fano factor $F$ and the ratio $\R$ are extremized for uniform rates: it is common to find a function of several variables that is extremized at a symmetric point.

Evidence for multicyclic networks {#app3}
=================================

In this appendix we explain the numerical evidence for the bound in Eq. for multicyclic networks. For all cases, we have confirmed our bound with numerical calculation of the first non-trivial eigenvalue of the stochastic matrix.
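The random-rate checks used here and below can be sketched as follows (an illustration in the spirit of Fig. \[figsup1\], assuming NumPy; shown for the unicyclic case with $N=4$, where one rate is fixed by the affinity constraint and the remaining seven are drawn as $10^x$ with $x$ uniform in $[-3,3]$).

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_R(kp, km):
    """R = X_I/X_R for a unicyclic network with forward rates kp[i]
    (jump i -> i+1) and backward rates km[i] (jump i -> i-1)."""
    N = len(kp)
    L = np.zeros((N, N))
    for i in range(N):
        L[(i + 1) % N, i] = kp[i]
        L[(i - 1) % N, i] = km[i]
        L[i, i] = -(kp[i] + km[i])
    ev = np.linalg.eigvals(L)
    ev = ev[np.argsort(np.abs(ev))][1:]   # drop the stationary eigenvalue 0
    lam = ev[np.argmax(ev.real)]
    return abs(lam.imag) / abs(lam.real)

N, A = 4, 10.0
bound = np.tanh(A / (2 * N)) / np.tan(np.pi / N)   # f(A, N)
samples = []
for _ in range(10000):
    x = 10.0 ** rng.uniform(-3, 3, size=2 * N - 1)
    km, kp = x[:N], np.empty(N)
    kp[1:] = x[N:]
    kp[0] = np.exp(A) * km.prod() / kp[1:].prod()  # affinity constraint fixes k_1^+
    samples.append(ratio_R(kp, km))
print(max(samples), bound)   # the largest sampled ratio stays below f(A, N)
```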
We have confirmed the bound with both numerical maximization of $\R$ and by evaluating $\R$ at randomly chosen transition rates.

As a first example of a multicyclic network, we consider the network with four states shown in Fig. \[figmulti1a\]. The numbers represent states and the links between them represent transition rates that are not zero. A transition rate from state $i$ to $j$ is denoted $k_{ij}$. This network has three cycles: two cycles with three states, $\mathcal{C}_1=(1,2,4,1)$ and $\mathcal{C}_2=(1,3,4,1)$, and one cycle with four states, $\mathcal{C}_3=(1,2,4,3,1)$. The affinity of cycle $\mathcal{C}_1$ is $$\A_1\equiv\ln\frac{k_{12}k_{24}k_{41}}{k_{21}k_{42}k_{14}}, \label{eqaff1}$$ and the affinity of cycle $\mathcal{C}_2$ is $$\A_2\equiv\ln\frac{k_{13}k_{34}k_{41}}{k_{31}k_{43}k_{14}}. \label{eqaff2}$$ The affinity of $\mathcal{C}_3$, which is not independent of $\A_1$ and $\A_2$, can be written as $$\A_3=\A_1-\A_2.$$ For the results shown in Fig. \[figmulti1b\], which confirm our bound for this network, we have set the affinities of the cycles as $\A_1= 3 \varDelta\mu/2$ and $\A_2= \varDelta\mu$, which leads to $\A_3=\varDelta\mu/2$. As shown in Fig. \[figmulti1b\], the cycle with the largest value of the function $f(\A_\alpha,N_\alpha)$, where $N_\alpha$ is the number of states of cycle $\alpha$, depends on the value of $\varDelta\mu$. For $\varDelta\mu<10.38$ the cycle with the largest value of this function is $\mathcal{C}_1$ with $f(3\varDelta\mu/2,3)$. For $\varDelta\mu>10.38$ the cycle with the largest value of this function is $\mathcal{C}_3$ with $f(\varDelta\mu/2,4)$.

The second multicyclic network has four states and is fully connected, as shown in Fig. \[figmulti2a\]. For this network we have one cycle with four states and four cycles with three states. We fix the affinities of these cycles as $$\A_1= \ln\frac{k_{12}k_{23}k_{34}k_{41}}{k_{21}k_{32}k_{43}k_{14}}=\varDelta\mu, \label{eqaff3}$$ $$\A_2= \ln\frac{k_{12}k_{23}k_{31}}{k_{21}k_{32}k_{13}}=0, \label{eqaff4}$$ $$\A_3= \ln\frac{k_{12}k_{24}k_{41}}{k_{21}k_{42}k_{14}}=0, \label{eqaff5}$$ where from now on we define the cycles through their affinities. These three affinities determine the values of the two remaining affinities as $$\A_4= \ln\frac{k_{13}k_{34}k_{41}}{k_{31}k_{43}k_{14}}=\varDelta\mu,$$ $$\A_5= \ln\frac{k_{23}k_{34}k_{42}}{k_{32}k_{43}k_{24}}=\varDelta\mu.$$ The cycle leading to the maximal value of $f(\A_\alpha,N_\alpha)$ is the cycle with four states and affinity $\A_1=\varDelta\mu$. The numerical results illustrating the bound $\R\le f(\varDelta\mu,4)$ for this network are shown in Fig. \[figmulti2b\].

Our third example is the network with six states and three cycles shown in Fig. \[figmulti3a\]. The affinity of the cycle with six states is fixed as $$\A_1=\ln\frac{k_{12}k_{23}k_{36}k_{65}k_{54}k_{41}}{k_{21}k_{32}k_{63}k_{56}k_{45}k_{14}}=\varDelta\mu. \label{eqaff1multi3}$$ We also fix the affinity $$\A_2=\ln\frac{k_{12}k_{25}k_{54}k_{41}}{k_{21}k_{52}k_{45}k_{14}}=\varDelta\mu. \label{eqaff2multi3}$$ These two affinities determine the affinity of the third cycle as $$\A_3=\ln\frac{k_{23}k_{36}k_{65}k_{52}}{k_{32}k_{63}k_{56}k_{25}}=0.$$ The dominant cycle is the one with affinity $\A_1=\varDelta \mu$ and six states. The numerical evidence for the bound $\R\le f(\varDelta \mu,6)$ is shown in Fig. \[figmulti3b\].

The fourth and last example is the network with five states and five cycles shown in Fig. \[figmulti4a\].
The affinity of the five-state cycle is set to $$\A_1=\ln\frac{k_{12}k_{23}k_{34}k_{45}k_{51}}{k_{21}k_{32}k_{43}k_{54}k_{15}}=\varDelta\mu. \label{eqaff1multi4}$$ There are two four-state cycles with affinities $$\A_2=\ln\frac{k_{23}k_{34}k_{45}k_{52}}{k_{32}k_{43}k_{54}k_{25}}=\varDelta\mu, \label{eqaff2multi4}$$ and $$\A_3=\ln\frac{k_{12}k_{24}k_{45}k_{51}}{k_{21}k_{42}k_{54}k_{15}}=0. \label{eqaff3multi4}$$ These three affinities determine the affinities of the two remaining three-state cycles, which are $$\A_4=\ln\frac{k_{23}k_{34}k_{42}}{k_{32}k_{43}k_{24}}=\varDelta\mu,$$ and $$\A_5=\ln\frac{k_{12}k_{25}k_{51}}{k_{21}k_{52}k_{15}}=0.$$ The dominant cycle has affinity $\A_1=\varDelta\mu$ and five states. The numerical evidence for the bound $\R\le f(\varDelta\mu,5)$ is shown in Fig. \[figmulti4b\].

For the multicyclic network given in Fig. \[fig2\] we have performed a similar analysis. This network has a total of 11 cycles. We identify the states $E,ES,EP,ESS,ESP,EPP$ in Fig. \[fig2\] as $1,2,3,4,5,6$, respectively. We have generated two sets of points. The first set has $10^9$ points and we accepted only results fulfilling $\R\ge\tanh(\varDelta\mu/8)$. For this set we have chosen the rates $k_{12}$, $k_{31}$, $k_{45}$, and $k_{56}$ as $\textrm{e}^{\varDelta\mu/6}10^{(2+x)/2}$; the rates $k_{21}$, $k_{25}$, $k_{32}$, $k_{35}$, $k_{36}$, $k_{42}$, $k_{52}$, $k_{53}$, $k_{54}$, and $k_{65}$ as $10^x$; with $x$ uniformly distributed between $-2$ and $2$. The four remaining rates $k_{13}$, $k_{23}$, $k_{24}$, and $k_{63}$ were determined by the constraints set by the affinities. The second set has $10^7$ points and the $14$ independent rates were chosen as $10^x$, with $x$ uniformly distributed between $-2$ and $2$.

In summary, we have confirmed our bound numerically for four different networks aside from the one displayed in Fig. \[fig2\]. Since these networks do not share any symmetry, our full numerics provides strong evidence for our bound conjectured in Eq. .

Phosphorylation level in the model for a single KaiC {#app4}
====================================================

We show that the phosphorylation level of the KaiC protein displays oscillations, with the number of coherent oscillations characterized by the ratio $\R$. For the single KaiC model we analyze in Sec.
\[sec4a\], the stochastic matrix reads $$\left( \begin{array}{cccccccccccccc} -r_1 & e^{E/6} \gamma & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e^{E} k \\ e^{\varDelta\mu /2} \gamma & -r_2 & e^{E/6} \gamma & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e^{2 E/3} k & 0 \\ 0 & e^{\varDelta\mu /2} \gamma & -r_2 & e^{E/6} \gamma & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e^{E/3} k & 0 & 0 \\ 0 & 0 & e^{\varDelta\mu /2} \gamma & -r_2 & e^{E/6} \gamma & 0 & 0 & 0 & 0 & 0 & k & 0 & 0 & 0 \\ 0 & 0 & 0 & e^{\varDelta\mu /2} \gamma & -r_3 & e^{E/6} \gamma & 0 & 0 & 0 & k & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & e^{\varDelta\mu /2} \gamma & -r_4 & e^{E/6} \gamma & 0 & k & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & e^{\varDelta\mu /2} \gamma & -r_5 & k & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & e^{E} k & -r_1 & e^{E/6} \gamma & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & e^{2 E/3} k & 0 & e^{\varDelta\mu /2} \gamma & -r_2 & e^{E/6} \gamma & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & e^{E/3} k & 0 & 0 & 0 & e^{\varDelta\mu /2} \gamma & -r_2 & e^{E/6} \gamma & 0 & 0 & 0 \\ 0 & 0 & 0 & k & 0 & 0 & 0 & 0 & 0 & e^{\varDelta\mu /2} \gamma & -r_2 & e^{E/6} \gamma & 0 & 0 \\ 0 & 0 & k & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e^{\varDelta\mu /2} \gamma & -r_3 & e^{E/6} \gamma & 0 \\ 0 & k & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e^{\varDelta\mu /2} \gamma & -r_4 & e^{E/6} \gamma \\ k & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e^{\varDelta\mu /2} \gamma & -r_5 \\ \end{array} \right) \label{matdef}$$ where $r_1\equiv k+\gamma\textrm{e}^{\varDelta\mu/2}$, $r_2\equiv k + \gamma\textrm{e}^{E/6}+\gamma\textrm{e}^{\varDelta\mu/2}$, $r_3\equiv k\textrm{e}^{E/3}+\gamma\textrm{e}^{E/6}+\gamma\textrm{e}^{\varDelta\mu/2}$, $r_4\equiv k\textrm{e}^{2E/3}+\gamma\textrm{e}^{E/6}+\gamma\textrm{e}^{\varDelta\mu/2}$, and $r_5\equiv k\textrm{e}^{E}+\gamma\textrm{e}^{E/6}$. The first seven states are related to the inactive form of the protein, with state $1$ corresponding to $C_0$ and state $7$ corresponding to $C_6$. The last seven states are related to the active form of the protein, with state $8$ corresponding to $\tilde{C}_6$ and state $14$ corresponding to $\tilde{C}_0$. The phosphorylation level of the protein is a state function. Its average is given by the expression $$G(t)=\sum_{i=0}^6 i[P_{C_i}(t)+P_{\tilde{C}_i}(t)], \label{phosdef}$$ $P_{C_i}(t)$ is the probability of configuration $C_i$ at time $t$. Initially the phosphorylation level is zero with the protein in state $C_0$, i.e., $P_{C_0}(0)=1$. The probabilities at time $t$ can be calculated with the expression $$\mathbf{P}(t)=\exp(\mathbf{L}t)\mathbf{P}(0), \label{evol}$$ where $\mathbf{P}(0)= (1,0,0,0,0,0,0,0,0,0,0,0,0,0)^{T}$ and $\mathbf{L}$ given in Eq. . We have calculated the phosphorylation level of the KaiC protein as a function of time from Eq. and Eq. . The result is shown in Fig. \[figsup2\]. Clearly the exponential decay of the amplitude and the period of oscillation are determined by the first eigenvalue of the matrix in Eq. (see caption of Fig. \[figsup2\]). ![(Color online) Phosphorylation level of the KaiC protein. The parameters of the model are set to $k=\textrm{e}^{0.48097}$, $\gamma=\textrm{e}^{10.3475}$, $E=19.86$ and $\A=20$. Numerical calculation of the first eigenvalue gives $X_R\simeq 6.82\times10^7$ and $X_I\simeq 29.8\times10^7$, which gives $\R\simeq 4.37$. The red solid line represents an exponential function with decay exponent $X_R$. 
[]{data-label="figsup2"}](./fig10.eps){width="65mm"} Relation between $\R$ and entropy production {#app5} ============================================ In this appendix, we discuss the relation between the ratio $\R$ and the entropy production from stochastic thermodynamics [@seif12], which we denote by $\sigma$. For a unicyclic network with uniform rates $k^+= e^{\A/N}k^-$ this entropy production is given by $$\sigma= (\A/N)(\textrm{e}^{\A/N}-1)k_-. \label{entun}$$ For large $N$, we obtain $\R= \A/2\pi$, as in Eq. , and, from $X_I=(\textrm{e}^{\A/N}-1)k_-\sin(2\pi/N)$, we obtain $\varDelta Q\equiv 2\pi\sigma/X_I= \A$. If $\sigma$ is interpreted as the rate of heat dissipated to the environment [@seif12], $\varDelta Q$ is the dissipated heat per period of oscillation. Hence, for a unicyclic network with large number of states $N$, we find $$\R^{-1}= 1/\varDelta Q. \label{reltu}$$ This expression is a particular case of the relation found in [@cao15], which states that after some critical value $\Delta Q_c$, for which oscillations set in, the inverse of the number of biochemical oscillations decay to some plateau with $(\Delta Q-\Delta Q_c)^{-1}$. For our particular case both $\Delta Q_c$ and the plateau are zero. In [@cao15] this relation was demonstrated to be fulfilled for several different models. While this relation is true for an unicyclic network with uniform rates that maximize $\R$, for arbitrary rates $\R$ can also decrease with an increase in $\Delta Q$. The entropy production for the generic unicylic model in Eq. reads $$\sigma= \A(P_Nk_1^+-P_1k_1^-),$$ where $P_i$ is the stationary probability of state $i$. As an example, we consider the unicyclic model for $N=3$ with $k_1^-=k_2^-=k_3^-=1$, $k_1^+=k_3^+= \textrm{e}^{\A/4}$ and $k_2^+= \textrm{e}^{\A/2}$. In Fig. \[figsupact2a\] we show that $\R$ as a function of $\varDelta Q= 2\pi \sigma/X_I$ has a maximum, where we vary the affinity $\A$. Therefore, the number of coherent oscillations can also decrease with an increase in energy dissipation. A similar behavior has been observed in [@qian00], with the main difference that instead of varying $\varDelta\mu$ the authors vary the temperature. The maximum of $\R$ as a function of temperature was identified as stochastic resonance. The same maximum was also observed for the single KaiC model, as shown in Fig. \[figsupact2b\]. In this case, the entropy production can be written as $$\sigma=\varDelta\mu\gamma\sum_{i=0}^{5}\left(\textrm{e}^{\varDelta\mu/2}P_{C_{i}}-\textrm{e}^{E/6}P_{C_{i+1}}\right),$$ where $P_{C_{i}}$ is the stationary probability of state $C_i$. In this expression we used the Schnakenberg cycle decomposition of the entropy production [@seif12]. Activator-inhibitor model {#app6} ========================= In this appendix, we define the activator-inhibitor model from [@cao15]. The model has four different chemical species: the activator $R$, the inhibitor $X$, the enzyme $M$ and the phosphatase $K$. An enzyme $M$ can be in four different states, which form the phosphorylation cycle $$\begin{aligned} & M+R+K+ATP\xrightleftharpoons[k_2^-]{k_1^+} MR+K+ATP\xrightleftharpoons[k_3^-]{k_2^+}\nonumber\\ & M_p+ADP+K+R\xrightleftharpoons[k_{4}^-]{k_{3}^+}M_pK+ADP+R\xrightleftharpoons[k_{1}^-]{k_{4}^+}\nonumber\\ & M+ADP+P_i+K+R. \label{phoscyc} \end{aligned}$$ The concentrations of $ATP$, $ADP$ and $P_i$ are assumed to be fixed. From the generalized detailed balance relation, the rates in Eq. 
fulfill $$\textrm{e}^{\varDelta\mu}=k_1^+k_2^+k_3^+k_4^+/(k_1^-k_2^-k_3^-k_4^-)$$ where $\varDelta\mu$ is the free energy liberated in one $ATP$ hydrolysis. The activator $R$ catalyzes the phosphorylation of the enzyme $M$ and the phosphatase $K$ catalyzes the dephosphorylation of $M$. The enzyme in the phosphorylated state $M_p$ catalyzes the creation of both the activator $R$ with rate $l_0$ and the inhibitor $X$ with rate $l_3$. The activator $R$ can also be spontaneously created with a rate $l_1$. The inhibitor $X$ catalyzes the degradation of $R$ with rate $l_2$ and can be spontaneously degraded with a rate $l_4$. Hence, we have the following chemical reactions $$\begin{aligned} & M_p\xrightleftharpoons[\epsilon]{l_0}M_p+R, \nonumber\\ & \emptyset\xrightleftharpoons[\epsilon]{l_1} R, \nonumber\\ & X+R\xrightleftharpoons[\epsilon]{l_2} X, \nonumber\\ & M_p\xrightleftharpoons[\epsilon]{l_3} M_p+X, \nonumber\\ & X\xrightleftharpoons[\epsilon]{l_4}\emptyset. \label{reacchem} \end{aligned}$$ These are equilibrium reactions and a cycle that involves only them must have zero affinity. The only way to get a cycle with nonzero affinity in this model is to use the chemical reactions in Eq. . The reversed rates $\epsilon$ are assumed to be very small so that we can set them to zero. Formally, they must be nonzero for thermodynamic consistency, however, in a numerical simulation we can just set them to zero, instead of using a very small $\epsilon$ that will lead to the same results. The total number of enzymes $N_{M_T}=N_{M}+N_{M_pK}+N_{M_p}+N_{MR}$, and that of phosphatases $N_K$ are conserved. The number of activators $R$ fulfills $N_R\ge 1$ and the number of inhibitors $X$ fulfills $N_X\ge 1$. A state of the system is then determined by the vector $\mathbf{N}=(N_R,N_X,N_{M},N_{M_pK},N_{M_p},N_{MR})$. The volume of the system is written as $V$ and the concentration of the chemical species $X$, for example, is denoted by $n_X\equiv N_X/V$. The master equation that defines this model reads $$\begin{aligned} & \frac{d}{dt}P(\mathbf{N})= (l_0 N_{M_p}+l_1)P(N_R-1,\ldots)+ l_2 n_{X}(N_{R}+1)P(N_R+1,\ldots)\nonumber\\ & +(l_3 N_{M_p})P(\ldots,N_X-1,\ldots)+ l_4(N_{X}+1)P(\ldots,N_X+1,\ldots)\nonumber\\ & +k_1^-(N_M+1)(n_K-n_{M_pK}+\delta)P(\ldots,N_M+1,N_{M_pK}-1,\ldots)\nonumber\\ & +k_1^+(N_M+1)(n_R-n_{MR}+\delta)P(\ldots,N_M+1,\ldots,N_{M_R}-1)\nonumber\\ & +k_2^-(N_{MR}+1)P(\ldots,N_{M}-1,\ldots,N_{MR}+1)+k_2^+(N_{MR}+1)P(\ldots,N_{M_p}-1,N_{MR}+1)\nonumber\\ & +k_3^-(N_{M_p+1})(n_R-n_{MR}+\delta)P(\ldots,N_{M_p}+1,N_{MR}-1)+k_3^+(N_{M_p}+1)(n_K-n_{M_pK}+\delta)P(\ldots,N_{M_pK}-1,N_{M_p}+1,\ldots)\nonumber\\ & +k_4^-(N_{M_pK}+1)P(\ldots,N_{M_pK}+1,N_{M_p}-1\ldots)+k_4^+(N_{M_pK}+1)P(\ldots,N_M-1,N_{M_pK}+1,\ldots),\end{aligned}$$ where $P(\mathbf{N})$ is the probability to be in state $\mathbf{N}$ at time $t$ and $\delta\equiv1/V$. Note that the rates $l_2$, $k_1^{\pm}$, $k_3^{\pm}$ have dimension $V^{-1}t^{-1}$, whereas the other rates have dimension $t^{-1}$. In the above equation, we set to zero the probability of configurations that violate the constraints $N_R\ge 1$, $N_X\ge 1$, $N_{M_T}=N_{M}+N_{M_pK}+N_{M_p}+N_{MR}$ and $N_K\ge N_{M_pK}$. We have performed continuous time Monte Carlo simulations of this model and calculated the correlation function $$C(t)\equiv \langle (N_X(t)-\langle N_X\rangle)(N_X(0)-\langle N_X\rangle) \rangle,$$ where the brackets denote an average over stochastic trajectories and $\langle N_X\rangle$ is the average number of $X$ in the stationary state. 
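A sketch of how the ratio $\R$ can be extracted from such a correlation function (an illustration of the fitting procedure described below, assuming NumPy and a uniformly sampled estimate of $C(t)$):

```python
import numpy as np

def coherence_from_correlation(t, C):
    """Estimate R = 2*pi*tau/T from the decaying peaks of an oscillating
    correlation function C(t): T is the mean spacing of the peaks and
    1/tau the decay rate of their heights."""
    peaks = np.where((C[1:-1] > C[:-2]) & (C[1:-1] > C[2:]))[0] + 1  # local maxima
    t_p, C_p = t[peaks], C[peaks]
    T = np.mean(np.diff(t_p))                   # period of oscillation
    slope, _ = np.polyfit(t_p, np.log(C_p), 1)  # log C_p ~ -t/tau + const
    return 2 * np.pi * (-1.0 / slope) / T
```

For noisy simulation data one would typically restrict the fit to the first few peaks, since the later ones are dominated by statistical errors.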
The initial condition $N_X(0)$ was sampled from the stationary distribution: in a simulation we let the system reach the stationary state before the time $t=0$. The oscillating correlation function is shown in Fig. \[figsupacta\]. The results presented in Fig. \[fig4\] were obtained in the following way. We have fitted an exponential function to the peaks of the oscillation as shown in Fig. \[figsupactb\]. The exponent gives $T/\tau$, where $T$ is the period of oscillation and $\tau$ the decay time. The ratio $\R$ was then estimated as $\R= 2\pi \tau/T$. The parameters of the model are similar to the parameters used in [@cao15]. They were set to $V=50$, $N_{M_T}=10V$, $l_0=l_2=l_3= 1$, $l_1=0.4$, $l_4=0.5$, $k_2^+=k_4^+=k_2^-=k_4^-=15$, $k_1^+=k_3^+=100$, and $k_1^-=k_3^-=100\textrm{e}^{-\varDelta\mu/2}$. For the number of phosphatases we took $N_K= 30, 40, 50$ and for the driving force $\varDelta\mu=10, 12, 14, 16, 18, 20$.

An important aspect of this model from Sec. \[sec4b\] is that the competition for a small number of phosphatases $K$ synchronizes the cycles of different enzymes $M$. If $N_K$ is too small, we have no oscillations in $M_p$. If $N_K$ is too large, the lack of competition for the phosphatase $K$ hinders oscillations in $M_p$. This feature is demonstrated in Fig. \[figtimes\], where we show two time series of the four different states of the enzyme for $N_K=30$ and $N_K=150$. For $N_K=30$ we see clear oscillations, with the number $N_{M_pK}$ oscillating roughly between $0$ and $30$. Whenever $N_{M_pK}(t)=30$, there is no phosphatase left and several enzymes get stuck in state $M_p$, synchronizing the phosphorylation cycles of these enzymes. For $N_K=150$, the number $N_{M_pK}$ stays below $150$. Hence, there is always free phosphatase in the system, resulting in no synchrony between different enzymes.

J. Lewis, Curr. Biol. **13**, 1398 (2003).
https://doi.org/10.1126/science.1108451
https://doi.org/10.1016/j.mib.2008.10.003
L. G. Morelli and F. Jülicher, Phys. Rev. Lett. **98**, 228101 (2007).
https://doi.org/10.1038/nphys3412
https://doi.org/10.1146/annurev.physchem.58.032806.104550
https://doi.org/10.1038/nphys2276
https://doi.org/10.1073/pnas.1207814109
https://doi.org/10.1103/PhysRevE.87.042104
https://doi.org/10.1371/journal.pcbi.1003300
https://doi.org/10.1103/PhysRevLett.113.148103
https://doi.org/10.1073/pnas.1411524111
https://doi.org/10.1103/PhysRevLett.113.258102
https://doi.org/10.1088/1367-2630/17/5/055026
https://doi.org/10.1371/journal.pcbi.1003974
https://doi.org/10.1088/1367-2630/16/10/103024
https://doi.org/10.1088/1742-5468/2015/01/P01014
https://doi.org/10.1038/ncomms8498
T. McGrath, N. S. Jones, P. R. ten Wolde, and T. E. Ouldridge, Phys. Rev. Lett. **118**, 028101 (2017).
https://doi.org/10.1103/PhysRevX.6.041053
https://doi.org/10.1103/PhysRevLett.114.158101
P. Pietzonka, A. C. Barato, and U. Seifert, Phys. Rev. E **93**, 052145 (2016).
https://doi.org/10.1103/PhysRevLett.116.120601
https://doi.org/10.1021/acs.jpcb.5b01918
http://stacks.iop.org/1742-5468/2016/i=12/a=124004
https://doi.org/10.1073/pnas.0608665104
https://doi.org/10.1088/0034-4885/75/12/126001
N. Dmitriev and E. Dynkin, C. R. (Doklady) Acad. Sci. URSS **49**, 159 (1945).
N. Dmitriev and E. Dynkin, Izv. Akad. Nauk SSSR Seria Mathem. **10**, 167 (1946).
J. Swift, MSc Thesis, McGill University (1972).
{ "pile_set_name": "ArXiv" }